Hendrickson 1

Marcus Hendrickson
Professor Benson
English 102
10 May 2016
Artificial Positive Intelligence
Bombs rain from the sky, rattling your skull as a constant tinnitus rings in your ears. The new computer overlord broadcasts over every available speaker, preaching the human species' imminent eradication. Just as a mechanical arm bursts through the door, you turn the television off. Science fiction leans heavily on the concept of artificial intelligence, often recycling storylines of a dystopian future, and the general populace has come to fear AI because of these portrayals of omnipresent machines. Friendly artificial intelligence does exist in popular culture: who could forget Disney's loving character WALL-E, or the helpful TARS in the hit motion picture Interstellar (Hogan)? Alas, artificial intelligence still carries a negative connotation despite these positive characters. The pessimistic outlook is unfortunate given the advancements society has made in the field. Self-driving cars, voice recognition, ATMs, anti-fraud protection against identity theft, and global and national security systems are all major contributions to society, and all a result of AI (Haaxma-Jurek). Major software and computer companies have achieved this growth by investing in the field (Hof). Experts such as San Diego State University professor Vernor Vinge and esteemed computer scientist Ray Kurzweil predicted that AI would reach human-level intelligence within the next fifty years, predictions that were viewed as completely unrealistic prior to AI's rapid advancement (Two). The improved capabilities of AI are ushering in a golden age of improvement and innovation for our world and everyday life. While the thought of artificial intelligence comparable to humans may seem threatening, if society thoroughly prepares for and accepts the creation of sentient AI, its potential to benefit humanity as a whole is endless.
Noreen Herzfeld discusses how portrayals of human-comparable AI in science fiction can be separated into two categories (512). AI is functional when humans use its knowledge and power, and relational when humans have companionship with it (Herzfeld 512). Examples of functional AI spiraling out of control include Skynet in The Terminator and HAL in 2001: A Space Odyssey (Hogan). These movies tend to be bleaker and involve AI killing humans. Relational portrayals are seen less often but include films such as WALL-E and I, Robot (Hogan). These repeated themes in science fiction foster a common fear of powerful AI, because the functional AI that becomes a menace to humans is more prevalent in film than its relational counterpart (Herzfeld 512). This matters because the fear of AI breeds a hesitation in people's minds that keeps AI from becoming beneficial to society. The apprehension that AI will become a malevolent enemy not only keeps people from preparing to use sentient AI effectively, but also makes those negative outcomes more likely.
John McCarthy, recognized as the father of AI, coined the term artificial intelligence in 1956; the term must be defined before discussing AI's current and potential effects any further (Herzfeld 510). The artificial aspect of AI is easy to understand: it is neither natural nor biological. Although efforts are being made to improve AI through biological means, AI itself is the science of creating and engineering intelligent machines, specifically computer programs (De Garis; McCarthy). The intelligence aspect of AI is much harder to define, as the definition of intelligence is not precisely agreed upon in the scientific community (Herzfeld 509). While opinions on what intelligence is differ, they do share similarities. The Gale Encyclopedia of Science states that a possible definition of intelligence is the acquisition and application of knowledge (343). John McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world." These statements explain exceptionally well how intelligence is incorporated in AI: an artificial intelligence is one that can take in outside information and achieve goals with that information. This means that when confronted with different situations, AI can come up with resolutions depending on the information presented. In some cases, AI will even learn.
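McCarthy's goal-oriented definition can be sketched in code. The toy agent below is purely hypothetical (it comes from no cited source): it repeatedly reads outside information, here its distance from a goal position, and chooses the action that moves it closer, which is the minimal sense in which a program "achieves goals in the world."

```python
def seek_goal(start: int, goal: int, max_steps: int = 100) -> list[int]:
    """A trivial goal-seeking agent: at each step it uses the presented
    information (where it is relative to the goal) to pick an action."""
    path = [start]
    position = start
    for _ in range(max_steps):
        if position == goal:
            break
        # Decide based on outside information: step toward the goal.
        position += 1 if goal > position else -1
        path.append(position)
    return path

print(seek_goal(0, 3))  # [0, 1, 2, 3]
```

The agent does not learn, of course; it only illustrates the "achieve goals with information" half of the definition, while learning systems additionally improve their decision rule from experience.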
The idea of a growing, learning, human-comparable artificial intelligence haunts science fiction, but it is approaching reality. The creation of human-level AI may seem impossible, but in the eyes of the artificial intelligence field it remains the overarching goal (McCarthy). Defining and classifying these artificial intelligences is important as well. AI systems that can complete more than one task and can learn to do those tasks better are classified in numerous ways, such as strong AI or artificial general intelligence (Herzfeld 510). A more general and publicly understood distinction between AI and artificial general intelligence is that AGI has reached the full level of human sentience (Herzfeld 510). The event in which AI reaches the intelligence level of humans is known as the singularity (Two); it is at this point that AI crosses into the realm of AGI. The phenomenon of creating something as intelligent as humans is profound, and it raises just as many ethical and practical questions as it solves. A singularity event also splits the timeline of artificial intelligence and its effects on humanity into a period before and a period after the singularity, because the advancements that come from the singularity will be entirely different from the gains previously made by AI (Two). McCarthy believes the ultimate goal for the use of AGI is to solve issues and problems as well as or better than humans. McCarthy's statement is imperative because when the singularity does happen, AGI will be able to solve problems not only as well as humans, but better. This would lead to an improved quality of life for all of humanity as a result of preparing for the invention of AGI.
The rise of AI in our society cannot be prevented. Kevin Kelly, founding executive editor of Wired, highlights this issue. In his essay "Better than Human: Why Robots Will - and Must - Take Our Jobs," Kelly argues that we should not be in competition against AI; he encourages humanity to embrace the idea of competing with AI on our side (311). This is essential to realizing how AI can be positive for humanity: discarding the apocalyptic functional-AI stereotype and working in tandem with AI allows the relational idea of AI to be embraced. Some current examples of these kinds of relationships are taking place in games. A famous example of AI besting humans occurred in chess. Clive Thompson, a prominent writer on technology appearing in Wired and the New York Times, explains why this idea of cooperating with AI is so powerful. In his book Smarter Than You Think, Thompson describes how, after an AI beat the world's best chess player, a fun little experiment took place (343-344). A tournament was held in which all participants were allowed to use a computer containing a chess AI of their choice (344). The result was that even novice chess players were able to oust the best chess players in the world, as well as the most powerful chess AIs (Thompson 345-346). The defining factor in who placed highest was the participant's ability to work effectively with their computer (Thompson 346). AI will not inhibit humans; used efficiently and correctly, it will only propel us forward.
Pre-singularity artificial intelligence is already improving the global economy. In the educational mini-documentary "Humans Need Not Apply," the group CGP Grey shows that the economic effect of AI was less obvious in the past: robots powered by AI worked only on assembly lines and required human operators, so AI was not visible in our daily lives. Fully autonomous robots powered by AI are here now; they are starting to capture public attention, and they need to be widely addressed (CGP Grey). AI is cheaper to employ than its human counterpart, making it economically inefficient to choose otherwise (CGP Grey). This creates an issue, though, because AI does not have to do a job perfectly to replace a human, just better, which puts all jobs at risk (CGP Grey). Yet the thought of AI replacing us as a workforce is not a bad one; it would push a higher standard of living into a new era. The renowned economist Klaus Schwab puts it rather eloquently, stating that "it can also lift humanity into a new collective and moral consciousness." This demonstrates two points: first, the effects AI will have on the economy are already underway; second, Schwab's and CGP Grey's arguments show that the presence of AI in the workforce will improve the economy. How we choose to prepare for AI's introduction determines whether the transition to a fully autonomous society is a smooth one.
This leads into what Klaus Schwab refers to as the fourth industrial revolution: the foreseen transition to, and acceptance of, new production processes and technologies in today's society (Schwab). Schwab believes the fourth industrial revolution is going to change humanity drastically, and for the better: it will reduce poverty and violence, raise global income, and increase quality of life for all (Schwab). This matters when considering AI's overall effects on humanity because, as CGP Grey has shown, AI has already increased abundance in our economy, and Schwab believes those economic improvements will only grow stronger. Mark Thoma, an economist at the University of Oregon, argues in an article on CBS's website that AI and the revolution it brings will increase productivity significantly. This supports Schwab's theory that AI will positively affect the world economy. Schwab and Thoma agree that AGI will increase the world population's standard of living, further supporting the idea that AI will have a positive effect on humanity. However, some people would like to perpetuate the belief that we are not ready.
Bill Joy, the chief scientist at Sun Microsystems, discussed in an article in Wired why we as humans would benefit from waiting to delve deeper into the world of AI. He writes that, "[a]ccustomed to living with almost routine scientific breakthroughs, we have yet to come to terms with the fact that the most compelling 21st-century technologies - robotics, genetic engineering, and nanotechnology - pose a different threat than the technologies that have come before." While Joy's outlook on the future of AI is bleak, he does not say we should never investigate it. Joy relates AI and other future technologies to nuclear technology: while it brings major advancements to society, it also brings major threats. Just as the human race might have benefited from not pushing the development of nuclear technology so aggressively, avoiding issues such as the atomic bomb and the Cold War, we may benefit from doing the same with AI. AI can have amazing effects if used responsibly, whether through Kurzweil's vision of transferring our minds into the digital realm or through vast economic gains to society (Two). Just as the benefits of nuclear technology can be overlooked because of the devastation it has wrought, the same can happen for AI. It depends on how we as a world handle it.
The ethical implications of AGI should be considered prior to its development. Hugo de Garis, director of the Artificial Brain Lab at Xiamen University in China, believes that as a society we will begin to question the morality of AI in our world (De Garis). De Garis goes on to discuss how, as humans see the rapid advancement of robots driven by breakthroughs such as deep learning, they will start to fear that the singularity is very close. Kurzweil notes the phenomenon in which technology has a habit of sneaking up on society, leaving people unprepared for it (Two). This supports De Garis's belief that the quick rise of robotics and AI will split humanity into two alarming, opposing camps, by affirming humanity's unpreparedness for the singularity. On this point De Garis agrees with other apocalyptic-AI scholars such as Robert Geraci, a professor of religious studies at Manhattan College and the author of multiple essays on apocalyptic AI's religious connections: humans themselves will be the cause of any danger that could come from AI. This is influential in the argument for the future advancement of AI, as it places the blame for any negative results of AI on humans instead of on the computer. Schwab touches on this idea, saying, "All of us are responsible for guiding its [AI's] evolution, in the decisions we make on a daily basis as citizens, consumers, and investors." De Garis states that there will not be any conflict between humans and AI, only humans against humans in the debate over AI.

Much of the negative stigma attached to AI stems from the singularity and the fear that it will cause an apocalypse. Recently, many of the large public figures who had expressed worry about AI have changed their minds. Bill Gates, founder of Microsoft and a public figure in technology, went from a rather negative view of the future of AI to a much more positive stance. During an interview about IBM stock, Gates talked about how AI has great potential to do good in the world, saying it can have a large positive impact over the next twenty years (Belvedere). Gates alludes to the singularity in the far future, but he frames it as an issue we will have to deal with, something we as humans must take responsibility for (Belvedere). Elon Musk is a highly recognized technology entrepreneur, having founded the electric car company Tesla and even started a private space exploration company. Musk's views on AI mirrored Gates's for some time, but they have shifted toward a more preparatory outlook. Musk is currently heading a group of investors who are contributing one billion dollars to a non-profit research company named OpenAI that hopes to advance AI that will aid humanity (Townsend). Musk believes that by proactively equipping everyone with AI, future problems with AGI can be prevented, relying on the ideal that there is more good than bad in the world (Metz). Kurzweil even speaks to this idea, acknowledging that there is some form of AI in everyone's phone and that eventually there will be intelligent AI everywhere, mitigating any negative effects (Two). This again places the responsibility for AI being positive or negative for humanity on humans. Society must take responsibility for AI preparation, as it is a tool that can be majorly beneficial to us.

Science fiction is becoming reality with breakthroughs in biological programming, light-powered computer chips, and neural nets (Herzfeld; De Garis). One of the more promising advancements is something called deep learning, a recent advance in AI that allows it to achieve feats that were only a dream before. Robert Hof discusses not only how deep learning came to be but also why it is so important: with new algorithms and technology it is now possible to create neural networks that act similarly to a human's neocortex. This allows AI to think more deeply and make breakthroughs in reading and in distinguishing shapes and sounds. Humans are not capable of building these massive artificial neural networks by hand; for this reason, other computers are creating these artificial brains (CGP Grey). Robots building other robots may seem straight out of science fiction, but it is very real, and it alludes to why people like Bill Joy believe AI could pose a threat.
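The layered networks Hof describes can be illustrated with a toy example. The sketch below is a minimal, hypothetical illustration (not Hof's or any cited system): a tiny two-layer network of artificial neurons that learns the XOR function by repeatedly nudging its connection weights to reduce error, the same basic mechanism that deep learning scales up to many layers.

```python
import numpy as np

# Toy two-layer neural network trained by backpropagation on XOR.
# Purely illustrative: real deep-learning systems use far more layers,
# neurons, and data, but the weight-adjusting idea is the same.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    hidden = sigmoid(X @ W1 + b1)       # forward pass
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: gradient of squared error through each layer.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_hid
    b1 -= d_hid.sum(axis=0)

print(np.round(output).ravel())
```

With enough iterations the network's rounded outputs typically match the XOR targets [0, 1, 1, 0]. What matters for the essay's point is that no human wrote those final weights; the training loop found them, which is exactly the sense in which computers, not people, build these artificial brains.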
The success of AI depends on its human creators. Using tools to create better tools is not a revolutionary idea; it is something humans have done forever. It has led to some very impressive advancements, as well as some very grim weapons and mistakes. But it was never the tool's fault: there was never a point at which the hammer and anvil accidentally forged a sword. Humans created all of these tools and are responsible for their use. The argument that humans create dangerous things by accident does not hold. A nuclear bomb was not made on a whim; it took planning, resources, and knowledge of what was being built. The same can be said for artificial intelligence and artificial general intelligence. While AI may hold the special exception that it can create new AIs, it cannot do so without human supervision (CGP Grey). When it comes down to it, the responsibility for the effects of what humans create cannot be displaced onto the creation.
Artificial intelligence is going to change the world; that is the reality. How we handle the sweeping changes and ethical questions AGI brings with it is up to us. Artificial intelligence has the potential to greatly improve the world's economy, but it also has the ability to throw the world into mass unemployment if we are not prepared (Thoma; CGP Grey). Humans are responsible for implementing AGI successfully to improve the standard of living worldwide, and it is up to humans to make sure AGI does not lay off the entire workforce and render us useless. If the world is not fully prepared for AI reaching the singularity, we will be faced with immense issues of war and existential questions (De Garis). If the world does not prepare for AGI, it may face a war that Hugo de Garis estimates would lead to the death of billions. Questioning who we are matters, too, as Kurzweil says that with AGI we may well have the opportunity to upload our consciousness (Two). Humans have the potential to redefine the world with AGI; artificial intelligence on its own has the potential neither to devastate the world nor to lift it up. If we address AI as actively as leaders in the field such as Kurzweil, Musk, and Schwab do, there is much to look forward to; if we continue the trend of ignoring AI's rapid advancement, there is very little. Humans should look toward AI in a positive light, as Charles Munger, the vice chairman of Berkshire Hathaway, did during the same interview with Bill Gates: Munger likes the idea of AI because "we are so short of the real thing" (Belvedere).

Works Cited
Belvedere, Matthew J. "Bill Gates: No Reason to Fear AI Yet; in Fact, It Could Be Your
New Assistant." CNBC. CNBC, 02 May 2016. Web. 02 May 2016.
CGP Grey. "Humans Need Not Apply." Online video clip. YouTube. YouTube, 13 Aug. 2014. Web.
De Garis, Hugo. The Artilect War: Cosmists vs. Terrans; A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines. Palm Springs: ETC, 2005. Google Books. Web. 28 Apr. 2016.
Geraci, Robert M. "Apocalyptic AI: Religion and the promise of artificial intelligence."
Journal of the American Academy of Religion 76.1 (2008): 138-166.
Haaxma-Jurek, Johanna. "Artificial Intelligence." The Gale Encyclopedia of Science. Ed.
K. Lee Lerner and Brenda Wilmoth Lerner. 5th ed. Vol. 1. Farmington Hills, MI:
Gale, 2014. 341-346. Gale Virtual Reference Library. Web. 28 Apr. 2016.
Herzfeld, Noreen L. "Artificial Intelligence." Encyclopedia of Religion. Ed. Lindsay
Jones. 2nd ed. Vol. 1. Detroit: Macmillan Reference USA, 2005. 509-513. Gale
Virtual Reference Library. Web. 27 Apr. 2016.
Hogan, Michael, and Greg Whitmore. "The Top 20 Artificial Intelligence Films - in
Pictures." The Guardian. Guardian News and Media, 08 Jan. 2015. Web. 12 May
2016.
Joy, Bill. "Why the Future Doesn't Need Us." Wired. Condé Nast Digital, 1 Apr. 2000. Web. 23 May 2016.

Kelly, Kevin. "Better than Human: Why Robots Will - and Must - Take Our Jobs." They Say / I Say: The Moves That Matter in Academic Writing. Ed. Gerald Graff and Cathy Birkenstein. New York: W.W. Norton, 2010. 299-312. Print.
McCarthy, John. "What Is Artificial Intelligence?" Formal Reasoning Group. Stanford
University, 12 Nov. 2007. Web. 27 Apr. 2016.
Metz, Cade. "Elon Musk's Billion-Dollar AI Plan Is About Far More Than Saving the World." Wired. Condé Nast Digital, 15 Dec. 2015. Web. 9 May 2016.
Hof, Robert D. "Deep Learning." MIT Technology Review. MIT Technology Review, 23 Apr. 2013. Web. 12 May 2016.
Schwab, Klaus. "The Fourth Industrial Revolution: What It Means, How to Respond."
Weforum.org. World Economic Forum, 14 Jan. 2016. Web. 9 May 2016.
Thoma, Mark. "What Happens If Robots Take All the Jobs?" CBSNews. CBS Interactive
Inc., 21 Jan. 2016. Web. 09 May 2016.
Thompson, Clive. "Smarter Than You Think." They Say / I Say: The Moves That Matter in Academic Writing. Ed. Gerald Graff and Cathy Birkenstein. New York: W.W. Norton, 2010. 340-346. Print.
Townsend, Tess. "Why Elon Musk Is Nervous About Artificial Intelligence." Inc., 14
Dec. 2015. Web. 9 May 2016.
"Two of the Smartest People in the World on What Will Happen to Our Brains and Everything Else." Perf. Neil DeGrasse Tyson and Ray Kurzweil. Tech Insider. Tech Insider, 18 Jan. 2016. Web.

Vinge, Vernor. "The Coming Technological Singularity: How to Survive in the Post-Human Era." Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. NASA Lewis Research Center, 1993. 11-22. Web.
