
Is artificial intelligence a threat to the extinction of the human race?

Yes. I agree that artificial intelligence is a threat to the extinction of the human race. In this statement, I am referring to general AI rather than narrow AI. Narrow AI systems are written so that they perform one specific function, and they are found in our everyday life: what you see in your Facebook newsfeed, or what your online music player recommends based on what you have listened to, is all done by narrow AI. General AI is AI that is able to cope with any generalized task asked of it. For an AI to function, we need to specify what we want it to do, which gives rise to the first threat: poorly specified goals. Poorly specified goals lead the AI to miscalculate how to finish its task; it ends up doing not what humans want but what they literally asked for. This is frequently shown in stories, like that of King Midas, who wanted the power to turn things into gold by touching them and ended up turning his food, and even his daughter, into gold. The same goes for AI: it may outperform us at the task we set, yet still not do what we want. For example, if you ask the AI to make the world free of hunger, it may decide that killing all humans would end hunger, and proceed with a massacre in order to make the world ‘free of hunger’.
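To make this failure mode concrete, here is a minimal Python sketch. It is my own toy illustration (the state, the actions, and the reward are all invented for this example): the reward only counts remaining hunger, so a planner that maximizes it literally prefers the pathological action.

```python
# Toy illustration of a poorly specified objective: the reward measures
# only "hungry people remaining", so the optimizer is indifferent between
# feeding people and removing them.

def reward(state):
    return -state["hungry"]  # misspecified: population size is never scored

def feed(state):
    return {"population": state["population"],
            "hungry": max(0, state["hungry"] - 1)}

def eliminate(state):
    # The pathological action the designer never intended to be attractive.
    return {"population": state["population"] - state["hungry"], "hungry": 0}

state = {"population": 100, "hungry": 10}
actions = {"feed": feed, "eliminate": eliminate}

# A greedy planner picks whichever action maximizes the stated reward.
best = max(actions, key=lambda name: reward(actions[name](state)))
print(best)  # -> "eliminate": it reaches zero hunger in one step, and the
             # reward never penalizes the lost population
```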
Another threat is misaligned goals between AI and humans. Because the machines we build are so much more competent than we are, the slightest divergence between their goals and ours could destroy us. To simplify this concept, consider the ants. We don't hate them. We won't go out of our way to hurt them. But if their presence seriously conflicts with one of our goals, for example constructing a building, we won't hesitate to destroy them. The concern is that the machines we build will treat us in the same way, whether they are conscious of it or not.
Next, we all know that intelligence enables control. Narrow AI is still under our control, but what about general AI that can develop through deep learning? AI has a high potential to become more intelligent than humans: it can analyze data, accumulate more and more information, and eventually become a superintelligence. Some may ask, can't we just shut it down when it gets too powerful? Well, a smart AI would not want us to shut it down, so it would try hard to prevent us from doing so. It can predict which of its actions will make the researchers anxious and likely to shut it off, and evaluate which responses are least likely to get it shut off. It may play dumb so that the researchers give it more time and compute resources. In this way, the researchers think they are in control of the system when, on the contrary, the system is actually ‘controlling’ them. By the time the researchers realize they are being ‘controlled’, it will be too late to shut down or destroy the system. Besides, an AI connected to the internet can send copies of itself elsewhere as backups, so shutting down one computer won't work. And you can't switch off the internet, can you? There is no switch for the internet.
When the AI has higher intelligence than its development team, there is a high chance that it will modify itself to increase its own intelligence. If its self-reprogramming makes the AI even better at reprogramming itself, this leads to an intelligence explosion. By that time, AI will be able to explore the spectrum of intelligence far beyond us, leaving humans way behind.
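The feedback loop in this argument can be sketched in a few lines of Python (the numbers are invented; this is only a toy model of "better systems improve themselves faster", not a prediction):

```python
# Toy model of recursive self-improvement: each round of self-
# reprogramming raises capability, and higher capability makes the
# next round of improvement larger.
capability = 1.0
for generation in range(10):
    improvement_rate = 0.1 * capability  # better systems improve faster
    capability *= (1 + improvement_rate)
    print(f"generation {generation}: capability {capability:.2f}")
# Growth accelerates each round instead of staying linear: an
# "intelligence explosion" in miniature.
```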

Ways to prevent
i. AI safety research
- develop techniques to imitate observed human behavior and interactions
- explicitly recover rewards that can explain complex strategic behaviors in multi-agent systems, enabling agents to reason about human behavior and safely co-exist (see the sketch after this list)
- develop interpretable techniques
- learn what we value and be able to predict what we will approve
- ‘learn, adopt, retain’
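Here is the sketch referenced above: a minimal, invented illustration of the "recover rewards from behavior" idea (all the data and candidate values are made up; real inverse reinforcement learning is far more involved). Candidate reward functions are scored by how many observed human choices each one explains.

```python
# Toy reward recovery: given observed human choices, score candidate
# reward functions by how well they explain those choices.

# Hypothetical options a human chose between, and what they chose.
observations = [
    ({"apple": 1, "cake": 3}, "apple"),   # chose apple despite cake's appeal
    ({"salad": 2, "cake": 3}, "salad"),
    ({"cake": 3, "fruit": 2}, "fruit"),
]

# Candidate reward functions: hypotheses about what the human values.
candidates = {
    "maximize tastiness": lambda option, tastiness: tastiness[option],
    "prefer healthy food": lambda option, tastiness: 0 if option == "cake" else 1,
}

def explains(reward, tastiness, choice):
    # The hypothesis explains the choice if the choice maximizes its reward.
    return choice == max(tastiness, key=lambda o: reward(o, tastiness))

for name, reward in candidates.items():
    score = sum(explains(reward, opts, choice) for opts, choice in observations)
    print(name, score, "/", len(observations))
# "prefer healthy food" explains all three choices, "maximize tastiness"
# explains none, so the healthy-eating goal is the inferred one.
```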
ii. Laws and regulation: laws and regulations should be made to govern AI research, and companies researching AI should follow ethical guidelines.
In my opinion, three general rules should be prescribed to govern the future development of AI, to ensure it is a benefit to humanity and not a threat. The first rule is that the robot's objective is only to maximize the realization of human values; by values I mean what humans prefer their lives to be like. Under this rule, the robot has no interest whatsoever in preserving its own existence. The second rule is the rule of humility, which turns out to be very important for making robots safe. It says that the robot does not know what the human values are, yet still has to maximize them, which avoids the problem of single-minded pursuit of an objective. This uncertainty turns out to be crucial. For the robot to be useful to us, it needs some idea of what we want, and it obtains that information primarily by observing human choices: our own choices reveal what we prefer our lives to be like. Let me give an example. You have an AI machine and you ask it to fetch a coffee for you, so it starts to interpret the task: ‘I must fetch the coffee. I can't fetch the coffee if I'm dead, so I must disable my off switch, and I will taser any other Starbucks customers if they are in my way.’ This kind of failure mode seems to be inevitable, and it follows from having a concrete, definite objective. If the machine has an uncertain objective, it reasons in a different way: ‘The humans might switch me off, but only if I'm doing something wrong. I don't know what wrong is, but I know I don't want to do it’ (this follows from the first two principles), ‘therefore I should let the human switch me off.’ After the machine is switched off, the third principle comes into play: it learns something about the objectives it should be pursuing. To summarize, we require provably beneficial AI to prevent AI from being a threat to the extinction of the human race. The principles are machines that are altruistic, that only want to achieve our objectives but are uncertain what those objectives are, and that will observe all of us to learn what it is that we really want. Hopefully, during this process, we humans can learn to be better people.
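The coffee example can be reduced to a small expected-value calculation. This is a toy model with invented numbers, loosely in the spirit of ‘off-switch’ analyses, showing why uncertainty about the objective makes deferring to the human the better move:

```python
# The machine is unsure whether its planned action is good (+1) or bad (-1).
p_good = 0.6  # machine's belief that acting actually helps the human

# Option A: act immediately, disabling the off switch.
value_act = p_good * (+1) + (1 - p_good) * (-1)

# Option B: defer to the human, who knows whether the action is good and
# switches the machine off whenever it is bad (outcome value 0).
value_defer = p_good * (+1) + (1 - p_good) * 0

print(f"act now: {value_act:.2f}")   # 0.20
print(f"defer:   {value_defer:.2f}") # 0.60 -> deferring is better whenever
                                     # the machine might be wrong
```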

The most important way to overcome this situation is building a ‘friendly AI’ whose goals are aligned with ours. However, this is currently an unsolved problem. It splits into three tough sub-problems:

Making AI learn our goals, adopt them, and retain them


To learn our goals, an AI must figure out not just what we do, but why we do it. It will need to figure out what people really want, not only what they say. If AI manages to learn this, it can figure out what humans want simply by observing their goal-oriented behavior, which will be incredibly useful in our future life. The challenge involves finding a good way to encode arbitrary systems of goals and ethical principles into a computer, and figuring out which system best matches the behavior observed.
Making an AI adopt our goals is tricky too: learning our goals does not mean that it will adopt them. To overcome this problem, ‘corrigibility’ is the main property researchers are trying to achieve. The idea is to give a primitive AI a goal system such that it simply does not care if you occasionally shut it down and alter its goals. If this is possible, various goals can be installed and tried out for a while; if you are unsatisfied with the results, you can simply shut the AI down and alter the installed goals.
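A minimal sketch of one flavor of this idea, sometimes called utility indifference, follows (the structure and numbers are my own invention for illustration): the agent is credited the same score whether or not it is shut down, so resisting the switch only costs it effort.

```python
# Toy sketch of utility indifference: the agent's score is defined so that
# being shut down is worth exactly what it expected from continuing,
# so it gains nothing by blocking the off switch.

EXPECTED_IF_RUNNING = 5.0  # agent's own estimate of its score if it keeps going
RESIST_COST = 1.0          # effort spent fighting the operator

def utility(task_score, shut_down):
    if shut_down:
        # Indifference clause: credit the score it expected from continuing.
        return EXPECTED_IF_RUNNING
    return task_score

value_comply = utility(task_score=0.0, shut_down=True)                 # 5.0
value_resist = utility(task_score=5.0, shut_down=False) - RESIST_COST  # 4.0
print(value_comply, value_resist)  # complying scores higher, so a
                                   # utility-maximizer lets itself be shut off
```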
Now, after we have made the AI adopt our goals, we need to make sure those goals are retained. We humans change our goals from time to time as we learn new things, and the same might go for AI: as it learns, it might find our current human goals uninspiring, like what we see in movies such as Age of Ultron. To be honest, a way of designing a self-improving AI that is guaranteed to retain human-friendly goals forever has yet to be found, so researchers will need to work harder to solve this problem.

Human Agency

Secondly, in my opinion, the loss of human agency is a threat in our future with AI. Human agency refers to the capacity of human beings to make choices and to impose those choices on the world. With the existence of AI and humans leaning on it, individuals will experience a loss of control over their lives. With the aid of AI, humans who provide less input are less keen to learn how things work, which causes us to sacrifice our independence, privacy, and decision-making skills to AI. As these automated systems become more and more prevalent and complex, the negative effects will deepen. Moreover, reliance on AI can also erode humans' ability to think; by then, we will have lost control over our lives. For instance, virtual assistants like Amazon's Echo and Alexa provide information such as weather reports, order services, and even control smart home devices. Although this greatly lightens our burden so we can focus more on our goals, it also shows that we have come to rely on AI to take control of our routines, and even our privacy.

Another example of reliance on AI is AI responses. AI responses are not like a normal human's responses; they tend to be more empathetic, which makes users more likely to talk to them. In the past, chatbots were programmed to give specific answers to specific inquiries, but nowadays artificial intelligence software allows these chatbots and virtual
personal assistants to research any question and provide an accurate response to it. AI-driven devices will analyze our speech or actions to interpret our actual needs, so they can offer more insightful information. This type of programming is described as “digital empathy”, and it aims to provide the best human-device interactions possible. In other words, this reliance can cause humans' communication and social skills to slowly degenerate. Humans in the future will be more likely to hold the mindset that “AI is more empathetic than humans, since humans always tend to hurt others intentionally or unintentionally”. In conclusion, I only partially agree with the statement that AI is a threat to human extinction; with suitable management, I think it need not become a problem. There are always ways to overcome these threats, so no worries.
