What Role Should Advanced Artificial Intelligence Play in the United States Military?
David Bygel
Abstract
With the recent push for autonomous weapons, many have wondered what role artificial intelligence should play in the military. As our technology continues to advance, there is a growing possibility that weapons will soon be operating themselves. In this paper, I will explore the role that artificial intelligence plays in our military, what that role may look like, and the alternative routes we can take. This paper draws primarily on sources written by professionals well versed in artificial intelligence and international law. The findings show that artificial intelligence can be extremely dangerous if used to power weapons. Some researchers believe AI may eventually outperform humans in most scientific fields; however, studies continue to show that creating a moral code for machines is notoriously difficult. This means that, at this point in time, AI cannot reliably tell enemies apart from civilians. A large motivation for AI-powered weapons is that they will cut costs and reduce manpower. However, with the semi-autonomous weapons we have already seen, that belief is only a myth: so far we have only seen increases in cost and manpower since they were deployed.
9 February 2018
Part I: Introduction
For years, artificial intelligence has been something many have dreamed of. We see it in movies, books, and magazines. Oftentimes, however, AI has not been presented accurately. In this paper, I aim to teach the reader what artificial intelligence is and the role it plays in our military.
Throughout time, the United States military, along with militaries across the world, has rushed to find the newest and most advanced technologies available. Recently, autonomous weapons and artificial intelligence have been a prime example. While intelligent machines have the potential to create tremendous benefits for our world, they also have the potential to do the exact opposite. This is why many are skeptical of the idea of a highly advanced, automated military.
While we have not yet deployed a fully autonomous weapon, many worry that when we do, we will be deploying an autonomous killing machine. On the other hand, some believe that these fears are myths and that artificial intelligence will only make a country more powerful and more advanced. While both arguments are valid, I believe that the United States military should continue to pursue artificial intelligence; however, it should avoid the production of fully autonomous weapons.
Much of what we use today is automated: the thermostats in our homes, the yard sprinklers outside, and the air conditioning. We even have the ability to pay our bills through an automated system. While these systems are considered ‘autonomous’ because they perform actions without human interaction, most, if not all, of them are set up and designed with predetermined responses to every scenario. Throughout this paper, I will use the term ‘autonomous weapons’ to describe systems equipped with artificial intelligence that are not pre-programmed with a response for every condition.
The search for true artificial intelligence in the military has been around for a long time.
In fact, it can be traced back to the Cold War and the arms race. In his article titled, “It’s already
too late to stop the AI arms race—We must manage it instead,” Edward Geist says, “There has
been an artificial-intelligence arms race since before there was artificial intelligence” (318).
Fortunately, by the end of the era, AI never reached the point that countries had hoped. Although they developed weapons that could select targets and fire, like acoustic homing torpedoes, the other ideas remained on the drawing board to be revisited in the future. Many believe that the ‘future’ is now and that we are looking to expand our military’s use of artificial intelligence.
This belief has become so strong that an open letter recently circulated around the world calling for a ban on autonomous weapons. The letter included notable signatures from Tesla and SpaceX CEO Elon Musk and physicist Stephen Hawking, alongside leading AI scientists like Google Director of Research Peter Norvig (Geist 318).
So, what exactly is artificial intelligence, and how does it work? A common understanding is that it allows a computer to think like, and perform tasks similarly to, if not better than, the average human being. In terms of its role in the military, artificial intelligence will allow a system or weapon to operate and function on its own, allowing for the creation of autonomous weapons. This means that a weapon system becomes capable of selecting and firing at targets of its own choosing. If this is the case, AI can help cut costs and manpower in the military simply by automating tasks originally meant for soldiers on the ground. This is why a large part of the U.S. military, along with other countries, is so interested in its continued development. However, the thought of advanced artificial intelligence manning autonomous weapons is frightening to some.
We can see why many are skeptical by looking at past uses of AI in the military. First, it is important to look at the use of the Patriot defense system (a semi-autonomous weapon system). In the journal article titled “The Myths and Costs of Autonomous Weapon Systems,” Robert Hoffman and two other researchers write, “The Patriot air defense system was one of the first US weapons to employ ‘lethal autonomy,’ which refers to systems that are able to apply lethal force with minimal or no human oversight” (48). The Patriot missile system was designed to protect surrounding areas from enemy missiles, jets, and other threats inbound by air. When the weapon system was originally deployed during Operation Iraqi Freedom in 2003, it was an initial success, properly engaging nine enemy ballistic missiles. However, its success was short-lived, as it later engaged friendly aircraft, resulting in two instances of fratricide (the accidental killing of one's own forces in war) (48).
Secondly, we can look at the Predator drone. Similar to the Patriot missile system, Predator drones were a success at first. Initially, all pilots needed to do was plug in coordinates and get the drone off the ground; after that, they only had to monitor the sub-systems that kept the aircraft airborne. It was a new way to gain intelligence from the air and a new way to perform reconnaissance missions. Unfortunately, after the engineers at General Atomics, along with other weapons manufacturers, added Hellfire missiles to the drone, the number of civilian casualties in conflict rose significantly (50). If this is any indication of where we are going with AI-powered weapons, it is no surprise that many people are fearful of the potential role of artificial intelligence in warfare.
When evaluating the Patriot system more closely, it is important to look back at what was costly and what proved to be successful. To start, the very
first incident came when the “Patriot system misclassified a British Tornado fighter-bomber as an anti-radiation missile, launched its own missile at the aircraft, and killed the two crewmembers aboard” (Hoffman 48). After this, the original system was placed in ‘standby mode.’ Essentially, when the system was on standby, it was required to alert a soldier on the ground before firing a missile. This means that the soldier was making the final call. While this reduced fratricide, it defeated the original purpose, which was to eliminate the need for human-machine interaction. Yet less than a month after the system was put into standby mode, the Patriot missile defense system engaged and shot down a Navy F-18, killing its pilot (48). After these
two tragic incidents, it was apparent that the system was not doing what it was initially designed to do, and many lost faith in it. Quoting several Army commanders, Hoffman writes:
After the two incidents and the tragic loss of three lives, senior Army commanders expressed their concern over the apparent “lack of vigilance” on the part of Patriot crews and their “unwarranted trust in automation” (Hoffman et al.).
After reading this statement from Army commanders, phrases such as “unwarranted trust in automation” and “lack of vigilance” really stick out. It is apparent that the system was not quite ready, and that the technology being used displayed a “lack of cognizance.” If that was the case, why create, and even deploy, the system? We know that the goal of automation is to use artificial intelligence to cut military costs and manpower while also increasing effectiveness and accuracy on the battlefield. However, the creation of the Patriot missile and air defense system seems to have been rushed. In the same article, “The Myths and Costs of Autonomous Weapon Systems,” John Hawley relays information from a Patriot crew member who said, “We thought that automation did not require well-trained crews. I guess we were wrong” (249). This came after the two fratricide incidents, which required the military to train and teach crew members far more than initially planned. If the Patriot missile defense system is any indication of where autonomous weapons are taking us, it is important that countries acknowledge the downsides. In this case, not only did automation drive costs up rather than cutting them, it also required more soldiers to man the system, as the crew member noted.
When looking at the potential of autonomous weapons, their capabilities can be alarming. Because an autonomous system works on its own without any human-machine interaction, it is not surprising that a large number of people believe such weapons are dangerous. Still, it is important to know that just because a machine is equipped with AI, it does not necessarily mean its only goal is to kill. In the article “Should Artificial Intelligence Be Regulated?” authors Amitai Etzioni and Oren Etzioni agree with the findings of a panel organized as part of Stanford University’s One Hundred Year Study of Artificial Intelligence. The findings emphasize the possibility that artificial intelligence could become more dangerous than nuclear weapons. If this is the case, regulation or a complete ban on autonomous weapons may well be justified.
While I do not believe autonomous weapons are necessary in the military, the continued development of artificial intelligence is worth pursuing. However, we must still be careful with it. Even though the U.S. military is spearheading the rapid development of autonomous weapons, there are many different routes that could be taken.
In the journal article “Military and National Security Implications of Nanotechnology,” Jitendra S. Tate mentions, “This intellect could possibly outperform human capabilities in practically every field from scientific research to social interactions. Aspirations to surpass human capabilities include tennis, baseball, and other daily tasks demanding motion and common sense reasoning” (Tate et al. 21). Even today, artificial intelligence has provided us with answers that allow us to build and improve our technology. As Tate and his colleagues explain:
AI has been an active and dynamic field of research and development since its inception (Omohundro, 2014). In past decades, this has led to the development of smart systems, including phones, laptops, medical instruments, and navigation software. (Tate et al. 21)
A complete ban on advanced artificial intelligence in the military would not be beneficial. If AI truly has the ability to outperform us in every scientific field, it could lead to plenty of groundbreaking discoveries that we might not make without its help.
As I have said, we have yet to experiment with weapons completely powered by AI. However, we have used semi-autonomous weapons. The U.S. Navy, for example, can land unpiloted drones on moving aircraft carriers. In the article titled “The Case for Regulating Fully Autonomous Weapons,” author John Lewis states, “South Korea and Israel have deployed automated sentry guns capable of selecting targets and alerting a human operator, who makes the final decision to fire” (1311). While both of these systems use autonomous features, neither is fully autonomous, because both require human-machine interaction, such as the decision of when to fire. With today’s technology, we still need human-machine interactive systems so that we do not repeat what happened with the Patriot missile defense system. Still, many will bring up our current experience with the Predator drone and argue that a large number of civilian casualties are due to this very human-machine interaction. While I do believe that this system can be refined, it would not make sense to completely automate it, especially when looking at the cases of fratricide that came as a result of the Patriot system.
Instead, we should refine the systems we already have so that we reduce civilian casualties. The problem with the Patriot defense system was that not only did the operators not fully understand the system, they also were not trained for the right roles. As Robert Hoffman notes, the change in mode also changed the role of Patriot crewmembers, demanding training they had not received.
When looking at the Predator drones and the role they have played in our military, one of the main concerns is civilian casualties. The Predator system was a success at first. Before the weaponization of drones, pilots simply needed to plug in coordinates and manage the takeoff and landing. In 2004, however, pilots no longer needed to know how to manage the takeoff and landing procedures, and by 2010 operators no longer needed to attend six months of training before becoming part of the flight crew. It was not until General Atomics started arming the drones with Hellfire missiles that the problems began to show.
The core problem with the Predator system after weaponization was the operators’ blind faith in automated systems. “After typing in waypoints for a mission, early Predator pilots only monitored the subsystems the aircraft needed to stay airborne” (Hoffman et al. 250). The pilots became so used to the previous system that when the new Predator and Reaper drones equipped with Hellfire missiles were added to their arsenal, crews initially struggled to accomplish their new missions (Hoffman et al.).
In addition, the Predator community found itself more effective when pilots were able to rely on social connections with other pilots. It is already known that automation does not necessarily reduce manpower requirements in either the quantity or quality of trained personnel needed to employ a weapon system effectively (Hoffman et al. 251). If this is the case, why not just have humans and machines work together? Many believe that they should. Autonomous weapons have the ability to identify targets and gain intelligence from incredible distances. As Hoffman and his colleagues argue, those who are most familiar with the combat environment, the warfighters themselves, should be intimately involved in the development of automated weapon systems (Hoffman et al.). There is no doubt that both the Predator drone and the Patriot missile system lived up to some of the promises of military automation. At the same time, however, there are still plenty of areas to refine when looking at these systems. If anything, these systems show us that autonomous weapons reduce neither manpower nor training, and they also fail to cut costs. On the contrary, autonomous weapons seem to increase the amount of training required for up-and-coming pilots and operators, while driving up the costs of these drones and systems at the same time (Hoffman et al. 251).
Expanding on this idea, it is worth looking at where the U.S. is heading in terms of AI and autonomous weapons. As of now, the Predator and Reaper drones are our most advanced automated weapons. So if our military continues to experiment with autonomous weapons, what will happen? Many influential people have signed a letter to the United Nations calling for a ban on autonomous weapons, and though this campaign has a great deal of momentum behind it, there is still a possibility that it will not accomplish anything. When looking at the legal framework
behind fully autonomous weapons, there would be no reason to treat them differently from any other means of war, mainly because these weapons would operate on their own through the use of artificial intelligence, simply with the objective of identifying and killing targets. Many will say that they deserve a different category that would set them apart from systems operated by humans. Even so, this alone will not lead to a full-on ban, simply because these systems are put together and developed by humans. As Jeroen van den Boogaard from the Amsterdam Center for International Law puts it, “Ultimately, however, even autonomous weapons systems are at some point initially activated by human operators. Therefore, the general rules of weapons law and specific targeting rules of IHL apply equally to autonomous weapons systems as they do to any other means of warfare” (Boogaard 8-9). Essentially, Boogaard is saying that there would be no reason for a ban, simply because these weapons are built for, and ultimately activated by, human beings.
Looking forward, there is no doubt that the military will try to remove the human element from these systems, because that is one of the original goals of developing autonomous weapons. For autonomous weapons to be banned or regulated, they must first be assessed based on the effect they have on human fighters. Because no such weapon has yet been developed, we can instead compare fully autonomous weapons with landmines. Landmines are similar because both are meant to play the same role in our military: to operate and kill without any further input from a human operator. However, the two are still not exactly the same. In the Yale Law Journal, John Lewis notes, “Unlike landmines, fully autonomous weapons will likely be subject to tracking and remote deactivation by design. The ‘temporal indiscriminateness’ of landmines, which can kill many years after they are placed, is therefore almost non-existent with respect to fully
autonomous weapons” (Lewis 1320). Anti-personnel landmines were designed to cause severe wounds to the limbs of the affected soldier. “These wounds often cause the soldiers to lose (parts of) their limbs. Such consequences contributed for some militaries to agree on the complete prohibition of antipersonnel landmines in those States” (Boogaard 10). This comparison gives us a relatively good idea of the role that autonomous weapons could play in our military. It also shows not only that autonomous weapons can be deadly, but also that we must wait to see what exactly these weapons can do before a ban or regulation appears.
What could possibly go wrong? This is a question many people ask, and when speaking in terms of autonomous weapons and the military, the answer is: a lot. When looking at fully autonomous weapons, there are many faults that can arise. First, we can look at ethical concerns. In a paper on the “Psychopathology of Intelligent Machines,” David Atkinson says, “Ethical reasoning may fail due to bounded rationality. Depending on the circumstances, knowledge and analysis of the situation and actors may not be sufficient to reason about the duty to ethical concerns” (Atkinson 5). Essentially, this means that, depending on the situation, AI-equipped autonomous weapons may not know what to do in certain scenarios unless actions have been previously programmed by a human operator. It is also true that creating an ethical code that is complete, unambiguous, and correctly applicable in every situation is notoriously difficult. The problem is that by removing human interaction, we are essentially removing ethics from the decision as well. This is why we cannot remove the human-machine interaction.
Another problem can be seen when looking at the learning, knowledge, and belief of
these systems. Atkinson says, “...The most glaring example of this type of fault mode is the
failure of truth maintenance, i.e., the failure to retract assertions previously thought to be true which are now rendered invalid by new information” (Atkinson 6). Essentially, this concerns how well a system and its users learn from their mistakes. Depending on how advanced the artificial intelligence involved is, these future systems could end up exactly like the Patriot system. This idea relates back to the argument for using autonomous weapons and human controllers in coordination. While the military would not be reducing costs or manpower, it would be making the system more reliable, allowing it to function in the way its designers hoped.
Finally, I believe it is important that we continue to research AI and the potential of autonomous weapons. Because no complete, fully autonomous weapons exist yet, it can be hard to know exactly what is in store for us and what their role in the military will be. This is exactly why we need to continue researching these weapons before actually using them, or calling for a ban. At this point, we have no idea what they are capable of.
Over the course of this paper, I have shown what fully autonomous weapons are, as well as their benefits and costs in the military. There are good points on both sides of this debate; however, many, including Stephen Hawking, Elon Musk, and Peter Norvig, call for a complete ban on autonomous weapons. I believe that we should ban the use of autonomous weapons in the military while also continuing the development of artificial intelligence. First, we should stop autonomous weapons because they are relatively new and we do not know what
they are capable of. Second, we do not want to get rid of human-machine interaction, especially given the cases of fratricide we have already seen without it.
Again, I want to emphasize why the U.S. and other countries should not proceed with fully autonomous weapons. One of my biggest concerns is that, in its current state, artificial intelligence is relatively new and still cannot tell a threat apart from a normal civilian. That being said, if a fully autonomous weapon senses a threat, no matter who the target is, the weapon will likely fire.
So what role should advanced artificial intelligence play in the United States military? As of now, I do not think it is smart to proceed with automated weapons, especially when looking at past examples like the Patriot system. Furthermore, autonomous systems lack feeling and emotion, meaning that no matter who the target is, if the system detects a threat, it will likely fire. I do not believe it is necessary to remove soldiers and drone pilots, because while civilian casualties are high, we should look to refine our current system rather than implement an entirely new one. Artificial intelligence is also relatively new, and not as advanced as it would need to be to manage these weapons by itself. There are still many uses for AI in our military, and if our aim is to keep soldiers and civilians safe, it is important that we research them first. If artificial intelligence truly can outperform humans in every scientific field, then creating medical technology and finding new ways to use AI in our military that limit lives lost and improve the safety of our countries should be our first approach to the ever-expanding world of artificial intelligence.
Works Cited:
Etzioni, Amitai, and Oren Etzioni. “Should Artificial Intelligence Be Regulated?” Issues in
Science & Technology, vol. 33, no. 4, Summer 2017, pp. 32–36.
Geist, Edward Moore. “It’s Already Too Late to Stop the AI Arms Race—We Must Manage It
Instead.” Bulletin of the Atomic Scientists, vol. 72, no. 5, Sept. 2016, pp. 318–21.
EBSCOhost, doi:10.1080/00963402.2016.1216672.
Hoffman, Robert R., et al. “The Myths and Costs of Autonomous Weapon Systems.” Bulletin of
the Atomic Scientists, vol. 72, no. 4, July 2016, pp. 247–55. EBSCOhost,
doi:10.1080/00963402.2016.1194619.
Lewis, John. “The Case for Regulating Fully Autonomous Weapons.” Yale Law Journal, vol. 124, no. 4, 2015, pp. 1309–25.
Omohundro, Steve. “Autonomous Technology and the Greater Human Good.” Journal of
Experimental & Theoretical Artificial Intelligence, vol. 26, no. 3, Sept. 2014, pp.
303–15.
Tate, Jitendra S., et al. “Military and National Security Implications of Nanotechnology.”
Journal of Technology Studies, vol. 41, no. 1, Spring 2015, pp. 20–28.
van den Boogaard, Jeroen. "Proportionality and Autonomous Weapons Systems." Journal of
International Humanitarian Legal Studies, vol. 6, no. 2, Oct. 2015, pp. 247-283.
EBSCOhost, doi:10.1163/18781527-00602007.