
Jeremy Burns

12 February 2020

Robert DeFrank

CC2 Section 11

AI and War

Brown, A. S. (2010). The Drone Warriors. Mechanical Engineering, 132(1), 22–27. https://doi.org/10.1115/1.2010-Jan-1

The U.S. is already making extensive use of robots and AI in its armed forces, most notably in bomb disposal, where machines are sent in to disarm explosives and reduce the risk to human life. AI soldiers and weapons, however, pose a larger risk, with the potential to ignite more conflict as war becomes cheaper, more effective, and less risky. At the same time, we may be hindering the military by basing decisions on the recommendations of an ultimately limited algorithmic process that cannot adapt on the fly to every situation. Robot- and drone-based weapons also have far deadlier potential because they lack a human soldier's instinct for self-preservation. While this removes emotional bias from tactical decision making in combat, it also means that robots can make moves humans would never make, with deadly results, and the only friendly casualty would be a cheap, replaceable machine.

Don’t Let Bots Pull the Trigger. (2019). Scientific American, 320(3), 6.

The author warns against the use of AI weapons, arguing that they should be treated like chemical, biological, and other outlawed weapons. Countries are developing automatic systems that identify and eliminate targets, but this kind of advancement poses real dangers. First and foremost is the empowerment of smaller rogue states and terrorist groups, who could cheaply bolster their ranks by exploiting the fact that deadly AI weapons can essentially be made with something as simple as a computer and a 3-D printer. The largest opposition comes from countries such as the U.S. and Russia, which are competing in a Cold War-style arms race to develop the best or most revolutionary killing machine.

Geist, E. M. (2016). It's already too late to stop the AI arms race—We must manage it instead. Bulletin of the Atomic Scientists, 72(5), 318–321. https://doi.org/10.1080/00963402.2016.1216672

Geist argues that although AI poses several looming dangers, an international ban is unlikely and an AI arms race is inevitable. He elaborates that the idea of autonomous weapons has been around since the Cold War, with talk of such weapons as unmanned nuclear aircraft. In reality, AI ended up being put to more practical use in systems and algorithms that create and refine war strategies while ignoring their potential ethical and political consequences.

Marks, P. (2006). Robot infantry get ready for the battlefield. New Scientist, 191(2570), 28. https://doi.org/10.1016/S0262-4079(06)60556-3

Marks claims that robots in combat roles on the battlefield are only a short way off, noting that scenarios from older films such as "RoboCop," which may once have seemed outlandish and far from reality, are now closer to becoming real than ever before. He elaborates that machines are already used defensively to great effect against improvised explosive devices, and that newer models equipped with weapons such as shotguns are being developed for combat, still under human supervision. The ultimate goal for the Office of Naval Research is recognition software effective and sophisticated enough to distinguish "cooperative and uncooperative people." Some are put off by this goal, but those developing the robot weapons are confident in their project.

Meuser, C. (2016). Franken Weapons Loom on the Horizon. U.S. Naval Institute Proceedings, 142(7), 28–33.

Meuser cautions against the creation of AI weaponry by relating it to the story of Frankenstein's monster. He warns that giving too much control or will to our weaponry can yield disastrous results. He notes that AI can be helpful, as some AI automation is already used on Navy ships to lighten the mental load of mundane tasks on sailors. The extreme is reached with the "Auto Special Doctrine," a system put in place to reduce the reaction time and human error involved in firing on a target. AI recognition is strong, but still prone to flaws.

Payne, K. (2018). AI, warbot. New Scientist, 239(3195), 40–43. https://doi.org/10.1016/S0262-4079(18)31663-4

Payne states that AI is a tool with potential for both peaceful and violent applications, which makes it much harder to regulate given the complex ethical implications of handing important decisions over to an inhuman machine. AI is used to gather information and carry out orders, which again raises the issue of whether AI should be responsible for picking targets and determining who lives and who dies. He notes that human ingenuity in strategy has led to victory throughout history, and argues that replicating strategic thinking in AI is extremely difficult because humans are generally able to anticipate and adapt to what their opponent is thinking. AI is smart and fast, but it does not have the same capacity for creativity as humans. He also warns that AI encourages aggressive behavior, since AI is extremely effective at doing whatever it takes to defeat the enemy, and with the decreased risk of casualties, war waged with AI becomes more appealing.

The New York Times. (2019, December 13). A.I. Is Making It Easier to Kill (You). Here's How. | NYT [Video]. YouTube. https://www.youtube.com/watch?v=GFD_Cgr2zho

The video explores the growing AI weapons industry and emphasizes the dangers of building newer, more efficient weapons from software already found in everyday devices, such as the facial recognition in phones or the search-history analysis that predicts what you will search for next. All of these elements are being built into cheap, easily mass-produced weapons that could streamline killing and severely reduce the risk to the attacker. The video also traces the development of new weapons technology back to the Civil War, when the Gatling gun was invented as a way to reduce the number of men needed on the battlefield and thereby save lives. This, of course, did not hold true: the Gatling gun led to the machine gun, and the invention's legacy became pain and death. The line between what is moral and what is legal is also explored through an anecdote about a child being used by terrorists; even though the human soldiers chose not to shoot the girl, a robot would likely not make the same choice, and it would have been legally valid in doing so.

Underwood, S. (2017). Potential and peril: The outlook for artificial intelligence-based autonomous weapons. Communications of the ACM, 60(6), 17–19. https://doi.org/10.1145/3077231


Underwood examines the debate over whether to restrict or ban robotic and autonomous weapons altogether. One of the major points of contention is whether human influence over autonomous weapons makes civilian casualties more or less likely. Organizations such as The Campaign to Stop Killer Robots have pushed to keep a human in the loop on every AI unit that is deployed. Despite the pushback, governments are still marching forward with research and development in the vast, morally gray world of artificial intelligence in order to strengthen their defensive and offensive capabilities. If a tragedy were to happen at the hands of a robot, however, pinning the blame on any one party would be extremely difficult.
