
ARMY OF NONE

Autonomous Weapons and the Future of War

PAUL SCHARRE


For Davey, William, and Ella,
that the world might be a better place.

And for Heather.
Thanks for everything.

Contents

INTRODUCTION
The Power Over Life and Death

PART I / ROBOPOCALYPSE NOW

1 THE COMING SWARM
The Military Robotics Revolution

2 THE TERMINATOR AND THE ROOMBA
What Is Autonomy?

3 MACHINES THAT KILL
What Is an Autonomous Weapon?

PART II / BUILDING THE TERMINATOR

4 THE FUTURE BEING BUILT TODAY
Autonomous Missiles, Drones, and Robot Swarms

5 INSIDE THE PUZZLE PALACE
Is the Pentagon Building Autonomous Weapons?

6 CROSSING THE THRESHOLD
Approving Autonomous Weapons

7 WORLD WAR R
Robotic Weapons around the World

8 GARAGE BOTS
DIY Killer Robots

PART III / RUNAWAY GUN

9 ROBOTS RUN AMOK
Failure in Autonomous Systems

10 COMMAND AND DECISION
Can Autonomous Weapons Be Used Safely?

11 BLACK BOX
The Weird, Alien World of Deep Neural Networks

12 FAILING DEADLY
The Risk of Autonomous Weapons

PART IV / FLASH WAR

13 BOT VS. BOT
An Arms Race in Speed

14 THE INVISIBLE WAR
Autonomy in Cyberspace

15 "SUMMONING THE DEMON"
The Rise of Intelligent Machines

PART V / THE FIGHT TO BAN AUTONOMOUS WEAPONS

16 ROBOTS ON TRIAL
Autonomous Weapons and the Laws of War

17 SOULLESS KILLERS
The Morality of Autonomous Weapons

18 PLAYING WITH FIRE
Autonomous Weapons and Stability

PART VI / AVERTING ARMAGEDDON: THE WEAPON OF POLICY

19 CENTAUR WARFIGHTERS
Humans + Machines

20 THE POPE AND THE CROSSBOW
The Mixed History of Arms Control

21 ARE AUTONOMOUS WEAPONS INEVITABLE?
The Search for Lethal Laws of Robotics

CONCLUSION
No Fate but What We Make

Notes
Acknowledgments
Abbreviations
Illustration Credits
Index

ARMY OF NONE

Introduction

THE POWER OVER LIFE AND DEATH

THE MAN WHO SAVED THE WORLD

On the night of September 26, 1983, the world almost ended.

It was the height of the Cold War, and each side bristled with nuclear weapons. Earlier that spring, President Reagan had announced the Strategic Defense Initiative, nicknamed "Star Wars," a planned missile defense shield that threatened to upend the Cold War's delicate balance. Just three weeks earlier, on September 1, the Soviet military had shot down a commercial airliner flying from Alaska to Seoul that had strayed into Soviet airspace. Two hundred and sixty-nine people had been killed, including an American congressman. Fearing retaliation, the Soviet Union was on alert.

The Soviet Union deployed a satellite early warning system called Oko to watch for U.S. missile launches. Just after midnight on September 26, the system issued a grave report: the United States had launched a nuclear missile at the Soviet Union.

Lieutenant Colonel Stanislav Petrov was on duty that night in bunker Serpukhov-15 outside Moscow, and it was his responsibility to report the missile launch up the chain of command to his superiors. In the bunker, sirens blared and a giant red backlit screen flashed "launch," warning him of the detected missile, but still Petrov was uncertain. Oko was new, and he worried that the launch might be an error, a bug in the system. He waited.

Another launch. Two missiles were inbound. Then another. And another. And another—five altogether. The screen flashing "launch" switched to "missile strike." The system reported the highest confidence level. There was no ambiguity: a nuclear strike was on its way. Soviet military command would have only minutes to decide what to do before the missiles would explode over Moscow.

Petrov had a funny feeling. Why would the United States launch only five missiles? It didn't make sense. A real surprise attack would be massive, an overwhelming strike to wipe out Soviet missiles on the ground. Petrov wasn't convinced the attack was real. But he wasn't certain it was a false alarm, either. With one eye on the computer readouts, Petrov called the ground-based radar operators for confirmation. If the missiles were real, they would show up on Soviet ground-based radars as they arced over the horizon. Puzzlingly, the ground radars detected nothing.

Petrov put the odds of the strike being real at 50/50, no easier to predict than a coin flip. He needed more information. He needed more time. All he had to do was pick up the phone, but the possible consequences were enormous. If he told Soviet command to fire nuclear missiles, millions would die. It could be the start of World War III.

Petrov went with his gut and called his superiors to inform them the system was malfunctioning. He was right: there was no attack. Sunlight reflecting off cloud tops had triggered a false alarm in Soviet satellites. The system was wrong. Humanity was saved from potential Armageddon by a human "in the loop."

What would a machine have done in Petrov's place? The answer is clear: the machine would have done whatever it was programmed to do, without ever understanding the consequences of its actions.

THE SNIPER'S CHOICE

In the spring of 2004—two decades later, in a different country, in a different war—I stared down the scope of my sniper rifle atop a mountain in Afghanistan. My sniper team had been sent to the Afghanistan-Pakistan border to scout infiltration routes where Taliban fighters were suspected of crossing back into Afghanistan. We hiked up the mountain all night, our 120-pound packs weighing heavily on the jagged and broken terrain. As the sky in the east began to lighten, we tucked ourselves in behind a rock outcropping—the best cover we could find. We hoped our position would conceal us at daybreak.

It didn't. A farmer spied our heads bobbing above the shallow rock outcropping as the village beneath us woke to start their day. We'd been spotted. Of course, that didn't change the mission. We kept watch, tallying the movement we could see up and down the road in the valley below. And we waited.

It wasn't long before we had company.

A young girl of maybe five or six headed out of the village and up our way, two goats in trail. Ostensibly she was just herding goats, but she walked a long slow loop around us, frequently glancing in our direction. It wasn't a very convincing ruse. She was spotting for Taliban fighters. We later realized that the chirping sound we'd heard as she circled us, which we took to be her whistling to her goats, was the chirp of a radio she was carrying. She slowly circled us, all the while reporting on our position. We watched her. She watched us.

She left, and the Taliban fighters came soon after.

We got the drop on them—we spotted them moving up a draw in the mountainside that they thought hid them from our position. The crackle of gunfire from the ensuing firefight brought the entire village out of their homes. It echoed across the valley floor and back, alerting everyone within a dozen miles to our presence. The Taliban who'd tried to sneak up on us had either run or were dead, but they would return in larger numbers. The crowd of villagers swelled below our position, and they didn't look friendly. If they decided to mob us, we wouldn't have been able to hold them all off.

"Scharre," my squad leader said. "Call for exfil."

I hopped on the radio. "This is Mike-One-Two-Romeo," I alerted our quick reaction force, "the village is massing on our position. We're going to need an exfil." Today's mission was over. We would regroup and move to a new, better position under cover of darkness that night.

Back in the shelter of the safe house, we discussed what we would do differently if faced with that situation again. Here's the thing: the laws of war don't set an age for combatants. Behavior determines whether or not a person is a combatant. If a person is participating in hostilities, as the young girl was doing by spotting for the enemy, then they are a lawful target for engagement. Killing a civilian who had stumbled across our position would have been a war crime, but it would have been legal to kill the girl.

Of course, it would have been wrong. Morally, if not legally.

In our discussion, no one needed to recite the laws of war or refer to abstract ethical principles. No one needed to appeal to empathy. The horrifying notion of shooting a child in that situation didn't even come up. We all knew it would have been wrong without needing to say it. War does force awful and difficult choices on soldiers, but this wasn't one of them.

Context is everything. What would a machine have done in our place? If it had been programmed to kill lawful enemy combatants, it would have attacked the little girl. Would a robot know when it is lawful to kill, but wrong?

THE DECISION

Life-and-death choices in war are not to be taken lightly, whether the stakes are millions of lives or the fate of a single child. Laws of war and rules of engagement frame the decisions soldiers face amid the confusion of combat, but sound judgment is often required to discern the right choice in any given situation.

Technology has brought us to a crucial threshold in humanity's relationship with war. In future wars, machines may make life-and-death engagement decisions all on their own. Militaries around the globe are racing to deploy robots at sea, on the ground, and in the air—more than ninety countries have drones patrolling the skies. These robots are increasingly autonomous and many are armed. They operate under human control for now, but what happens when a Predator drone has as much autonomy as a Google car? What authority should we give machines over the ultimate decision—life or death?

This is not science fiction. More than thirty nations already have defensive supervised autonomous weapons for situations in which the speed of engagements is too fast for humans to respond. These systems, used to defend ships and bases against saturation attacks from rockets and missiles, are supervised by humans who can intervene if necessary—but other weapons, like the Israeli Harpy drone, have already crossed the line to full autonomy. Unlike the Predator drone, which is controlled by a human, the Harpy can search a wide area for enemy radars and, once it finds one, destroy it without asking permission. It's been sold to a handful of countries and China has reverse engineered its own variant. Wider proliferation is a definite possibility, and the Harpy may only be the beginning. South Korea has deployed a robotic sentry gun to the demilitarized zone bordering North Korea. Israel has used armed ground robots to patrol its Gaza border. Russia is building a suite of armed ground robots for war on the plains of Europe. Sixteen nations already have armed drones, and another dozen or more are openly pursuing development.

These developments are part of a deeper technology trend: the rise of artificial intelligence (AI), which some have called the "next industrial revolution." Technology guru Kevin Kelly has compared AI to electricity: just as electricity brings objects all around us to life with power, so too will AI bring them to life with intelligence. AI enables more sophisticated and autonomous robots, from warehouse robots to next-generation drones, and can help process large amounts of data and make decisions to power Twitter bots, program subway repair schedules, and even make medical diagnoses. In war, AI systems can help humans make decisions—or they can be delegated authority to make decisions on their own.

The rise of artificial intelligence will transform warfare. In the early twentieth century, militaries harnessed the industrial revolution to bring tanks, aircraft, and machine guns to war, unleashing destruction on an unprecedented scale. Mechanization enabled the creation of machines that were physically stronger and faster than humans, at least for certain tasks. Similarly, the AI revolution is enabling the cognitization of machines, creating machines that are smarter and faster than humans for narrow tasks. Many military applications of AI are uncontroversial—improved logistics, cyberdefenses, and robots for medical evacuation, resupply, or surveillance—however, the introduction of AI into weapons raises challenging questions. Automation is already used for a variety of functions in weapons today, but in most cases it is still humans choosing the targets and pulling the trigger. Whether that will continue is unclear. Most countries have kept silent on their plans, but a few have signaled their intention to move full speed ahead on autonomy. Senior Russian military commanders envision that in the near future a "fully robotized unit will be created, capable of independently conducting military operations," while U.S. Department of Defense officials state that the option of deploying fully autonomous weapons should be "on the table."

BETTER THAN HUMAN?

Armed robots deciding who to kill might sound like a dystopian nightmare, but some argue autonomous weapons could make war more humane. The same kind of automation that allows self-driving cars to avoid pedestrians could also be used to avoid civilian casualties in war, and unlike human soldiers, machines never get angry or seek revenge. They never fatigue or tire. Airplane autopilots have dramatically improved safety for commercial airliners, saving countless lives. Could autonomy do the same for war?

New types of AI like deep learning neural networks have shown startling advances in visual object recognition, facial recognition, and sensing human emotions. It isn't hard to imagine future weapons that could outperform humans in discriminating between a person holding a rifle and one holding a rake. Yet computers still fall far short of humans in understanding context and interpreting meaning. AI programs today can identify objects in images, but can't draw these individual threads together to understand the big picture.

Some decisions in war are straightforward. Sometimes the enemy is easily identified and the shot is clear. Some decisions, however, like the one Stanislav Petrov faced, require understanding the broader context. Some situations, like the one my sniper team encountered, require moral judgment. Sometimes doing the right thing entails breaking the rules—what's legal and what's right aren't always the same.

THE DEBATE

Humanity faces a fundamental question: should machines be allowed to make life-and-death decisions in war? Should it be legal? Is it right?

I've been inside the debate on lethal autonomy since 2008. As a civilian policy analyst in the Pentagon's Office of the Secretary of Defense, I led the group that drafted the official U.S. policy on autonomy in weapons. (Spoiler alert: it doesn't ban them.) Since 2014, I've run the Ethical Autonomy Project at the Center for a New American Security, an independent bipartisan think tank in Washington, DC, during which I've met experts from a wide range of disciplines grappling with these questions: academics, lawyers, ethicists, psychologists, arms control activists, military professionals, and pacifists. I've peered behind the curtain of government projects and met with the engineers building the next generation of military robots.

This book will guide you on a journey through the rapidly evolving world of next-generation robotic weapons. I'll take you inside defense companies building intelligent missiles and research labs doing cutting-edge work on swarming. I'll introduce the government officials setting policy and the activists striving for a ban. This book will examine the past—including things that went wrong—and look to the future, as I meet with the researchers pushing the boundaries of artificial intelligence.

This book will explore what a future populated by autonomous weapons might look like. Automated stock trading has led to "flash crashes" on Wall Street. Could autonomous weapons lead to a "flash war"? New AI methods such as deep learning are powerful, but often lead to systems that are effectively a "black box"—even to their designers. What new challenges will advanced AI systems bring?

Over 3,000 robotics and artificial intelligence experts have called for a ban on offensive autonomous weapons, and are joined by over sixty nongovernmental organizations (NGOs) in the Campaign to Stop Killer Robots. Science and technology luminaries such as Stephen Hawking, Elon Musk, and Apple cofounder Steve Wozniak have spoken out against autonomous weapons, warning they could spark a "global AI arms race."

Can an arms race be prevented, or is one already under way? If it's already happening, can it be stopped? Humanity's track record for controlling dangerous technology is mixed; attempts to ban weapons that were seen as too dangerous or inhumane date back to antiquity. Many of these attempts have failed, including early-twentieth-century attempts to ban submarines and airplanes. Even those that have succeeded, such as the ban on chemical weapons, rarely stop rogue regimes such as Bashar al-Assad's Syria or Saddam Hussein's Iraq. If an international ban cannot stop the world's most odious regimes from building killer robot armies, we may someday face our darkest nightmares brought to life.

STUMBLING TOWARD THE ROBOPOCALYPSE

No nation has stated outright that they are building autonomous weapons, but in secret defense labs and dual-use commercial applications, AI technology is racing forward. For most applications, even armed robots, humans would remain in control of lethal decisions—but battlefield pressures could drive militaries to build autonomous weapons that take the human out of the loop. Militaries could desire greater autonomy to take advantage of computers' superior speed or so that robots can continue engagements when their communications to human controllers are jammed. Or militaries might build autonomous weapons simply because of a fear that others might do so. U.S. Deputy Secretary of Defense Bob Work has asked:

If our competitors go to Terminators and it turns out the Terminators are able to make decisions faster, even if they're bad, how would we respond?

Vice Chairman of the Joint Chiefs of Staff General Paul Selva has termed this dilemma "The Terminator Conundrum." The stakes are high: AI is emerging as a powerful technology. Used the right way, intelligent machines could save lives by making war more precise and humane. Used the wrong way, autonomous weapons could lead to more killing and even greater civilian casualties. Nations will not make these choices in a vacuum. It will depend on what other countries do, as well as on the collective choices of scientists, engineers, lawyers, human rights activists, and others participating in this debate. Artificial intelligence is coming and it will be used in war. How it is used, however, is an open question. In the words of John Connor, hero of the Terminator movies and leader of the human resistance against the machines, "The future's not set. There's no fate but what we make for ourselves." The fight to ban autonomous weapons cuts to the core of humanity's ages-old conflicted relationship with technology: do we control our creations or do they control us?

PART I

Robopocalypse Now

1

THE COMING SWARM

THE MILITARY ROBOTICS REVOLUTION

On a sunny afternoon in the hills of central California, a swarm takes flight. One by one, a launcher flings the slim Styrofoam-winged drones into the air. The drones let off a high-pitched buzz, which fades as they climb into the crystal blue California sky.

The drones carve the air with sharp, precise movements. I look at the drone pilot standing next to me and realize with some surprise that his hands aren't touching the controls; the drones are flying fully autonomously. It's a silly realization—after all, autonomous drone swarms are what I've come here to see—yet somehow the experience of watching the drones fly with such agility without any human controlling them is different than I'd imagined. Their nimble movements seem purposeful, and it's hard not to imbue them with intention. It's both impressive and discomfiting, this idea of the drones operating "off leash."

I've traveled to Camp Roberts, California, to see researchers from the Naval Postgraduate School investigate something no one else in the world has ever done before: swarm warfare. Unlike Predator drones, which are individually remotely piloted by human controllers on the ground, these researchers' drones are controlled en masse. Today's experiment will feature twenty drones flying simultaneously in a ten-against-ten swarm-on-swarm mock dogfight. The shooting is simulated, but the maneuvering and flying are all real.

Each drone comes off the launcher with its autopilot already switched on. Without any human direction, they climb to their assigned altitudes and form two teams, reporting back when they are "swarm ready." The Red and Blue swarms wait in their respective corners of the aerial combat arena, circling like a flock of hungry buzzards.

The pilot commanding Red Swarm rubs his hands together, anticipating the coming battle—which is funny, because his entire role is just to click the button that tells the swarm to start. After that, he's as much of a spectator as I am.

Duane Davis, the retired Navy helicopter pilot turned computer programmer who designed the swarm algorithms, counts down to the fight:

"Initiating swarm v. swarm: 3, 2, 1, shoot!"

Both the Red and Blue swarm commanders put their swarms into action. The two swarms close in on each other without hesitation. "Fight's on!" Duane yells enthusiastically. Within seconds, the swarms close the gap and collide. The two swarms blend together into a furball of close air combat. The swarms maneuver and swirl as a single mass. Simulated shots are tallied up at the bottom of the computer screen:

"UAV 74 fired at UAV 33"
"UAV 59 fired at UAV 25"
"UAV 33 hit"
"UAV 25 hit"

The swarms' behavior is driven by a simple algorithm called Greedy Shooter. Each drone will maneuver to get into a kill shot position against an enemy drone. A human must only choose the swarm behavior—wait, follow, attack, or land—and tell the swarm to start. After that, all of the swarm's actions are totally autonomous.

On the Red Swarm commander's computer screen, it's hard to tell who's winning. The drone icons overlap one another in a blur while, outside, the drones circle each other in a maelstrom of air combat. The whirling gyre looks like pure chaos to me, although Davis tells me he sometimes can pick out which drones are chasing each other.

Referee software called The Arbiter tracks the score. Red Swarm gains the upper hand with four kills to Blue's two. The "killed" drones' status switches from green to red as they're taken out of the fight. Then the fight falls into a lull, with the aircraft circling each other, unable to get a kill. Davis explains that because the aircraft are perfectly matched—same airframe, same flight controls, same algorithms—they sometimes fall into a stalemate where neither side can gain the upper hand.

Davis resets the battlefield for Round 2 and the swarms return to their respective corners. When the swarm commanders click go, the swarms close on each other once again. This time the battle comes out dead even, 3–3. In Round 3, Red pulls out a decisive win, 7–4. The Red Swarm commander is happy to take credit for the win. "I pushed the button," he says with a chuckle.

Just as robots are transforming industries—from self-driving cars to robot vacuum cleaners and caretakers for the elderly—they are also transforming war. Global spending on military robotics is estimated to reach $7.5 billion per year in 2018, with scores of countries expanding their arsenals of air, ground, and maritime robots.

Robots have many battlefield advantages over traditional human-inhabited vehicles. Unshackled from the physiological limits of humans, uninhabited ("unmanned") vehicles can be made smaller, lighter, faster, and more maneuverable. They can stay out on the battlefield far beyond the limits of human endurance, for weeks, months, or even years at a time without rest. They can take more risk, opening up tactical opportunities for dangerous or even suicidal missions without risking human lives.

However, robots have one major disadvantage. By removing the human from the vehicle, they lose the most advanced cognitive processor on the planet: the human brain. Most military robots today are remotely controlled, or teleoperated, by humans; they depend on fragile communication links that can be jammed or disrupted by environmental conditions. Without these communications, robots can only perform simple tasks, and their capacity for autonomous operation is limited.

The solution: more autonomy.

THE ACCIDENTAL REVOLUTION

No one planned on a robotics revolution, but the U.S. military stumbled into one as it deployed thousands of air and ground robots to meet urgent needs in Iraq and Afghanistan. By 2005, the U.S. Department of Defense (DoD) had woken up to the fact that something significant was happening. Spending on uninhabited aircraft, or drones, which had hovered around the $300 million per year mark in the 1990s, skyrocketed after 9/11, increasing sixfold to over $2 billion per year by 2005. Drones proved particularly valuable in the messy counterinsurgency wars in Iraq and Afghanistan. Larger aircraft like the MQ-1B Predator can quietly surveil terrorists around the clock, tracking their movements and unraveling their networks. Smaller hand-launched drones like the RQ-11 Raven can provide troops "over-the-hill reconnaissance" on demand while on patrol. Hundreds of drones had been deployed to Iraq and Afghanistan in short order.

Drones weren't new—they had been used in a limited fashion in Vietnam—but the overwhelming crush of demand for them was. While in later years drones would become associated with "drone strikes," it is their capacity for persistent surveillance, not dropping bombs, that makes them unique and valuable to the military. They give commanders a low-cost, low-risk way to put eyes in the sky. Soon, the Pentagon was pouring drones into the wars at a breakneck pace.

By 2011, annual spending on drones had swelled to over $6 billion per year, over twenty times pre-9/11 levels. DoD had over 7,000 drones in its fleet. The vast majority of them were smaller hand-launched models, but large aircraft like the MQ-9 Reaper and RQ-4 Global Hawk were also valuable military assets.

At the same time, DoD was discovering that robots weren't just valuable in the air. They were equally important, if not more so, on the ground. Driven in large part by the rise of improvised explosive devices (IEDs), DoD deployed over 6,000 ground robots to Iraq and Afghanistan. Small robots like the iRobot Packbot allowed troops to disable or destroy IEDs without putting themselves at risk. Bomb disposal is a great job for a robot.

THE MARCH TOWARD EVER-GREATER AUTONOMY

In 2005, after DoD started to come to grips with the robotics revolution and its implications for the future of conflict, it began publishing a series of "roadmaps" for future unmanned system investment. The first roadmap was focused on aircraft, but subsequent roadmaps in 2007, 2009, 2011, and 2013 included ground and maritime vehicles as well. While the lion's share of dollars has gone toward uninhabited aircraft, ground, sea surface, and undersea vehicles have valuable roles to play as well.

These roadmaps did more than simply catalog the investments DoD was making. Each roadmap looked forward twenty-five years into the future, outlining technology needs and wants in order to help inform future investments by government and industry. They covered sensors, communications, power, weapons, propulsion, and other key enabling technologies. Across all the roadmaps, autonomy is a dominant theme.

The 2011 roadmap perhaps summarized the vision best:

For unmanned systems to fully realize their potential, they must be able to achieve a highly autonomous state of behavior and be able to interact with their surroundings. This advancement will require an ability to understand and adapt to their environment, and an ability to collaborate with other autonomous systems.

Autonomy is the cognitive engine that powers robots. Without autonomy, robots are only empty vessels, brainless husks that depend on human controllers for direction. In Iraq and Afghanistan, the U.S. military operated in a relatively "permissive" electromagnetic environment where insurgents did not generally have the ability to jam communications with robot vehicles, but this will not always be the case in future conflicts. Major nation-state militaries will almost certainly have the ability to disrupt or deny communications networks, and the electromagnetic spectrum will be highly contested. The U.S. military has ways of communicating that are more resistant to jamming, but these methods are limited in range and bandwidth. Against a major military power, the type of drone operations the United States has conducted when going after terrorists—streaming high-definition, full-motion video back to stateside bases via satellites—will not be possible. In addition, some environments inherently make communications challenging, such as undersea, where radio wave propagation is hindered by water. In these situations, autonomy is a must if robotic systems are to be effective. As machine intelligence advances, militaries will be able to create ever more autonomous robots capable of carrying out more complex missions in more challenging environments independent from human control.

Even if communications links work perfectly, greater autonomy is also desirable because of the personnel costs of remotely controlling robots. Thousands of robots require thousands of people to control them, if each robot is remotely operated. Predator and Reaper drone operations require seven to ten pilots to staff one drone "orbit" of continuous around-the-clock coverage over an area. Another twenty people per orbit are required to operate the sensors on the drone, and scores of intelligence analysts are needed to sift through the sensor data. In fact, because of these substantial personnel requirements, the U.S. Air Force has a strong resistance to calling these aircraft "unmanned." There may not be anyone on board the aircraft, but there are still humans controlling it and supporting it.

Because the pilot remains on the ground, uninhabited aircraft free surveillance operations from the limits of human endurance—but only the physical ones. Drones can stay aloft for days at a time, far longer than a human pilot could remain effective sitting in the cockpit, but remote operation doesn't change the cognitive requirements on human operators. Humans still have to perform the same tasks; they just aren't physically on board the vehicle. The Air Force prefers the term "remotely piloted aircraft" because that's what today's drones are. Pilots still fly the aircraft via stick and rudder input, just remotely from the ground, sometimes even half a world away.

It's a cumbersome way to operate. Building tens of thousands of cheap robots is not a cost-effective strategy if they require even larger numbers of highly trained (and expensive) people to operate them.

Autonomy is the answer. The 2011 DoD roadmap stated:

Autonomy reduces the human workload required to operate systems, enables the optimization of the human role in the system, and allows human decision making to focus on points where it is most needed. These benefits can further result in manpower efficiencies and cost savings as well as greater speed in decision-making.

Many of DoD's robotic roadmaps point toward the long-term goal of full autonomy. The 2005 roadmap looked toward "fully autonomous swarms." The 2011 roadmap articulated an evolution of four levels of autonomy from (1) human operated to (2) human delegated, (3) human supervised, and eventually (4) fully autonomous. The benefits of greater autonomy were the "single greatest theme" in a 2010 report from the Air Force Office of the Chief Scientist on future technology.

Although Predator and Reaper drones are still flown manually, albeit remotely from the ground, other aircraft such as Air Force Global Hawk and Army Gray Eagle drones have much more automation: pilots direct these aircraft where to go and the aircraft flies itself. Rather than being flown via a stick and rudder, the aircraft are directed via keyboard and mouse. The Army doesn't even refer to the people controlling its aircraft as "pilots"—it calls them "operators." Even with this greater automation, however, these aircraft still require one human operator per aircraft for anything but the simplest missions.

Incrementally, engineers are adding to the set of tasks that uninhabited aircraft can perform on their own, moving step by step toward increasingly autonomous drones. In 2013, the U.S. Navy successfully landed its X-47B prototype drone on a carrier at sea, autonomously. The only human input was the order to land; the actual flying was done by software. In 2014, the Navy's Autonomous Aerial Cargo/Utility System (AACUS) helicopter autonomously scouted out an improvised landing area and executed a successful landing on its own. Then in 2015, the X-47B drone again made history by conducting the first autonomous aerial refueling, taking gas from another aircraft while in flight.

These are key milestones in building more fully combat-capable uninhabited aircraft. Just as autonomous cars will allow a vehicle to drive from point A to point B without manual human control, the ability to take off, land, navigate, and refuel autonomously will allow robots to perform tasks under human direction and supervision, but without humans controlling each movement. This can begin to break the paradigm of humans manually controlling the robot, shifting humans into a supervisory role. Humans will command the robot what action to take, and it will execute the task on its own.

Swarming, or cooperative autonomy, is the next step in this evolution. Davis is most excited about the nonmilitary applications of swarming, from search and rescue to agriculture. Coordinated robot behavior could be useful for a wide variety of applications and the Naval Postgraduate School's research is very basic, so the algorithms they're building could be used for many purposes. Still, the military advantages in mass, coordination, and speed are profound and hard to ignore. Swarming can allow militaries to field large numbers of assets on the battlefield with a small number of human controllers. Cooperative behavior can also allow quicker reaction times, so that the swarm can respond to changing events faster than would be possible with one person controlling each vehicle.

In conducting their swarm dogfight experiment, Davis and his colleagues are pushing the boundaries of autonomy. Their next goal is to work up to a hundred drones fighting in a fifty-on-fifty aerial swarm battle, something Davis and his colleagues are already simulating on computers, and their ultimate goal is to move beyond dogfighting to a more complex game akin to capture the flag. Two swarms would compete to score the most points by landing at the other's air base without being "shot down" first. Each swarm must balance defending its own base, shooting down enemy drones, and getting as many of its drones as possible into the enemy's base. What are the "plays" to run with a swarm? What are the best tactics? These are precisely the questions Davis and his colleagues want to explore.

"If I have fifty planes that are involved in a swarm," he said, "how much of that swarm do I want to be focused on offense—getting to the other guy's landing area? How much do I want focused on defending my landing space and doing the air-to-air problem? How do I want to do assignments of tasks between the swarms? If I've got the adversary's UAVs [unmanned aerial vehicles] coming in, how do I want my swarm deciding which UAV is going to take which adversary to try to stop them from getting to our base?"

Swarm tactics are still at a very early stage. Currently, the human operator allocates a certain number of drones to a sub-swarm, then tasks that sub-swarm with a mission, such as attempting to attack an enemy's base or attacking enemy aircraft. After that, the human is in a supervisory mode. Unless there is a safety concern, the human controller won't intervene to take control of an aircraft. Even then, if an aircraft began to experience a malfunction, it wouldn't make sense to take control of it until it left the swarm's vicinity. Taking manual control of an aircraft in the middle of the swarm could actually instigate a midair collision. It would be very difficult for a human to predict and avoid a collision with all of the other drones swirling in the sky. If the drone is under the swarm's command, however, it will automatically adjust its flight to avoid a collision.

Right now, the swarm behaviors Davis is using are very basic. The human can command the swarm to fly in a formation, to land, or to attack enemy aircraft. The drones then sort themselves into position for landing or formation flying to "deconflict" their actions. For some tasks, such as landing, this is done relatively easily by altitude: lower planes land first. Other tasks, such as deconflicting air-to-air combat, are trickier. It wouldn't do any good, for example, for all of the drones in the swarm to go after the same enemy aircraft. They need to coordinate their behavior.

The problem is analogous to that of outfielders calling a fly ball. It wouldn't make sense to have the manager calling who should catch the ball from the dugout. The outfielders need to coordinate among themselves. "It's one thing when you've got two humans that can talk to one another and one ball," Davis explained. "It's another thing when there's fifty humans and fifty balls." This task would be effectively impossible for humans, but a swarm can accomplish this very quickly, through a variety of methods. In centralized coordination, for example, individual swarm elements pass their data back to a single controller, which then issues commands to each robot in the swarm. Hierarchical coordination, on the other hand, decomposes the swarm into teams and squads much like a military organization, with orders flowing down the chain of command.

Consensus-based coordination is a decentralized approach where all of the swarm elements communicate with one another simultaneously and collectively decide on a course of action. They could do this by using "voting" or "auction" algorithms to coordinate behavior. For example, each swarm element could place a "bid" in an "auction" to catch the fly ball. The individual that bids highest "wins" the auction and catches the ball, while the others move out of the way.

Emergent coordination is the most decentralized approach and is how flocks of birds, colonies of insects, and mobs of people work, with coordinated action arising naturally from each individual making decisions based on those nearby. Simple rules for individual behavior can lead to very complex collective action, allowing the swarm to exhibit "collective intelligence." For example, a colony of ants will converge on an optimal route to take food back to the nest over time because of simple behavior from each individual ant. As ants pick up food, they leave a pheromone trail behind them as they move back to the nest. If they come across an existing trail with stronger pheromones, they'll switch to it. More ants will arrive back at the nest sooner via the faster route, leading to a stronger pheromone trail, which will then cause more ants to use that trail. No individual ant "knows" which trail is fastest, but collectively the colony converges on the fastest route.

Swarm Command-and-Control Models

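To make the consensus-based "auction" idea concrete, the following is a minimal sketch of a single-item auction of the kind the fly-ball example describes. It is an illustration only, not the Naval Postgraduate School's actual software; the bidding rule (closer agents bid higher) and all names, positions, and functions are assumptions made for the example.

```python
# Minimal sketch of consensus-style "auction" coordination (illustrative only).
# Each agent bids on a task from its own local view; the highest bidder "wins"
# the task while the others stand clear. Bidding by inverse distance is an
# assumed rule chosen just for this example.

import math

def bid(agent_pos, task_pos):
    """Higher bid = better suited. Here: closer agents bid more."""
    distance = math.dist(agent_pos, task_pos)
    return 1.0 / (1.0 + distance)

def run_auction(agents, task_pos):
    """Every agent submits a bid; the best bid wins the task."""
    bids = {name: bid(pos, task_pos) for name, pos in agents.items()}
    winner = max(bids, key=bids.get)
    return winner, bids

if __name__ == "__main__":
    # Three "outfielders" and one "fly ball" (positions are invented).
    outfielders = {"left": (0.0, 10.0), "center": (15.0, 25.0), "right": (30.0, 10.0)}
    ball = (18.0, 22.0)

    winner, bids = run_auction(outfielders, ball)
    for name, value in sorted(bids.items(), key=lambda kv: -kv[1]):
        print(f"{name:>6} bids {value:.3f}")
    print(f"'{winner}' wins the auction and makes the catch; the others move clear.")
```

Because each agent computes its bid from local information and only the bids need to be shared, a scheme like this can keep working even when communications between swarm members are degraded, which is part of its appeal.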

Communication among elements of the swarm can occur through direct signaling, akin to an outfielder yelling "I got it!"; indirect methods such as co-observation, which is how schools of fish and herds of animals stay together; or by modifying the environment in a process called stigmergy, like ants leaving pheromones to mark a trail.

The drones in Davis's swarm communicate through a central Wi-Fi router on the ground. They avoid collisions by staying within narrow altitude windows that are automatically assigned by the central ground controller. Their attack behavior is uncoordinated, though. The "greedy shooter" algorithm simply directs each drone to attack the nearest enemy drone, regardless of what the other drones are doing. In theory, all the drones could converge on the same enemy drone, leaving other enemies untouched. It's a terrible method for air-to-air combat, but Davis and his colleagues are still in the proof-of-concept stage. They have experimented with a more decentralized auction-based approach and found it to be very robust to disruptions, including up to a 90 percent communications loss within the swarm. As long as some communications are up, even if they're spotty, the swarm will converge on a solution.

The effect of fifty aircraft working together, rather than fighting individually or in wingman pairs as humans do today, would be tremendous. Coordinated behavior is the difference between a basketball team and five ball hogs all making a run at the basket themselves. It's the difference between a bunch of lone wolves and a wolf pack.
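The following is a minimal sketch of what an uncoordinated nearest-enemy rule like "greedy shooter" implies. It is illustrative only, not Davis's code; all positions, drone names, and helper functions are invented for the example.

```python
# Illustrative sketch of an uncoordinated "greedy shooter" style assignment:
# every drone independently chooses the nearest enemy, with no deconfliction,
# so several drones can end up converging on the same target.

import math

def greedy_assignments(friendlies, enemies):
    """Each friendly picks whichever enemy is currently closest to it."""
    assignments = {}
    for name, pos in friendlies.items():
        target = min(enemies, key=lambda e: math.dist(pos, enemies[e]))
        assignments[name] = target
    return assignments

if __name__ == "__main__":
    # Invented positions: three friendlies, two enemies, one enemy off to the side.
    blue = {"B1": (0, 0), "B2": (1, 1), "B3": (2, 0)}
    red = {"R1": (3, 1), "R2": (10, 10)}

    for shooter, target in greedy_assignments(blue, red).items():
        print(f"{shooter} attacks {target}")
    # All three Blue drones chase R1 while R2 is left untouched, which is why
    # the text calls this a poor method for air-to-air combat and why an
    # auction-style deconfliction, as sketched earlier, helps.
```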

In 2016, the United States demonstrated 103 aerial drones flying together in a swarm that DoD officials described as "a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature." (Not to be outdone, a few months later China demonstrated a 119-drone swarm.) Fighting together, a drone swarm could be far more effective than the same number of drones fighting individually. No one yet knows what the best tactics will be for swarm combat, but experiments such as these are working to tease them out. New tactics might even be evolved by the machines themselves through machine learning or evolutionary approaches.

Swarms aren't merely limited to the air. In August 2014, the U.S. Navy Office of Naval Research (ONR) demonstrated a swarm of small boats on the James River in Virginia by simulating a mock strait transit in which the boats protected a high-value Navy ship against possible threats, escorting it through a simulated high-danger area. When directed by a human controller to investigate a potential threat, a detachment of uninhabited boats moved to intercept and encircle the suspicious vessel. The human controller simply directed them to intercept the designated suspicious ship; the boats moved autonomously, coordinating their actions by sharing information. This demonstration involved five boats working together, but the concept could be scaled up to larger numbers, just as in aerial drone swarms.

Bob Brizzolara, who directed the Navy's demonstration, called the swarming boats a "game changer." It's an often-overused term, but in this case, it's not hyperbole—robotic boat swarms are highly valuable to the Navy as a potential way to guard against threats to its ships. In October 2000, the USS Cole was attacked by al-Qaida terrorists using a small explosive-laden boat while in port in Aden, Yemen. The blast killed seventeen sailors and cut a massive gash in the ship's hull. Similar attacks continue to be a threat to U.S. ships, not just from terrorists but also from Iran, which regularly uses small high-speed craft to harass U.S. ships near the Straits of Hormuz. Robot boats could intercept suspicious vessels further away, putting eyes (and potentially weapons) on potentially hostile boats without putting sailors at risk.

What the robot boats might do after they've intercepted a potentially hostile vessel is another matter. In a video released by the ONR, a .50 caliber machine gun is prominently displayed on the front of one of the boats. The video's narrator makes no bones about the fact that the robot boats could be used to "damage or destroy hostile vessels," but the demonstration didn't involve firing any actual bullets, and didn't include a consideration of what the rules of engagement actually would have been. Would a human be required to pull the trigger? When pressed by reporters following the demonstration, a spokesman for ONR explained that "there is always a human in the loop when it comes to the actual engagement of an enemy." But the spokesman also acknowledged that "under this swarming demonstration with multiple [unmanned surface vehicles], ONR did not study the specifics of how the human-in-the-loop works for rules of engagement."


OODA Loop

The Navy's fuzzy answer to such a fundamental question reflects a tension in the military's pursuit of more advanced robotics. Even as researchers and engineers move to incorporate more autonomy, there is an understanding that there are—or should be—limits on autonomy when it comes to the use of weapons. What exactly those limits are, however, is often unclear.

REACHING THE LIMIT

How much autonomy is too much? The U.S. Air Force laid out an ambitious vision for the future of robot aircraft in their Unmanned Aircraft Systems Flight Plan, 2009–2047. The report envisioned a future where an arms race in speed drove a desire for ever-faster automation, not unlike real-world competition in automated stock trading.

In air combat, pilots talk about an observe, orient, decide, act (OODA) loop, a cognitive process pilots go through when engaging enemy aircraft. Understanding the environment, deciding, and acting faster than the enemy allows a pilot to "get inside" the enemy's OODA loop. While the enemy is still trying to understand what's happening and decide on a course of action, the pilot has already changed the situation, resetting the enemy to square one and forcing him or her to come to grips with a new situation. Air Force strategist John Boyd, originator of the OODA loop, described the objective:

Goal: Collapse adversary's system into confusion and disorder by causing him to over and under react to activity that appears simultaneously menacing as well as ambiguous, chaotic, or misleading.

If victory comes from completing this cognitive process faster, then one can see the advantage in automation. The Air Force's 2009 Flight Plan saw tremendous potential for computers to exceed human decision-making speeds:

Advances in computing speeds and capacity will change how technology affects the OODA loop. Today the role of technology is changing from supporting to fully participating with humans in each step of the process. In 2047 technology will be able to reduce the time to complete the OODA loop to micro or nanoseconds. Much like a chess master can outperform proficient chess players, [unmanned aircraft systems] will be able to react at these speeds and therefore this loop moves toward becoming a "perceive and act" vector. Increasingly humans will no longer be "in the loop" but rather "on the loop"—monitoring the execution of certain decisions. Simultaneously, advances in AI will enable systems to make combat decisions and act within legal and policy constraints without necessarily requiring human input.

This, then, is the logical culmination of the arms race in speed: autonomous weapons that complete engagements all on their own. The Air Force Flight Plan acknowledged the gravity of what it was suggesting might be possible. The next paragraph continued:

Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. These include the appropriateness of machines having this ability, under what circumstances it should be employed, where responsibility for mistakes lies and what limitations should be placed upon the autonomy of such systems. Ethical discussions and policy decisions must take place in the near term in order to guide the development of future [unmanned aircraft system] capabilities, rather than allowing the development to take its own path apart from this critical guidance.

The Air Force wasn't recommending autonomous weapons. It wasn't even suggesting they were necessarily a good idea. What it was suggesting was that autonomous systems might have advantages over humans in speed, and that AI might advance to the point where machines could carry out lethal targeting and engagement decisions without human input. If that is true, then legal, ethical, and policy discussions should take place now to shape the development of this technology.

At the time the Air Force Flight Plan was released in 2009, I was working in the Office of the Secretary of Defense as a civilian policy analyst focusing on drone policy. Most of the issues we were grappling with at the time had to do with how to manage the overwhelming demand for more drones from Iraq and Afghanistan. Commanders on the ground had a seemingly insatiable appetite for drones. Despite the thousands that had been deployed, they wanted more, and Pentagon senior leaders—particularly in the Air Force—were concerned that spending on drones was crowding out other priorities. Secretary of Defense Robert Gates, who routinely chastised the Pentagon for its preoccupation with future wars over the ongoing ones in Iraq and Afghanistan, strongly sided with warfighters in the field. His guidance was clear: send more drones. Most of my time was spent figuring out how to force the Pentagon bureaucracy to comply with the secretary's direction and respond more effectively to warfighter needs, but when policy questions like this came up, eyes turned toward me.

I didn't have the answers they wanted. There was no policy on autonomy. Although the Air Force had asked for policy guidance in their 2009 Flight Plan, there wasn't even a conversation under way.

The 2011 DoD roadmap, which I was involved in writing, took a stab at an answer, even if it was a temporary one:

Policy guidelines will especially be necessary for autonomous systems that involve the application of force. For the foreseeable future, decisions over the use of force and the choice of which individual targets to engage with lethal force will be retained under human control in unmanned systems.

It didn't say much, but it was the first official DoD policy statement on lethal autonomy. Lethal force would remain under human control for the "foreseeable future." But in a world where AI technology is racing forward at a breakneck pace, how far into the future can we really see?

2

THE TERMINATOR AND THE ROOMBA

WHAT IS AUTONOMY?

Autonomy is a slippery word. For one person, "autonomous robot" might mean a household Roomba that vacuums your home while you're away. For another, autonomous robots conjure images from science fiction. Autonomous robots could be a good thing, like the friendly—if irritating—C-3PO from Star Wars, or could lead to rogue homicidal agents, like those Skynet deploys against humanity in the Terminator movies.

Science fiction writers have long grappled with questions of autonomy in robots. Isaac Asimov created the now-iconic Three Laws of Robotics to govern robots in his stories:

1 A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2 A robot must obey orders given by human beings except where such orders would conflict with the first law.

3 A robot must protect its own existence as long as such protection does not conflict with the first or second law.

In Asimov's stories, these laws embedded within the robot's "positronic brain" are inviolable. The robot must obey. Asimov's stories often explore the consequences of robots' strict obedience of these laws, and loopholes in the laws themselves. In the Asimov-inspired movie I, Robot (spoiler alert), the lead robot protagonist, Sonny, is given a secret secondary processor that allows him to override the Three Laws, if he desires. On the outside, Sonny looks the same as other robots, but the human characters can instantly tell there is something different about him. He dreams. He questions them. He engages in humanlike dialogue and critical thought of which the other robots are incapable. There is something unmistakably human about Sonny's behavior.

When Dr. Susan Calvin discovers the source of Sonny's apparent anomalous conduct, she finds it hidden in his chest cavity. The symbolism in the film is unmistakable: unlike other robots who are slaves to logic, Sonny has a "heart."

Fanciful as it may be, I, Robot's take on autonomy resonates. Unlike machines, humans have the ability to ignore instructions and make decisions for themselves. Whether robots can ever have something akin to human free will is a common theme in science fiction. In I, Robot's climactic scene, Sonny makes a choice to save Dr. Calvin, even though it means risking the success of their mission to defeat the evil AI V.I.K.I., who has taken over the city. It's a choice motivated by love, not logic. In the Terminator movies, when the military AI Skynet becomes self-aware, it makes a different choice. Upon determining that humans are a threat to its existence, Skynet decides to eliminate them, starting global nuclear war and initiating "Judgment Day."

THE THREE DIMENSIONS OF AUTONOMY

In the real world, machine autonomy doesn't require a magical spark of free will or a soul. Autonomy is simply the ability for a machine to perform a task or function on its own.

The DoD unmanned system roadmaps referred to "levels" or a "spectrum" of autonomy, but those classifications are overly simplistic. Autonomy encompasses three distinct concepts: the type of task the machine is performing; the relationship of the human to the machine when performing that task; and the sophistication of the machine's decision-making when performing the task. This means there are three different dimensions of autonomy. These dimensions are independent, and a machine can be "more autonomous" by increasing the amount of autonomy along any of these spectrums.

The first dimension of autonomy is the task being performed by the machine. Not all tasks are equal in their significance, complexity, and risk: a thermostat is an autonomous system in charge of regulating temperature, while Terminator's Skynet was given control over nuclear weapons. The complexity of decisions involved and the consequences if the machine fails to perform the task appropriately are very different. Often, a single machine will perform some tasks autonomously, while humans are in control of other tasks, blending human and machine control within the system. Modern automobiles have a range of autonomous features: automatic braking and collision avoidance, antilock brakes, automatic seat belt retractors, adaptive cruise control, automatic lane keeping, and self-parking. Some autonomous functions, such as autopilots in commercial airliners, can be turned on or off by a human user. Other autonomous functions, like airbags, are always ready and decide for themselves when to activate. Some autonomous systems may be designed to override the human user in certain situations. U.S. fighter aircraft have been modified with an automatic ground collision avoidance system (Auto-GCAS). If the pilot becomes disoriented and is about to crash, Auto-GCAS will take control of the aircraft at the last minute to pull up and avoid the ground. The system has already saved at least one aircraft in combat, rescuing a U.S. F-16 in Syria.

As automobiles and aircraft demonstrate, it is meaningless to refer to a system as "autonomous" without referring to the specific task that is being automated. Cars are still driven by humans (for now), but a host of autonomous functions can assist the driver, or even take control for short periods of time. The machine becomes "more autonomous" as it takes on more tasks, but some degree of human involvement and direction always exists. "Fully autonomous" self-driving cars can navigate and drive on their own, but a human is still choosing the destination.

For any given task, there are degrees of autonomy. A machine can perform a task in a semiautonomous, supervised autonomous, or fully autonomous manner. This is the second dimension of autonomy: the human-machine relationship.


Semiautonomous Operation (human in the loop)

In semiautonomous systems, the machine performs a task and then waits for a human user to take an action before continuing. A human is "in the loop." Autonomous systems go through a sense, decide, act loop similar to the military OODA loop, but in semiautonomous systems the loop is broken by a human. The system can sense the environment and recommend a course of action, but cannot carry out the action without human approval.


Supervised Autonomous Operation (human on the loop)

In supervised autonomous systems, the human sits "on" the loop. Once put into operation, the machine can sense, decide, and act on its own, but a human user can observe the machine's behavior and intervene to stop it, if desired.

Fully Autonomous Operation (human out of the loop)


Fully autonomous systems sense, decide, and act entirely without human intervention. Once the human activates the machine, it conducts the task without communication back to the human user. The human is "out of the loop."

Many machines can operate in different modes at different times. A Roomba that is vacuuming while you are home is operating in a supervised autonomous mode. If the Roomba becomes stuck—my Roomba frequently trapped itself in the bathroom—then you can intervene. If you're out of the house, then the Roomba is operating in a fully autonomous capacity. If something goes wrong, it's on its own until you come home. More often than I would have liked, I came home to a dirty house and a spotless bathroom.

It wasn't the Roomba's fault it had locked itself in the bathroom. It didn't even know that it was stuck (Roombas aren't very smart). It had simply wandered into a location where its aimless bumping would nudge the door closed, trapping it. Intelligence is the third dimension of autonomy. More sophisticated, or more intelligent, machines can be used to take on more complex tasks in more challenging environments. People often use terms like "automatic," "automated," or "autonomous" to refer to a spectrum of intelligence in machines.

Automatic systems are simple machines that don't exhibit much in the way of "decision-making." They sense the environment and act. The relationship between sensing and action is immediate and linear. It is also highly predictable to the human user. An old mechanical thermostat is an example of an automatic system. The user sets the desired temperature and when the temperature gets too high or too low, the thermostat activates the heat or air conditioning.

Automated systems are more complex, and may consider a range of inputs and weigh several variables before taking an action. Nevertheless, the internal cognitive processes of the machine are generally traceable by the human user, at least in principle. A modern digital programmable thermostat is an example of an automated system. Whether the heat or air conditioning turns on is a function of the house temperature as well as what day and time it is. Given knowledge of the inputs to the system and its programmed parameters, the system's behavior should be predictable to a trained user.


Spectrum of Intelligence in Machines
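As a rough illustration of this spectrum, here is a minimal sketch contrasting the two thermostats described above: the automatic one maps temperature directly to an action, while the automated one also consults a programmed day-and-time schedule. The setpoints, the weekday schedule, and the function names are invented for the example, not drawn from any real product.

```python
# Illustrative sketch of the "automatic" vs. "automated" thermostats described
# above. Setpoints and the weekday schedule are invented for illustration.

def automatic_thermostat(temp_f, setpoint_f=70.0, band_f=2.0):
    """Automatic: a direct, linear rule from sensing to action."""
    if temp_f < setpoint_f - band_f:
        return "heat on"
    if temp_f > setpoint_f + band_f:
        return "AC on"
    return "off"

def automated_thermostat(temp_f, day, hour):
    """Automated: the action also depends on a programmed day/time schedule."""
    workday = day in {"Mon", "Tue", "Wed", "Thu", "Fri"}
    at_home = not (workday and 9 <= hour < 17)
    setpoint = 70.0 if at_home else 78.0   # relax the setpoint when away
    return automatic_thermostat(temp_f, setpoint_f=setpoint)

if __name__ == "__main__":
    print(automatic_thermostat(66.0))                  # heat on
    print(automated_thermostat(77.0, "Tue", hour=11))  # off (relaxed away setpoint)
    print(automated_thermostat(77.0, "Sat", hour=11))  # AC on (home setpoint)
```

Both behaviors are fully traceable from their inputs and programmed parameters; a goal-oriented "autonomous" system, discussed next, is the one that resists being reduced to rules this simple.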

“Autonomous”isoftenusedtorefertosystemssophisticatedenoughthat their internal cognitive processes are less intelligible to the user, who understandsthetaskthesystemissupposedtoperform,butnotnecessarilyhow thesystemwillperformthattask.Researchersoftenrefertoautonomoussystems asbeing“goal-oriented.”Thatistosay,thehumanuserspecifiesthegoal,but theautonomoussystemhasflexibilityinhowitachievesthatgoal. Takeaself-drivingcar,forexample.Theuserspecifiesthedestinationand othergoals,suchasavoidingaccidents,butcan’tpossiblyspecifyinadvance everysingleactiontheautonomouscarissupposedtoperform.Theuserdoesn’t

know where there will be traffic or obstacles in the road, when lights will change, or what other cars or pedestrians will do. The car is therefore programmedwiththeflexibilitytodecidewhentostop,go,andchangelanesin ordertoaccomplishitsgoal:gettingtothedestinationsafely. Inpractice,thelinebetweenautomatic,automated,andautonomoussystems isstillblurry.Often,theterm“autonomous”isusedtorefertofuturesystems thathavenotyetbeenbuilt,butoncetheydoexist,peopledescribethosesame systemsas“automated.”Thisissimilartoatrendinartificialintelligencewhere AIisoftenperceivedtoencompassonlytasksthatmachinescannotyetdo.Once amachineconquersatask,thenitismerely“software.” Autonomydoesn’tmeanthesystemisexhibitingfreewillordisobeyingits programming.Thedifferenceisthatunlikeanautomaticsystemwherethereisa simple,linearconnectionfromsensingtoaction,autonomoussystemstakeinto accountarangeofvariablestoconsiderthebestactioninanygivensituation. Goal-oriented behavior is essential for autonomous systems in uncontrolled environments.Ifaself-drivingcarwereonaclosedtrackwithnopedestriansor othervehicles,eachmovementcouldbeprogrammedintothecarinadvance— whentogo,stop,turn,etc.Butsuchacarwouldnotbeveryuseful,asitcould onlydriveinasimpleenvironmentwhereeveryactioncouldbepredicted.In more complex environments or when performing more complex tasks, it is crucial that the machine be able to make decisions based on the specific situation. Thisgreatercomplexityinautonomoussystemsisadouble-edgedsword. Thedownsidetomoresophisticatedsystemsisthattheusermaynotbeableto predictitsspecificactionsinadvance.Thefeatureofincreasedautonomycan becomeaflawiftheuserissurprisedinanunpleasantwaybythemachine’s behavior.Forsimpleautomaticorautomatedsystems,thisislesslikely.Butas thecomplexityofthesystemincreases,sodoesthedifficultyofpredictinghow themachinewillact. Itcanbeexciting,ifalittlescary,tohandovercontroltoanautonomous system.Themachineislikeablackbox.Wespecifyitsgoaland,likemagic,the machineovercomesobstaclestoreachthegoal.Theinnerworkingsofhowitdid so are often mysterious to us; the distinction between “automated” and “autonomous”isprincipallyinthemindoftheuser.Anewmachineonlyfeels “autonomous” because we don’t yet have a good mental model for how it “thinks.”Aswegainexperiencewiththemachineandbegintobetterunderstand it, the layers of fog hiding the inner workings of the black box dissipate, revealingthecomplexlogicdrivingitsbehavior.Wecometodecidethemachine

ismerely“automated”afterall.Inunderstandingthemachine,wehavetamedit; thehumansarebackincontrol.Thatprocessofdiscovery,however,canbea rockyone. Afewyearsago,IpurchasedaNest“learningthermostat.”TheNesttracks yourbehaviorandadjuststhehouse’stemperatureasneeded,“learning”your preferencesovertime.TherewerebumpsalongthewayasIdiscoveredvarious aspectsoftheNest’sfunctionalityandoccasionallythehousewastemporarily toowarmortoocold,butIwassufficientlyenamoredofthetechnologythatI waswillingtopushthroughthesegrowingpains.Mywife,Heather,wasless tolerant of the Nest. Every time it changed the temperature on its own, disregarding an instruction she had given, she viewed it more and more suspiciously.(Unbeknownsttoher,theNestwasfollowingotherguidanceIhad givenitpreviously.) ThefinalstrawfortheNestwaswhenwecamehomefromsummervacation

tofindthehouseatoasty84degrees,despitemyhavinggoneonlinethenight

beforeandsettheNesttoacomfortable70.Withsweatdrippingoffourfaces,

we set our bags down in the foyer and I ran to the Nest to see what had happened.Asitturnedout,Ihadneglectedtoturnoffthe“auto-awayfeature.” AftertheNest’shallwaysensordetectednomovementanddiscernedwewere not home, it reverted—per its programming—to the energy-saving “away”

settingof84degrees.OnelookfromHeathertoldmeitwastoolate,though.She

hadlosttrustintheNest.(Or,moreaccurately,inmyabilitytouseit.) TheNestwasn’tbroken,though.Thehuman-machineconnectionwas.The same features that made the Nest “smarter” also made it harder for me to anticipateitsbehavior.Thedisconnectbetweenmyexpectationsofwhatthe Nestwoulddoandwhatitwasactuallydoingmeanttheautonomythatwas supposedtobeworkingformeendedup,moreoftenthannot,workingagainst mygoals.
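The disconnect is easy to reproduce in miniature. The sketch below is not the Nest's actual software, just a toy illustration (with invented function and parameter names) of how two individually sensible rules, a manual setpoint and an energy-saving auto-away mode, can combine into behavior the user never asked for:

```python
# Toy illustration only -- not the Nest's real logic. Two layered rules interact
# in a way the person who gave the last instruction may not anticipate.

def thermostat_setpoint(manual_setpoint_f, motion_detected, auto_away_enabled,
                        away_setpoint_f=84):
    """Return the temperature the thermostat will actually hold."""
    if auto_away_enabled and not motion_detected:
        # The energy-saving rule quietly overrides the manual instruction.
        return away_setpoint_f
    return manual_setpoint_f

# The owner sets 70 degrees remotely the night before flying home, but the
# hallway sensor has seen no movement for days and auto-away is still on.
print(thermostat_setpoint(70, motion_detected=False, auto_away_enabled=True))  # -> 84
```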

HOW MUCH SHOULD WE TRUST AUTONOMOUS SYSTEMS?

AlltheNestdidwascontrolthethermostat.TheRoombamerelyvacuumed.

CominghometoaRoombalockedinthebathroomoranoverheatedhouse

mightbeannoying,butitwasn’tacatastrophe.Thetasksentrustedtothese

autonomoussystemsweren’tcriticalones.

WhatifIwasdealingwithanautonomoussystemperformingatrulycritical

function?WhatiftheNestwasaweapon,andmyinabilitytounderstanditledto

failure?

WhatifthetaskIwasdelegatingtoanautonomoussystemwasthedecision

whetherornottokill?

3

MACHINES THAT KILL

WHAT IS AN AUTONOMOUS WEAPON?

Thepathtoautonomousweaponsbegan150yearsagointhemid-nineteenth

century. As the second industrial revolution was bringing unprecedented productivity to cities and factories, the same technology was bringing unprecedentedefficiencytokillinginwar.

AtthestartoftheAmericanCivilWarin1861,inventorRichardGatling

devisedanewweapontospeeduptheprocessoffiring:theGatlinggun.A forerunnerofthemodernmachinegun,theGatlinggunemployedautomation forloadingandfiring,allowingmorebulletstobefiredinashorteramountof time.TheGatlinggunwasasignificantimprovementoverCivilWar–erarifled muskets, which had to be loaded by hand through the muzzle in a lengthy process.Well-trainedtroopscouldfirethreeroundsperminutewitharifled

musket.TheGatlinggunfiredover300roundsperminute.

In its time, the Gatling gun was a marvel. Mark Twain was an early enthusiast:

[T]he Gatling gun is a cluster of six to ten savage tubes that carry great conical pellets of lead, with unerring accuracy, a distance of two and a half miles. It feeds itself with cartridges, and you work it with a crank like a hand organ; you can fire it faster than four men can count. When fired rapidly, the reports blend together like the clattering of a watchman's rattle. It can be discharged four hundred times a minute! I liked it very much.

The Gatling gun was not an autonomous weapon, but it began a long evolutionofweaponsautomation.IntheGatlinggun,theprocessofloading bullets,firing,andejectingcartridgeswasallautomatic,providedahumankept

turningthecrank. Theresultwas atremendousexpansion intheamountof destructivepowerunleashedonthebattlefield.Foursoldierswereneededto operatetheGatlinggun,butbydintofautomation,theycoulddeliverthesame lethalfirepowerasmorethanahundredmen. RichardGatling’smotivationwasnottoacceleratetheprocessofkilling,but to save lives by reducing the number of soldiers needed on the battlefield. Gatling built his device after watching waves of young men return home woundedordeadfromtheunrelentingbloodshedoftheAmericanCivilWar.In alettertoafriend,hewrote:

ItoccurredtomethatifIcouldinventamachine—agun—whichcouldbyitsrapidityoffire, enableonemantodoasmuchbattledutyasahundred,thatitwould,toagreatextent,supersede the necessity of large armies, and consequently, exposure to battle and disease be greatly diminished.

Gatlingwasanaccomplishedinventorwithmultiplepatentstohisnamefor agriculturalimplements.Hesawtheguninasimilarlight—machinetechnology harnessed to improve efficiency. Gatling claimed his gun “bears the same relationtootherfirearmsthatMcCormack’sreaperdoestothesickle,orthe sewingmachinetothecommonneedle.” Gatlingwasmorerightthanheknew.TheGatlinggundidindeedlaythe seedsforarevolutioninwarfare,abreakfromtheoldwaysofkillingpeopleone atatimewithrifledmusketsandshifttoaneweraofmechanizeddeath.The future Gatling wrought was not one of less bloodshed, however, but unimaginablymore.TheGatlinggunlaidthefoundationsforanewclassof machine:theautomaticweapon.

AUTOMATIC WEAPONS: MACHINE GUNS

Automaticweaponscameaboutincrementally,withinventorsbuildingonand refining the work of those who came before. The next tick in the gears of

progresscamein1883withtheinventionoftheMaximgun.UnliketheGatling

gun,whichrequiredahumantohand-cranktheguntopowerit,theMaximgun harnessedthephysicalenergyfromtherecoilofthegun’sfiringtopowerthe processofreloadingthenextround.Hand-crankingwasnolongerneeded,and oncefiringwasinitiated,theguncouldcontinuefiringonitsown.Themachine gunwasborn. The machine gun was a marvelous and terrible invention. Unlike semiautomaticweapons,whichrequiretheusertopullthetriggerforeachbullet,

automaticweaponswillcontinuefiringsolongasthetriggerremainshelddown.

Modernmachinegunscomeinallshapesandsizes,fromthesnub-nosedUzi

thatplainclothessecuritypersonnelcantuckundertheirsuitjacketstomassive

chaingunsthatrattleoffthousandsofroundsperminute.Regardlessoftheir

form,theirpowerispalpablewhenfiringone.

AsaRanger,IcarriedanM249SquadAutomaticWeapon,orSAW,asingle-

personlightmachineguncarriedininfantryfireteams.Weighingseventeen pounds without ammunition, the SAW is on the hefty side of what can be considered“handheld.”Withtraining,theSAWcanbefiredfromtheshoulder standingupinshortcontrolledbursts,butisbestusedlyingontheground.The SAWcomesequippedwithtwometalbipodlegsthatcanbeflippeddownto allowtheguntostandelevatedoffthedirt.Onedoesnotsimplylayonthe groundandfiretheSAW,however.TheSAWhastobemanaged;ithastobe controlled.Whenfired,theweaponbucksandmoveslikeawildanimalfromthe rapid-firerecoil.Atacyclicrateoffire,withthetriggerhelddown,theSAW

willfire800roundsperminute.That’sthirteenbulletsstreamingoutofthe

barrelpersecond.Atthatrateoffire,agunnerwillripthroughhisentirestashof ammunitioninundertwominutes.Thebarrelwilloverheatandbegintomelt. UsingtheSAWeffectivelyrequiresdiscipline.Thegunnermustleanintothe weapontocontrolit,puttinghisweightbehinditanddiggingthebipodlegsinto thedirttopintheweapondownasitisfired.Thegunnerfiresinshortburstsof fivetosevenroundsatatimetoconserveammunition,keeptheweaponon target,andpreventthebarrelfromoverheating.Underheavyfiring,theSAW’s barrelwillglowredhot—thebarrelmayneedtoberemovedandreplacedwitha sparebeforeitbeginstomelt.Theguncan’thandleitsownpower.

OntheotherendofthespectrumofinfantrymachinegunsistheM2.50

caliberheavymachinegun,the“madeuce.”Mountedonmilitarytrucks,the.50

calisthegunthatturnsasimpleoff-roadtruckintoapieceoflethalmachinery, the “gun truck.” At eighty pounds—plus a fifty-pound tripod—the gun is a behemoth.Tofireit,thegunnerleansbackintheturrettobracehimorherself andthumbsdownthetriggerwithbothhands.Thegununleashesapowerful THUKTHUKTHUKastheroundsexit.Thehalfinch–widebulletscansail overamile.

Machinegunschangedwarfareforever.Inthelate1800s,theBritishArmy

usedtheMaximguntoaidintheircolonialconquestofAfrica,allowingthemto

takeonanddefeatmuchlargerforces.Foratime,totheBritishatleast,machine

gunsmighthaveseemedlikeaweaponthatlessenedthecostofwar.InWorld

WarI,however,bothsideshadmachinegunsandtheresultwasbloodshedonan

unprecedentedscale.AttheBattleoftheSomme,Britainlost20,000menina

singleday,moweddownbyautomaticweapons.Millionsdiedinthetrenchesof WorldWarI,anentiregenerationofyoungmen. Machinegunsacceleratedtheprocessofkillingbyharnessingindustrialage efficiencyintheserviceofwar.Menweren’tmerelykilledbymachineguns; theyweremowed down,likeMcCormack’s mechanicalreapercutting down stalksofgrain.Machinegunsaredumbweapons,however.Theystillhavetobe aimedbytheuser.Onceinitiated,theycancontinuefiringontheirown,butthe gunshavenoabilitytosensetargets.Inthetwentiethcentury,weaponsdesigners wouldtakethenextsteptoaddrudimentarysensingtechnologiesintoweapons —theinitialstagesofintelligence.

THE FIRST "SMART" WEAPONS

Fromthefirsttimeahumanthrewarockinangeruntilthetwentiethcentury, warfarewasfoughtwithunguidedweapons.Projectiles—whethershotfroma sling,abow,oracannon—followthelawsofgravityoncereleased.Projectiles areofteninaccurate,andthedegreeofinaccuracyincreaseswithrange.With unguidedweapons,destroyingtheenemyhingedongettingcloseenoughto deliveroverwhelmingbarragesoffiretoblanketanarea. In World War II, as rockets, missiles, and bombs increased the range at whichcombatantscouldtargetoneanother—butnottheiraccuracy—militaries soughttodevelopmethodsforprecisionguidancethatwouldallowweaponsto accuratelystriketargetsfromlongdistances.Someattemptstoinsertintelligence intoweaponswereseeminglycomical,suchasbehavioristB.F.Skinner’sefforts to control a bomb by the pecking of a pigeon on a target image. Skinner’s pigeon-guided bomb might have worked, but it never saw combat. Other attemptstoimplementonboardguidancemeasuresdid,givingbirthtothefirst “smart”weapons:precision-guidedmunitions(PGMs).

ThefirstsuccessfulPGMwastheGermanG7e/T4Falke(“Falcon”)torpedo,

introduced in 1943. The Falcon torpedo incorporated a new technological innovation:anacoustichomingseeker.Unlikeregulartorpedoesthattraveledin astraightlineandcouldverywellmissapassingship,theFalconusedits

homingseekertoaccountforaimingerrors.Aftertraveling400metersfromthe

German U-boat (submarine) that launched it, the Falcon would activate its passiveacousticsensors,listeningforanynearbymerchantships.Itwouldthen steertowardanyships,detonatingonceitreachedthem.

TheFalconwasusedbyonlythreeU-boatsincombatbeforebeingreplaced

bytheupgradedG7es/T5Zaunkönig(“Wren”),whichhadafastermotorand

therefore could hit faster moving Allied navy ships in addition to merchant vessels.Usingatorpedothatcouldhomeinontargetsratherthantravelina straight line had clear military advantages, but it also immediately created

complications.TwoU-boatsweresunkinDecember1943(U-972)andJanuary

1944(U-377)whentheirtorpedoescircledbackonthem,hominginonthe

soundoftheirownpropeller.Inresponsetothisproblem,Germanyinstituteda

400-metersafetylimitbeforeactivatingthehomingmechanism.Tomorefully

mitigate against the dangers of a homing torpedo turning back on oneself, GermanU-boatsalsobeganincorporatingatacticofdivingimmediatelyafter launchandthengoingcompletelysilent. TheAlliesquicklydevelopedacountermeasuretotheWrentorpedo.The Foxer,anacousticdecoytowedbehindAlliedships,wasintendedtolureaway theWrensothatitdetonatedharmlesslyagainstthedecoy,nottheshipitself. TheFoxerintroducedotherproblems;itloudlybroadcasttheAlliedconvoy’s position to other nearby U-boats, and it wasn’t long before the Germans introducedtheWrenIIwithanimprovedacousticseeker.Thusbeganthearms raceinsmartweaponsandcountermeasuresagainstthem.

PRECISION-GUIDED MUNITIONS

ThelatterhalfofthetwentiethcenturysawtheexpansionofPGMslikethe Wrenintosea,air,andgroundcombat.Today,theyarewidelyusedbymilitaries aroundtheworldinavarietyofforms.Sometimescalled“smartmissiles”or “smartbombs,”PGMsuseautomationtocorrectforaimingerrorsandhelp guide the munition (missile, bomb, or torpedo) onto the intended target. Dependingontheirguidancemechanism,PGMscanhavevaryingdegreesof autonomy. Someguidedmunitionshaveverylittleautonomyatall,withthehuman controllingtheaimpointoftheweaponthroughoutitsflight.Command-guided weaponsaremanuallycontrolledbyahumanremotelyviaawireorradiolink. Forotherweapons,ahumanoperator“paints”thetargetwithalaserorradarand themissileorbombhomesinonthelaserorradarreflection.Inthesecases,the humandoesn’tdirectlycontrolthemovementsofthemunition,butdoescontrol theweapon’saimpointinrealtime.Thisallowsthehumancontrollertoredirect themunitioninflightorpotentiallyaborttheattack.

OtherPGMsare“autonomous”inthesensethattheycannotberecalledonce

launched,butthemunition’sflightpathandtargetarepredetermined.These

munitionscanuseavarietyofguidancemechanisms.Nuclear-tippedballistic

missiles use inertial navigation systems consisting of gyroscopes and

accelerometerstoguidethemissiletoitspreselectedtargetpoint.Submarine-

launchednuclearballisticmissilesusestar-trackingcelestialnavigationsystems toorientthemissile,sincetheundersealaunchingpointvaries.Manycruise missileslookdowntoearthratherthanuptothestarsfornavigation,usingradar ordigitalscenemappingtofollowthecontoursoftheEarthtotheirpreselected target.GPS-guidedweaponsrelyonsignalsfromtheconstellationofU.S.global positioningsystemsatellitestodeterminetheirpositionandguidancetotheir target.Whilemanyofthesemunitionscannotberecalledorredirectedafter launch,themunitionsdonothaveanyfreedomtoselecttheirowntargetsor eventheirownnavigationalroute.Intermsofthetasktheyareperforming,they haveverylittleautonomy,eveniftheyarebeyondhumancontroloncelaunched. Theirmovementsareentirelypredetermined.Theguidancesystems,whether internalsuchasinertialnavigationorexternalsuchasGPS,areonlydesignedto ensurethemunitionstaysonpathtoitspreprogrammedtarget.Thelimitationof these guidance systems, however, is that they are only useful against fixed targets. HomingweaponsareatypeofPGMusedtotrackontomovingtargets.By necessitysincethetargetismoving,homingmunitionshavetheabilitytosense the target and adapt to its movements. Some homing munitions use passive sensors to detect their targets, as the Wren did. Passive sensors listen to or observe the environment and wait for the target to indicate its position by makingnoiseoremittingintheelectromagneticspectrum.Activeseekerssend out signals, such as radar, to sense a target. An early U.S. active homing munitionwastheBatanti-shipglidebomb,whichhadanactiveradarseekerto targetenemyships. Somehomingmunitions“lock”ontoatarget,theirseekersensingthetarget beforelaunch.Othermunitions“lockon”afterlaunch;theyarelaunchedwith theseekerturnedoff,thenitactivatestobeginlookingforthemovingtarget. Anattackdogisagoodmetaphorforafire-and-forgethomingmunition.

U.S.pilotsrefertothetacticoflaunchingtheAIM-120AMRAAMair-to-air

missilein“lockonafterlaunch”modeasgoing“maddog.”Aftertheweaponis

released,itturnsonitsactiveradarseekerandbeginslookingfortargets.Likea

maddoginameatlocker,itwillgoafterthefirsttargetitsees.Similartothe

problemGermanU-boatsfacedwiththeWren,pilotsneedtotakecaretoensure

thatthemissiledoesn’ttrackontofriendlytargets.Militariesaroundtheworld

oftenusetactics,techniques,andprocedures(“TTPs”inmilitaryparlance)to

avoidhomingmunitionsturningbackonthemselvesorotherfriendlies,suchas

theU-boattacticofdivingimmediatelyafterfiring.

HOMING MUNITIONS HAVE LIMITED AUTONOMY

Homing munitions have some autonomy, but they are not “autonomous weapons”—ahumanstilldecideswhichspecifictargettoattack.It’struethat manyhomingmunitionsare“fireandforget.”Oncelaunched,theycannotbe recalled.Butthisishardlyanewdevelopmentinwar.Projectileshavealways been“fireandforget”sincetheslingandstone.Rocks,arrows,andbulletscan’t berecalledafterbeingreleasedeither.Whatmakeshomingmunitionsdifferent istheirrudimentaryonboardintelligencetoguidetheirbehavior.Theycansense theenvironment(thetarget),determinetherightcourseofaction(whichwayto turn),andthenact(maneuveringtohitthetarget).Theyare,inessence,asimple robot. Theautonomygiventoahomingmunitionistightlyconstrained,however. Homingmunitionsaren’tdesignedtosearchforandhuntpotentialtargetson theirown.Themunitionsimplyusesautomationtoensureithitsthespecific targetthehumanintended.Theyarelikeanattackdogsentbypolicetorun downasuspect,notlikeawilddogroamingthestreetsdecidingonitsown whomtoattack. In some cases, automation is used to ensure the munition does not hit unintendedtargets.TheHarpoonanti-shipmissilehasamodewheretheseeker staysoffwhilethemissileusesinertialnavigationtoflyazigzagpatterntoward thetarget.Then,atthedesignatedlocation,theseekeractivatestosearchforthe intendedtarget.Thisallowsthemissiletoflypastothershipsintheenvironment withoutengagingthem.Becausetheautonomyofhomingmunitionsistightly constrained, the human operator needs to be aware of a specific target in advance.Theremustbesomekindofintelligenceinformingthehumanofthat particulartargetatthatspecifictimeandplace.Thisintelligencecouldcome from radars based on ships or aircraft, a ping on a submarine’s sonar, informationfromsatellites,orsomeotherindicator.Homingmunitionshavea verylimitedabilityintimeandspacetosearchfortargets,andtolaunchone withoutknowledgeofaspecifictargetwouldbeawaste.Thismeanshoming munitionsmustoperateaspartofabroaderweaponsystemtobeuseful.
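The sense-decide-act cycle of a homing munition can be sketched in a few lines. The code below is a deliberately simplified, notional illustration (the function names and numbers are invented, not any real missile's software) of a Harpoon-style profile: fly preset waypoints with the seeker off, switch the seeker on at a designated point, then home on whatever contact it reports.

```python
import math

# Notional sense-decide-act loop for a homing munition. Illustrative only:
# positions are 2-D points, "flight" is one unit of distance per tick, and the
# seeker is just a function that may or may not report a contact.

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def homing_run(start, waypoints, activation_point, seeker, step=1.0):
    """Fly preset waypoints with the seeker off, then activate it and home."""
    pos = list(start)
    route = list(waypoints) + [activation_point]
    seeker_on = False
    for _ in range(10000):                       # bounded flight time ("fuel")
        if not seeker_on:
            target = route[0]
            if distance(pos, target) < step:     # reached this waypoint
                route.pop(0)
                if not route:                    # at the activation point
                    seeker_on = True
                continue
        else:
            contact = seeker(tuple(pos))         # SENSE: what does the seeker hear?
            if contact is None:
                return "no target found -- munition wasted"
            target = contact                     # DECIDE: home on that contact
            if distance(pos, target) < step:
                return "impact at {}".format(target)   # ACT: terminal engagement
        # ACT: fly one step toward the current aim point
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        d = distance(pos, target) or 1.0
        pos[0] += step * dx / d
        pos[1] += step * dy / d
    return "flight time expired"

# Example: a seeker that "hears" a ship near (50, 52) once it is switched on.
print(homing_run((0, 0), [(20, 0), (40, 10)], (48, 48), lambda pos: (50.0, 52.0)))
```

The point of the sketch is how narrow the munition's freedom is: everything up to the activation point is scripted by the human, and the seeker only refines the endgame against a target the human already expected to be there.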

THE WEAPON SYSTEM

Aweaponsystemconsistsofasensortosearchforanddetectenemytargets,a decision-making element that decides whether to engage the target, and a munition(orothereffector,suchasalaser)thatengagesthetarget.Sometimes theweaponsystemiscontainedonasingleplatform,suchasanaircraft.Inthe caseofanAdvancedMedium-RangeAir-to-AirMissile(orAMRAAM),for example,theweaponsystemconsistsoftheaircraft,radar,pilot,andmissile. The radar searches for and senses the target, the human decides whether to engage,andthemissilecarriesouttheengagement.Alloftheseelementsare necessaryfortheengagementtowork.

Weapon System OODA Loop

Inothercases,componentsoftheweaponsystemmaybedistributedacross multiplephysicalplatforms.Forexample,amaritimepatrolaircraftmightdetect an enemy ship and pass the location data to a nearby friendly ship, which launchesamissile.Defensestrategistsrefertothislarger,distributedsystem with multiple components as a battlenetwork. Defense analyst Barry Watts described the essential role battle networks play in making precision-guided weaponseffective:

Because "precision munitions" require detailed data on their intended targets or aim-points to be militarily useful—as opposed to wasteful—they require "precision information." Indeed, the tight linkage between guided munitions and "battle networks," whose primary reason for existence is to provide the necessary targeting information, was one of the major lessons that emerged from careful study of the US-led air campaign during Operation Desert Storm in 1991. . . . [It] is guided munitions together with the targeting networks that make these munitions "smart." [emphasis in the original]

Automationisusedformanyengagement-relatedtasksinweaponsystemsand battlenetworks:finding,identifying,tracking,andprioritizingpotentialtargets; timingwhentofire;andmaneuveringmunitionstothetarget.Formostweapon systemsinusetoday,ahumanmakesthedecisionwhethertoengagethetarget. If there is a human in the loop deciding which target(s) to engage, it is a semiautonomousweaponsystem.


In autonomous weapon systems, the entire engagement loop—searching, detecting,decidingtoengage,andengaging—isautomated.(Foreaseofuse,I’ll often shorten “autonomous weapon system” to “autonomous weapon.” The termsshouldbetreatedassynonymous,withtheunderstandingthat“weapon” refers to the entire system: sensor, decision-making element, and munition.) Mostweaponsystemsinusetodayaresemiautonomous,butafewcrosstheline toautonomousweapons.

SUPERVISED AUTONOMOUS WEAPON SYSTEMS

Becausehomingmunitionscanpreciselytargetships,bases,andvehicles,they

canoverwhelmdefendersthroughsaturationattackswithwaves,or“salvos”of

missiles.Inaneraofunguided(“dumb”)munitions,defenderscouldsimplyride

outanenemybarrage,trustingthatmostoftheincomingroundswouldmiss.

Withprecision-guided(“smart”)weapons,however,thedefendermustfinda

waytoactivelyinterceptanddefeatincomingmunitionsbeforetheyimpact.

Moreautomation—thistimefordefensivepurposes—isthelogicalresponse. At least thirty nations currently employ supervised autonomous weapon systemsofvarioustypestodefendships,vehicles,andbasesfromattack.Once placedinautomaticmodeandactivated,thesesystemswillengageincoming rockets,missiles,ormortarsallontheirownwithoutfurtherhumanintervention. Humansareontheloop,however,supervisingtheiroperationinrealtime.

Supervised Autonomous Weapon System (human on the loop)

Thesesupervisedautonomousweaponsarenecessaryforcircumstancesin

whichthespeedofengagementscouldoverwhelmhumanoperators.Likeinthe

AtarigameMissileCommand,saturationattacksfromsalvosofsimultaneous

incomingthreatscouldoverwhelmhumanoperators.Automateddefensesarea

vital part of surviving attacks from precision-guided weapons. They include

ship-baseddefenses,suchastheU.S.AegiscombatsystemandPhalanxClose-

InWeaponSystem(CIWS);land-basedairandmissiledefensesystems,suchas theU.S.Patriot;counter-rocket,artillery,andmortarsystemssuchastheGerman MANTIS;andactiveprotectionsystemsforgroundvehicles,suchastheIsraeli TrophyorRussianArenasystem. Whiletheseweaponsystemsareusedforavarietyofdifferentsituations—to defendships,landbases,andgroundvehicles—theyoperateinsimilarways. Humanssettheparametersoftheweapon,establishingwhichthreatsthesystem shouldtargetandwhichitshouldignore.Dependingonthesystem,different rules may be used for threats coming from different directions, angles, and speeds.Somesystemsmayhavemultiplemodesofoperation,allowinghuman in-the-loop(semiautonomous)oron-the-loop(supervisedautonomous)control.

Theseautomateddefensivesystemsareautonomousweapons,buttheyhave been used to date in very narrow ways—for immediate defense of human- occupied vehicles and bases, and generally targeting objects (like missiles, rockets,oraircraft),notpeople.Humanssupervisetheiroperationinrealtime andcanintervene,ifnecessary.Andthehumanssupervisingthesystemare physicallycolocatedwithit,whichmeansinprincipletheycouldphysically disableitifthesystemstoppedrespondingtotheircommands.
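Stripped of the hardware, the difference between these categories is simply who makes, or can veto, the engage decision. The sketch below is a schematic, not any fielded system's control logic; the mode names and stand-in functions are invented for illustration.

```python
from enum import Enum

class Mode(Enum):
    SEMIAUTONOMOUS = "human in the loop"      # human must approve each engagement
    SUPERVISED = "human on the loop"          # machine engages; human can intervene
    FULLY_AUTONOMOUS = "human out of the loop"

def engagement_cycle(mode, detected_targets, human_approves, human_vetoes):
    """One pass through search -> detect -> decide -> engage.

    `human_approves` and `human_vetoes` stand in for a human operator's
    real-time judgment; everything else in the loop is the machine.
    """
    engaged = []
    for target in detected_targets:               # search and detect: automated
        if mode is Mode.SEMIAUTONOMOUS:
            if human_approves(target):            # decide: the human
                engaged.append(target)
        elif mode is Mode.SUPERVISED:
            if not human_vetoes(target):          # decide: the machine, human may halt
                engaged.append(target)
        else:                                     # FULLY_AUTONOMOUS
            engaged.append(target)                # decide: the machine alone
    return engaged

# Three incoming tracks; an operator who approves only clearly hostile ones.
tracks = ["incoming missile", "ambiguous contact", "incoming mortar"]
approve = lambda t: t.startswith("incoming")
veto = lambda t: "ambiguous" in t
print(engagement_cycle(Mode.SEMIAUTONOMOUS, tracks, approve, veto))
print(engagement_cycle(Mode.FULLY_AUTONOMOUS, tracks, approve, veto))
```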

FULLY AUTONOMOUS WEAPON SYSTEMS

Doanynationshavefullyautonomousweaponsthatoperatewithnohuman supervision?Generallyspeaking,fullyautonomousweaponsarenotinwideuse, butthereareafewselectsystemsthatcrosstheline.Theseweaponscansearch for, decide to engage, and engage targets on their own and no human can intervene.Loiteringmunitionsareoneexample. Loitering munitions can circle overhead for extended periods of time, searchingforpotentialtargetsoverawideareaand,oncetheyfindone,destroy it. Unlike homing munitions, loitering munitions do not require precise intelligence on enemy targets before launch. Thus, a loitering munition is a complete “weapon system” all on its own. Ahuman can launch a loitering munitionintoa“box”tosearchforenemytargetswithoutknowledgeofany specifictargetsbeforehand.Someloiteringmunitionskeephumansintheloop via a radio connection to approve targets before engagement, making them semiautonomousweaponsystems.Some,however,arefullyautonomous.

Fully Autonomous Weapon System (human out of the loop)

The Israeli Harpy is one such weapon. No human approves the specific target

beforeengagement.TheHarpyhasbeensoldtoseveralcountries—Chile,China,

India,SouthKorea,andTurkey—andtheChinesearereportedtohavereverse

engineeredtheirownvariant.

HARM vs. Harpy

          Type of weapon       Target   Time to search        Distance   Degree of autonomy
HARM      Homing missile       Radars   Approx. 4.5 minutes   90+ km     Semiautonomous weapon
Harpy     Loitering munition   Radars   2.5 hours             500 km     Fully autonomous weapon

The difference between a fully autonomous loitering munition and a semiautonomoushomingmunitioncanbeillustratedbycomparingtheHarpy withtheHigh-speedAnti-RadiationMissile(HARM).Bothgoafterthesame typeoftarget(enemyradars),buttheirfreedomtosearchfortargetsismassively

different.ThesemiautonomousHARMhasarangeof90-pluskilometersanda

top speed of over 1,200 kilometers per hour, so it is only airborne for

approximatelyfourandahalfminutes.Becauseitcannotloiter,theHARMhas tobelaunchedataspecificenemyradarinordertobeuseful.TheHarpycan

stayaloftforovertwoandahalfhourscoveringupto500kilometersofground.

ThisallowstheHarpytooperateindependentlyofabroaderbattlenetworkthat

givesthehumantargetinginformationbeforelaunch.Thehumanlaunchingthe

Harpydecidestodestroyanyenemyradarswithinageneralareainspaceand

time,buttheHarpyitselfchoosesthespecificradaritdestroys.
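The difference in freedom is easy to put numbers on using the figures in the table above (rounded, illustrative arithmetic only):

```python
# Rough comparison of how long each weapon can hunt, using the round numbers
# from the table above. Illustrative arithmetic only.

harm_endurance_min = 4.5            # HARM: roughly four and a half minutes of flight
harpy_endurance_min = 2.5 * 60      # Harpy: 2.5 hours of loiter, in minutes

ratio = harpy_endurance_min / harm_endurance_min
print("Harpy can hunt roughly {:.0f}x longer than HARM".format(ratio))
# ~33x longer aloft -- which is why HARM must be fired at a known radar,
# while a Harpy can be sent to patrol a box and wait for one to switch on.
```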

Semiautonomous vs. Fully Autonomous Weapons: For semiautonomous weapons, the human operator launches the weapon at a specific known target or group of targets. The human chooses the target and the weapon carries out the attack. Fully autonomous weapons can search for and find targets over a wide area, allowing human operators to launch them without knowledge of specific targets in advance. The human decides to launch the fully autonomous weapon, but the weapon itself chooses the specific target to attack.

Tomahawk Anti-Ship Missile Mission Profile: A typical mission for a Tomahawk Anti-Ship Missile (TASM). After being launched from a ship or submarine, the TASM would cruise to the target area. Once over the target area, it would fly a search pattern to look for targets and, if it found one, attack the target on its own.

Despite conventional thinking that fully autonomous weapons are yet to come,isolatedcasesoffullyautonomousloiteringmunitionsgobackdecades.

Inthe1980s,theU.S.Navydeployedaloiteringanti-shipmissilethatcouldhunt

for, detect, and engage Soviet ships on its own. The Tomahawk Anti-Ship Missile (TASM) was intended to be launched over the horizon at possible locationsofSovietships,thenflyasearchpatternoverawidearealookingfor theirradarsignatures.IfitfoundaSovietship,TASMwouldattackit.(Despite the name, the TASM was quite different from the Tomahawk Land Attack Missile[TLAM],whichusesdigitalscenemappingtofollowapreprogrammed

routetoitstarget.)TheTASMwastakenoutofNavyserviceintheearly1990s.

While it was never fired in anger, it has the distinction of being the first operational fully autonomous weapon, a significance that was not recognized at the time.

In the 1990s, the United States began development of two experimental loitering munitions: Tacit Rainbow and the Low Cost Autonomous Attack System (LOCAAS). Tacit Rainbow was intended to be a persistent antiradiation weapon to target land-based radars, like the Harpy. LOCAAS had an even more ambitious goal: to search for and destroy enemy tanks, which are harder targets than radars because they are not emitting in the electromagnetic spectrum. Neither Tacit Rainbow nor LOCAAS was ever deployed; both were cancelled while still in development.

These examples shine a light on a common misperception about autonomous weapons, which is the notion that intelligence is what makes a weapon "autonomous." How intelligent a system is and which tasks it performs

autonomously are different dimensions. It is freedom, not intelligence, that definesanautonomousweapon.Greaterintelligencecanbeaddedintoweapons withoutchangingtheirautonomy.Todate,thetargetidentificationalgorithms usedinautonomousandsemiautonomousweaponshavebeenfairlysimple.This haslimitedtheusefulnessoffullyautonomousweapons,asmilitariesmaynot trustgivingaweaponverymuchfreedomifitisn’tveryintelligent.Asmachine intelligenceadvances,however,autonomoustargetingwillbecometechnically possibleinawiderrangeofsituations.

UNUSUAL CASES—MINES, ENCAPSULATED TORPEDO MINES, AND SENSOR FUZED WEAPON

There are a few unusual cases of weapons that blur the lines between semiautonomousandfullyautonomousweapons:minesandtheSensorFuzed Weapondeservespecialmention. Placedonlandoratsea,mineswaitfortheirtargettoapproach,atwhich pointthemineexplodes.Whileminesareautomaticdevicesthatwilldetonate ontheirownoncetriggered,theyhavenofreedomtomaneuverandsearchfor targets.Theysimplysitinplace.(Forthemostpart—somenavalminescandrift withthecurrent.)Theyalsogenerallyhaveverylimitedmethodsfor“deciding” whetherornottofire.Minestypicallyhaveasimplemethodforsensingatarget and,whenthethresholdforthesensorisreached,themineexplodes.(Some navalminesandantitankminesemployacountersothattheywillletthefirst fewtargetspassunharmedbeforedetonatingagainstashiporvehiclelaterinthe convoy.) Mines deserve special mention because their freedom in time is virtuallyunbounded,however.Unlessspecificallydesignedtoself-deactivate after a certain period of time, mines can lay in wait for years, sometimes remainingactivelongafterawarhasended. The fact that mines are often unbounded in time has had devastating humanitarian consequences. By the mid-1990s, an estimated more than 110 million land mines lay hidden in sixty-eight countries around the globe, accumulated from scores of conflicts. Land mines have killed thousands of civilians,manyofthemchildren,andmaimedtensofthousandsmore,sparking theglobalmovementtobanlandminesthatculminatedintheOttawaTreatyin 1997. Adopted by 162 nations, the Ottawa Treaty prohibits the production, stockpiling,transfer,oruseofantipersonnellandmines.Antitanklandminesand navalminesarestillpermitted.

Mines can sense and act on their own, but do not search for targets. Encapsulatedtorpedominesareaspecialtypeofnavalminethatactsmorelike anautonomousweapon,however.Ratherthansimplyexplodingonceactivated, encapsulatedtorpedominesreleaseatorpedothathomesinonthetarget.This givesencapsulatedtorpedominesthefreedomtoengagetargetsoveramuch widerareathanatraditionalmine,muchlikealoiteringmunition.TheU.S.Mk

60CAPTORencapsulatedtorpedominehadapublishedrangeof8,000yards.

Bycontrast,ashipwouldhavetopassoveraregularmineforittodetonate. Eventhoughencapsulatedtorpedominesaremooredinplacetotheseabed,their abilitytolaunchatorpedotochasedowntargetsgivesthemamuchgreater degreeofautonomyinspacethanatraditionalnavalmine.Aswithloitering munitions,examplesofencapsulatedtorpedominesarerare.TheU.S.CAPTOR

mine was in service throughout the 1980s and 1990s but has been retired.

TheonlyencapsulatedtorpedominestillinserviceistheRussianPMK-2,used

byRussiaandChina. TheSensorFuzedWeapon(SFW)isanair-deliveredantitankweaponthat defiescategorization.Releasedfromanaircraft,anSFWcandestroyanentire columnofenemytankswithinseconds.TheSFWfunctionsthroughaseriesof RubeGoldbergmachine–likesteps:First,theaircraftreleasesabomb-shaped canisterthanglidestowardthetargetarea.Asthecanisterapproachesthetarget area,theoutercasingreleases,exposingtensubmunitionswhichareejected fromthecanister.Eachsubmunitionreleasesadrogueparachuteslowingits descent. At a certain height above the ground, the submunition springs into action.Itopensitsoutercase,exposingfourinternallyheld“skeets”whichare thenrotatedoutoftheinnercasingandexposed.Theparachutereleasesandthe submunition fires retrojets that cause it to climb in altitude while spinning furiously. The hockey-puck-shaped skeets are then released, flung outward violentlyfromtheforceofthespinning.Eachskeetcarriesonboardlaserand infraredsensorsthatitusestosearchfortargetsbeneathit.Upondetectinga vehiclebeneathit,theskeetfiresanexplosivelyformedpenetrator—ametalslug —downwardintothevehicle.Themetalslugstrikesthevehicleontop,where armoredvehicleshavethethinnestarmor,destroyingthevehicle.Inthismanner, a single SFW can take out a group of tanks or other armored vehicles simultaneously,withtheskeetstargetingeachvehicleprecisely. SimilartothedistinctionbetweenHarpyandHARM,thecriticalvariablein theevaluatingSFW’sautonomyisitsfreedomintimeandspace.Whilethe weapon distributes forty skeets over several acres, the time the weapon can searchfortargetsisminuscule.Eachskeetcanhoverwithitssensoractivefor

onlyafewsecondsbeforefiring.UnliketheHarpy,theSFWcannotloiterforan extendedperiodoverhundredsofkilometers.ThehumanlaunchingtheSFW mustknowthatthereisagroupoftanksataparticularpointinspaceandtime. Likeahomingmunition,theSFWmustbepartofawiderweaponsystemthat provides targeting data in order to be useful. The SFW is different than a traditionalhomingmunition,becausetheSFWcanhitmultipleobjects.This makestheSFWlikeasalvooffortyhomingmunitionslaunchedatatightly geographicallyclusteredsetoftargets.

PUSHING "START"

Autonomousweaponsaredefinedbytheabilitytocompletetheengagement cycle—searchingfor,decidingtoengage,andengagingtargets—ontheirown. Autonomousweapons,whethersupervisedorfullyautonomous,arestillbuilt andputintooperationbyhumans,though.Humansareinvolvedinthebroader processofdesigning,building,testing,anddeployingweapons. Thefactthattherearehumansinvolvedatsomestagedoesnotchangethe significanceofaweaponthatcouldcompleteengagementsentirelyonitsown. Eventhemosthighlyautonomoussystemwouldstillhavebeenborneoutofa processinitiatedbyhumansatsomepoint.IntheclimacticsceneofTerminator

3:RiseoftheMachines,anAirForcegeneralpushesthebuttontostartSkynet.

(Absurdly,thisisdonewithanold“EXECUTEY/N?”promptlikethekindused

inMS-DOSinthe1980s.)Fromthatpointforward,Skynetembarksonitspath

toexterminatehumanity,butatleastatthebeginningahumanwasintheloop.

Thequestionisnotwhethertherewaseverahumaninvolved,butratherhow

muchfreedomthesystemhasonceitisactivated.

WHY AREN'T THERE MORE AUTONOMOUS WEAPONS?

Automationhasbeenusedextensivelyinweaponsaroundtheworldfordecades, buttheamountoffreedomgiventoweaponshasbeen,uptonow,fairlylimited. Homingmunitionshaveseekers,buttheirabilitytosearchfortargetsisnarrowly constrainedintimeandspace.Supervisedautonomousweaponshaveonlybeen used for limited defensive purposes. The technology to build simple fully autonomousloiteringmunitionslikeTASMandHarpyhasexistedfordecades, yetthereisonlyoneexampleinusetoday.

Whyaren’ttheremorefullyautonomousweapons?Homingmunitionsand evensemiautonomousloiteringmunitionsarewidelyused,butmilitarieshave not aggressively pursued fully autonomous loitering munitions. The U.S. experiencewithTASMmayshedsomelightonwhy.TASMwasinservicein

theU.S.Navyfrom1982to1994,whenitwasretired.Tounderstandbetterwhy

TASMwastakenoutofservice,IspokewithnavalstrategistBryanMcGrath. McGrath, a retired Navy officer, is well known in Washington defense circles.Heisakeenstrategistandunabashedadvocateofseapowerwhothinks deeplyaboutthepast,present,andfutureofnavalwarfare.McGrathisfamiliar withTASMandotheranti-shipmissilessuchastheHarpoon,andwastrainedon

TASMinthe1980swhenitwasinthefleet.

McGrathexplainedtomethatTASMcouldoutrangetheship’sownsensors. Thatmeantthatinitialtargetinghadtocomefromanothersensor,suchasa helicopterormaritimepatrolaircraftthatdetectedanenemyship.Theproblem, asMcGrathdescribedit,wasa“lackofconfidenceinhowthetargetingpicture wouldchangefromthetimeyoufiredthemissileuntilyougotitdownrange.” Becausethetargetcouldmove,unlesstherewasan“activesensor”onthetarget, such as a helicopter with eyes on the target the whole time, the area of uncertaintyofwherethetargetwaswouldgrowovertime. TheabilityoftheTASMtosearchfortargetsoverawideareamitigated,to someextent,thislargeareaofuncertainty.Ifthetargethadmoved,theTASM couldsimplyflyasearchpatternlookingforit.ButTASMdidn’thavethe abilitytoaccuratelydiscriminatebetweenenemyshipsandmerchantvesselsthat justhappenedtobeinitspath.Asthesearchareawidened,theriskincreased

thattheTASMmightrunacrossamerchantshipandstrikeitinstead.Inanall-

outwarwiththeSovietNavy,thatriskmightbeacceptable,butinanysituations shortofthat,gettingapprovaltoshoottheTASMwasunlikely.TASMwas, accordingtoMcGrath,“aweaponwejustdidn’twanttofire.” AnotherfactorwasthatifaTASMwaslaunchedandtherewasn’tavalid target within the search area of the weapon, the weapon would be wasted. McGrathwouldbeloathtolaunchaweapononscantevidencethattherewasa validtargetinthesearcharea.“Iwouldwanttoknowthatthere’ssomething there,eveniftherewassomekindofend-gameautonomyinplace.”Why? “Becausetheweaponscostmoney,”hesaid,“andIdon’thavealotofthem. AndImayhavetofighttomorrow.” Modernmissilescancostupwardsofamilliondollarsapiece.Asapractical matter,militarieswillwanttoknowthatthereis,infact,avalidenemytargetin theareabeforeusinganexpensiveweapon.Oneofthereasonsmilitarieshave

notusedfullyautonomousloiteringmunitionsmoremaybethefactthatthe

advantagetheybring—theabilitytolaunchaweaponwithoutprecisetargeting

datainadvance—maynotbeofmuchvalueiftheweaponisnotreusable,since

theweaponcouldbewasted.
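The targeting problem McGrath describes is, at bottom, simple geometry: once the target ship can move in any direction after the last sensor fix, the circle it could be hiding in grows with its speed and with the weapon's time of flight. A rough, notional calculation (the numbers are illustrative, not TASM's or any ship's actual performance):

```python
import math

# Notional numbers for illustration only.
ship_speed_kts = 30          # how fast the target might run after it was last seen
time_since_fix_hr = 0.5      # missile flight time since the last sensor fix

radius_nm = ship_speed_kts * time_since_fix_hr        # knots x hours = nautical miles
area_sq_nm = math.pi * radius_nm ** 2

print("Uncertainty radius: {:.0f} nm, search area: {:.0f} square nm".format(
    radius_nm, area_sq_nm))
# -> a 15 nm radius and roughly 700 square nautical miles of ocean in which the
#    intended target -- or an innocent merchant ship -- might be found.
```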

FUTURE WEAPONS

ThetrendofcreepingautomationthatbeganwithGatling’sgunwillcontinue. Advancesinartificialintelligencewillenablesmarterweapons,whichwillbe capableofmoreautonomousoperation.Atthesametime,anotherfacetofthe informationrevolutionisgreaternetworking.GermanU-boatscouldn’tcontrol theWrentorpedoonceitwaslaunched,notbecausetheydidn’twantto;they simplyhadnomeanstodoso. Modernmunitionsareincreasinglynetworkedtoallowthemtobecontrolled orretargetedafterthey’vebeenlaunched.Wire-guidedmunitionshaveexisted fordecades,butareonlyfeasibleforshortdistances.Long-rangeweaponsare now incorporating datalinks to allow them to be controlled via radio communication, even over satellites. The Block IV Tomahawk Land Attack Missile (TLAM-E, or Tactical Tomahawk) includes a two-way satellite communications link that allows the weapon to be retargeted in flight. The

Harpy2,orHarop,hasacommunicationslinkthatallowsittobeoperatedina

human-in-the-loop mode so that the human operator can directly target the weapon. When I asked McGrath what feature he would most desire in a future weapon, it wasn’t autonomy—it was a datalink. “You’ve got to talk to the missile,”heexplained.“Themissileshavetobepartofanetwork.”Connecting theweaponstothenetworkwouldallowyoutosendupdatesonthetargetwhile inflight.Asaresult,“confidenceinemployingthatweaponwoulddramatically increase.” Anetworkedweaponisafarmorevaluableweaponthanonethatisonits own.Byconnectingaweapontothenetwork,themunitionbecomespartofa broadersystemandcanharnesssensordatafromotherships,aircraft,oreven satellitestoassistitstargeting.Additionally,thecommandercankeepcontrolof theweaponwhileinflight,makingitlesslikelytobewasted.Oneadvantageto thenetworkedTacticalTomahawk,forexample,istheabilityforhumanstouse sensorsonthemissiletodobattledamageassessment(BDA)ofpotentialtargets beforestriking.WithouttheabilitytoconductBDAofthetarget,commanders

mighthavetolaunchseveralTomahawksatatargettoensureitsdestruction, sincethefirstmissilemightnotcompletelydestroythetarget.OnboardBDA allowsthecommandertolookatthetargetafterthefirstmissilehits.Ifmore strikesareneeded,moremissilescanbeused.Ifnot,thensubsequentmissiles canbedivertedinflighttosecondarytargets. Everythinghasacountermeasure,though,andincreasednetworkingruns countertoanothertrendinwarfare,theriseofelectronicattack.Themorethat militariesrelyontheelectromagneticspectrumforcommunicationsandsensing targets,themorevitalitwillbetowintheinvisibleelectronicwarofjamming, spoofing,anddeceptionfoughtthroughtheelectromagneticspectrum.Infuture warsbetweenadvancedmilitaries,communicationsincontestedenvironmentsis bynomeansassured.Advancedmilitarieshavewaysofcommunicatingthatare resistant to jamming, but they are limited in range and bandwidth. When communicationsaredenied,missilesordroneswillbeontheirown,relianton theironboardautonomy. Duetotheirexpensivecost,evenhighlyadvancedloiteringmunitionsare likelytofallintothesametrapasTASM,withcommandershesitanttofirethem unlesstargetsareclearlyknown.Butdroneschangethisequation.Dronescanbe launched,sentonpatrol,andcanreturnwiththeirweaponsunusediftheydonot findanytargets.Thissimplefeature—reusability—dramaticallychangeshowa weaponcouldbeused.Dronescouldbesenttosearchoverawideareainspace andtimetohuntforenemytargets.Ifnonewerefound,thedronecouldreturnto basetohuntagainanotherday. More than ninety nations and non-state groups already have drones, and whilemostareunarmedsurveillancedrones,anincreasingnumberarearmed.At leastsixteencountriesalreadypossessarmeddronesandanotherdozenormore nationsareworkingonarmingtheirdrones.Ahandfulofcountriesareeven pursuing stealth combat drones specifically designed to operate in contested areas. For now, drones are used as part of traditional battle networks, with decision-makingresidinginthehumancontroller.Ifcommunicationslinksare intact, then countries can keep a human in the loop to authorize targets. If communications links are jammed, however, what will the drones be programmedtodo?Willtheyreturnhome?Willtheycarryoutsurveillance missions,takingpicturesandreportingbacktotheirhumanoperators?Willthe drones be authorized to strike fixed targets that have been preauthorized by humans, much like cruise missiles today? What if the drones run across emergingtargetsofopportunitythathavenotbeenauthorizedinadvancebya human—willtheybeauthorizedtofire?Whatifthedronesarefiredupon?Will

theybeallowedtofireback?Willtheybeauthorizedtoshootfirst?

Thesearenothypotheticalquestionsforthefuture.Engineersaroundthe

globeareprogrammingthesoftwareforthesedronestoday.Intheirhands,the

futureofautonomousweaponsisbeingwritten.

PART II

Building the Terminator

4

THE FUTURE BEING BUILT TODAY

AUTONOMOUS MISSILES, DRONES, AND ROBOT SWARMS

FewactorsloomlargerintheroboticsrevolutionthantheU.S.Departmentof

Defense.TheUnitedStatesspends600billiondollarsannuallyondefense,more

thanthenextsevencountriescombined.Despitethis,U.S.defenseleadersare

concernedabouttheUnitedStatesfallingbehind.In2014,theUnitedStates

launched a “Third Offset Strategy” to reinvigorate America’s military technologicaladvantage.Thenameharkensbacktothefirstandsecond“offset strategies”intheColdWar,wheretheU.S.militaryinvestedinnuclearweapons

inthe1950sandlaterprecision-guidedweaponsinthe1970stooffsettheSoviet

Union’snumericaladvantagesinEurope.ThecenterpieceofDoD’sThirdOffset Strategyisrobotics,autonomy,andhuman-machineteaming. Manyapplicationsofmilitaryroboticsandautonomyarenoncontroversial, suchasuninhabitedlogisticsconvoys,tankeraircraft,orreconnaissancedrones. Autonomyisalsoincreasinginweaponsystems,though,withnext-generation missilesandcombataircraftpushingtheboundariesofautonomy.Ahandfulof experimentalprogramsshowhowtheU.S.militaryisthinkingabouttheroleof autonomy in weapons. Collectively, they are laying the foundations for the militaryofthefuture.

SALTY DOGS: THE X-47B DRONE

TheX-47Bexperimentaldroneisoneoftheworld’smostadvancedaircraft.

Only two have ever been built, named Salty Dog 501 and Salty Dog 502. With a

sleekbat-wingedshapethatlookslikesomethingoutofthe1980ssci-fiflick

FlightoftheNavigator,theX-47Bpracticallyscreams“thefutureishere.”In

theirshortlife-spanasexperimentalaircraftfrom2011to2015,SaltyDog501

and502repeatedlymadeaviationhistory.TheX-47Bwasthefirstuninhabited

(unmanned)aircrafttoautonomouslytakeoffandlandonanaircraftcarrierand

thefirstuninhabitedaircrafttoautonomouslyrefuelfromanotherplanewhilein

flight.Thesearekeymilestonestoenablingfuturecarrier-basedcombatdrones.

However,theX-47Bwasnotacombataircraft.Itwasanexperimental“X-

plane,”ademonstrationprogramdesignedtomaturetechnologiesforafollow-

onaircraft.Thefocusoftechnologydevelopmentwasautomatingthephysical

movementoftheaircraft—takeoff,landing,flight,andaerialrefueling.TheX-

47Bdidnotcarryweaponsorsensorsthatwouldpermitittomakeengagements.

TheNavyhasstatedtheirfirstoperationalcarrier-baseddronewillbethe

MQ-25Stingray,afutureaircraftthatisstillonthedrawingboard.Whilethe

specificdesignhasyettobedetermined,theMQ-25isenvisionedprimarilyasa

tanker,ferryingfuelformannedcombataircraftsuchastheF-35JointStrike

Fighter,withpossiblyasecondaryroleinreconnaissance.Itisnotenvisionedas

acombataircraft.Infact,overthepastdecadetheNavyhasmovedsteadily

awayfromanynotionofuninhabitedaircraftincombatroles.

TheoriginoftheX-47wasintheJointUnmannedCombatAirSystems(J-

UCAS)program,ajointprogrambetweenDARPA,theNavy,andtheAirForce

intheearly2000stodesignanuninhabitedcombataircraft.J-UCASledtothe

developmentoftwoexperimentalX-45Aaircraft,whichin2004demonstrated

thefirstdronedesignedforcombatmissions.Mostdronestodayareintendedfor surveillancemissions,whichmeanstheyaredesignedforsoaringandstaying

aloftforlongperiodsoftime.TheX-45A,however,sportedthesamesharply

angledwingsandsmoothtopsurfacesthatdefinestealthaircraftliketheF-117,

B-2bomber,andF-22fighter.Designedtopenetrateenemyairdefenses,the

intentwasfortheX-45Atoperformcloseinjammingandstrikemissionsin

supportofmannedaircraft.Theprogramwasnevercompleted,though.Inthe Pentagon’s 2006 Quadrennial Defense Review, a major strategy and budget review conducted every four years, the J-UCAS program was scrapped and restructured.

J-UCAS’scancellationwascuriousbecauseitcameattheheightofthepost-

9/11defensebudgetboomandatatimewhentheDefenseDepartmentwas

wakinguptothepotentialofroboticsystemsmorebroadly.Evenwhilethe

militarywasdeployingthousandsofdronestoIraqandAfghanistan,theAir

Forcewashighlyresistanttotheideaofuninhabitedaircrafttakingoncombat rolesinfuturewars.IntheensuingdecadesinceJ-UCAS’scancellation,despite repeatedopportunities,the AirForcehas notrestarteda programtobuild a combat drone. Drones play important roles in reconnaissance and counterterrorism,butwhenitcomestodogfightingagainstotherenemyaircraft or taking down another country’s air defense network, those missions are currentlyreservedfortraditionalmannedaircraft.

Therealityisthatwhatmaylookfromtheoutsidelikeanunmitigatedrush towardroboticweaponsis,inactuality,amuchmoremuddledpictureinsidethe Pentagon.ThereisintenseculturalresistancewithintheU.S.militarytohanding over combat jobs to uninhabited systems. Robotic systems are frequently embraced for support roles such as surveillance or logistics, but rarely for combatapplications.TheArmyisinvestinginlogisticsrobots,butnotfrontline armedcombatrobots.TheAirForceusesdronesheavilyforsurveillance,butis notpursingair-to-aircombatdrones.Pentagonvisiondocumentssuchasthe

UnmannedSystemsRoadmapsortheAirForce’s2013RemotelyPilotedAircraft

Vectoroftenarticulateambitiousdreamsforrobotsinavarietyofroles,butthese documentsareoftendisconnectedfrombudgetaryrealities.Withoutfunding, these visions are more hallucinations than reality. They articulate goals and aspirations,butdonotnecessarilyrepresentthemostlikelyfuturepath. ThedownscopingoftheambitiousJ-UCAScombataircrafttotheplodding

MQ-25tankerisagreatcaseinpoint.In2006whentheAirForceabandonedthe

J-UCASexperimentaldroneprogram,theNavycontinuedaprogramtodevelop acombataircraft. TheX-47Bwas supposedtomature thetechnologyfor a successorstealthdrone,butinaseriesofinternalPentagonmemorandaissuedin

2011and2012,Navytookasharpturnawayfromacombataircraft.Designs

werescaledbackinfavorofalessambitiousnonstealthysurveillancedrone.

ConceptsketchesshiftedfromlookinglikethefuturisticsleekandstealthyX-

45AandX-47BtothemorepedestrianPredatorandReaperdrones,alreadyover

adecadeoldatthatpoint.TheNavy,itappears,wasn’timmunetothesame culturalresistancetocombatdronesfoundintheAirForce. TheNavy’sresistancetodevelopinganuninhabitedcombataerialvehicle (UCAV)isparticularlynotablebecauseitcomesinthefaceofpressurefrom Congress and a compelling operational need. China has developed anti-ship ballistic and cruise missiles that can outrange carrier-based F-18 and F-35 aircraft.Onlyuninhabitedaircraft,whichcanstayaloftfarlongerthanwouldbe possiblewithahumanintheairplane,havesufficientrangetokeepthecarrier relevantinthefaceofadvancedChinesemissiles.Seapoweradvocatesoutside

theNavyinCongressandthinktankshavearguedthatwithoutaUCAVon board, the aircraft carrier itself would be of limited utility against a high- technologyopponent.YettheNavy’scurrentplanisforitscarrier-baseddrone,

theMQ-25,toferrygasforhuman-inhabitedjets.Fornow,theNavyisdeferring

anyplansforafutureUCAV.

TheX-47Bisanimpressivemachineand,toanoutsideobserver,itmay

seem to portend a future of robot combat aircraft. Its appearance belies the realitythatwithinthehallsofthePentagon,however,thereislittleenthusiasm forcombatdrones,muchlessfullyautonomousonesthatwouldtargetontheir own.NeithertheAirForcenortheNavyhaveprogramsunderwaytodevelop

anoperationalUCAV.TheX-47Bisabridgetoafuturethat,atleastfornow,

doesn’texist.

THE LONG-RANGE ANTI-SHIP MISSILE

The Long-Range Anti-Ship Missile (LRASM) is a state-of-the-art missile pushing the boundaries of autonomy. It is a joint DARPA-Navy-Air Force projectintendedtofillagapintheU.S.military’sabilitytostrikeenemyshipsat longranges.SincetheretirementoftheTASM,theNavyhasreliedonthe

shorter-rangeHarpoonanti-shipmissile,whichhasarangeofonly67nautical

miles.TheLRASM,ontheotherhand,canflyupto500nauticalmiles.LRASM

alsosportsanumberofadvancedsurvivabilityfeatures,includingtheabilityto autonomouslydetectandevadethreatswhileenroutetoitstarget. LRASMusesautonomyinseveralnovelways,whichhasalarmedsome opponentsofautonomousweapons.TheLRASMhasbeenfeaturedinnoless than three New York Times articles, with some critics claiming it exhibits “artificial intelligence outside human control.” In one of the articles, Steve Omohundro,aphysicistandleadingthinkeronadvancedartificialintelligence, stated“anautonomousweaponsarmsraceisalreadytakingplace.”Itisaleap, though,toassumethattheseadvancesinautonomymeanstatesintendtopursue autonomousweaponsthatwouldhuntfortargetontheirown. TheactualtechnologybehindLRASM,whilecuttingedge,hardlywarrants thesebreathlesstreatments.LRASMhasmanyadvancedfeatures,butthecritical questioniswhochoosesLRASM’stargets—ahumanorthemissileitself?Onits website,LockheedMartin,thedeveloperofLRASM,states:

LRASM employs precision routing and guidance. . . . The missile employs a multi-modal sensor suite, weapon data link, and enhanced digital anti-jam Global Positioning System to detect and destroy specific targets within a group of numerous ships at sea. . . . This advanced guidance operation means the weapon can use gross target cueing data to find and destroy its pre-defined target in denied environments.

While the description speaks of advanced precision guidance, it doesn’t say muchthatwouldimplyartificialintelligencethatwouldhuntfortargetsonits

own. What was the genesis of the criticism? Well . . . Lockheed used to describe

LRASMdifferently. Before the first New York Times article in November 2014, Lockheed’s descriptionofLRASMboastedmuchmorestronglyofitsautonomousfeatures. Itusedtheword“autonomous”threetimesinthedescription,describingitasan “autonomous,precision-guidedanti-ship”missilethat“cruisesautonomously” and has an “autonomous capability.” What exactly the weapon was doing autonomouslywassomewhatambiguous,though. AfterthefirstNewYorkTimesarticle,thedescriptionchanged,substituting “semi-autonomous”for“autonomous”inmultipleplaces.Thenewdescription also clarified the nature of the autonomous features, stating “The semi- autonomous guidance capability gets LRASM safely to the enemy area.” Eventually,eventhewords“semi-autonomous”wereremoved,leadingtothe descriptiononlinetodaywhichonlyspeaksof“precisionroutingandguidance” and“advancedguidance.”Autonomyisn’tmentionedatall. Whatshouldwemakeofthisshiftingstoryline?Presumablytheweapon’s functionalityhasn’tchanged,merelythelanguageusedtodescribeit.Sohow autonomousisLRASM? LockheedhasdescribedLRASMasusing“grosstargetcueingdatatofind anddestroyitspredefinedtargetindeniedenvironments.”If“predefined”target meansthatthespecifictargethasbeenchoseninadvancebyahumanoperator, LRASM would be a semiautonomous weapon. On the other hand, if “predefined”meansthatthehumanhaschosenonlyageneralclassoftargets, suchas“enemyships,”andgiventhemissilethefreedomtohuntforthese targets over a wide area and engage them on its own, then it would be an autonomousweapon. Helpfully, Lockheed posted a video online that explains LRASM’s functionality.Inadetailedcombatsimulation,thevideoshowspreciselywhich engagement-related functions would be done autonomously and which by a human.Inthevideo,asatelliteidentifiesahostilesurfaceactiongroup(SAG)— agroupofenemyships—andrelaystheirlocationtoaU.S.destroyer.Thevideo showsaU.S.sailorlookingattheenemyshipsonhisconsole.Hepressesa buttonandtwoLRASMsleapfromtheirlaunchingtubesinablastofflameinto theair.ThetextonthevideoexplainstheLRASMshavebeenlaunchedagainst


the enemy cruiser, part of the hostile SAG. Once airborne, the LRASMs establishaline-of-sightdatalinkwiththeship.Astheycontinuetoflyouttoward

theenemySAG,theytransitiontosatellitecommunications.AU.S.F/A-18E

fighter aircraft then fires a third LRASM (this one air-launched) against an enemy destroyer, another ship in the SAG. The LRASMs enter a “communicationsandGPS-deniedenvironment.”Theyarenowontheirown. TheLRASMsmaneuverviaplannednavigationalrouting,movingfromone predesignatedwaypointtoanother.Then,unexpectedly,theLRASMsencounter a“pop-upthreat.”Inthevideo,alargeredbubbleappearsinthesky,ano-go zone for the missiles. The missiles now execute “autonomous routing,” detouringaroundtheredbubbleontheirown.Asecondpop-upthreatappears andtheLRASMsmodifytheirrouteagain,movingaroundthethreattocontinue ontheirmission. AstheLRASMsapproachtheirtargetdestination,thevideoshiftstoanew perspectivefocusingonasinglemissile,simulatingwhatthemissile’ssensors see.Fivedotsappearonthescreenrepresentingobjectsdetectedbythemissile’s sensors, labeled “ID:71, ID:56, ID:44, ID:24, ID:19.” The missile begins a processthevideocalls“organic[areaofuncertainty]reduction.”That’smilitary jargonforabubbleofuncertainty.Whenthemissilewaslaunched,thehuman launchingitknewwheretheenemyshipwaslocated,butshipsmove.Bythe timethemissilearrivesattheship,theshipcouldbesomewhereelse.The“area ofuncertainty”isthebubblewithinwhichtheenemyshipcouldbe,abubble thatgetslargerovertime. Sincetherecouldbemultipleshipsinthisbubble,theLRASMbeginsto narrowdownitsoptionstodeterminewhichshipwastheoneitwassentto destroy.Howthisoccursisnotspecified,butonthevideoalarge“areaof uncertainty”appearsaroundallthedots,thenquicklyshrinkstosurroundonly

threeofthem:ID:44,ID:24,andID:19.Themissilethenmovestothenext

phaseofitstargetingprocess:“targetclassification.”Themissilescanseach object, finally settling on ID:24. “Criteria match,” the video states, “target

classified.”ID:24,themissilehasdetermined,istheshipitwassenttodestroy.

Havingzeroedinontherighttarget,themissilesbegintheirfinalmaneuvers.

ThreeLRASMsdescendbelowtheenemyships’radarstoskimjustabovethe

water’ssurface.Ontheirfinalapproach,themissilesscantheshipsonelasttime

toconfirmtheirtargets.Theenemyshipsfiretheirdefensestotrytohitthe

incomingmissiles,butit’stoolate.Twoenemyshipsarehit.

ThevideoconveystheLRASM’simpressiveautonomousfeatures,butisit

anautonomousweapon?Theautonomous/semiautonomous/advancedguidance

describedonthewebsiteisclearlyondisplay.Inthevideo,midwaythroughthe flight the missiles enter a “communications and GPS denied environment.” Withinthisbubble,themissilesareontheirown;theycannotcallbacktohuman controllers.Anyactionstheytakeareautonomous,butthetypeofactionsthey can take are limited. Just because the weapon is operating without a communicationslinktohumancontrollersdoesn’tmeanithasthefreedomtodo anythingitwishes.Themissileisn’tateenagerwhoseparentshavelefttownfor the weekend. It has only been programmed to perform certain tasks autonomously.Themissilecanidentifypop-upthreatsandautonomouslyreroute aroundthem,butitdoesn’thavethefreedomtochooseitsowntargets.Itcan identifyandclassifyobjectstoconfirmwhichobjectwastheoneitwassentto destroy,butthatisn’tthesameasbeingabletochoosewhichtargettodestroy.

Screenshots from LRASM Video: In a video simulation depicting how the LRASM functions, a satellite transmits the location of enemy ships to a human, who authorizes the attack on those specific enemy ships. The LRASMs are launched against specific enemy ships, in this case a "SAG Cruiser." While en route to their human-designated targets, the LRASMs employ autonomous routing around pop-up threats (shown as a bubble). Because the human-designated target is a moving ship, by the time the LRASM arrives at the target area there is an "area of uncertainty" that defines the ship's possible location. Multiple objects are identified within this area of uncertainty. LRASM uses its onboard ("organic") sensors to reduce the area of uncertainty and identify the human-designated target. LRASM confirms "ID:24" is the target it was sent to destroy. While the missile has many advanced features, it does not choose its own target. The missile uses its sensors to confirm the human-selected target.

Itisthehumanwhodecideswhichenemyshiptodestroy.Thecriticalpoint

inthevideoisn’tattheendofthemissile’sflightasitzeroesinontheship—it’s

atthebeginning.WhentheLRASMsarelaunched,thevideospecifiesthatthey

arelaunchedagainstthe“SAGcruiser”and“SAGdestroyer.”Thehumansare

launchingthemissilesatspecificships,whichthehumanshavetrackedand

identifiedviasatellites.Themissiles’onboardsensorsarethenusedtoconfirm

thetargetsbeforecompletingtheattack.LRASMisonlyonepieceofaweapon

systemthatconsistsofthesatellite,ship/aircraft,human,andmissile.Thehuman

is“intheloop,”decidingwhichspecifictargetstoengageinthebroaderdecision

cycleoftheweaponsystem.TheLRASMmerelycarriesouttheengagement.

BREAKING THE SPEED LIMIT: FAST LIGHTWEIGHT AUTONOMY

Dr. Stuart Russell is a pioneering researcher in artificial intelligence. He literally wrote the textbook that is used to teach AI researchers around the world. Russell is also one of the leaders in the AI community calling for a ban on "offensive autonomous weapons beyond meaningful human control." One research program Russell has repeatedly raised concerns about is DARPA's Fast Lightweight Autonomy (FLA).

FLA is a research project to enable high-speed autonomous navigation in congested environments. Researchers outfit commercial off-the-shelf quadcopters with custom sensors, processors, and algorithms with the goal of making them autonomously navigate through the interior of a cluttered warehouse at speeds up to forty-five miles per hour. In a press release, DARPA compared the zooming quadcopters to the Millennium Falcon zipping through the hull of a crashed Star Destroyer in Star Wars: The Force Awakens. (I would have gone with the Falcon maneuvering through the asteroid field in The Empire Strikes Back or the Falcon zipping through the interior of Death Star II in The Return of the Jedi. But you get the idea: fast = awesome.) In a video accompanying the press release, shots of the flying quadcopters are set to peppy instrumental music. It's incongruous because in the videos released so far the drones aren't actually moving through obstacles at 45 mph yet. For now, they are creeping their way around obstacles, but they are doing so fully autonomously. FLA's quadcopters use a combination of high-definition cameras, sonar, and laser light detection and ranging (LIDAR) to sense obstacles and avoid them all on their own.

Autonomous navigation around obstacles, even at slow speeds, is no mean feat. The quadcopter's sensors need to detect potential obstacles and track them as the quadcopter moves, a processor-hungry task. Because the quadcopter can only carry so much computing power, it is limited in how quickly it can process the obstacles it sees. The program aims in the coming months to speed it up. As DARPA program manager Mark Micire explained in a press release, "The challenge for the teams now is to advance the algorithms and onboard computational efficiency to extend the UAVs' perception range and compensate for the vehicles' mass to make extremely tight turns and abrupt maneuvers at high speeds." In other words, to pick up the pace.

FLA's quadcopters don't look menacing, but it isn't because of the up-tempo music or the cutesy Star Wars references. It's because there's nothing in FLA that has anything to do with weapons engagements. Not only are the quadcopters unarmed, they aren't performing any tasks associated with searching for and identifying targets. DARPA explains FLA's intended use as indoor reconnaissance:

FLA technologies could be especially useful to address a pressing surveillance shortfall: Military teams patrolling dangerous overseas urban environments and rescue teams responding to disasters such as earthquakes or floods currently can use remotely piloted unmanned aerial vehicles (UAVs) to provide a bird's-eye view of the situation, but to know what's going on inside an unstable building or a threatening indoor space often requires physical entry, which can put troops or civilian response teams in danger. The FLA program is developing a new class of algorithms aimed at enabling small UAVs to quickly navigate a labyrinth of rooms, stairways and corridors or other obstacle-filled environments without a remote pilot.

To better understand what FLA was doing, I caught up with one of the project's research teams from the University of Pennsylvania's General Robotics Automation Sensing and Perception (GRASP) lab. Videos of GRASP's nimble quadcopters have repeatedly gone viral online, showing swarms of drones artfully zipping through windows, seemingly dancing in midair, or playing the James Bond theme song on musical instruments. I asked Dr. Daniel Lee and Dr. Vijay Kumar, the principal investigators of GRASP's work on FLA, what they thought about the criticism that the program was paving the way toward autonomous weapons. Lee explained that GRASP's research was "very basic" and focused on "fundamental capabilities that are generally applicable across all of robotics, including industrial and consumer uses." The technology GRASP was focused on was "localization, mapping, obstacle detection and high-speed dynamic navigation." Kumar added that their motivations for this research were "applications to search and rescue and first response where time-critical response and navigation at high speeds are critical."

Kumar and Lee aren't weapons designers, so it may not be at the forefront of their minds, but it's worth pointing out that the technologies FLA is building aren't even the critical ones for autonomous weapons. Certainly, fast-moving quadcopters could have a variety of applications. Putting a gun or bomb on an FLA-empowered quadcopter isn't enough to make it an autonomous weapon, however. It would still need the ability to find targets on its own. Depending on the intended target, that may not be particularly complicated, but at any rate that's a separate technology. All FLA is doing is making quadcopters maneuver faster indoors. Depending on one's perspective, that could be cool or could be menacing, but either way FLA doesn't have anything more to do with autonomous weapons than self-driving cars do.

DARPA's description of FLA didn't seem to stack up against Stuart Russell's criticism. He has written that FLA and another DARPA program "foreshadow planned uses of [lethal autonomous weapon systems]." I first met Russell on the sidelines of a panel we both spoke on at the United Nations meetings on autonomous weapons in 2015. We've had many discussions on autonomous weapons since then and I've always found him to be thoughtful, unsurprising given his prominence in his field. So I reached out to Russell to better understand his concerns. He acknowledged that FLA wasn't "cleanly directed only at autonomous weapon capability," but he saw it as a stepping stone toward something truly terrifying.

FLA is different from projects like the X-47B, J-UCAS, or LRASM, which are designed to engage highly sophisticated adversaries. Russell has a very different kind of autonomous weapon in mind, a swarm of millions of small, fast-moving antipersonnel drones that could wipe out an entire urban population. Russell described these lethal drones used en masse as a kind of "weapon of mass destruction." He explained, "You can make small, lethal quadcopters an inch in diameter and pack several million of them into a truck and launch them with relatively simple software and they don't have to be particularly effective. If 25 percent of them reach a target, that's plenty." Used in this way, even small autonomous weapons could devastate a population.

There's nothing to indicate that FLA is aimed at developing the kind of people-hunting weapon Russell describes, something he acknowledges. Nevertheless, he sees indoor navigation as laying the building blocks toward antipersonnel autonomous weapons. "It's certainly one of the things you'd like to do if you were wanting to develop autonomous weapons," he said.

It's worth noting that Russell isn't opposed to the military as a whole or even military investments in AI or autonomy in general. He said that some of his own AI research is funded by the Department of Defense, but he only takes money for basic research, not weapons. Even a program like FLA that isn't specifically aimed at weapons still gives Russell pause, however. As a researcher, he said, it's something that he would "certainly think twice" about working on.

WEAPONS THAT HUNT IN PACKS: COLLABORATIVE OPERATIONS IN DENIED ENVIRONMENTS

Russell also raised concerns about another DARPA program: Collaborative Operations in Denied Environments (CODE). According to DARPA's official description, CODE's purpose is to develop "collaborative autonomy—the capability of groups of [unmanned aircraft systems] to work together under a single person's supervisory control." In a press release, CODE's program manager, Jean-Charles Ledé, described the project more colorfully as enabling drones to work together "just as wolves hunt in coordinated packs with minimal communication."

The image of drones hunting in packs like wolves might be a little unsettling to some. Ledé clarified that the drones would remain under the supervision of a human: "multiple CODE-enabled unmanned aircraft would collaborate to find, track, identify and engage targets, all under the command of a single human mission supervisor." Graphics on DARPA's website depicting how CODE might work show communications relay drones linking the drone pack back to a manned aircraft removed from the edge of the battlespace. So, in theory, a human would be in the loop.

CODE is designed for "contested electromagnetic environments," however, where "bandwidth limitations and communications disruptions" are likely to occur. This means that the communications link to the human-inhabited aircraft might be limited or might not work at all. CODE aims to overcome these challenges by giving drones greater intelligence and autonomy so that they can operate with minimal supervision. Cooperative behavior is central to this concept. With cooperative behavior, one person can tell a group of drones to achieve a goal, and the drones can divvy up tasks on their own.

In CODE, the drone team finds and engages "mobile or rapidly relocatable targets," that is, targets whose locations cannot be specified in advance by a human operator. If there is a communications link to a human, then the human could authorize targets for engagement once CODE air vehicles find them. Communications are challenging in contested electromagnetic environments, but not impossible. U.S. fifth-generation fighter aircraft use low probability of intercept/low probability of detection (LPI/LPD) methods of communicating stealthily inside enemy airspace. While these communications links are limited in range and bandwidth, they do exist. According to CODE's technical specifications, developers should count on no more than 50 kilobits per second of communications back to the human commander, essentially the same as a 56K dial-up modem circa 1997.

Keeping a human in the loop via a connection on par with a dial-up modem would be a significant change from today, where drones stream back high-definition full-motion video. How much bandwidth is required for a human to authorize targets? Not much, in fact. The human brain is extremely good at object recognition and can recognize objects even in relatively low resolution images. Snapshots of military objects and the surrounding area on the order of 10 to 20 kilobytes in size may be fuzzy to the human eye, but are still of sufficiently high resolution that an untrained person can discern trucks or military vehicles. A 50 kilobit per second connection could transmit one image of this size every two to three seconds (1 kilobyte = 8 kilobits). This would allow the CODE air vehicles to identify potential targets and send them back to a human supervisor who would approve (or disapprove) each specific target before attack.
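The arithmetic behind that estimate is simple to check. The snippet below is only that back-of-envelope calculation, using the 50 kbps link and the 10 to 20 kilobyte image sizes mentioned above.

```python
# Back-of-the-envelope check of the bandwidth claim above.
LINK_KBPS = 50                      # assumed downlink, kilobits per second

for image_kilobytes in (10, 15, 20):
    image_kilobits = image_kilobytes * 8          # 1 kilobyte = 8 kilobits
    seconds_per_image = image_kilobits / LINK_KBPS
    print(f"{image_kilobytes} KB image -> {seconds_per_image:.1f} s to transmit")

# 10 KB -> 1.6 s, 15 KB -> 2.4 s, 20 KB -> 3.2 s:
# roughly one target snapshot every two to three seconds.
```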

But is this what CODE intends? CODE's public description explains that the aircraft will operate "under a single person's supervisory control," but does not specify that the human would need to approve each target before engagement. As is the case with all of the systems encountered so far, from thermostats to next-generation weapons, the key is which tasks are being performed by the human and which by the machine. Publicly available information on CODE presents a mixed picture.

A May 2016 video released online of the human-machine interface for CODE shows a human authorizing each specific individual target. The human doesn't directly control the air vehicles. The human operator commands four groups of air vehicles, labeled Aces, Badger, Cobra, and Disco groups. The groups, each composed of two to four air vehicles, are given high-level commands such as "orbit here" or "follow this route." Then the vehicles coordinate among themselves to accomplish the task.

Disco Group is sent on a search and destroy mission: "Disco Group search and destroy all [anti-aircraft artillery] in this area." The human operator sketches a box with his cursor and the vehicles in Disco Group move into the box. "Disco Group conducting search and destroy at Area One," the computer confirms.

As the air vehicles in Disco Group find suspected enemy targets, they cue up their recommended classification to the human for confirmation. The human clicks "Confirm SCUD" and "Confirm AAA" [antiaircraft artillery] on the interface. But confirmation does not mean approval to fire. A few seconds later, a beeping tone indicates that Disco Group has drawn up a strike plan on a target and is seeking approval. Disco Group has 90 percent confidence it has found an SA-12 surface-to-air missile system and includes a photo for confirmation. The human clicks on the strike plan for more details. Beneath the picture of the SA-12 is a small diagram showing estimated collateral damage. A brown splotch surrounds the target, showing potential damage to anything in the vicinity. Just outside of the splotch is a hospital, but it is outside of the anticipated area of collateral damage. The human clicks "Yes" to approve the engagement. In this video, a human is clearly in the loop. Many tasks are automated, but a human approves each specific engagement.
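As a sketch of the division of labor the video shows, the code below is hypothetical (the names and structure are mine, not DARPA's): the machines find, classify, and plan, while every individual strike waits on an explicit human approval.

```python
# Hypothetical sketch of "human in the loop" supervisory control. Machines
# nominate classifications and strike plans; nothing fires without approval.

class Track:
    def __init__(self, label, confidence):
        self.label, self.confidence = label, confidence

def simulated_operator_confirms(track):
    # Stand-in for the operator clicking "Confirm SA-12," "Confirm AAA," etc.
    return track.confidence >= 0.9

def simulated_operator_approves(collateral_risk):
    # Stand-in for the operator reviewing the collateral-damage diagram.
    return collateral_risk == "low"

def run_mission(tracks):
    for track in tracks:
        if not simulated_operator_confirms(track):
            continue                          # classification not confirmed
        if simulated_operator_approves("low"):
            print(f"ENGAGE {track.label}")    # only after human approval
        else:
            print(f"STAND DOWN on {track.label}")

run_mission([Track("SA-12", 0.90), Track("AAA", 0.60)])
# ENGAGE SA-12  (the low-confidence track never reaches a strike plan)
```

In the "on the loop" variant described next, the approval step would default to yes unless the supervisor intervenes in time, a small change in code but a large change in the human's role.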

In other public information, however, CODE seems to leave the door open to removing the human from the loop. A different video shows two teams of air vehicles, Team A and Team B, sent to engage a surface-to-air missile. As in the LRASM video, the specific target is identified by a human ahead of time, who then launches the missiles to take it out. Similar to LRASM, the air vehicles maneuver around pop-up threats, although this time the air vehicles work cooperatively, sharing navigation and sensor data while in flight. As they maneuver to their target, something unexpected happens: a "critical pop-up target" emerges. It isn't their primary target, but destroying it is a high priority. Team A reprioritizes to engage the pop-up target while Team B continues to the primary target. The video makes clear this occurs under the supervision of the human commander. This implies a different type of human-machine relationship, though, than the earlier CODE video. In this one, instead of the human being in the loop, the human is on the loop, at least for pop-up threats. For their primary target, they operate in a semiautonomous fashion. The human chose the primary target. But when a pop-up threat emerges, the missiles have the authority to operate as supervised autonomous weapons. They don't need to ask additional permission to take out the target. Like a quarterback calling an audible at the scrimmage line to adapt to the defense, they have the freedom to adapt to unexpected situations that arise. The human operator is like the coach standing on the sidelines—able to call a time-out to intervene, but otherwise merely supervising the action.

DARPA's description of CODE online seems to show a similar flexibility for whether the human or air vehicles themselves approve targets. The CODE website says: "Using collaborative autonomy, CODE-enabled unmanned aircraft would find targets and engage them as appropriate under established rules of engagement ... and adapt to dynamic situations such as the emergence of unanticipated threats." This appears to leave the door open to autonomous weapons that would find and engage targets on their own.

The detailed technical description issued to developers provides additional information, but little clarity. DARPA explains that developers should:

Provide a concise but comprehensive targeting chipset so the mission commander can exercise appropriate levels of human judgment over the use of force or evaluate other options.

The specific wording used, "appropriate levels of human judgment," may sound vague and squishy, but it isn't accidental. This guidance directly quotes the official DoD policy on autonomy in weapons, DoD Directive 3000.09, which states:

Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

Notably, that policy does not prohibit autonomous weapons. "Appropriate levels of human judgment" could include autonomous weapons. In fact, the DoD policy includes a path through which developers could seek approval to build and deploy autonomous weapons, with appropriate safeguards and testing, should they be desired.

At a minimum, then, CODE would seem to allow for the possibility of autonomous weapons. The aim of the project is not to build autonomous weapons necessarily. The aim is to enable collaborative autonomy. But in a contested electromagnetic environment where communications links to the human supervisor might be jammed, the program appears to allow for the possibility that the drones could be delegated the authority to engage pop-up threats on their own.

In fact, CODE even hints at one way that collaborative autonomy might aid in target identification. Program documents list one of the advantages of collaboration as "providing multi-modal sensors and diverse observation angles to improve target identification." Historically, automatic target recognition (ATR) algorithms have not been good enough to trust with autonomous engagements. This poor quality of ATR algorithms could be compensated for by bringing together multiple different sensors to improve the confidence in target identification or by viewing a target from multiple angles, building a more complete picture. One of the CODE videos actually shows this, with air vehicles viewing the target from multiple directions and sharing data. Whether target identification could be improved enough to allow for autonomous engagements is unclear, but if CODE is successful, DoD will have to confront the question of whether to authorize autonomous weapons.

THE DEPARTMENT OF MAD SCIENTISTS

At the heart of many of these projects is the Defense Advanced Research Projects Agency (DARPA), or what writer Michael Belfiore called "the Department of Mad Scientists." DARPA, originally called ARPA, the Advanced Research Projects Agency, was founded in 1958 by President Eisenhower in response to Sputnik. DARPA's mission is to prevent "strategic surprise." The United States was surprised and shaken by the Soviet Union's launch of Sputnik. The small metal ball hurtling through space overhead was a wake-up call to the reality that the Soviet Union could now launch intercontinental ballistic missiles that could hit anywhere in the United States. In response, President Eisenhower created two organizations to develop breakthrough technologies, the National Aeronautics and Space Administration (NASA) and ARPA. While NASA had the mission of winning the space race, ARPA had a more fundamental mission of investing in high-risk, high-reward technologies so the United States would never again be surprised by a competitor.

To achieve its mission, DARPA has a unique culture and organization distinct from the rest of the military-industrial complex. DARPA only invests in projects that are "DARPA hard," challenging technology problems that others might deem impossible. Sometimes, these bets don't pan out. DARPA has a mantra of "fail fast" so that if projects fail, they do so before investing massive resources. Sometimes, however, these investments in game-changing technologies pay huge dividends. Over the past five decades, DARPA has time and again laid the seeds for disruptive technologies that have given the United States decisive advantages. Out of ARPA came ARPANET, an early computer network that later developed into the internet. DARPA helped develop basic technologies that underpin the global positioning system (GPS). DARPA funded the first-ever stealth combat aircraft, HAVE Blue, which led to the F-117 stealth fighter. And DARPA has consistently advanced the horizons of artificial intelligence and robotics.

DARPA rarely builds completed weapon systems. Its projects are small, focused efforts to solve extremely hard problems, such as CODE's efforts to get air vehicles to collaborate autonomously. Stuart Russell said that he found these projects concerning because, from his perspective, they seemed to indicate that the United States was expecting to be in a position to deploy autonomous weapons at a future date. Was that, in fact, their intention, or was that simply an inevitability of the technology? If projects like CODE were successful, did DARPA intend to turn the key to full auto or was the intention to always keep a human in the loop?

It was clear that if I was going to understand the future of autonomous weapons, I would need to talk to DARPA.

5

INSIDE THE PUZZLE PALACE

IS THE PENTAGON BUILDING AUTONOMOUS WEAPONS?

DARPA sits in a nondescript office building in Ballston, Virginia, just a few miles from the Pentagon. From the outside, it doesn't look like a "Department of Mad Scientists." It looks like just another glass office building, with no hint of the wild-eyed ideas bubbling inside.

Once you're inside DARPA's spacious lobby, the organization's gravitas takes hold. Above the visitors' desk on the marble wall, raised metal letters that are both simple and futuristic announce: DEFENSE ADVANCED RESEARCH PROJECTS AGENCY. Nothing else. No motto or logo or shield. The organization's confidence is apparent. The words seem to say, "the future is being made here."

As I wait in the lobby, I watch a wall of video monitors announce DARPA's latest project to go public: the awkwardly named Anti-Submarine Warfare (ASW) Continuous Trail Unmanned Vessel (ACTUV). The ship's christened name, Sea Hunter, is catchier. The project is classic DARPA—not only game-changing, but paradigm-bending: the Sea Hunter is an entirely unmanned ship. Sleek and angular, it looks like something time-warped in from the future. With a long, narrow hull and two outriggers, the Sea Hunter carves the oceans like a three-pointed dagger, tracking enemy submarines. At the ship's christening, Deputy Secretary of Defense Bob Work compared it to a Klingon Bird of Prey from Star Trek.

There are no weapons on board the Sea Hunter, for now. There should be no mistake, however: the Sea Hunter is a warship. Work called it a "fighting ship," part of the Navy's future "human machine collaborative battle fleet." At $2 million apiece, the Sea Hunter is a fraction of the cost of a new $1.6-billion Arleigh Burke destroyer. The low price allows the Navy to purchase scores of the sub-hunting ships on the cheap. Work laid out his vision for flotillas of Sea Hunters roaming the seas:

You can imagine anti-submarine warfare pickets, you can imagine anti-submarine warfare wolfpacks, you can imagine mine warfare flotillas, you can imagine distributive anti-surface warfare surface action groups ... We might be able to put a six pack or a four pack of missiles on them. Now imagine 50 of these distributed and operating together under the hands of a flotilla commander, and this is really something.

Like many other robotic systems, the Sea Hunter can navigate autonomously and might someday be armed. There is no indication that DoD has any intention of authorizing autonomous weapons engagements. Nevertheless, the video on DARPA's lobby wall is a reminder that the robotics revolution continues at a breakneck pace.

BEHIND THE CURTAIN: INSIDE DARPA'S TACTICAL TECHNOLOGY OFFICE

DARPA is organized into six departments focusing on different technology areas: biology, information science, microelectronics, basic sciences, strategic technologies, and tactical technologies. CODE, FLA, LRASM, and the Sea Hunter fall into DARPA's Tactical Technology Office (TTO), the division that builds experimental vehicles, ships, airplanes, and spacecraft. Other TTO projects include the XS-1 Experimental Spaceplane, designed to fly to the edge of space and back; the Blue Wolf undersea robotic vehicle; an R2-D2-like robotic copilot for aircraft called ALIAS; the Mach 20 Falcon Hypersonic Technology Vehicle, which flies fast enough to zip from New York to Los Angeles in 12 minutes; and the Vulture program to build an ultra-long endurance drone that can stay in the air for up to five years without refueling. Mad science, indeed.

TTO's offices look like a child's dream toy room. Littered around the offices are models and even some actual prototype pieces of hardware from past TTO projects—missiles, robots, and stealth aircraft. I can't help but wonder what TTO is building today that will be the stealth of tomorrow.

Bradford Tousley, TTO's director, graciously agreed to meet with me to discuss CODE and other projects. Tousley began his government career as an Army armor officer during the Cold War. His first tour was in an armored cavalry unit on the German border, being ready for a Soviet invasion that might kick off World War III. Later in his career, when the Army sent him back for a secondary education, Tousley earned a doctorate in electrical engineering. His career shifted from front-line combat units to research and development in lasers and optics, working to ensure the U.S. military had the best possible technology. Tousley's career has covered multiple stints at DARPA as well as time in the intelligence community on classified satellite payloads, so he has a breadth of understanding in technology beyond merely robotics.

Tousley pointed out that DARPA was founded in response to the strategic surprise of Sputnik: "DARPA's fundamental mission is unchanged: Enabling pivotal early investments for breakthrough capabilities for national security to achieve or prevent strategic surprise." Inside DARPA, they weigh these questions heavily. "Within the agency, we talk about every single program we begin and we have spirited discussions. We talk about the pros and cons. Why? How far are we willing to go?" Tousley made clear, however, that answering those questions isn't DARPA's job. "Those are fundamental policy and concept and military employment considerations" for others to decide. "Our fundamental job is to take that technical question off the table. It's our job to make the investments to show the capabilities can exist" to give the warfighter options. In other words, to prevent another Sputnik.

Why not? If machines improved enough to reliably take out targets on their own, what was the role for humans in warfare? Despite his willingness to push the boundaries of technology, Tousley still saw humans in command of the mission: "That final decision is with humans, period." That might not mean requiring human authorization for every single target, but autonomous weapons would still operate under human direction, hunting and attacking targets at the direction of a human commander. At least for the foreseeable future, Tousley explained, humans were better than machines at identifying anomalies and reacting to unforeseen events. This meant that keeping humans involved at the mission level was critical to understand the broader context and make decisions. "Until the machine processors equal or surpass humans at making abstract decisions, there's always going to be mission command. There's always going to be humans in the loop, on the loop—whatever you want to call it."

“Thatfinaldecisioniswithhumans,period.”Thatmightnotmeanrequiring humanauthorizationforeverysingletarget,butautonomousweaponswouldstill operateunderhumandirection,huntingandattackingtargetsatthedirectionofa human commander. At least for the foreseeable future, Tousley explained, humans were better than machines at identifying anomalies and reacting to unforeseenevents.Thismeantthatkeepinghumansinvolvedatthemissionlevel wascriticaltounderstandthebroadercontextandmakedecisions.“Untilthe machine processors equal or surpass humans at making abstract decisions, there’s always going to be mission command. There’s always going to be humansintheloop,ontheloop—whateveryouwanttocallit.” Tousleypaintedapictureformeofwhatthismightlooklikeinafuture conflict:“Groupsofplatformsthatareunmannedthatyouarewillingtoattrit [accept some losses] may do extremely well in an anti-access air defense

HowdoItakethoseplatformsandabunchofothersandknit

environment

themtogetherinarchitecturesthathavemannedandunmannedsystemsstriking

targetsinacongestedandcontestedenvironment?Youneedthatknittedsystem

becauseyou’regoingtobeGPS-jammed;communicationsaregoingtobegoing

inandout;you’regoingtohaveairdefensesshootingdownassets,mannedand

unmanned.Inordertogetinandstrikecriticaltargets,tocontrolthat[anti-

access] environment, you’re going to have to have a system-of-systems architecturethattakesadvantageofmannedandunmannedsystemsatdifferent rangeswithsomeamountoffidelityintheabilityofthemunitionbyitselfto identifythetarget—couldbeelectronically,couldbeoptically,couldbeinfrared, couldbe[signalsintelligence],couldbedifferentwaystoidentifythetarget.So that system-of-systems architecture is going to be necessary to knit it all together.”

Militaries especially need autonomy in electronic warfare. "We're using physical machines and electronics, and the electronics themselves are becoming machines that operate at machine speed ... I need the cognitive electronic warfare to adapt in microseconds ... If I have radars trying to jam other radars but they're frequency hopping [rapidly changing radio frequencies] back and forth, I've got to track with it. So [DARPA's Microsystems Technology Office] is thinking about, how do I operate at machine speed to allow these machines to conduct their functions?"

Tousley compared the challenge of cognitive electronic warfare to Google's go-playing AlphaGo program. What happens when that program plays another version of AlphaGo at "machine speed?" He explained, "As humans ascend to the higher-level mission command and I've got machines doing more of that targeting function, those machines are going to be challenged by machines on the adversary's side and a human can't respond to that. It's got to be machines responding to machines ... machine on machine." Humans, therefore, shift into a "monitoring" role, watching these systems and intervening, if necessary. In fact, Tousley argues that a difficult question will be whether humans should intervene in these machine-on-machine contests, particularly in cyberspace and electronic warfare where the pace of interactions will far exceed human reaction times.

I pointed out that having a human involved in a monitoring role still implies some degree of connectivity, which might be difficult in a contested environment with jamming. Tousley was unconcerned. "We expect that there will be jamming and communications denial going on, but it won't be necessarily everywhere, all the time," he said. "It's one thing to jam my communication link over 1,000 miles, it's another thing to jam two missiles that are talking in flight that maybe three hundred meters apart flying in formation. That's one of the trends of the Third Offset, that ..." Reliable communications in contested areas, even short range, would still permit a human being to be involved, at least in some capacity.

So, what role would that person play? Would this person need to authorize every target before engagement, or would human control sit at a higher level? "I think that will be a rule of engagement-dependent decision," Tousley said. "In an extremely hot peer-on-peer conflict, the rules of engagement may be more relaxed ... If things are really hot and heavy, you're going to rely on the fact that you built some of that autonomous capability in there." Still, even in this intense battlefield environment, he attested, the human plays the important role of overseeing the combat action. "But you still want some low data rate" to keep a person involved.

It took me a while to realize that Tousley wasn't shrugging off my questions about whether the human would be required to authorize each target because he was being evasive or trying to conceal a secret program; it was because he genuinely didn't see the issue the same way. Automation had been increasing in weapons for decades—from Tousley's perspective, programs like CODE were merely the next step. Humans would remain involved in lethal decision-making, albeit at a higher level overseeing and directing the combat action. The precise details of how much freedom an autonomous system might be granted to choose its own targets and in which situations wasn't his primary concern. Those were questions for military commanders to address. His job as a researcher was to, as he put it, "take that technical question off the table." His job was to build the options. That meant building swarms of autonomous systems that could go into a contested area and conduct a mission with as minimal human supervision as possible. It also meant building in resilient communications so that humans could have as much bandwidth and connectivity to oversee and direct the autonomous systems as possible. How exactly those technologies were implemented—which specific decisions were retained for the human and which were delegated to the machine—wasn't his call to make.

Tousley acknowledged that delegating lethal decision-making came with risks. "If [CODE] enables software that can enable a swarm to execute a mission, would that same swarm be able to execute a mission against the wrong target? Yeah, that is a possibility. We don't want that to happen. We want to build in all the fail-safe systems possible." For this reason, his number-one concern with autonomous systems was actually test and evaluation: "What I worry about the most is our ability to effectively test these systems to the point that we can quantify that we trust them."

Trust is essential to commanders being willing to employ autonomous systems. "Unless the combatant commander feels that that autonomous system is going to execute the mission with the trust that he or she expects, they'll never deploy it in the first place." Establishing that trust was all about test and evaluation, which could mean putting an autonomous system through millions of computer simulations to test its behavior. Even still, testing all of the possible situations an autonomous system might encounter and its potential behaviors in response could be very difficult. "One of the concerns I have," he said, "is that the technology for autonomy and the technology for human-machine integration and understanding is going to far surpass our ability to test it ... That worries me."

TARGET RECOGNITION AND ADAPTION IN CONTESTED ENVIRONMENTS (TRACE)

Tousley declined to comment on another DARPA program, Target Recognition and Adaption in Contested Environments (TRACE), because it fell under a different department he wasn't responsible for. And although DARPA was incredibly open and helpful throughout the research for this book, the agency declined to comment on TRACE beyond publicly available information. If there's one program that seems to be a linchpin for enabling autonomous weapons, it's TRACE. The CODE project aims to compensate for poor automatic target recognition (ATR) algorithms by leveraging cooperative autonomy. TRACE aims to improve ATR algorithms directly.

TRACE's project description explains the problem:

In a target-dense environment, the adversary has the advantage of using sophisticated decoys and background traffic to degrade the effectiveness of existing automatic target recognition (ATR) solutions ... the false-alarm rate of both human and machine-based radar image recognition is unacceptably high. Existing ATR algorithms also require impractically large computing resources for airborne applications.

TRACE's aim is to overcome these problems and "develop algorithms and techniques that rapidly and accurately identify military targets using radar sensors on manned and unmanned tactical platforms." In short, TRACE's goal is to solve the ATR problem.

To understand just how difficult ATR is—and how game-changing TRACE would be if successful—a brief survey of sensing technologies is in order. Broadly speaking, military targets can be grouped into two categories: "cooperative" and "non-cooperative" targets.

Cooperative targets are those that are actively emitting a signal, which makes them easier to detect. For example, radars, when turned on, emit energy into the electromagnetic spectrum. Radars "see" by observing the reflected energy from their signal. This also means the radar is broadcasting its own position, however. Enemies looking to target and destroy the radar can simply home in on the source of the electromagnetic energy. This is how simple autonomous weapons like the Harpy find radars. They can use passive sensors to simply wait and listen for the cooperative target (the enemy radar) to broadcast its position, and then home in on the signal to destroy the radar.

Non-cooperative targets are those that aren't broadcasting their location. Examples of non-cooperative targets could be ships, radars, or aircraft operating with their radars turned off; submarines running silently; or ground vehicles such as tanks, artillery, or mobile missile launchers. To find non-cooperative targets, active sensors are needed to send signals out into the environment to find targets. Radar and sonar are examples of active sensors; radar sends out electromagnetic energy and sonar sends out sound waves. Active sensors then observe the reflected energy and attempt to discern potential targets from the random noise of background clutter in the environment. Radar "sees" reflected electromagnetic energy and sonar "hears" reflected sound waves.

Militaries are therefore like two adversaries stumbling around in the dark, each listening and peering fervently into the darkness to hear and see the other while remaining hidden themselves. Our eyes are passive sensors; they simply receive light. In the darkness, however, an external source of light like a flashlight is needed. Using a flashlight gives away one's own position, though, making one a "cooperative target" for the enemy. In this contest of hiding and finding, zeroing in on the enemy's cooperative targets is like finding a person waving a flashlight around in the darkness. It isn't hard; the person waving the flashlight is going to stand out. Finding the non-cooperative targets who keep their flashlights turned off can be very, very tricky.

When there is little background clutter, objects can be found relatively easily through active sensing. Ships and aircraft stand out easily against their background—a flat ocean and an empty sky. They stand out like a person standing in an open field. A quick scan with even a dim light will pick out a person standing in the open, although discerning friend from foe can be difficult. In cluttered environments, however, even finding targets in the first place can be hard. Moving targets can be discerned via Doppler shifting—essentially the same method that police use to detect speeding vehicles. Moving objects shift the frequency of the return radar signal, making them stand out against a stationary background. Stationary targets in cluttered environments can be as hard to see as a deer hiding in the woods, though. Even with a light shined directly on them, they might not be noticed.
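The Doppler relationship behind that technique is simple to state. The numbers below are generic illustrations (an X-band radar and a vehicle-speed target), not figures for any particular military system.

```python
# Two-way Doppler shift for a radar echo: f_d = 2 * v_radial * f0 / c.
# Generic illustrative numbers, not any specific system.

C = 3.0e8            # speed of light, m/s

def doppler_shift_hz(radial_velocity_ms, radar_frequency_hz):
    return 2.0 * radial_velocity_ms * radar_frequency_hz / C

# A vehicle closing at 15 m/s seen by a 10 GHz (X-band) radar:
print(doppler_shift_hz(15, 10e9))   # ~1000 Hz shift against a stationary background
```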

Humans have challenges seeing stationary, camouflaged objects and human visual cognitive processing is incredibly complex. We take for granted how computationally difficult it is to see objects that blend into the background. While radars and sonars can "see" and "hear" in frequencies that humans are incapable of, military ATR is nowhere near as good as humans at identifying objects amid clutter.

Militaries currently sense many non-cooperative targets using a technique called synthetic aperture radar, or SAR. A vehicle, typically an aircraft, flies in a line past a target and sends out a burst of radar pulses as the aircraft moves. This allows the aircraft to create the same effect as having an array of sensors, a powerful technique that enhances image resolution. The result is sometimes grainy images composed of small dots, like a black-and-white pointillist painting. While SAR images are generally not as sharp as images from electro-optical or infrared cameras, SAR is a powerful tool because radar can penetrate through clouds, allowing all-weather surveillance. Building algorithms that can automatically identify SAR images is extremely difficult, however. Grainy SAR images of tanks, artillery, or airplanes parked on a runway often push the limits of human abilities to recognize objects, and historically ATR algorithms have fallen far short of human abilities.

The poor performance of military ATR stands in stark contrast to recent advances in computer vision. Artificial intelligence has historically struggled with object recognition and perception, but the field has seen rapid gains recently due to deep learning. Deep learning uses neural networks, a type of AI approach that is analogous to biological neurons in animal brains. Artificial neural networks don't directly mimic biology, but are inspired by it. Rather than follow a script of if-then steps for how to perform a task, neural networks work based on the strength of connections within a network. Thousands or even millions of data samples are fed into the network and the weights of various connections between nodes in the network are constantly adjusted to "train" the network on the data. In this way, neural networks "learn." Network settings are refined until the correct output, such as the correct image category (for example, cat, lamp, car) is achieved.


Deep Neural Network

Deep neural networks are those that have multiple "hidden" layers between the input and output, and have proven to be a very powerful tool for machine learning. Adding more layers in the network between the input data and output allows for a much greater complexity of the network, enabling the network to handle more complex tasks. Some deep neural nets have over a hundred layers.

This complexity is, it turns out, essential for image recognition, and deep neural nets have made tremendous progress. In 2015, a team of researchers from Microsoft announced that they had created a deep neural network that for the first time surpassed human performance in visual object identification. Using a standard test dataset of 150,000 images, Microsoft's network achieved an error rate of only 4.94 percent, narrowly edging out humans, who have an estimated 5.1 percent error rate. A few months later, they improved on their own performance, achieving a 3.57 percent error rate with a 152-layer neural net.
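To make the idea of "training by adjusting connection weights" concrete, the sketch below builds a tiny deep network from scratch and trains it on a toy problem (learning the XOR function). It is a minimal illustration of the technique, nothing like the image-recognition networks described here; the layer sizes and step counts are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR function, a classic task a single layer cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A small network with two hidden layers ("deep" only in miniature).
sizes = [2, 8, 8, 1]
weights = [rng.normal(0, 1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros((1, b)) for b in sizes[1:]]

learning_rate = 1.0
for step in range(10000):
    # Forward pass: each layer's output feeds the next.
    activations = [X]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))

    # Backward pass: nudge every weight to reduce the output error.
    delta = (activations[-1] - y) * activations[-1] * (1 - activations[-1])
    for i in reversed(range(len(weights))):
        grad_W = activations[i].T @ delta
        grad_b = delta.sum(axis=0, keepdims=True)
        if i > 0:
            delta = (delta @ weights[i].T) * activations[i] * (1 - activations[i])
        weights[i] -= learning_rate * grad_W
        biases[i] -= learning_rate * grad_b

print(np.round(activations[-1], 2).ravel())  # should approach [0, 1, 1, 0]
```

The same basic recipe, applied with far larger networks and millions of labeled images, is what drives the recognition results described above.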

TRACE intends to harness these advances and others in machine learning to build better ATR algorithms. ATR algorithms that performed on par with or better than humans in identifying non-cooperative targets such as tanks, mobile missile launchers, or artillery would be a game changer in terms of finding and destroying enemy targets. If the resulting target recognition system was of sufficiently low power to be located on board the missile or drone itself, human authorization would not be required, at least from a purely technical point of view. The technology would enable weapons to hunt and destroy targets all on their own.

Regardless of whether DARPA was intending to build autonomous weapons, it was clear that programs like CODE and TRACE were putting in place the building blocks that would enable them in the future. Tousley's view was that it wasn't DARPA's call whether to authorize that next fateful step across the line to weapons that would choose their own targets. But if it wasn't DARPA's call whether to build autonomous weapons, then whose call was it?

6

CROSSING THE THRESHOLD

APPROVING AUTONOMOUS WEAPONS

The Department of Defense has an official policy on the role of autonomy in weapons, DoD Directive 3000.09, "Autonomy in Weapon Systems." (Disclosure: While at DoD, I led the working group that drafted the policy.) Signed in November 2012, the directive is published online so anyone can read it.

The directive includes some general language on principles for design of semiautonomous and autonomous systems, such as realistic test and evaluation and understandable human-machine interfaces. The meat of the policy, however, is the delineation of three classes of systems that get the "green light" for approval in the policy. These are: (1) semiautonomous weapons, such as homing munitions; (2) defensive supervised autonomous weapons, such as the ship-based Aegis weapons system; and (3) nonlethal, nonkinetic autonomous weapons, such as electronic warfare to jam enemy radars. These three types of autonomous systems are in wide use today. The policy essentially says to developers, "If you want to build a weapon that uses autonomy in ways consistent with existing practices, you're free to do so." Normal acquisition rules apply, but those types of systems do not require any additional approval.

Any future weapon system that would use autonomy in a novel way outside of those three categories gets a "yellow light." Those systems need to be reviewed before beginning formal development (essentially the point at which large sums of money would be spent) and again before fielding. The policy outlines who participates in the review process—the senior defense civilian officials for policy and acquisitions and the chairman of the Joint Chiefs of Staff—as well as the criteria for review. The criteria are lengthy, but predominantly focus on test and evaluation for autonomous systems to ensure they behave as intended—the same concern Tousley expressed. The stated purpose of the policy is to "minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements." In other words, to minimize the chances of armed robots running amok.

Lethal autonomous weapons are not prohibited by the policy directive. Instead, the policy provides a process by which new uses of autonomy could be reviewed by relevant officials before deployment. The policy helps ensure that if DoD were to build autonomous weapons, they wouldn't be developed and deployed without sufficient oversight, but it doesn't help answer the question of whether DoD might actually approve such systems. On that question, the policy is silent. All the policy says is that if an autonomous weapon met all of the criteria, such as reliability under realistic conditions, then in principle it could be authorized.
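Read as a decision procedure, the directive's structure is roughly the triage sketched below. This is only my paraphrase of the process as described in this chapter, condensed into code for clarity; the directive's actual categories and criteria are more detailed.

```python
# Rough paraphrase of the review process described above -- illustrative only.

GREEN_LIGHT_CATEGORIES = {
    "semiautonomous",                      # e.g., homing munitions
    "defensive supervised autonomous",     # e.g., ship-based defensive systems
    "nonlethal nonkinetic autonomous",     # e.g., electronic warfare
}

def review_path(weapon_category):
    """Return the approval path for a proposed use of autonomy."""
    if weapon_category in GREEN_LIGHT_CATEGORIES:
        return "Normal acquisition rules; no additional approval required."
    # Novel uses of autonomy get senior review twice: before formal
    # development and again before fielding.
    return ("Senior review (policy, acquisition, Joint Chiefs) before "
            "formal development and again before fielding.")

print(review_path("semiautonomous"))
print(review_path("lethal autonomous"))
```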

GIVING THE GREEN LIGHT TO AUTONOMOUS WEAPONS

But would it be authorized? DARPA programs are intended to explore the art of the possible, but that doesn't mean that DoD would necessarily turn those experimental projects into operational weapon systems. To better understand whether the Pentagon might actually approve autonomous weapons, I sat down with then-Pentagon acquisition chief, Under Secretary of Defense Frank Kendall. As the under secretary of defense for acquisition, technology and logistics, Kendall was the Pentagon's chief technologist and weapons buyer under the Obama Administration. When it came to major weapons systems like the X-47B or LRASM, the decision whether or not to move forward was in Kendall's hands. In the process laid out under the DoD Directive, Kendall was one of three senior officials, along with the under secretary for policy and the chairman of the Joint Chiefs, who all had to agree in order to authorize developing an autonomous weapon.

Kendall has a unique background among defense technologists. In addition to a distinguished career across the defense technology enterprise, serving in a variety of roles from vice president of a major defense firm to several mid-level bureaucratic jobs within DoD, Kendall also has worked pro bono as a human rights lawyer. He has worked with Amnesty International, Human Rights First, and other human rights groups, including as an observer at the U.S. prison at Guantánamo Bay. Given his background, I was hopeful that Kendall might be able to bridge the gap between technology and policy.

Kendall made clear, for starters, that there had never been a weapon autonomous enough even to trigger the policy review. "We haven't had anything that was even remotely close to autonomously lethal." If he were put in that position, Kendall said his chief concerns would be ensuring that it complied with the laws of war and that the weapon allowed for "appropriate human judgment," a phrase that appears in the policy directive. Kendall admitted those terms weren't defined, but conversation with him began to elucidate his thinking.

Kendall started his career as an Army air defender during the Cold War, where he learned the value of automation firsthand. "We had an automatic mode for the Hawk system that we never used, but I could see in an extreme situation where you'd turn it on, because you just couldn't do things fast enough otherwise," he said. When you have "fractions of a second" to decide—that's a role for machines.

Kendall said that automatic target recognition and machine learning were improving rapidly. As they improve, it should become possible for the machine to select its own targets for engagement. In some settings, such as taking out an enemy radar, he thought it could be done "relatively soon."

This raises tricky questions. "Where do you want the human intervention to be?" he asked. "Do you want it to be the actual act of employing the lethality? Do you want it to be the acceptance of the rules that you set for identifying something as hostile?" Kendall didn't have the answers. "I think we're going to have to sort through all that."

One important factor was the context. "Are you just driving down the street or are you actually in a war, or you're in an insurgency? The context matters." In some settings, using autonomy to select and engage targets might be appropriate. In others, it might not.

Kendall saw using an autonomous weapon to target enemy radars as fairly straightforward and something he didn't see many people objecting to. There were other examples that pushed the boundaries. Kendall said on a trip to Israel, his hosts from the Israel Defense Forces had him sit in a Merkava tank that was outfitted with the Trophy active protection system. The Israelis fired a rocket propelled grenade near the tank ("offset a few meters," he said) and the Trophy system intercepted it automatically. "But suppose I also wanted to shoot back at ... wherever the bullet had come from?" he asked. "You can automate that, right? That's protecting me, but it's the use of that weapon in a way which could be lethal to whoever, you know, was in the line of fire when I fire." He pointed out that automating a return-fire response might prevent a second shot, saving lives. Kendall acknowledged that had risks, but there were risks in not doing it as well. "How much do we want to put our own people at risk by not allowing them to use this technology? That's the other side of the equation."

Things become especially difficult if the machine is better than the person, which, at some point, will happen. "I think at that point, we'll have a tough decision to make as to how we want to go with that." Kendall saw value in keeping a human in the loop as a backup, but, "What if it's a situation where there isn't that time? Then aren't you better off to let the machine do it? You know, I think that's a reasonable question to ask."

I asked him for his answer to the question—after all, he was the person who would decide in DoD. But he didn't know.

"I don't think we've decided that yet," he said. "I think that's a question we'll have to confront when we get to where technology supports it."

Kendall wasn't worried, though. "I think we're a long way away from the Terminator idea, the killer robots let loose on the battlefield idea. I don't think we're anywhere near that and I don't worry too much about that." Kendall expressed confidence in how the United States would address this technology. "I'm in my job because I find my job compatible with being a human rights lawyer. I think the United States is a country which has high values and it operates consistent with those values ... I'm confident that whatever we do, we're going to start from the premise that we're going to follow the laws of war and obey them and we're going to follow humanitarian principles and obey them."

Kendall was worried about other countries, but he was most concerned about what terrorists might do with commercially available technology. "Automation and artificial intelligence are one of the areas where the commercial developments I think dwarf the military investments in R&D. They're creating capabilities that can easily be picked up and applied for military purposes." As one example, he asked, "When [ISIS] doesn't have to put a person in that car and can just send it out on its own, that's a problem for us, right?"

THE REVOLUTIONARY

Kendall’s boss was Deputy Secretary of Defense Bob Work, the Pentagon’s number-two bureaucrat—and DoD’s number-one robot evangelist. As deputy

secretary from 2014–17, Work was the driving force behind the Pentagon’s ThirdOffsetStrategyanditsfocusonhuman-machineteaming.Inhisvisionof futureconflicts,AIwillworkinconcertwithhumansinhuman-machineteams. Thisblendedhuman-plus-machineapproachcouldtakemanyforms.Humans couldbeenhancedthroughexoskeletonsuitsandaugmentedreality,enabledby machineintelligence.AIsystemscouldhelphumansmakedecisions,muchlike in“centaurchess,”wherehumansareassistedbychessprogramsthatanalyze possiblemoves.Insomecases,AIsystemsmayperformtasksontheirownwith humanoversight,particularlywhenspeedisanadvantage,similartoautomated stock trading. Future weapons will be more intelligent and cooperative, swarmingadversaries. Collectively, Work argues these advances may lead to a “revolution” in warfare. Revolutions in warfare, Work explained in a 2014 monograph, are

“periodsofsharp,discontinuouschange[inwhich]

areoftenupendedbynewmoredominantones,leavingoldwaysofwarfare behind.” Indefensecircles,thisisaboldclaim.TheU.S.defensecommunityofthe

late1990sandearly2000sbecameenamoredwiththepotentialofinformation

technology to lead to a revolution in warfare. Visions of “information dominance” and “network-centric warfare” foundered in the mountains of AfghanistanandthedustystreetsofIraqastheUnitedStatesbecamemiredin messy counterinsurgency wars. High-tech investments in next-generation

weaponsystemssuchasF-22fighterjetswereoverpricedorsimplyirrelevant

forfindingandtrackinginsurgentsorwinningtheheartsandmindsofcivilian populations.Andyet Theinformationrevolutioncontinued,leadingtomoreadvancedcomputer processorsandevermoresophisticatedmachineintelligence.Andevenwhile warfare in the information age might not have unfolded the way Pentagon futurists might have envisioned, the reality is information technology dramaticallyshapedhowtheUnitedStatesfoughtitscounterinsurgencywars. Informationbecamethedominantdriverofcounternetworkoperationsasthe UnitedStatessoughttofindinsurgentshidingamongcivilians,likefindinga needleinastackofneedles. Sweeping technological changes like the industrial revolution or the informationrevolutionunfoldinstagesovertime,overthecourseofdecadesor generations. As they do, they inevitably have profound effects on warfare. Technologies like the internal-combustion engine that powered civilian automobilesandairplanesintheindustrialrevolutionledtotanksandmilitary

existingmilitaryregimes

aircraft.Tanksandairplanes,alongwithotherindustrial-ageweaponrysuchas machineguns,profoundlychangedWorldWarIandWorldWarII. WorkissteepedinmilitaryhistoryandastudentofPentagonfuturistAndy Marshall,whofordecadesranDoD’sOfficeofNetAssessmentandchampioned the idea that another revolution in warfare was unfolding today. Work understandstheconsequencesoffallingbehindduringperiodsofrevolutionary change.Militariescanlosebattlesandevenwars.Empirescanfall,neverto

recover.In1588,themightySpanishArmadawasdefeatedbytheBritish,who

hadmoreexpertlyexploitedtherevolutionarytechnologyoftheday:cannons.In

theinterwarperiodbetweenWorldWarIandWorldWarII,Germanywasmore

successfulincapitalizingoninnovationsinaircraft,tanks,andradiotechnology

andtheresultwastheblitzkrieg—andthefallofFrance.Thebattlefieldisan

unforgivingenvironment.Whennewtechnologiesupendoldwaysoffighting,

militariesandnationsdon’toftengetsecondchancestogetitright.

If Work is right, and a revolution in warfare is underway driven in part by machine intelligence, then there is an imperative to invest heavily in AI, robotics, and automation. The consequences of falling behind could be disastrous for the United States. The industrial revolution led to machines that were stronger than humans, and the victors were those who best capitalized on that technology. Today's information revolution is leading to machines that are smarter and faster than humans. Tomorrow's victors will be those who best exploit AI.

Right now, AI systems can outperform humans in narrow tasks but still fall short of humans in general intelligence, which is why Work advocates human-machine teaming. Such teaming allows the best of both human and machine intelligence. AI systems can be used for specific, tailored tasks and for their advantages in speed while humans can understand the broader context and adapt to novel situations. There are limitations to this approach. In situations where the advantages in speed are overwhelming, delegating authority entirely to the machine is preferable.

When it comes to lethal force, in a March 2016 interview, Work stated, "We will not delegate lethal authority for a machine to make a decision." He quickly caveated that statement a moment later, however, adding, "The only time we will delegate a machine authority is in things that go faster than human reaction time, like cyber or electronic warfare." In other words, we won't delegate lethal authority to a machine unless we have to. In the same interview, Work said, "We might be going up against a competitor that is more willing to delegate authority to machines than we are and as that competition unfolds, we'll have to make decisions about how to compete." How long before the tightening spiral of an ever-faster OODA loop forces that decision? Perhaps not long. A few weeks later in another interview, Work stated it was his belief that "within the next decade or decade and a half it's going to become clear when and where we delegate authority to machines." A principal concern of his was the fact that while in the United States we debate the "moral, political, legal, ethical" issues surrounding lethal autonomous weapons, "our potential competitors may not."

There was no question that if I was going to understand where the robotics revolution was heading, I needed to speak to Work. No single individual had more sway over the course of the U.S. military's investments in autonomy than he did, both by virtue of his official position in the bureaucracy as well as his unofficial position as the chief thought-leader on autonomy. Work may not be an engineer writing the code for the next generation of robotic systems, but his influence was even broader and deeper. Through his public statements and internal policies, Work was shaping the course of DoD's investments, big and small. He had championed the concept of human-machine teaming. How he framed the technology would influence what engineers across the defense enterprise chose to build. Work immediately agreed to an interview.

THE FUTURE OF LETHAL AUTONOMY

The Pentagon is an imposing structure. At 6.5 million square feet, it is one of the largest buildings in the world. Over 20,000 people enter the Pentagon every day to go to work. As I moved through the sea of visitors clearing security, I was reminded of the ubiquity of the robotics revolution. I heard the man in line behind me explain to Pentagon security that the mysterious item in his briefcase raising alarms in their x-ray scanners was a drone. "It's a UAV," he said. "A drone. I have clearance to bring it in," he added hastily.

The drones are literally everywhere, it would seem.

Work's office was in the famed E-ring where the Pentagon's top executives reside, and he was kind enough to take time out of his busy schedule to talk with me. I started with a simple question, one I had been searching to answer in vain in my research: Is the Department of Defense building autonomous weapons?

Underscoring the definitional problem, Work wanted to clarify what I meant by "autonomous weapon" before answering. I explained I was defining an autonomous weapon as one that could search for, select, and engage targets on its own. Work replied, "We, the United States, have had a lethal autonomous weapon, using your definition, since 1945: the Bat [radar-guided anti-ship bomb]." He said, "I would define it as a narrow lethal autonomous weapon in that the original targeting of the Japanese destroyer that we fired at was done by a Navy PBY maritime patrol aircraft ... they knew [the Japanese destroyer] was hostile—and then they launched the weapon. But the weapon itself made all of the decisions on the final engagement using an S-band radar seeker." Despite his use of the term "autonomous weapon" to describe a radar-guided homing munition, Work clarified he was comfortable with that use of autonomy. "I see absolutely no problem in those types of weapons. It was targeted on a specific capability by a man in the loop and all the autonomy was designed to do was do the terminal endgame engagement." He was also comfortable with how autonomy was used in a variety of modern weapons, from torpedoes to the Aegis ship combat system.

Painting a picture of the future, Work said, "We are moving to a world in which the autonomous weapons will have smart decision trees that will be completely preprogrammed by humans and completely targeted by humans. So let's say we fire a weapon at 150 nautical miles because our off-board sensors say a Russian battalion tactical group is operating in this area. We don't know exactly what of the battalion tactical group this weapon will kill, but we know that we're engaging an area where there are hostiles." Work explained that the missile itself, following its programming logic, might prioritize which targets to strike—tanks, artillery, or infantry fighting vehicles. "We're going to get to that level. And I see no problem in that," he said. "There's a whole variety of autonomous weapons that do end-game engagement decisions after they have been targeted and launched at a specific target or target area." (Here Work is using "autonomous weapon" to refer to fire-and-forget homing munitions.)

Loitering weapons, Work acknowledged, were qualitatively different. "The thing that people worry about is a weapon we fire at range and it loiters in the area and it decides when, where, how, and what to kill without anything other than the human launching it in the general direction." Work acknowledged that, regardless of the label used, these loitering munitions were qualitatively different than homing munitions that had to be launched at a specific target. But Work didn't see any problem with loitering munitions either. "People start to get nervous about that, but again, I don't worry about that at all." He said he didn't believe the United States would ever fire such a weapon into an area unless it had done the appropriate estimates for potential collateral damage. If, on the other hand, "we are relatively certain that there are no friendlies in the area: weapons free. Let the weapon decide."

These search-and-destroy weapons didn't bother Work, even if they were choosing their own targets, because they were still "narrow AI systems." These weapons would be "programmed for a certain effect against a certain type of target. We can tell them the priorities. We can even delegate authority to the weapon to determine how it executes endgame attack." With these weapons, there may be "a lot of prescribed decision trees, but the human is always firing it into a general area and we will do [collateral damage estimation] and we will say, 'Can we accept the risk that in this general area the weapon might go after a friendly?' And we will do the exact same determination that we have right now."

Work said the key question is, "What is your comfort level on target location error?" He explained, "If you are comfortable firing a weapon into an area in which the target location error is pretty big, you are starting to take more risks that it might go against an asset that might be a friendly asset or an allied asset or something like that ... So, really what's happening is because you can put so much more processing power onto the weapon itself, the [acceptable degree of] target location error is growing. And we will allow the weapon to search that area and figure out the endgame." An important factor is what else is in the environment and the acceptable level of collateral damage. "If you have real low collateral damage [requirements]," he said, "you're not going to fire a weapon into an area where the target location is so large that the chances of collateral damage go up."

In situations where that risk was acceptable, Work saw no problems with such weapons. "I hear people say, 'This is some terrible thing. We've got killer robots.' No we don't. Robots will only hit the targets that you program in. The human is still launching the weapon and specifying the type of targets to be engaged, even if the weapon is choosing the specific targets to attack within that wide area. There's always going to be a man or woman in the loop who's going to make the targeting decision," he said, even if that targeting decision was now at a higher level.

WorkcontrastedthesenarrowAIsystemswithartificialgeneralintelligence

(AGI),“wheretheAIisactuallymakingthesedecisionsonitsown.”Thisis

whereWorkwoulddrawtheline.“ThedangerisifyougetageneralAIsystem

anditcanrewriteitsowncode.That’sthedanger.Wedon’tseeeverputtingthat

muchAIpowerintoanygivenweapon.ButthatwouldbethedangerIthinkthat

peopleareworriedabout.WhathappensifSkynetrewritesitsowncodeand

says,‘humansaretheenemynow’?ButthatIthinkisvery,very,veryfarinthe

futurebecausegeneralAIhasn’tadvancedtothat.”Eveniftechnologydidget

So,reallywhat’shappeningisbecauseyoucanputso

willonlyhitthetargetsthatyouprogramin

there,Workwasnotsokeenonusingit.“Wewillbeextremelycarefulintrying

toputgeneralAIintoanautonomousweapon,”hesaid.“AsofthispointIcan’t

[that]makes

allthedecisionsonitsown.That’sjustnotthewaythatIwouldeverforeseethe UnitedStatespursuingthistechnology.[Ourapproach]isallaboutempowering thehumanandmakingsurethatthehumansinsidethebattlenetworkhastactical andoperationalovermatchagainsttheirenemies.” Work recognized that other countries may use AI technology differently. “PeoplearegoingtouseAIandautonomyinwaysthatsurpriseus,”hesaid. Other countries might deploy weapons that “decide who to attack, when to attack,howtoattack”allontheirown.Iftheydid,thenthatcouldchangethe U.S.calculus.“Theonlywaythatwewouldgodownthatpath,Ithink,isifit turns out our adversaries do and it turns out that we are at an operational disadvantagebecausethey’reoperatingatmachinespeedandwe’reoperatingat humanspeeds.Andthenwemighthavetorethinkourtheoryofthecase.”Work said that challenge is something he worries about. “The nature of the competition about how people use AI and autonomy is really going to be somethingthatwecannotcontrolandwecannottotallyforeseeatthispoint.”

gettoaplacewherewewouldeverlaunchageneralAIweapon

THE PAST AS A GUIDE TO THE FUTURE

Work forthrightly answered every question I put to him, but I still found myself leaving the interview unsatisfied. He had made clear that he was comfortable using narrow AI systems to perform the kinds of tasks we're doing today: endgame autonomy to confirm a target chosen by a human or defensive human-supervised autonomy like Aegis. He was comfortable with loitering weapons that might operate over a wider area or smarter munitions that could prioritize targets, but he continued to see humans playing a role in launching and directing those weapons. There were some technologies Work wasn't comfortable with—artificial general intelligence or "boot-strapping" systems that could modify their own code. But there was a wide swath of systems in between. What about an uninhabited combat aircraft that made its own targeting decisions? How much target error was acceptable? He simply didn't know. Those are questions future defense leaders would have to address.

To help shed light on how future leaders might answer those questions, I turned to Dr. Larry Schuette, director of research at the Office of Naval Research. Schuette is a career scientist with the Navy and has a doctorate in electrical engineering, so he understands the technology intimately. ONR has repeatedly been at the forefront of advancements in autonomy and robotics, and Schuette directs much of this research. He is also an avid student of history, so I hoped he could help me understand what the past might tell us about the shape of things to come.

As a researcher, Schuette made it clear to me that autonomous weapons are not an area of focus for ONR. There are a lot of areas where uninhabited and autonomous systems could have value, but his perspective was to focus on the mundane tasks. "I'm always looking for: what's the easiest thing with the highest return on investment that we could actually go do where people would thank us for doing it ... Let's do the easy stuff first." Schuette pointed to thankless jobs like tanking aircraft or cleaning up oil spills. "Be the trash barge ... Don't go after the hard missions ... The people would love you." His view was that even tackling these simple, unobjectionable missions was a big enough challenge. "I know that what is simple to imagine in science and technology isn't as simple to do."

Schuette also emphasized that he didn't see a compelling operational need for autonomous weapons. Today's model of "The man pushes a button and the weapon goes autonomous from there but the man makes the decision" was a "workable framework for some large fraction of what you would want to do with unmanned air, unmanned surface, unmanned underwater, unmanned ground vehicles ... I don't see much need in future warfare to get around that model," he said.

As a student of history, however, Schuette had a somewhat different perspective. His office looked like a naval museum, with old ship's logs scattered on the bookshelves and black-and-white photos of naval aviators on the walls. While speaking, Schuette would frequently leap out of his chair to grab a book about unrestricted submarine warfare or the Battle of Guadalcanal to punctuate his point. The historical examples weren't about autonomy; rather, they were about a broader pattern in warfare. "History is full of innovations and asymmetric responses," he said. In World War II, the Japanese were "amazed" at U.S. skill at naval surface gunfire. In response, they decided to fight at night, resulting in devastating nighttime naval surface action at the Battle of Guadalcanal. The lesson is that "the threat gets a vote." Citing Japanese innovations in long-range torpedoes, Schuette said, "We had not planned on fighting a torpedo war ... The Japanese had a different idea."

This dynamic of innovation and counter-innovation inevitably leads to surprises in warfare and can often change what militaries see as ethical or appropriate. "We've had these debates before about ethical use of X or Y," Schuette pointed out. He compared today's debates about autonomous weapons to debates in the U.S. Navy in the interwar period between World War I and World War II about unrestricted submarine warfare. "We went all of the twenties, all the thirties, talking about how unrestricted submarine warfare was a bad idea we would never do it. And when the shit hit the fan the first thing we did was begin executing unrestricted submarine warfare." Schuette grabbed a book off his shelf and quoted the order issued to all U.S. Navy ship and submarine commanders on December 7, 1941, just four and a half hours after the attack at Pearl Harbor:

EXECUTE AGAINST JAPAN UNRESTRICTED AIR AND SUBMARINE WARFARE

The lesson from history, Schuette said, was that "we are going to be violently opposed to autonomous robotic hunter-killer systems until we decide we can't live without them." When I asked him what he thought would be the decisive factor, he had a simple response: "Is it December eighth or December sixth?"

7

WORLD WAR R

ROBOTIC WEAPONS AROUND THE WORLD

The robotics revolution isn't American-made. It isn't even American-led. Countries around the world are pushing the envelope in autonomy, many further and faster than the United States. Conversations in U.S. research labs and the Pentagon's E-ring are only one factor influencing the future of autonomous weapons. Other nations get a vote too. What they do will influence how the technology develops, proliferates, and how other nations—including the United States—react.

The rapid proliferation of drones portends what is to come for increasingly autonomous systems. Drones have spread to nearly a hundred countries around the globe, as well as non-state groups such as Hamas, Hezbollah, ISIS, and Yemeni Houthi rebels. Armed drones are next. A growing number of countries have armed drones, including nations that are not major military powers such as South Africa, Nigeria, and Iraq.

Armed robots are also proliferating on the ground and at sea. South Korea has deployed a robot sentry gun to its border with North Korea. Israel has sent an armed robotic ground vehicle, the Guardium, on patrol near the Gaza border. Russia is building an array of ground combat robots and has plans for a robot tank. Even Shiite militias in Iraq have gotten in on the game, fielding an armed ground robot in 2015.

Armed Drone Proliferation
As of June 2017, sixteen countries possessed armed drones: China, Egypt, Iran, Iraq, Israel, Jordan, Kazakhstan, Myanmar, Nigeria, Pakistan, Saudi Arabia, Turkey, Turkmenistan, United Arab Emirates, the United Kingdom, and the United States. Some nations developed armed drones indigenously, while others acquired the technology from abroad. Over 90 percent of international armed drone transfers (shown on the map via arrows) have been from China.

Armed robots are heading to sea as well. Israel has also developed an armed uninhabited boat, the Protector, to patrol its coast. Singapore has purchased the Protector and deployed it for counterpiracy missions in the Straits of Malacca. Even Ecuador has an armed robot boat, the ESGRUM, produced entirely indigenously. Armed with a rifle and rocket launcher, the ESGRUM will patrol Ecuadorian waterways to counter pirates.

As in the United States, the key question will be whether these nations plan to cross the line to full autonomy. No nation has stated they plan to build autonomous weapons. Few have ruled them out either. Only twenty-two nations have said they support a ban on lethal autonomous weapons: Pakistan, Ecuador, Egypt, the Holy See, Cuba, Ghana, Bolivia, Palestine, Zimbabwe, Algeria, Costa Rica, Mexico, Chile, Nicaragua, Panama, Peru, Argentina, Venezuela, Guatemala, Brazil, Iraq, and Uganda (as of November 2017). None of these states are major military powers and some, such as Costa Rica or the Holy See, lack a military entirely.

One of the first areas where countries will be forced to grapple with the choice of whether to delegate lethal authority to the machine will be for uninhabited combat aircraft designed to operate in contested areas. Several nations are reportedly developing experimental combat drones similar to the X-47B, although for operation from land bases rather than aircraft carriers. These include the United Kingdom's Taranis, China's Sharp Sword, Russia's Skat, France's nEUROn, India's Aura, and a rumored unnamed Israeli stealth drone. Although these drones are likely designed to operate with protected communications links to human controllers, militaries will have to decide what actions they want the drone to carry out if (and when) communications are jammed. Restricting the drone's rules of engagement could mean giving up valuable military advantage, and few nations are being transparent about their plans.

Given that a handful of countries already possess the fully autonomous Harpy, it isn't a stretch to imagine them and others authorizing a similar level of autonomy with a recoverable drone. Whether countries are actually building those weapons today is more difficult to discern. If understanding what's happening inside the U.S. defense industry is difficult, peering behind the curtain of secret military projects around the globe is even harder. Are countries like Russia, China, the United Kingdom, and Israel building autonomous weapons? Or are they still keeping humans in the loop, walking right up to the line of autonomous weapons but not crossing it? Four high-profile international programs, a South Korean robot gun, a British missile, a British drone, and a Russian fleet of armed ground robots, show the difficulty in uncovering what nations around the globe are doing.

THE CURIOUS CASE OF THE AUTONOMOUS SENTRY BOT

South Korea's Samsung SGR-A1 robot is a powerful example of the challenge in discerning how much autonomy weapon systems have. The SGR-A1 is a stationary armed sentry robot designed to defend South Korea's border against North Korea. In 2007, when the robot was revealed, the electrical engineering magazine IEEE Spectrum reported it had a fully autonomous mode for engaging targets on its own. In an interview with the magazine, Samsung principal research engineer Myung Ho Yoo said, "the ultimate decision about shooting should be made by a human, not the robot." But the article made clear that Yoo's "should" was not a requirement, and that the robot did have a fully automatic option.

The story was picked up widely, with the SGR-A1 cited as an example of a real-world autonomous weapon by The Atlantic, the BBC, NBC, Popular Science, and The Verge. The SGR-A1 made Popular Science's list of "Scariest Ideas in Science," with PopSci asking, "WHY, GOD? WHY?" Several academic researchers conducting in-depth reports on military robotics similarly cited the SGR-A1 as fully autonomous.

In the face of this negative publicity, Samsung backpedaled, saying that in fact a human was required to be in the loop. In 2010, a spokesperson for Samsung clarified that "the robots, while having the capability of automatic surveillance, cannot automatically fire at detected foreign objects or figures." Samsung and the South Korean government have been tight-lipped about details, though, and one can understand why. The SGR-A1 is designed to defend South Korea's demilitarized zone along its border with North Korea, with whom South Korea is technically still at war. Few countries on earth face as immediate and intense a security threat. One million North Korean soldiers and the threat of nuclear weapons loom over South Korea like a menacing shadow. In the same interview in which he asserted a human will always remain in the loop, the Samsung spokesperson asserted, "the SGR-1 can and will prevent wars."

What are the actual specifications and design parameters for the SGR-A1? It's essentially impossible to know without directly inspecting the robot. If Samsung says a human is in the loop, all we can do is take their word for it. If South Korea is willing to delegate more autonomy to their robots than other nations, however, it wouldn't be surprising. Defending the DMZ against North Korea is a matter of survival for South Korea. Accepting the risks of a fully autonomous sentry gun may be more than worth it for South Korea if it enhances deterrence against North Korea.

THE BRIMSTONE MISSILE

Similar to the U.S. LRASM, the United Kingdom's Brimstone missile has come under fire from critics who have questioned whether it has too much autonomy. The Brimstone is an aircraft-launched fire-and-forget missile designed to destroy ground vehicles or small boats. It can accomplish this mission in a variety of ways.

Brimstone has two primary modes of operation: Single Mode and Dual Mode. In Single Mode, a human "paints" the target with a laser and the missile homes in on the laser reflection. The missile will go wherever the human points the laser, allowing the human to provide "guidance all the way to the target." Dual Mode combines the laser guidance with a millimeter-wave (MMW) radar seeker for "fast moving and maneuvering targets and under narrow Rules of Engagement." The human designates the target with a laser, then there is a "handoff" from the laser to the MMW seeker at the final stage so the weapon can home in on fast moving targets. In both modes of operation, the missile is clearly engaging targets that have been designated by a human, making it a semiautonomous weapon.

However, the developer also advertises another mode of operation, "a previously-developed fire-and-forget, MMW-only mode" that can be enabled "via a software role change." The developer explains:

This mode provides through-weather targeting, kill box-based discrimination and salvo launch. It is highly effective against multi-target armor formations. Salvo-launched Brimstones self-sort based on firing order, reducing the probability of overkill for increased one-pass lethality.

This targeting mode would allow a human to launch a salvo of Brimstones against a group of enemy tanks, letting the missiles sort out which missiles hit which tank. According to a 2015 Popular Mechanics article, in this mode the Brimstone is fairly autonomous:

It can identify, track, and lock onto vehicles autonomously. A jet can fly over a formation of enemy vehicles and release several Brimstones to find targets in a single pass. The operator sets a "kill box" for Brimstone, so it will only attack within a given area. In one demonstration, three missiles hit three target vehicles while ignoring nearby neutral vehicles.

On the Brimstone’s spec sheet, the developer also describes a similar functionalityagainstfast-movingsmallboats,alsocalledfastinshoreattackcraft (FIAC):

InMay2013,multipleBrimstonemissilesoperatinginanautonomous[millimeter]wave(MMW)

modecompletedtheworld’sfirstsinglebutton,salvoengagementofmultipleFIAC,destroying

threevessels(onemoving)insideakillbox,whilecausingnodamagetonearbyneutralvessels.
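To make concrete the two ideas in these descriptions (kill-box discrimination and missiles that "self-sort based on firing order"), a deliberately simplified sketch in Python follows. It is purely illustrative: the detection format, coordinates, and assignment rule are invented for this example and bear no relation to the Brimstone's actual guidance software, which is not public. The only points it captures are that anything outside the operator-set box is ignored and that each weapon in a salvo keys on a different in-box target.

# Toy illustration only: kill-box filtering plus "self-sort by firing order."
# All names and logic here are hypothetical, not any real weapon's software.

def inside_kill_box(detection, kill_box):
    """Return True if a detected object lies inside the operator-set kill box."""
    (x_min, y_min), (x_max, y_max) = kill_box
    x, y = detection["position"]
    return x_min <= x <= x_max and y_min <= y <= y_max

def assign_targets(detections, kill_box, salvo_size):
    """Give each missile in the salvo a different in-box target, keyed to its
    firing order, so two missiles are not wasted on the same vehicle."""
    valid = [d for d in detections if inside_kill_box(d, kill_box)]
    valid.sort(key=lambda d: d["position"])   # a shared, deterministic ordering
    assignments = {}
    for firing_order in range(salvo_size):
        if firing_order < len(valid):
            assignments[firing_order] = valid[firing_order]["id"]
        else:
            assignments[firing_order] = None  # no target left for this missile
    return assignments

detections = [
    {"id": "vehicle-A", "position": (3.0, 4.0)},
    {"id": "vehicle-B", "position": (5.5, 2.0)},
    {"id": "neutral-truck", "position": (9.0, 9.0)},  # outside the box: ignored
]
print(assign_targets(detections, kill_box=((0, 0), (8, 8)), salvo_size=3))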

When operating in MMW-only mode, is the Brimstone an autonomous weapon? While the missile has a reported range in excess of 20 kilometers, it cannot loiter to search for targets. This means that the human operator must know there are valid targets—ground vehicles or small boats—within the kill box before launch in order for the missile to be effective.

The Brimstone can engage these targets using some innovative features. A pilot can launch a salvo of multiple Brimstones against a group of targets within a kill box and the missiles themselves "self-sort based on firing order" to hit different targets. This makes the Brimstone especially useful for defending against enemy swarm attacks. For example, Iran has harassed U.S. ships with swarming small boats that could overwhelm ship defenses, causing a USS Cole–type suicide attack. Navy helicopters armed with Brimstones would be an extremely effective defense against boat swarms, allowing pilots to take out an entire group of enemy ships at once without having to individually target each ship.

Even with all of the Brimstone's features, the human user still needs to launch it at a known group of targets. Because it cannot loiter, if there weren't targets in the kill box when the missile activated its seeker, the missile would be wasted. Unlike a drone, the missile couldn't return to base. The salvo launch capability allows the pilot to launch multiple missiles against a swarm of targets, rather than select each one individually. This makes a salvo of Brimstones similar to the Sensor Fuzed Weapon that is used to take out a column of tanks. Even though the missiles themselves might self-sort which missile hits which target, the human is still deciding to attack that specific cluster of targets. Even in MMW-only mode, the Brimstone is a semiautonomous weapon.

The line between the semiautonomous Brimstone and a fully autonomous weapon that would choose its own targets is a thin one. It isn't based on the seeker or the algorithms. The same seeker and algorithms could be used on a future weapon that could loiter over the battlespace—a missile with an upgraded engine or a drone that could patrol an area. A future weapon that patrolled a kill box, rather than entered one at a snapshot in time, would be an autonomous weapon, because the human could send the weapon to monitor the kill box without knowledge of any specific targets. It would allow the human to fire the weapon "blind" and let the weapon decide if and when to strike targets.

Even if the Brimstone doesn't quite cross the line to an autonomous weapon, it takes one more half step toward it, to the point where all that is needed is a light shove to cross the line. An MMW-only Brimstone could be converted into a fully autonomous weapon simply by upgrading the missile's engine so that it could loiter for longer. Or the MMW-only mode algorithms and seeker could be placed on a drone. Notably, the MMW-only mode is enabled in the missile by a software change. As autonomous technology continues to advance, more missiles around the globe will step right up to—or cross—that line.

Would the United Kingdom be willing to cross that line? The debate surrounding another British program, the Taranis drone, shows the difficulty in ascertaining how far the British might be willing to push the technology.

THE TARANIS DRONE

The Taranis is a next-generation experimental combat drone similar to those being developed by the United States, India, Russia, China, France, and Israel. BAE Systems, developer of the Taranis, has given one of the most extensive descriptions of how a combat drone's autonomy might work for weapons engagements. Similar to the X-47B, the Taranis is a demonstrator airplane, but the British military intends to carry the demonstration further than the United States and conduct simulated weapons engagements with the Taranis.

Information released by BAE shows how Taranis might be employed. It explains a simulated weapons test that "will demonstrate the ability of [an unmanned combat aircraft system] to: fend off hostile attack; deploy weapons deep in enemy territory and relay intelligence information." In the test:

1. Taranis would reach the search area via a preprogrammed flight path in the form of a three-dimensional corridor in the sky. Intelligence would be relayed to mission command.

2. When Taranis identifies a target it would be verified by mission command.

3. On the authority of mission command, Taranis would carry out a simulated firing and then return to base via the programmed flight path.

At all times, Taranis will be under the control of a highly-trained ground crew. The Mission Commander will both verify targets and authorise simulated weapons release.

This protocol keeps the human in the loop to approve each target, which is consistent with other statements by BAE leadership. In a 2016 panel at the World Economic Forum in Davos, BAE Chairman Sir Roger Carr described autonomous weapons as "very dangerous" and "fundamentally wrong." Carr made clear that BAE only envisioned developing weapons that kept a connection to a human who could authorize and remain responsible for lethal decision-making.

In a 2016 interview, Taranis program manager Clive Marrison made a similar statement that "decisions to release a lethal mechanism will always require a human element given the Rules of Engagement used by the UK in the past." Marrison then hedged, saying, "but the Rules of Engagement could change."

The British government reacted swiftly. Following multiple media articles alleging BAE was building in the option for Taranis to "attack targets of its own accord," the UK government released a statement the next day stating:

The UK does not possess fully autonomous weapon systems and has no intention of developing or acquiring them. The operation of our weapons will always be under human control as an absolute guarantee of human oversight, authority and accountability for their use.

TheBritishgovernment’sfull-throateddenialofautonomousweaponswould appeartobeasclearapolicystatementastherecouldbe,butanimportant asteriskisneededregardinghowtheUnitedKingdomdefinesan“autonomous weaponsystem.”InitsofficialpolicyexpressedintheUKJointDoctrineNote

2/11,“TheUKApproachtoUnmannedAircraftSystems,”theBritishmilitary

describesanautonomoussystemasonethat“mustbecapableofachievingthe samelevelofsituationalunderstandingasahuman.”Shortofthat,asystemis defined as “automated.” This definition of autonomy, which hinges on the complexityofthesystemratherthanitsfunction,isadifferentwayofusingthe term “autonomy” than many others in discussions on autonomous weapons, includingtheU.S.government.TheUnitedKingdom’sstanceisnotaproductof sloppylanguage;it’sadeliberatechoice.TheUKdoctrinenotecontinues:

Ascomputingandsensorcapabilityincreases,itislikelythatmanysystems,usingverycomplex

setsofcontrolrules,willappearandbedescribedasautonomoussystems,butaslongasitcanbe

shownthatthesystemlogicallyfollowsasetofrulesorinstructionsandisnotcapableofhuman

levelsofsituationalunderstanding,thentheyshouldonlybeconsideredtobeautomated.

Thisdefinitionshiftsthelexicononautonomousweaponsdramatically.When theUKgovernmentusestheterm“autonomoussystem,”theyaredescribing systemswithhuman-levelintelligencethataremoreanalogoustothe“general AI” described by U.S. Deputy Defense Secretary Work. The effect of this definitionistoshiftthedebateonautonomousweaponstofar-offfuturesystems andawayfrompotentialnear-termweaponsystemsthatmaysearchfor,select, and engage targets on their own—what others might call “autonomous weapons.” Indeed, in its 2016 statement to the United Nations meetings on autonomousweapons,theUnitedKingdomstated:“TheUKbelievesthat[lethal autonomousweaponsystems]donot,andmaynever,exist.”Thatistosay, Britainmaydevelopweaponsthatwouldsearchfor,select,andengagetargets ontheirown;itsimplywouldcallthem“automatedweapons,”not“autonomous weapons.”Infact,theUKdoctrinenotereferstosystemssuchasthePhalanx gun(asupervisedautonomousweapon)as“fullyautomatedweaponsystems.” Thedoctrinenoteleavesopenthepossibilityoftheirdevelopment,providedthey passalegalweaponsreviewshowingtheycanbeusedinamannercompliant withthelawsofwar.

Inpractice,theBritishgovernment’sstanceonautonomousweaponsisnot

dissimilarfromthatexpressedbyU.S.defenseofficials.Humanswillremain

atsomelevel.Thatmightmeanahuman

operator launching an autonomous/automated weapon into an area and delegating to it the authority to search for and engage targets on its own. Whetherthepublicwouldreactdifferentlytosuchaweaponifitwererebranded an“automatedweapon”isunclear.

EveniftheUnitedKingdom’sstanceretainssomeflexibility,thereisstilla tremendousamountoftransparencyintohowtheU.S.andUKgovernmentsare approaching the question of autonomous weapons. Weapons developers like BAE,MBDA,andLockheedMartinhavedetaileddescriptionsoftheirweapon systemsontheirwebsites,whichisnotuncommonfordefensecompaniesin democratic nations. DARPAdescribes its research programs publicly and in detail.Defenseofficialsinbothcountriesopenlyengageinadialogueaboutthe boundariesofautonomyandtheappropriateroleofhumansandmachinesin lethalforce.Thistransparencystandsinstarkcontrasttoauthoritarianregimes.

involvedinlethaldecision-making

RUSSIA’SWARBOTS

WhiletheUnitedStateshasbeenveryreluctanttoarmgroundrobots,withonly oneshort-livedeffortduringtheIraqwarandnodevelopmentalprogramsfor armedgroundrobots,Russiahasshownnosuchhesitation.Russiaisdeveloping afleetofgroundcombatrobotsforavarietyofmissions,fromprotectingcritical installations to urban combat. Many of Russia’s ground robots are armed, rangingfromsmallrobotstoaugmentinfantrytroopstorobotictanks.How muchautonomyRussiaiswillingtoplaceintoitsgroundrobotswillhavea profoundimpactonthefutureoflandwarfare. ThePlatform-M,atrackedvehicleroughlythesizeofafour-wheelerarmed withagrenadelauncherandanassaultrifle,isonthesmallerscaleofRussian war bots. In 2014, the Platform-M took part in an urban combat exercise alongsideRussiantroops.AccordingtoanofficialstatementfromtheRussian military, “the military robots were assigned to eliminate provisional illegal armed formations in urban conditions and striking stationary and mobile targets.”TheRussianmilitarydidnotdescribethedegreeofthePlatform-M’s autonomy,althoughaccordingtothedeveloper:

Platform-M

mobiletargets,forfirepowersupport,forpatrollingandforguardingimportantsites.Theunit’s

weaponscanbeguided,itcancarryoutsupportivetasksanditcandestroytargetsinautomaticor

isusedforgatheringintelligence,fordiscoveringandeliminatingstationaryand

semiautomaticcontrolsystems;itissuppliedwithoptical-electronicandradioreconnaissance

locators.

Thephrase“candestroytargetsinautomatic

autonomousweapon.Thisclaimshouldbeviewedwithsomeskepticism.For one,videosofRussianrobotsshowsoldiersselectingtargetsonacomputer screen.Moreimportantly,therealityisthatdetectingtargetsautonomouslyina groundcombatenvironmentisfarmoretechnicallychallengingthantargeting enemyradarsastheHarpydoesorenemyshipsonthehighseaslikeTASM.The weaponsPlatform-Mcarries—agrenadelauncherandassaultrifle—wouldbe effectiveagainstpeople,notarmoredvehiclesliketanksorarmoredpersonnel carriers.Peopledon’temitintheelectromagneticspectrumlikeradars.They aren’t “cooperative targets.” At the time this claim was made in 2014, autonomouslyfindingapersoninaclutteredgroundcombatenvironmentwould havebeendifficult.Advancesinneuralnetshavechangedthisinthepastfew years,makingiteasiertoidentifypeople.Butdiscerningfriendfromfoewould stillbeachallenge.

The autonomous target identification problem Russian war bots face is far more challenging than the South Korean sentry gun on the DMZ. In a demilitarized zone such as that separating North and South Korea, a country might decide to place stationary sentry guns along the border and authorize them to shoot anything with an infrared (heat) signature coming across. Such a decision would not be without its potential problems. Sentry guns that lack any ability to discriminate valid military targets from civilians could senselessly murder innocent refugees attempting to flee an authoritarian regime. In general, though, a DMZ is a more controlled environment than offensive urban combat operations. Authorizing static, defensive autonomous weapons that are fixed in place would be far different than roving autonomous weapons that would be intended to maneuver in urban areas where combatants are mixed in among civilians.

Technologies exist today that could be used for automatic responses against military targets, if the Russians wanted to give such a capability to the Platform-M. The technology is fairly crude, though. For example, the Boomerang shot detection system is a U.S. system that uses an array of microphones to detect incoming bullets and calculate their origin. According to the developer, "Boomerang uses passive acoustic detection and computer-based signal processing to locate a shooter in less than a second." By comparing the relative time of arrival of a bullet's shockwave at the various microphones, Boomerang and other shot detection systems can pinpoint a shooter's direction. It can then call out the location of a shot, for example, "Shot. Two o'clock. 400 meters." Alternatively, acoustic shot detection systems can be directly connected to a camera or remote weapon station and automatically aim them at the shooter. Going the next step to allow the gun to automatically fire back at the shooter would not be technically challenging. Once the shot has been detected and the gun aimed, all that it would take would be to pull the trigger.
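The direction-finding math behind such systems is simple in its idealized form. The sketch below assumes a distant shot (so the sound arrives as a plane wave), a small four-microphone cross array, and perfectly clean timestamps; real systems such as Boomerang must also cope with noise, echoes, and the bullet's supersonic shockwave, none of which is modeled here.

# A minimal sketch of the time-difference-of-arrival idea behind acoustic
# shot detection. Idealized: plane wave, four microphones, exact timestamps.
import math

SPEED_OF_SOUND = 343.0   # meters per second, roughly, in air
D = 0.5                  # microphone spacing in meters (arbitrary choice)

# Microphones laid out in a cross: east, west, north, south of the array center.
MICS = {"E": (D / 2, 0.0), "W": (-D / 2, 0.0), "N": (0.0, D / 2), "S": (0.0, -D / 2)}

def bearing_from_arrival_times(t):
    """Estimate the shooter bearing (degrees, counterclockwise from east)
    from the arrival time, in seconds, measured at each microphone."""
    # For a plane wave, the delay across a mic pair is proportional to the
    # component of the source direction along that pair's axis.
    ux = -SPEED_OF_SOUND * (t["E"] - t["W"]) / D
    uy = -SPEED_OF_SOUND * (t["N"] - t["S"]) / D
    return math.degrees(math.atan2(uy, ux)) % 360.0

def simulated_arrivals(bearing_deg):
    """Ideal arrival times for a distant shot coming from the given bearing."""
    ux = math.cos(math.radians(bearing_deg))
    uy = math.sin(math.radians(bearing_deg))
    return {name: 1.0 - (x * ux + y * uy) / SPEED_OF_SOUND
            for name, (x, y) in MICS.items()}

print(bearing_from_arrival_times(simulated_arrivals(60.0)))  # prints roughly 60.0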

It’spossiblethisiswhatRussiameanswhenitsaysthePlatform-M“can

destroy targets in automatic

however,authorizingautomaticreturn-firewouldbequitehazardous.Itwould requireanextremeconfidenceintheabilityoftheshotdetectionsystemtoweed out false positives and to not be fooled by acoustic reflections and echoes, especiallyinurbanareas.Additionally,thegunwouldhavenoabilitytoaccount for collateral damage—say, to hold fire because the shooter is using human shields. Finally, such a system would be a recipe for fratricide, with robot systems potentially automatically shooting friendly troops or other friendly robots.Tworobotsonthesamesidecouldbecometrappedinanever-ending loopofautomaticfireandresponse,mindlesslyexchanginggunfireuntilthey exhaustedtheirammunitionordestroyedeachother.Itisunclearwhetherthisis whatRussiaintends,butfromatechnicalstandpointitwouldpossible.

control.” From an operational perspective,

Russia’sothergroundcombatrobotsscaleupinsizeandsophisticationfrom

thePlatform-M.TheMRK-002-BG-57“Wolf-2”isthesizeofasmallcarand

outfittedwitha12.7mmheavymachinegun.AccordingtoDavidHamblingof

PopularMechanics,“Inthetank’sautomatedmode,theoperatorcanremotely

selectupto10targets,whichtherobotthenbombards.Wolf-2canactonitsown

tosomedegree(themakersarevagueaboutwhatdegree),butthedecisiontouse

lethalforceisultimatelyunderhumancontrol.”TheWolf-2sitsamongafamily

ofsimilarsizerobotvehicles.TheamphibiousArgoisroughlythesizeofaMini Cooper,sportsamachinegunandrocket-propelledgrenadelauncher,andcan

swimatspeedsupto2.5knots.TheA800MobileAutonomousRoboticSystem

(MARS)isan(unarmed)infantrysupportvehiclethesizeofacompactcarthat cancarryfourinfantrysoldiersandtheirgear.PicturesonlineshowRussian soldiersridingontheback,lookingsurprisinglyrelaxedasthetrackedrobot cruisesthroughanoff-roadcourse. Compactcar–sizedwarbotsaren’tnecessarilyuniquetoRussia,althoughthe Russianmilitaryseemstohaveacasualattitudetowardarmingthemnotseenin Westernnations.TheRussianmilitaryisn’tstoppingatmidsizegroundrobots, though.SeveralRussianprogramsarepushingtheboundariesofwhatispossible with robotic combat vehicles, building systems that could prove decisive in

highlylethaltank-on-tankwarfare.

The Uran-9 looks like something straight out of a MechWarrior video game, where players pilot a giant robot warrior armed with rockets and cannons. The Uran-9 is fully uninhabited, although it is controlled by soldiers remotely from a nearby command vehicle. It is the size of a small armored personnel carrier, sports a 30 mm cannon, and has an elevated platform to launch antitank guided missiles. The elevated missile platform gives the Uran-9 a distinctive sci-fi appearance. The missiles rest on two platforms on either side of the vehicle that, when raised, look like arms reaching into the sky. The elevated platform allows the robot to fire missiles while safely sitting behind cover, for example behind the protective slope of a hillside. In an online promotional video from the developer, Rosoboronexport, slo-mo shots of the Uran-9 firing antitank missiles are set to music reminiscent of a Tchaikovsky techno remix.

The Uran-9 is a major step beyond smaller robotic platforms like the Platform-M and Wolf-2 not just because it's larger, but because its larger size allows it to carry heavier weapons capable of taking on antitank missions. Whereas the assault rifle and grenade launcher on a Platform-M would do very little to a tank, the Uran-9's antitank missiles would be potentially highly lethal. This makes the Uran-9 potentially a useful weapon in high-intensity combat against NATO forces on the plains of Europe. Uran-9s could hide behind hillsides or other protective cover and launch missiles against NATO tanks. The Uran-9 doesn't have the armor or guns to stand toe-to-toe against a modern tank, but because it's uninhabited, it doesn't have to. The Uran-9 could be a successful ambush predator. Even if firing its missiles exposed its position and led it to be taken out by NATO forces, the exchange might still be a win if it took out a Western tank. Because there's no one inside it and the Uran-9 is significantly smaller than a tank, and therefore presumably less expensive, Russia could field many of them on the battlefield. Just like many stings from a hornet can bring down a much larger animal, the Uran-9 could make the modern battlefield a deadly place for Western forces.

Russia's Vikhr "robot tank" has a similar capability. At 14 tons and lacking a main gun, it is significantly smaller and less lethal than a 50- to 70-ton main battle tank. Like the Uran-9, though, its 30 mm cannon and six antitank missiles show it is designed as a tank-killing ambush predator, not a tank-on-tank street fighter. The Vikhr is remote controlled, but news reports indicate it has the ability to "lock onto a target" and keep firing until the target is destroyed. While not the same as choosing its own target, tracking a moving target is doable today. In fact, tracking moving objects is a standard feature on DJI's base model Spark hobby drone, which retails for under $500.

Taking the next step and allowing the Uran-9 or Vikhr to autonomously target tanks would take some additional work, but it would be more feasible than trying to accurately discriminate among human targets. With large cannons and treads, tanks are distinctive military vehicles not easily confused with civilian objects. Moreover, militaries may be more willing to risk civilian casualties or fratricide in the no-holds-barred arena of tank warfare, where armored divisions vie for dominance and the fate of nations is at stake. In videos of the Uran-9, human operators can be clearly seen controlling the vehicle, but the technology is available for Russia to authorize fully autonomous antitank engagements, if it chose to do so.

Russiaisn’tstoppingatdevelopmentoftheVikhrandUran-9,however.It

envisions even more advanced robotic systems that could not only ambush Westerntanks,butstandwiththemtoe-to-toeandwin.Russiareportedlyhas

planstodevelopafullyroboticversionofitsnext-generationT-14Armatatank.

TheT-14Armata,whichreportedlyenteredproductionasof2016,sportsabevy

of new defensive features, including advanced armor, an active protection

systemtointerceptincomingantitankmissiles,andaroboticturret.TheT-14

willbethefirstmainbattletanktosportanuninhabitedturret,whichwillafford

thecrewgreaterprotectionbyshelteringthemwithinthebodyofthevehicle.

Makingtheentiretankuninhabitedwouldbethenextlogicalstepinprotection,

enablingacrewtocontrolthevehicleremotely.WhilecurrentT-14sarehuman-

inhabited, Russia has long-term plans to develop a fully robotic version. VyacheslavKhalitov,deputydirectorgeneralofUralVagonZavod,manufacturer of the T-14 Armata, has stated, “Quite possibly, future wars will be waged withouthumaninvolvement.Thatiswhywehavemadeprovisionsforpossible robotization of Armata.” He acknowledged that achieving the goal of full robotizationwouldrequiremoreadvancedAIthatcould“calculatethesituation onthebattlefieldand,onthisbasis,totaketherightdecision.” Inadditiontopushingtheboundariesonrobots’physicalcharacteristics,the Russianmilitaryhassignaleditintendstousecutting-edgeAItoboostitsrobots’

decision-making.InJuly2017,RussianarmsmanufacturerKalashnikovstated

thattheywouldsoonrelease“afullyautomatedcombatmodule”basedonneural networks.Newsreportsindicatetheneuralnetworkswouldallowthecombat module“toidentifytargetsandmakedecisions.”Asinothercases,itisdifficult to independently evaluate these claims, but they signal a willingness to use artificialintelligenceforautonomoustargeting.Russiancompanies’boastingof autonomousfeatureshasnoneofthehesitationorhedgingthatisoftenseen

fromAmericanorBritishdefensefirms. SeniorRussianmilitarycommandershavestatedtheyintendtomovetoward fully robotic weapons. In a 2013 article on the future of warfare, Russian militarychiefofstaffGeneralValeryGerasimovwrote:

Anotherfactorinfluencingtheessenceofmodernmeansofarmedconflictistheuseofmodern

automatedcomplexesofmilitaryequipmentandresearchintheareaofartificialintelligence.

Whiletodaywehaveflyingdrones,tomorrow’sbattlefieldswillbefilledwithwalking,crawling,

jumping,andflyingrobots.Inthenearfutureitispossibleafullyrobotizedunitwillbecreated,

capableofindependentlyconductingmilitaryoperations.

Howshallwefightundersuchconditions?Whatformsandmeansshouldbeusedagainsta

robotizedenemy?Whatsortofrobotsdoweneedandhowcantheybedeveloped?Alreadytoday

ourmilitarymindsmustbethinkingaboutthesequestions.

ThisRussianinterestinpursuingfullyroboticunitshasnotescapednoticeinthe

West.InDecember2015,DeputySecretaryofDefenseBobWorkmentioned

Gerasimov’s comments in a speech on the future of warfare. As Work has repeatedlynoted,U.S.decisionsmaybeshapedbythoseofRussiaandother nations. This is the danger of an arms race in autonomy: that nations feel compelledtoraceforwardandbuildautonomousweaponsoutofthefearthat othersaredoingso,withoutpausingtoweightherisksoftheiractions.

AN ARMS RACE IN AUTONOMOUS WEAPONS?

If it is true, as some have suggested, that a dangerous arms race in autonomous weapons is under way, then it is a strange kind of race. Nations are pursuing autonomy in many aspects of weaponry but, with the exception of the Harpy, are still keeping humans in the loop for now. Some weapons like Brimstone use autonomy in novel ways, pushing the boundaries of what could be considered a semiautonomous weapon. DARPA's CODE program appears to countenance moving to human-on-the-loop supervisory control for some types of targets, but there is no indication of full autonomy. Developers of the SGR-A1 gun and Taranis drone have suggested full autonomy could be a future option, although higher authorities immediately disputed the claim, saying that was not their intent.

Rather than a full-on sprint to build autonomous weapons, it seems that many nations do not yet know whether they might want them in the future and are hedging their bets. One challenge in understanding the global landscape of lethal autonomy is that the degree of transparency among nations differs greatly. While the official policies of the U.S. and UK governments leave room to develop autonomous weapons (although they express this differently, with the United Kingdom calling them "automated weapons"), countries such as Russia don't even have a public policy. Policy discussions may be happening in private in authoritarian regimes, but we don't know what they are. Pressure from civil society for greater transparency differs greatly across countries. In 2016, the UK-based NGO Article 36, which has been a leading voice in shaping international discussions on autonomous weapons, wrote a policy brief critiquing the UK government's stance on autonomous weapons. In the United States, Stuart Russell and a number of well-respected colleagues from the AI community have met with mid-level officials from across the U.S. government to discuss autonomous weapons. In authoritarian Russia, there are no equivalent civil society groups to pressure the government to be more transparent about its plans. As a result, scrutiny focuses on the most transparent countries—democratic nations who are responsive to elements of civil society and are generally more open about their weapons development. What goes on in authoritarian regimes is far murkier, but no less relevant to the future path of lethal autonomy.

Looking across the global landscape of robotic systems, it's clear that many nations are pursuing armed robots, including combat drones that would operate in contested air space. How much autonomy some weapon systems have is unclear, but there is nothing preventing countries from crossing the line to lethal autonomy in their next-generation missiles, combat drones, or ground robots. Next-generation robotic systems such as the Taranis may give countries that option, forcing uncomfortable conversations. Even if many countries would rather not move forward with autonomous weapons, it may only take one to start a cascade of others.

With no autonomous smoking gun, it seems unnecessarily alarmist to declare that an autonomous weapons arms race is already under way, but we could very well be at the starting blocks. The technology to build autonomous weapons is widely available. Even non-state groups have armed robots. The only missing ingredient to turn a remotely controlled armed robot into an autonomous weapon is software. That software, it turns out, is pretty easy to come by.

8

GARAGE BOTS

DIY KILLER ROBOTS

A gunshot cuts through the low buzz of the drone's rotors. The camera jerks backward from the recoil. The gun fires again. A small bit of flame darts out of the handgun attached to the homemade-looking drone. Red and yellow wires snake over the drone and into the gun's firing mechanism, allowing the human controller to remotely pull the trigger.

The controversial fifteen-second video clip released in the summer of 2015 was taken by a Connecticut teenager of a drone he armed himself. Law enforcement and the FAA investigated, but no laws were broken. The teenager used the drone on his family's property in the New England woods. There are no laws against firing weapons from a drone, provided it's done on private property. A few months later, for Thanksgiving, he posted a video of a flamethrower-armed drone roasting a turkey.

Drones are not only in wide use by countries around the globe; they are readily purchased by anyone online. For under $500, one can buy a small quadcopter that can autonomously fly a route preprogrammed by GPS, track and follow moving objects, and sense and avoid obstacles in its path. Commercial drones are moving forward in leaps and bounds, with autonomous behavior improving in each generation.

When I asked the Pentagon's chief weapons buyer Frank Kendall what he feared, it wasn't Russian war bots, it was cheap commercial drones. A world where everyone has access to autonomous weapons is a far different one than a world where only the most advanced militaries can build them. If autonomous weapons could be built by virtually anyone in their garage, bottling up the technology and enforcing a ban, as Stuart Russell and others have advocated, would be extremely difficult. I wanted to know, could someone leverage commercially available drones to make a do-it-yourself (DIY) autonomous weapon? How hard would it be?

I was terrified by what I found.

HUNTING TARGETS

The quadcopter rose off the ground confidently, smoothly gaining altitude till it hovered around eye level. The engineer next to me tapped his tablet and the copter moved out, beginning its search of the house.

I followed along behind the quadcopter, watching it navigate each room. It had no map, no preprogrammed set of instructions for where to go. The drone was told merely to search and report back, and so it did. As it moved through the house it scanned each room with a laser range-finding LIDAR sensor, building a map as it went. Transmitted via Wi-Fi, the map appeared on the engineer's tablet.

As the drone glided through the house, each time it came across a doorway it stopped, its LIDAR sensor probing the space beyond. The drone was programmed to explore unknown spaces until it had mapped everything. Only then would it finish its patrol and report back.
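The behavior described here, flying toward unexplored space until nothing unknown remains, resembles what roboticists call frontier-based exploration. The toy sketch below shows the core idea on a two-dimensional grid map; it is a textbook illustration, not Shield AI's actual software, and the grid, symbols, and breadth-first search are all simplifications.

# A toy sketch of frontier-based exploration: repeatedly drive toward the
# nearest "frontier" (a known-free cell that touches unknown space) until no
# frontiers remain and the reachable map is complete.
from collections import deque

FREE, WALL, UNKNOWN = ".", "#", "?"

def neighbors(cell, grid):
    r, c = cell
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield nr, nc

def is_frontier(cell, grid):
    r, c = cell
    return grid[r][c] == FREE and any(
        grid[nr][nc] == UNKNOWN for nr, nc in neighbors(cell, grid))

def nearest_frontier(start, grid):
    """Breadth-first search through known-free space to the closest frontier."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if is_frontier(path[-1], grid):
            return path
        for nxt in neighbors(path[-1], grid):
            if nxt not in seen and grid[nxt[0]][nxt[1]] == FREE:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # nothing left to explore: the map is finished

grid = [list(row) for row in ["....#??",
                              "....#??",
                              "....???"]]
print(nearest_frontier((0, 0), grid))  # path from the robot to the closest frontier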

I watched the drone pause in front of an open doorway. I imagined its sensors pinging the distant wall of the other room, its algorithms computing that there must be unexplored space beyond the opening. The drone hovered for a moment, then moved into the unknown room. A thought popped unbidden into my mind: it's curious.

It's silly to impart such a human trait to a drone. Yet it comes so naturally to us, to imbue nonhuman objects with emotions, thoughts, and intentions. I was reminded of a small walking robot I had seen in a university lab years ago. The researchers taped a face to one end of the robot—nothing fancy, just slices of colored construction paper in the shape of eyes, a nose, and a mouth. I asked them why. Did it help them remember which direction was forward? No, they said. It just made them feel better to put a face on it. It made the robot seem more human, more like us. There's something deep in human nature that wants to connect to another sentient entity, to know that it is like us. There's something alien and chilling about entities that can move intelligently through the world and not feel any emotion or thought beyond their own programming. There is something predatory and remorseless about them, like a shark.

I shook off the momentary feeling and reminded myself of what the technology was actually doing. The drone "felt" nothing. The computer controlling its actions would have identified that there was a gap where the LIDAR sensors could not reach and so, following its programming, directed the drone to enter the room.

The technology was impressive. The company I was observing, Shield AI, was demonstrating fully autonomous indoor flight, an even more impressive feat than tracking a person and avoiding obstacles outdoors. Founded by brothers Ryan and Brandon Tseng, the former an engineer and the latter a former Navy SEAL, Shield AI has been pushing the boundaries of autonomy under a grant from the U.S. military. Shield's goal is to field fully autonomous quadcopters that special operators can launch into an unknown building and have the drones work cooperatively to map the building on their own, sending back footage of the interior and potential objects of interest to the special operators waiting outside.

Brandon described their goal as "highly autonomous swarms of robots that require minimal human input. That's the end-state. We envision that the DoD will have ten times more robots on the battlefield than soldiers, protecting soldiers and innocent civilians." Shield's work is pushing the boundaries of what is possible today. All the pieces of the technology are falling into place. The quadcopter I witnessed was using LIDAR for navigation, but Shield's engineers explained they had tested visual-aided navigation; they simply didn't have it active that day.

Visual-aided navigation is a critically important piece of technology that will allow drones to move autonomously through cluttered environments without the aid of GPS. Visual-aided navigation tracks how objects move through the camera's field of view, a process called "optical flow." By assessing optical flow, operating on the assumption that most of the environment is static and not moving, fixed objects moving through the camera's field of vision can be used as a reference point for the drone's own movement. This can allow the drone to determine how it is moving within its environment without relying on GPS or other external navigation aids. Visual-aided navigation can complement other internal guidance mechanisms, such as inertial measurement units (IMU) that work like a drone's "inner ear," sensing changes in velocity. (Imagine sitting blindfolded in a car, feeling the motion of the car's acceleration, braking, and turning.) When IMUs and visual-aided navigation are combined, they make an extremely powerful tool for determining a drone's position, allowing the drone to accurately navigate through cluttered environments without GPS.
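The optical-flow idea can be sketched in a few lines using the open-source OpenCV library. The snippet below captures only the core intuition: if most of the scene is static, the typical pixel motion between frames mirrors the camera's own motion. A real visual-inertial navigation system does far more (feature tracking, outlier rejection, fusing the IMU, recovering scale), and nothing here should be read as how any particular drone implements it.

# Rough ego-motion hint from dense optical flow (requires opencv-python, numpy).
import cv2
import numpy as np

def ego_motion_hint(prev_frame, next_frame):
    """Return a rough (dx, dy) estimate, in pixels, of how the camera moved
    between two consecutive frames."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # flow[y, x] holds the apparent motion of the scene at each pixel;
    # if the scene is mostly static, the camera moved roughly the opposite way.
    return -float(np.median(flow[..., 0])), -float(np.median(flow[..., 1]))

cap = cv2.VideoCapture(0)          # any video source; 0 is the default camera
ok, prev_frame = cap.read()
while ok:
    ok, next_frame = cap.read()
    if not ok:
        break
    print("camera moved roughly:", ego_motion_hint(prev_frame, next_frame))
    prev_frame = next_frame
cap.release()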

Visual-aided navigation has been demonstrated in numerous laboratory settings and will no doubt trickle down to commercial quadcopters over time. There is certain to be a market for quadcopters that can autonomously navigate indoors, from filming children's birthday parties to indoor drone racing. With visual-aided navigation and other features, drones and other robotic systems will increasingly be able to move intelligently through their environment. Shield AI, like many tech companies, was focused on near-term applications, but Brandon Tseng was bullish on the long-term potential of AI and autonomy. "Robotics and artificial intelligence are where the internet was in 1994," he told me. "Robotics and AI are about to have a really transformative impact on the world ... Where we see the technology 10 to 15 years down the road? It is going to be mind-blowing, like a sci-fi movie."

Autonomous navigation is not the same as autonomous targeting, though. Drones that can maneuver and avoid obstacles on their own—indoors or outdoors—do not necessarily have the ability to identify and discriminate among the various objects in their surroundings. They simply avoid hitting anything at all. Searching for specific objects and targeting them for action—whether it's taking photographs or something more nefarious—would require more intelligence.

The ability to do target identification is the key missing link in building a DIY autonomous weapon. An autonomous weapon is one that can search for, decide to engage, and engage targets. That requires three abilities: the ability to maneuver intelligently through the environment to search; the ability to discriminate among potential targets to identify the correct ones; and the ability to engage targets, presumably through force. The last element has already been demonstrated—people have armed drones on their own. The first element, the ability to autonomously navigate and search an area, is already available outdoors and is coming soon indoors. Target identification is the only piece remaining, the only obstacle to someone making an autonomous weapon in their garage. Unfortunately, that technology is not far off. In fact, as I stood in the basement of the building watching Shield AI's quadcopter autonomously navigate from room to room, autonomous target recognition was literally being demonstrated right outside, just above my head.

DEEP LEARNING

The research group asked that they not be named, because the technology was new and untested. They didn't want to give the impression that it was good enough—that the error rate was low enough—to be used for military applications. Nor, it was clear, were military applications their primary intention in designing the system. They were engineers, simply trying to see if they could solve a tough problem with technology. Could they send a small drone out entirely on its own to autonomously find a crashed helicopter and report its location back to the human?

The answer, it turns out, is yes. To understand how they did it, we need to go deep.

Deep learning neural networks, first mentioned in chapter 5 as one potential solution to improving military automatic target recognition in DARPA's TRACE program, have been the driving force behind astounding gains in AI in the past few years. Deep neural networks have learned to play Atari, beat the world's reigning champion at go, and have been behind dramatic improvements in speech recognition and visual object recognition. Neural networks are also behind the "fully automated combat module" that Russian arms manufacturer Kalashnikov claims to have built. Unlike traditional computer algorithms that operate based on a script of instructions, neural networks work by learning from large amounts of data. They are an extremely powerful tool for handling tricky problems that can't be easily solved by prescribing a set of rules to follow.

Let's say, for example, that you wanted to write down a rule set for how to visually distinguish an apple from a tomato without touching, tasting, or smelling. Both are round. Both are red and shiny. Both have a green stem on top. They look different, but the differences are subtle and evade easy description. Yet a three-year-old child can immediately tell the difference. This is a tricky problem with a rules-based approach. What neural networks do is sidestep that problem entirely. Instead, they learn from vast amounts of data—tens of thousands or millions of pieces of data. As the network churns through the data, it continually adapts its internal structure until it optimizes to achieve the correct programmer-specified goal. The goal could be distinguishing an apple from a tomato, playing an Atari game, or some other task.

In one of the most powerful examples of how neural networks can be used to solve difficult problems, the Alphabet (formerly Google) AI company DeepMind trained a neural network to play go, a Chinese strategy game akin to chess, better than any human player. Go is an excellent game for a learning machine because the sheer complexity of the game makes it very difficult to program a computer to play at the level of a professional human player based on a rules-based strategy alone.

The rules of go are simple, but from these rules flows vast complexity. Go is played on a grid of 19 by 19 lines and players take turns placing stones—black for one player and white for the other—on the intersection points of the grid. The objective is to use one's stones to encircle areas of the board. The player who controls more territory on the board wins. From these simple rules come an almost unimaginably large number of possibilities. There are more possible positions in go than there are atoms in the known universe, making go 10^100 (one followed by a hundred zeroes) times—literally a googol—more complex than chess.

Humans at the professional level play go based on intuition and feel. Go takes a lifetime to master. Prior to DeepMind, attempts to build go-playing AI software had fallen woefully short of human professional players. To craft its AI, called AlphaGo, DeepMind took a different approach. They built an AI composed of deep neural networks and fed it data from 30 million games of go. As explained in a DeepMind blog post, "These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections." Once the neural network was trained on human games of go, DeepMind then took the network to the next level by having it play itself. "Our goal is to beat the best human players, not just mimic them," as explained in the post. "To do this, AlphaGo learned to discover new strategies for itself, by playing thousands of games between its neural networks, and adjusting the connections using a trial-and-error process known as reinforcement learning." AlphaGo used the 30 million human games of go as a starting point, but by playing against itself could reach levels of game play beyond even the best human players.

This superhuman game play was demonstrated in the 4–1 victory AlphaGo delivered over the world's top-ranked human go player, Lee Sedol, in March 2016. AlphaGo won the first game solidly, but in game 2 demonstrated its virtuosity. Partway through game 2, on move 37, AlphaGo made a move so surprising, so un-human, that it stunned professional players watching the match. Seemingly ignoring a contest between white and black stones that was under way in one corner of the board, AlphaGo played a black stone far away in a nearly empty part of the board. It was a surprising move not seen in professional games, so much so that one commentator remarked, "I thought it was a mistake." Lee Sedol was similarly so taken by surprise he got up and left the room. After he returned, he took fifteen minutes to formulate his response. AlphaGo's move wasn't a mistake. European go champion Fan Hui, who had lost to AlphaGo a few months earlier in a closed-door match, said at first the move surprised him as well, and then he saw its merit. "It's not a human move," he said. "I've never seen a human play this move. So beautiful." Not only did the move feel like a move no human player would make, it was a move no human player probably would ever have made. AlphaGo rated the odds that a human would have made that move as 1 in 10,000. Yet AlphaGo made the move anyway. AlphaGo went on to win game 2 and afterward Lee Sedol said, "I really feel that AlphaGo played the near perfect game." After losing game 3, thus giving AlphaGo the win for the match, Lee Sedol told the audience at a press conference, "I kind of felt powerless."

AlphaGo’striumphoverLeeSedolhasimplicationsfarbeyondthegameof

go.MorethanjustanotherrealmofcompetitioninwhichAIsnowtophumans,

thewayDeepMindtrainedAlphaGoiswhatreallymatters.Asexplainedinthe

DeepMindblogpost,“AlphaGoisn’tjustan‘expert’systembuiltwithhand-

craftedrules;insteaditusesgeneralmachinelearningtechniquestofigureout foritselfhowtowinatGo.”DeepMinddidn’tprogramrulesforhowtowinat go.Theysimplyfedaneuralnetworkmassiveamountsofdataandletitlearnall onitsown,andsomeofthethingsitlearnedweresurprising.

In2017,DeepMindsurpassedtheirearliersuccesswithanewversionof

AlphaGo.Withanupdatedalgorithm,AlphaGoZerolearnedtoplaygowithout anyhumandatatostart.Withonlyaccesstotheboardandtherulesofthegame, AlphaGo Zero taught itself to play. Within a mere three days of self-play, AlphaGoZerohadeclipsedthepreviousversionthathadbeatenLeeSedol,

defeatingit100gamesto0.

Thesedeeplearningtechniquescansolveavarietyofotherproblems.In 2015, even before DeepMind debuted AlphaGo, DeepMind trained a neural networktoplayAtarigames.Givenonlythepixelsonthescreenandthegame scoreasinputandtoldtomaximizethescore,theneuralnetworkwasableto learntoplayAtarigamesatthelevelofaprofessionalhumanvideogametester. Mostimportantly,thesameneuralnetworkarchitecturecouldbeappliedacrossa vast array of Atari games—forty-nine games in all. Each game had to be individuallylearned,butthesameneuralnetworkarchitectureappliedtoany game;theresearchersdidn’tneedtocreateacustomizednetworkdesignforeach game. TheAIsbeingdevelopedforgoorAtariarestillnarrowAIsystems.Once trained,theAIsarepurpose-builttoolstosolvenarrowproblems.AlphaGocan beatanyhumanatgo,butitcan’tplayadifferentgame,driveacar,ormakea cupofcoffee.Still,thetoolsusedtotrainAlphaGoaregeneralizabletoolsthat

A deep neural network was the tool used by the research team I witnessed autonomously find the crashed helicopter. The researcher on the project explained that he had taken an existing neural network that had already been trained on object recognition, stripped off the top few layers, then retrained the network to identify helicopters, which hadn't originally been in its image dataset. The neural network he was using was running off of a laptop connected to the drone, but it could just as easily have been running off of a Raspberry Pi, a $40 credit-card sized processor, riding on board the drone itself.
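The recipe the researcher described, taking a network pretrained on generic images, stripping off the top layers, and retraining it on a new class, is known as transfer learning, and in a modern framework it fits on a single page. The sketch below uses TensorFlow's Keras interface; the folder name, layer sizes, and training settings are hypothetical placeholders rather than the research team's actual code.

# A hedged sketch of transfer learning with TensorFlow 2.x: reuse a network
# pretrained on ImageNet as a frozen feature extractor and train a small new
# head on a new class. The "retrain_data/" folder layout is hypothetical.
import tensorflow as tf

# Pretrained feature extractor with the original 1,000-class top layers removed.
base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         pooling="avg", input_shape=(299, 299, 3))
base.trainable = False  # keep the pretrained weights fixed; only the new head learns

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),       # new head: helicopter or not
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Expects retrain_data/helicopter/*.jpg and retrain_data/other/*.jpg (placeholders).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "retrain_data", image_size=(299, 299), batch_size=16)
model.fit(train_ds, epochs=5)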

researcher behind the helicopter-hunting drone: Where did he get the initial neuralnetworkthathestartedwith,theonethatwasalreadytrainedtorecognize

otherimagesthatweren’thelicopters?HelookedatmelikeIwaseitherhalf-

crazyorstupid.Hegotitonline,ofcourse.
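The recipe the researcher described, taking a network already trained for general object recognition, cutting off its top layers, and retraining it on a new class of images, is now a standard transfer-learning exercise. As a rough illustration only (not his code), here is a minimal sketch of that kind of workflow, assuming a recent version of TensorFlow's Keras API; the model choice, folder name, and training settings are placeholders I picked for the example:

```python
# A minimal transfer-learning sketch (illustrative only, not the researcher's code).
# Assumes a recent TensorFlow; the paths and settings below are hypothetical placeholders.
import tensorflow as tf

# Start from a network pretrained on ImageNet, with its classification "top" removed.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained feature extractor

# Attach a small new head that learns to recognize the new class of images.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to the range the base expects
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new class vs. everything else
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A hypothetical folder with two subdirectories of labeled example images.
train_data = tf.keras.utils.image_dataset_from_directory(
    "training_images/", image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_data, epochs=5)  # only the new head's weights are updated
```

Because only the small new head is trained, a sketch like this can run on an ordinary laptop, which is consistent with the researcher's point: the heavy lifting was already done by whoever trained the original network.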

NEURAL NETS FOR EVERYONE

I feel I should confess that I'm not a technologist. In my job as a defense analyst, I research military technology to make recommendations about where the U.S. military should invest to keep its edge on the battlefield, but I don't build things. My undergraduate degree was in science and engineering, but I've done nothing even remotely close to engineering since then. To claim my programming skills were rusty would be to imply that at one point in time they existed. The extent of my computer programming knowledge is a one-semester introductory course in C++ in college.

Nevertheless, I went online to check out the open-source software database the researcher pointed me to: TensorFlow. TensorFlow is an open-source AI library developed by Google AI researchers. With TensorFlow, Google researchers have taken what they have been learning with deep neural networks and passed it on to the rest of the world. On TensorFlow, not only can you download already trained neural networks and software for building your own, there are reams of tutorials on how to teach yourself deep learning techniques. For users new to machine learning, there are basic tutorials on classic machine learning problems. These tools make neural networks accessible to computer programmers with little to no experience in machine learning. TensorFlow makes neural networks easy, even fun. A tutorial called Playground (playground.tensorflow.org) allows users to modify and train a neural network through a point-and-click interface in the browser. No programming skills are required at all.

Once I got into Playground, I was hooked. Reading about what neural networks could do was one thing. Building your own and training it on data was entirely another. Hours of time evaporated as I tinkered with the simple network in my browser. The first challenge was training the network to learn to predict the simple datasets used in Playground—patterns of orange and blue dots across a two-dimensional grid. Once I'd mastered that, I worked to make the leanest network I could, composed of the fewest neurons in the fewest number of layers that could still accurately make predictions. (Reader challenge: once you've mastered the easy datasets, try the spiral.)

With the Playground tutorial, the concept of neural nets becomes accessible to someone with no programming skills at all. Using Playground is no more complicated than solving an easy-level Sudoku puzzle and within the range of an average seven-year-old. Playground won't let the user build a custom neural net to solve novel problems. It's an illustration of what neural nets can do to help users see their potential. Within other parts of TensorFlow, though, lie more powerful tools to use existing neural networks or design custom ones, all within reach of a reasonably competent programmer in Python or C++.
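For readers curious what Playground is doing behind its point-and-click interface, the same kind of toy experiment takes only a few lines of code. The sketch below is purely illustrative: it generates a Playground-style dataset (an inner cluster of one class surrounded by a ring of the other) and trains a lean two-hidden-layer network on it, assuming TensorFlow and NumPy are installed; the layer sizes and training length are arbitrary choices for the example.

```python
# Illustrative sketch of a Playground-style experiment: a tiny network on 2-D dot patterns.
import numpy as np
import tensorflow as tf

# Two classes of two-dimensional points: an inner disk (class 0) and an outer ring (class 1).
rng = np.random.default_rng(seed=0)
radii = np.concatenate([rng.uniform(0.0, 1.0, 500), rng.uniform(2.0, 3.0, 500)])
angles = rng.uniform(0.0, 2.0 * np.pi, 1000)
points = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
labels = np.concatenate([np.zeros(500), np.ones(500)])

# A lean network: two small hidden layers, then a single output neuron.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(4, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(points, labels, epochs=100, verbose=0)

loss, accuracy = model.evaluate(points, labels, verbose=0)
print(f"accuracy on the toy dataset: {accuracy:.2f}")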

TensorFlow includes extensive tutorials on convolutional neural nets, the particular type of neural network used for computer vision. In short order, I found a neural network available for download that was already trained to recognize images. The neural network Inception-v3 is trained on the ImageNet dataset, a standard database of images used by programmers. Inception-v3 can classify images into one of 1,000 categories, such as "gazelle," "canoe," or "volcano." As it turns out, none of the categories Inception-v3 is trained on are those that could be used to identify people, such as "human," "person," "man," or "woman." So one could not, strictly speaking, use this particular neural network to power an autonomous weapon that targets people. Still, I found this to be little consolation. ImageNet isn't the only visual object classification database used for machine learning online, and others, such as the Pascal Visual Object Classes database, include "person" as a category. It took me all of about ten seconds on Google to find trained neural networks available for download that could find human faces, determine age and gender, or label human emotions. All of the tools to build an autonomous weapon that could target people on its own were readily available online.

This was, inevitably, one of the consequences of the AI revolution. AI technology was powerful. It could be used for good purposes or bad purposes; that was up to the people using it. Much of the technology behind AI was software, which meant it could be copied practically for free. It could be downloaded at the click of a button and could cross borders in an instant. Trying to contain software would be pointless. Pandora's box has already been opened.
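To give a sense of how little stands between a downloaded network and a working image classifier, here is a sketch of the standard recipe for running a pretrained Inception-v3 model on a single picture, again assuming a recent TensorFlow; the filename is a placeholder:

```python
# Label one image with a pretrained Inception-v3 network (illustrative sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, decode_predictions, preprocess_input)

model = InceptionV3(weights="imagenet")  # downloads the trained network on first use

# "photo.jpg" is a placeholder for whatever image you want to classify.
img = tf.keras.utils.load_img("photo.jpg", target_size=(299, 299))
batch = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

predictions = model.predict(batch)
for _, category, score in decode_predictions(predictions, top=3)[0]:
    print(category, score)  # three best guesses among the 1,000 ImageNet categories
```

Swapping in a different downloaded network, including one trained on categories like "person," changes only a few of these lines.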

ROBOTS EVERYWHERE

Just because the tools needed to make an autonomous weapon were widely available didn't tell me how easy or hard it would be for someone to actually do it. What I wanted to understand was how widespread the technological know-how was to build a homemade robot that could harness state-of-the-art techniques in deep learning computer vision. Was this within reach of a DIY drone hobbyist or did these techniques require a PhD in computer science?

There is a burgeoning world of robot competitions among high school students, and this seemed like a great place to get a sense of what an amateur robot enthusiast could do. The FIRST Robotics Competition is one such competition that includes 75,000 students organized in over 3,000 teams across twenty-four countries. To get a handle on what these kids might be able to do, I headed to my local high school.

Less than a mile from my house is Thomas Jefferson High School for Science and Technology—"TJ," for short. TJ is a math and science magnet school; kids have to apply to get in, and they are afforded opportunities above and beyond what most high school students have access to. But they're still high school students—not world-class hackers or DARPA whizzes.

In the Automation and Robotics Lab at TJ, students get hands-on experience building and programming robots. When I visited, two dozen students sat at workbenches hunched over circuit boards or silently tapping away at computers. Behind them on the edges of the workshop lay discarded pieces of robots, like archeological relics of students' projects from semesters prior. On a shelf sat "Roby Feliks," the Rubik's Cube solving robot. Nearby, a Raspberry Pi processor sat atop a plastic musical recorder, wires running from the circuit board to the instrument like some musical cyborg. Somewhat randomly in the center of the floor sat a half-disassembled robot, the remnants of TJ's admission to the FIRST competition that year. Charles Dela Cuesta, the teacher in charge of the lab, apologized for the mess, but it was exactly what I imagined a robot lab should look like.

Dela Cuesta came across as the kind of teacher you pray your own children have. Laid back and approachable, he seemed more like a lovable assistant coach than an aloof disciplinarian. The robotics lab had the feel of a place where students learn by doing, rather than sitting and copying down equations from a whiteboard.

Which isn't to say that there wasn't a whiteboard. There was. It sat in a corner amid a pile of other robotic projects, with circuit boards and wires draped over it. Students were designing an automatic whiteboard with a robot arm that could zip across the surface and sketch out designs from a computer. On the whiteboard were a series of inhumanly straight lines sketched out by the robot. It was at this point that I wanted to quit my job and sign up for a robotics class at TJ.

Dela Cuesta explained that all students at TJ must complete a robotics project in their freshman year as part of their required coursework. "Every student in the building has had to design a small robot that is capable of navigating a maze and performing some sort of obstacle avoidance," he said. Students are given a schematic of what the maze looks like so they get to choose how to solve the problem, whether to preprogram the robot's moves or take the harder path of designing an autonomous robot that can figure it out on its own. After this required class, TJ offers two additional semesters of robotics electives, which can be complemented with up to five computer science courses in which students learn Java, C++, and Python. These are vital programming tools for using robot control systems, like the Raspberry Pi processor, which runs on Linux and takes commands in Python. Dela Cuesta explained that even though most students come into TJ with no programming experience, many learn fast and some even take computer science courses over the summer to get ahead.

"They can pretty much program in anything—Java, Python . . . They're just all over the place," he said. Their senior year, all students at TJ must complete a senior project in an area of their choosing. Some of the most impressive robotics projects are those done by seniors who choose to make robotics their area of focus. Next to the whiteboard stood a bicycle propped up on its kickstand. A large blue box sat inside the frame, wires snaking out of it to the gear shifters. Dela Cuesta explained it was an automatic gear shifter for the bike. The box senses when it is time to shift and does so automatically, like an automatic transmission on a car.

The students' projects have been getting better over the years, Dela Cuesta explained, as they are able to harness more advanced open-source components and software. A few years ago, a class project to create a robot tour guide for the school took two years to complete. Now, the timeline has been shortened to nine weeks. "The stuff that was impressive to me five, six years ago we could accomplish in a quarter of the time now. It just blows my mind," he said. Still, Dela Cuesta pushes students to build things custom themselves rather than use existing components. "I like to have the students, as much as possible, build from scratch." Partly, this is because it's often easier to fit custom-built hardware into a robot, an approach that is possible because of the impressive array of tools Dela Cuesta has in his shop. Along a back wall were five 3-D printers, two laser cutters to make custom parts, and a mill to etch custom circuit boards. An even more important reason to have students do things themselves is they learn more that way. "Custom is where I want to go," Dela Cuesta said. "They learn a lot more from it. It's not just kind of this black box magic thing they plug in and it works. They have to really understand what they're doing in order to make these things work."

Across the hall in the computer systems lab, I saw the same ethos on display. The teachers emphasized having students do things themselves so they were learning the fundamental concepts, even if that meant re-solving problems that have already been solved. Repackaging open-source software isn't what the teachers are after. That isn't to say that students aren't learning from the explosion in open-source neural network software. On one teacher's desk sat a copy of Jeff Heaton's Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks. (This title begs the uncomfortable question whether there is a parallel course of study, Artificial Intelligence for Machines, where machines learn to program other machines. The answer, I suppose, is "Not yet.") Students are learning how to work with neural networks, but they're doing so from the bottom up. A junior explained to me how he trained a neural network to play tic-tac-toe—a problem that was solved over fifteen years ago, but remains a seminal coding problem. Next year, TJ will offer a course in computer vision that will cover convolutional neural networks.

Maybe it's a cliché to say that the projects students were working on are mind-blowing, but I was floored by the things I saw TJ students doing. One student was disassembling a Keurig machine and turning it into a net-enabled coffee maker so it could join the Internet of Things. Wires snaked through it as though the internet was physically infiltrating the coffee maker, like Star Trek's Borg. Another student was tinkering with something that looked like a cross between a 1980s Nintendo Power Glove and an Apple smartwatch. He explained it was a "gauntlet," like that used by Iron Man. When I stared at him blankly, he explained (in that patient explaining-to-an-old-person voice that young people use) that a gauntlet is the name for the wrist-mounted control that Iron Man uses to fly his suit. "Oh, yeah. That's cool," I said, clearly not getting it. I don't feel like I need the full functionality of my smartphone mounted on my wrist, but then again I wouldn't have thought ten years ago that I needed a touchscreen smartphone on my person at all times in the first place. Technology has a way of surprising us. Today's technology landscape is a democratized one, where game-changing innovations don't just come out of tech giants like Google and Apple but can come from anyone, even high-school students. The AI revolution isn't something that is happening out there, only in top-tier research labs. It's happening everywhere.

THE EVERYONE REVOLUTION

I asked Brandon Tseng from Shield AI where this path to ever-greater autonomy was taking us. He said, "I don't think we're ever going to give [robots] complete autonomy. Nor do I think we should give them complete autonomy." On one level, it's reassuring to know that Tseng, like nearly everyone I met working on military robotics, saw a limit to how much autonomy we should give machines. Reasonable people might disagree on where that limit is, and for some people autonomous weapons that search for and engage targets within narrow human-defined parameters might be acceptable, but everyone I spoke with agreed there should be some limits. But the scary thing is that reasonableness on the part of Tseng and other engineers may not be enough. What's to stop a technologically inclined terrorist from building a swarm of people-hunting autonomous weapons and letting them loose in a crowded area? It might take some engineering and some time, but the underlying technological know-how is readily available. We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It's already here.

What we do with the technology is an open question. What would be the consequence of a world of autonomous weapons? Would they lead to a robutopia or robopocalypse? Writers have pondered this question in science fiction for decades, and their answers vary wildly. The robots of Isaac Asimov's books are mostly benevolent partners to humans, helping to protect and guide humanity. Governed by the Three Laws of Robotics, they are incapable of harming humans. In Star Wars, droids are willing servants of humans. In the Matrix trilogy, robots enslave humans, growing them in pods and drawing on their body heat for power. In the Terminator series, Skynet strikes in one swift blow to exterminate humanity after it determines humans are a threat to its existence.

We can't know with any certainty what a future of autonomous weapons would look like, but we do have better tools than science fiction to guess at what promise and perils they might bring. Humanity's past and present experiences with autonomy in the military and other settings point to the potential benefits and dangers of autonomous weapons. These lessons allow us to peer into a murky future and, piece by piece, begin to discern the shape of things to come.

PART III

Runaway Gun

9

ROBOTS RUN AMOK

FAILURE IN AUTONOMOUS SYSTEMS

March 22, 2003—The system said to fire. The radars had detected an incoming tactical ballistic missile, or TBM, probably a Scud missile of the type Saddam had used to harass coalition forces during the first Gulf War. This was their job, shooting down the missile. They needed to protect the other soldiers on the ground, who were counting on them. It was an unfamiliar set of equipment; they were supporting an unfamiliar unit; they didn't have the intel they needed. But this was their job. The weight of the decision rested on a twenty-two-year-old second lieutenant fresh out of training. She weighed the available evidence. She made the best call she could: fire.

With a BOOM-ROAR-WOOSH, the Patriot PAC-2 missile left the launch tube, lit its engine, and soared into the sky to take down its target. The missile exploded. Impact. The ballistic missile disappeared from their screens: their first kill of the war. Success.

From the moment the Patriot unit left the States, circumstances had been against them. First, they'd fallen in on a different, older, set of equipment than what they'd trained on. Then once in theater, they were detached from their parent battalion and attached to a new battalion whom they hadn't worked with before. The new battalion was using the newer model equipment, which meant their old equipment (which they weren't fully trained on in the first place) couldn't communicate with the rest of the battalion. They were in the dark. Their systems couldn't connect to the larger network, depriving them of vital information. All they had was a radio.

But they were soldiers, and they soldiered on. Their job was to protect coalition troops against Iraqi missile attacks, and so they did. They sat in their command trailer, with outdated gear and imperfect information, and they made the call. When they saw the missiles, they took the shots. They protected people.

The next night, at 1:30 a.m., there was an attack on a nearby base. A U.S. Army sergeant threw a grenade into a command tent, killing one soldier and wounding fifteen. He was promptly detained but his motives were unclear. Was this the work of one disgruntled soldier or was he an infiltrator? Was this the first of a larger plot? Word of the attack spread over the radio. Soldiers were sent to guard the Patriot battery's outer perimeter in case follow-on attacks came, leaving only three people in the command trailer, the lieutenant and two enlisted soldiers.

Elsewhere that same night, further north over Iraq, British Flight Lieutenant Kevin Main turned around his Tornado GR4A fighter jet and headed back toward Kuwait, his mission for the day complete. In the back seat as navigator was Flight Lieutenant Dave Williams. What Main and Williams didn't know as they rocketed back toward friendly lines was that a crucial piece of equipment, the identification friend or foe (IFF) signal, wasn't on. The IFF was supposed to broadcast a signal to other friendly aircraft and ground radars to let them know their Tornado was friendly and not to fire. But the IFF wasn't working. The reason why is still mysterious. It could be because Main and Williams turned it off while over Iraqi territory so as not to give away their position and forgot to turn it back on when returning to Kuwait. It could be because the system simply broke, possibly from a power supply failure. The IFF signal had been tested by maintenance personnel prior to the aircraft taking off, so it should have been functional, but for whatever reason it wasn't broadcasting.

As Main and Williams began their descent toward Ali Al Salem air base, the Patriot battery tasked with defending coalition bases in Kuwait sent out a radar signal into the sky, probing for Iraqi missiles. The radar signal bounced off the front of Main and Williams' aircraft and reflected back, where it was received by the Patriot's radar dish. Unfortunately, the Patriot's computer didn't register the radar reflection from the Tornado as an aircraft. Because of the aircraft's descending profile, the Patriot's computer tagged the radar signal as coming from an anti-radiation missile. In the Patriot's command trailer, the humans didn't know that a friendly aircraft was coming in for a landing. Their screen showed a radar-hunting enemy missile homing in on the Patriot battery.

The Patriot operators' mission was to shoot down ballistic missiles, which are different from anti-radiation missiles. It would be hard for a radar to confuse an aircraft flying level with a ballistic missile, which follows a parabolic trajectory through the sky like a baseball. Anti-radiation missiles are different. They have a descending flight profile, like an aircraft coming in on landing. Anti-radiation missiles home on radars and could be deadly to the Patriot. Shooting them wasn't the Patriot operators' primary job, but they were authorized to engage if the missile appeared to be homing in on their radar.

The Patriot operators saw the missile headed toward their radar and weighed their decision. The Patriot battery was operating alone, without the ability to connect to other radars on the network because of their outdated equipment. Deprived of the ability to see other radar inputs directly, the lieutenant called over the radio to the other Patriot units. Did they see an anti-radiation missile? No one else saw it, but this meant little, since other radars may not have been in a position to see it. The Tornado's IFF signal, which would have identified the blip on their radar as a friendly aircraft, wasn't broadcasting. Even if it had been working, as it turns out, the Patriot wouldn't have been able to see the signal—the codes for the IFF hadn't been loaded into the Patriot's computers. The IFF, which was supposed to be a backup safety measure against friendly fire, was doubly broken.

There were no reports of coalition aircraft in the area. There was nothing at all to indicate that the blip that appeared on their scopes as an anti-radiation missile might, in fact, be a friendly aircraft. They had seconds to decide.

They took the shot. The missile disappeared from their scope. It was a hit. Their shift ended. Another successful day.

Elsewhere, Main and Williams' wingman landed in Kuwait, but Main and Williams never returned. The call went out: there is a missing Tornado aircraft. As the sun came up over the desert, people began to put two and two together. The Patriot had shot down one of their own.

U.S. Army Patriot Operations: The Patriot air and missile defense system is used to counter a range of threats from enemy aircraft and missiles.

The Army opened an investigation, but there was still a war to fight. The lieutenant stayed at her post; she had a job to do. The Army needed her to do that job, to protect other soldiers from Saddam's missiles. Confusion and chaos are unfortunate realities of war. Unless the investigation determined that she was negligent, the Army needed her in the fight. More of Saddam's missiles were coming.

The very next night, another enemy ballistic missile popped up on their scope. They took the shot. Success. It was a clean hit—another enemy ballistic missile down. The same Patriot battery had two more successful ballistic missile shootdowns before the end of the war. In all, they were responsible for 45 percent of all successful ballistic missile engagements in the war. Later, the investigation cleared the lieutenant of wrongdoing. She made the best call with the information she had.

Other Patriot units were fighting their own struggle against the fog of war. The day after the Tornado shootdown, a different Patriot unit got into a friendly fire engagement with a U.S. F-16 aircraft flying south of Najaf in Iraq. This time, the aircraft shot first. The F-16 fired off a radar-hunting AGM-88 high-speed anti-radiation missile. The missile zeroed in on the Patriot's radar and knocked it out of commission. The Patriot crew was unharmed—a near miss.

After these incidents, a number of safety measures were immediately put in place to prevent further fratricides. The Patriot has both a manual (semiautonomous) and auto-fire (supervised autonomous) mode, which can be kept at different settings for different threats. In manual mode, a human is required to approve an engagement before the system will launch. In auto-fire mode, if there is an incoming threat that meets its target parameters, the system will automatically engage the threat on its own.

Because ballistic missiles often afford very little reaction time before impact, Patriots sometimes operated in auto-fire mode for tactical ballistic missiles. Now that the Army knew the Patriot might misidentify a friendly aircraft as an anti-radiation missile, however, they ordered Patriot units to operate in manual mode for anti-radiation missiles. As an additional safety, systems were now kept in "standby" status so they could track targets, but could not fire without a human bringing the system back to "operate" status. Thus, in order to fire on an anti-radiation missile, two steps were needed: bringing the launchers to operate status and authorizing the system to fire on the target. Ideally, this would prevent another fratricide like the Tornado shootdown.
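The difference between the two modes, and the extra standby-to-operate step added after the shootdown, is easier to see laid out as control logic. What follows is a deliberately simplified illustration in code, not the Patriot's actual software; every name and condition in it is a stand-in chosen for the example:

```python
# Deliberately simplified illustration; this is not the Patriot's actual software.
from enum import Enum

class Mode(Enum):
    MANUAL = "semiautonomous"            # human must authorize each engagement
    AUTO_FIRE = "supervised autonomous"  # system fires unless a human halts it

def should_fire(track_meets_parameters, launcher_at_operate, mode,
                operator_authorized, operator_halted):
    """Decide whether to launch against a track the radar has classified as a threat."""
    if not track_meets_parameters:
        return False
    if not launcher_at_operate:
        return False  # launchers left in "standby" can track targets but never fire
    if mode is Mode.MANUAL:
        return operator_authorized   # fire only on a positive human action
    return not operator_halted       # auto-fire: fire unless the human intervenes in time

# Anti-radiation missile tracks after the Tornado shootdown: manual mode plus standby status,
# so firing requires two human steps (bring launchers to operate, then authorize the shot).
print(should_fire(True, False, Mode.MANUAL, operator_authorized=False, operator_halted=False))    # False
print(should_fire(True, True, Mode.AUTO_FIRE, operator_authorized=False, operator_halted=False))  # True
```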

Despite these precautions, a little over a week later on April 2, disaster struck again. A Patriot unit operating north of Kuwait on the road to Baghdad picked up an inbound ballistic missile. Shooting down ballistic missiles was their job. Unlike the anti-radiation missile that the earlier Patriot unit had fired on—which turned out to be a Tornado—there was no evidence to suggest ballistic missiles might be misidentified as aircraft.

OBSERVE: What is it? Whose is it? Radar detects and classifies object. Humans apply outside information and context.

ORIENT: Is it hostile? Is it a valid target? Establish situational awareness. Apply rules of engagement.

DECIDE: Engage? Decision whether or not to fire. Manual mode (semiautonomous): human operator must authorize engagement or system will not fire. Auto-fire mode (supervised autonomous): system will fire unless human operator halts engagement.

ACT: System fires and missile maneuvers to target. Human operator can choose to abort missile while in flight.

Patriot Decision-Making Process: The OODA decision-making process for a Patriot system. In manual mode, the human operator must take a positive action in order for the system to fire. In auto-fire mode, the human supervises the system and can intervene if necessary, but the system will fire on its own if the human does not intervene. Auto-fire mode is vital for defending against short-warning attacks where there may be little time to make a decision before impact. In both modes, the human can still abort the missile while in flight.

What the operators didn't know—what they could not have known—was that there was no missile. There wasn't even an aircraft misidentified as a missile. There was nothing. The radar track was false, a "ghost track" likely caused by electromagnetic interference between their radar and another nearby Patriot radar. The Patriot units supporting the U.S. advance north to Baghdad were operating in a nonstandard configuration. Units were spread in a line south-to-north along the main highway to Baghdad instead of the usual widely distributed pattern they would adopt to cover an area. This may have caused radars to overlap and interfere.

But the operators in the Patriot trailer didn't know this. All they saw was a ballistic missile headed their way. In response, the commander ordered the battery to bring its launchers from "standby" to "operate."

The unit was operating in manual mode for anti-radiation missiles, but auto-fire mode for ballistic missiles. As soon as the launcher became operational, the auto-fire system engaged: BOOM-BOOM. Two PAC-3 missiles launched automatically.

The two PAC-3 missiles steered toward the incoming ballistic missile, or at least to the spot where the ground-based radar told them it should be. The missiles activated their seekers to look for the incoming ballistic missile, but there was no missile.

Tragically, the missiles' seekers did find something: a U.S. Navy F/A-18C Hornet fighter jet nearby. The jet was piloted by Lieutenant Nathan White, who was simply in the wrong place at the wrong time. White's F-18 was squawking IFF and he showed up on the Patriot's radar as an aircraft. It didn't matter. The PAC-3 missiles locked onto White's aircraft. White saw the missiles coming and called it out over the radio. He took evasive action, but there was nothing he could do. Seconds later, both missiles struck his aircraft, killing him instantly.

ASSESSING THE PATRIOT'S PERFORMANCE

The Patriot fratricides are an example of the risks of operating complex, highly automated lethal systems. In a strict operational sense, the Patriot units accomplished their mission. Over sixty Patriot fire units were deployed during the initial phase of the war, forty from the United States and twenty-two from four coalition nations. Their mission was to protect ground troops from Iraqi ballistic missiles, which they did. Nine Iraqi ballistic missiles were fired at coalition forces; all were successfully engaged by Patriots. No coalition troops were harmed by Iraqi missiles. A Defense Science Board Task Force on the Patriot's performance concluded that, with respect to missile defense, the Patriot was a "substantial success."

On the other hand, in addition to these nine successful engagements, Patriots were involved in three fratricides: two incidents in which Patriots shot down friendly aircraft, killing the pilots, and a third incident in which an F-16 fired on a Patriot. Thus, of the twelve total engagements involving Patriots, 25 percent were fratricides, an "unacceptable" fratricide rate according to Army investigators.

The reasons for the Patriot fratricides were a complex mix of human error, improper testing, poor training, and unforeseen interactions on the battlefield. Some problems were known—IFF was well understood to be an imperfect solution for preventing fratricides. Other problems, such as the potential for the Patriot to misclassify an aircraft as an anti-radiation missile, had been identified during operational testing but had not been corrected and were not included in operator training. Still other issues, such as the potential for electromagnetic interference to cause a false radar track, were novel and unexpected. Some of these complications were preventable, but others were not. War entails uncertainty. Even the best training and operational testing can only approximate the actual conditions of war. Inevitably, soldiers will face wartime conditions where the environment, adversary innovation, and simply the chaos, confusion, and violence of war all contribute to unexpected challenges. Many things that seem simple in training often look far different in the maw of combat.

One thing that did not happen and was not a cause of the Patriot fratricides is that the Patriot system did not fail, per se. It didn't break. It didn't blow a fuse. The system performed its function: it tracked incoming targets and, when authorized, shot them down. Also, in both instances a human was required to give the command to fire or at least to bring the launchers to operate. When this lethal, highly automated system was placed in the hands of operators who did not fully understand its capabilities and limitations, however, it turned deadly. Not because the operators were negligent. No one was found to be at fault in either incident. It would be overly simplistic to blame the fratricides on "human error." Instead, what happened was more insidious. Army investigators determined the Patriot community had a culture of "trusting the system without question." According to Army researchers, the Patriot operators, while nominally in control, exhibited automation bias: an "unwarranted and uncritical trust in automation. In essence, control responsibility is ceded to the machine." There may have been a human "in the loop," but the human operators didn't question the machine when they should have. They didn't exercise the kind of judgment Stanislav Petrov did when he questioned the signals his system was giving him regarding a false launch of U.S. nuclear missiles. The Patriot operators trusted the machine, and it was wrong.

ROBUTOPIA VS. ROBOPOCALYPSE

We have two intuitions when it comes to autonomous systems, intuitions that come partly from science fiction but also from our everyday experiences with phones, computers, cars, and myriad other computerized devices.

The first intuition is that autonomous systems are reliable and introduce greater precision. Just as autopilots have improved air travel safety, automation can also improve safety and reliability in many other domains. Humans are terrible drivers, for example, killing more than 30,000 people a year in the United States alone (roughly the equivalent of a 9/11 attack every month). Even without fully autonomous cars, more advanced vehicle autopilots that allow cars to drive themselves under most conditions could dramatically improve safety and save lives.

However, we have another instinct when it comes to autonomous systems, and that is one of robots run amok, autonomous systems that slip out of human control and result in disastrous outcomes. These fears are fed to us through a steady diet of dystopian science fiction stories in which murderous AIs turn on humans, from 2001: A Space Odyssey's HAL 9000 to Ex Machina's Ava. But these intuitions also come from our everyday experiences with simple automated devices. Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to "p.m." instead of "a.m.," or any of the countless frustrations that come with interacting with computers, has experienced the problem of "brittleness" that plagues automated systems. Autonomous systems will do precisely what they are programmed to do, and it is this quality that makes them both reliable and maddening, depending on whether what they were programmed to do was the right thing at that point in time.

Both of our intuitions are correct. With proper design, testing, and use, autonomous systems can often perform tasks far better than humans. They can be faster, more reliable, and more precise. However, if they are placed into situations for which they were not designed, if they aren't fully tested, if operators aren't properly trained, or if the environment changes, then autonomous systems can fail. When they do fail, they often fail badly. Unlike humans, autonomous systems lack the ability to step outside their instructions and employ "common sense" to adapt to the situation at hand.

This problem of brittleness was highlighted during a telling moment in the 2011 Jeopardy! Challenge in which IBM's Watson AI took on human Jeopardy