Online Certificate Course on International Humanitarian Law With
Special Reference to Artificial Intelligence & Armed Conflict

Artificial Intelligence & Armed Conflict: Introductory Issues and Concerns

Recent advances in artificial intelligence have the potential to affect many aspects of our lives in
significant and widespread ways. Certain types of machine learning systems, the major focus of
recent AI developments, are already pervasive, for example in weather prediction, social media
services, search engine results and online recommendation systems. Machine learning is also
being applied to complex applications such as predictive policing in law enforcement and
‘advice’ for judges when sentencing in criminal justice. Meanwhile, growing resources are being
allocated to developing other AI applications. At issue here, in the views of the experts presented
below, are data-driven machine learning algorithms that are emerging, or could emerge, as tools
to ‘advise’ or even replace humans in certain tasks and decisions during armed conflict.

It is not only private companies and academia, but States too, that are part of the drive to develop
and adopt these technologies to advance their goals. Several experts predict that AI techniques
might entail diverse, far-reaching impacts on the conduct of hostilities and the protection of
civilians, as well as other dimensions of armed conflict. As governments, and especially
militaries, seek to incorporate AI to enhance, speed up and transform their decision-making
processes and operations, what are the potential implications of its use in conflict settings, and
specifically for international humanitarian law (IHL)?

Prof Naz K. Modirzadeh1 and Dustin A. Lewis2 are of the view that, as regards the future of
artificial intelligence and armed conflict, those of us concerned about international law should
prioritize (among other things) deeply cultivating our own knowledge of the rapidly changing
technologies, and should make that an ongoing commitment. There is a perennial question
about subject-matter expertise and the law of armed conflict; consider cyber operations,
weaponeering and nuclear technology. When it comes to the increasingly impactful and diverse
suite of techniques and technologies labeled ‘AI’, the concern takes on a different magnitude and
urgency. That is in no small part because commentators have assessed that AI has the potential to
transform armed conflict, and not just the conduct of hostilities.

1 Naz K. Modirzadeh, Founding Director, Harvard Law School Program on International Law and Armed Conflict
2 Dustin A. Lewis, Senior Researcher, Harvard Law School Program on International Law and Armed Conflict

Yet it seems that the vast majority of IHL scholars and practitioners currently lack a sufficient
understanding of AI. Moreover, many don’t know what they don’t know. That is a dangerous
prospect. To better grasp the purported promise and perils of war algorithms3, much less seek to
meaningfully regulate them through international law, we must be candid about our own
technical blind spots.

Brigadier General Pat Huston4 says that AI is all around us. Google searches, Amazon and
Netflix recommendations, Siri and Alexa responses all leverage AI. AI is also common in
military applications, ranging from benign ‘smart maintenance’ for trucks to the use of
autonomous weapons. Leveraging AI for offensive autonomous weapons warrants careful
analysis. He offers three suggestions, and a few questions, to ensure that this inevitable
development is done legally and ethically:

a) Human Judgment: We must ensure that autonomous weapons allow commanders to
exercise appropriate levels of human judgment. In other words, humans should make key
decisions and must not unleash autonomous weapons over which they would lose effective
control.
b) Accountability: The use of weapons must always comply with international
humanitarian law and commanders must remain responsible for all weapons they employ.
These are fundamental principles.
c) Government cooperation with Industry: The best and brightest AI researchers should
insist on legal and ethical conduct for military AI uses and should then work with
compliant governments to this end. If they boycott military projects, the void will be
filled by researchers who are less capable, less ethical, or both, and that would be a recipe
for disaster.

3 A “war algorithm” is any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict. See: https://pilac.law.harvard.edu/aws
4 Commanding General of the Army’s Legal Center and School in Charlottesville, Virginia

Brigadier General Huston also raises a few questions, such as: What if AI enhancements make
autonomous weapons better than traditional weapons? What if autonomous weapons are more precise and
cause less collateral damage? Would we be legally obligated to use them, if available? Would we
have an ethical obligation to pursue them?

Tess Bridgeman5 says that as machine learning and other forms of AI develop, humans building and
employing these tools in the armed conflict context should be mindful of the distinction between
using these technologies in situations where their capabilities exceed human capacity—such as
making basic perception judgments about large amounts of visual content quickly—and
situations where making complex, nuanced judgments will continue to require deep human
expertise.

Given the dangers of algorithmically-encoded bias and the tremendous human, financial and data
resources necessary to build complex AI systems, we should ask two related questions. First,
how can we use AI-driven tools as one component of human decision-making where the
obstacles we currently face are due to limits in human capacity that are measurably and reliably
improved by use of AI tools? Second, bearing in mind the purposes of IHL, how can we use AI
to gain a better factual understanding of complex conflict environments in order to facilitate
ultimately human-centered decisions? For example, when can AI help provide a more complete
picture of the expected military advantage of an attack or expected damage to civilians or
civilian objects? These kinds of inputs could provide useful information to the ultimate human
decision-maker, but don’t displace the roles or responsibilities of the lawyer, the attack planner
or the combatant in applying IHL.

5 Lecturer at Stanford University, affiliate at Stanford’s Center for International Security and Cooperation, Senior Fellow at NYU Law School’s Center on Law and Security, and former Special Assistant to President Obama and Deputy Legal Adviser to the National Security Council

Professor Yuval Shany6 believes that much of the discourse surrounding the use of AI on the
battlefield revolves around the lawfulness of introducing systems capable of engaging in a
sophisticated decision-making process without meaningful human control. Lack of human
judgment and compassion might render AI-based means and methods of warfare under-
protective of humanitarian interests and less humane. As a result, at the current stage of
development of AI technology, such systems cannot serve as fully autonomous decision-makers
on the battlefield.7 Still, the more advanced AI systems become in terms of their capacity to
process large quantities of data in short time spans, the more difficult the question of what
constitutes meaningful human control becomes, and there is a risk that such control will
gradually become merely pro forma.

Furthermore, any normative position on the use of AI for on-battlefield decisions (and off-
battlefield decisions) should ultimately be informed by empirical scientific data on the
comparative advantages and disadvantages of human and machine decision-makers. If it turns
out at some point in the future that machine decision-makers are systematically less prone
to mistakes (e.g., fewer false positives) and less likely to be biased (e.g., influenced by fear or
hatred) than humans, then—as a minimum—militaries would be required to use AI in decision-
making, in order to reduce harm to civilians or civilian objects (either due to erroneous
identification of combatants or possible collateral damage). Failure to use such technology may
be regarded as failure to apply a reasonable precaution. This does not mean that the human
decision-maker is redundant. Like in other ‘double lock’ systems, even the less informed (but
perhaps more compassionate and context-sensitive) human decision-maker should remain in the
loop, able to intervene and issue a ‘mission abort’ order, in order to correct what he or she
perceives as AI mistakes. This approach does not cut in the other direction, though: the less
informed human decision-maker should not overrule an AI decision not to attack if the more
informed AI identifies the target as a civilian or points to excessive civilian harm. In sum, human
control is inevitably less meaningful in situations where there exists a large gap between the
data-gathering and analysis capacity of machines and humans. And such gaps are likely
eventually to push humans away from being the principal decision-maker in certain battlefield
contexts and towards becoming a check and/or balance on AI decision-making power.

6 Hersch Lauterpacht Chair in International Law of the Law Faculty of the Hebrew University of Jerusalem, Vice President for Research at the Israel Democracy Institute, Chair of the UN Human Rights Committee, and academic director of the CyberLaw program at the Cyber Security Research Center of the Hebrew University
7 Human Rights Committee, General Comment No. 36 (2018)

Neil Davison8 & Netta Goussac9 are of the opinion that the most significant impact of AI and
machine learning in armed conflict will be on decision-making. Be it software that controls a
robot; a digital system crunching data and serving up an analysis, prediction, or recommendation
for humans to act upon; a cyber-attack initiated by AI; or a machine learning system creating
‘fake’ information, the overriding concern is the reliance by humans on machines when taking
decisions. In armed conflict, many of these decisions will be ‘safety critical’, meaning that the
decision may result in death or serious injury, damage to or destruction of property, or may
curtail individual freedoms. Preserving the fundamental human role in—and control over—such
decisions will be essential to avoid unpredictable consequences for civilians and combatants.
Indeed, human involvement in decision-making is necessary to ensure compliance with
international humanitarian rules governing human behaviour in warfare and to preserve a
measure of humanity in conflict.

This means adapting technology, and the rules that govern it, to fit humans, not the other way
around. Safeguards will be needed to allow the humans ‘in-the-loop’ to fulfill their decision-
making responsibilities—both legal and ethical. If this means slowing things down so that
humans can meaningfully play their role, so be it. The alternative approach—where the use of AI
prevents humans from fulfilling their responsibilities—will not end well.

Prof James Kraska10, Prof Michael N. Schmitt11 & Lt Col Jeffrey Biller12 note that legal
research on military uses of artificial intelligence (AI) tends to focus on the difficult
questions related to autonomous targeting.13 While the legal and ethical concerns of separating
humans from the decision-making process are justified, they do not reflect the more immediate
realities of technologies utilizing AI. These technologies use AI to aid the human decision-
making process rather than replace it. While these uses of AI do not receive the same level of
scrutiny as their targeting counterparts, significant legal and policy questions remain.

8 Scientific and Policy Adviser, Arms Unit, International Committee of the Red Cross Legal Division
9 Legal Adviser, Arms Unit, International Committee of the Red Cross Legal Division
10 Charles H. Stockton Professor of International Maritime Law
11 Howard S. Levie Professor of Law and Armed Conflict
12 Military Professor, Stockton Center for International Law, U.S. Naval War College
13 https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-67/JFQ-67_77-84_Thurnher.pdf

Such AI technologies already exist and are used for purposes such as aiding judges in making
sentencing decisions, and they could be useful in military detention operations. Had AI
technologies been available to assist, for example, coalition forces in Iraq in 2007, where they
were processing over 26,000 detainees,14 detainees may have been processed more efficiently
with fewer human resources and fewer errors. If AI could be used to quickly dismiss the large
number of individuals with no valid reason for detention, available manpower could be focused
on the smaller number of more difficult cases where continued detention may be warranted.

There are possible issues, however. As AI experts relayed during the workshop, the use of
algorithms in decision-making is only as good as the data on which the algorithm bases its
recommendations. If biased data is initially entered into the system, not only is the bias
imprinted into future recommendations, it can also expand in scope as those biased
recommendations are fed back into the system.15
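
To make this feedback dynamic concrete, the following minimal Python sketch simulates it under invented assumptions: the group labels, the numbers and the rule of concentrating limited screening capacity on the group with the higher recorded detention rate are all hypothetical, not a description of any real system. Two groups warrant detention at the same true rate, but a skewed historical record steers screening toward one of them, and the screening results are fed straight back into that record.

import random

# Hypothetical sketch of a bias feedback loop; every number and the
# allocation rule below are invented for illustration only.
TRUE_RATE = 0.20          # both groups genuinely warrant detention at this rate
REVIEWS_PER_CYCLE = 100   # fixed screening capacity per cycle

# Skewed historical record: past practice over-screened group "B".
records = {"A": {"detained": 20, "screened": 100},
           "B": {"detained": 40, "screened": 100}}

def observed_rate(group):
    return records[group]["detained"] / records[group]["screened"]

for cycle in range(1, 6):
    # The 'model' recommends concentrating scarce screening capacity on the
    # group with the higher recorded detention rate -- i.e. on its own past output.
    target = max(records, key=observed_rate)
    found = sum(random.random() < TRUE_RATE for _ in range(REVIEWS_PER_CYCLE))
    records[target]["detained"] += found
    records[target]["screened"] += REVIEWS_PER_CYCLE
    print(f"cycle {cycle}: screened group {target}; recorded detentions "
          f"A={records['A']['detained']}, B={records['B']['detained']}")

# In a typical run, group B keeps accumulating recorded detentions while group
# A's record stays frozen, so the apparent disparity grows cycle after cycle
# even though the underlying rates are identical.

The point of the sketch is simply that the recorded disparity is an artifact of where the system directs attention, which is exactly the kind of self-reinforcing error that human oversight of AI-assisted decisions would need to detect.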

Additional legal research and policy discussion in this area will help ensure that initial uses of
AI in military operations are conducted in a manner that fulfills the overall aims of international
humanitarian law.

14 https://digitalcommons.usnwc.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1110&context=ils
15 https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html