Artificial intelligence and machine learning in armed conflict: A human-centred approach

Published online by Cambridge University Press: 18 March 2021

Abstract

Note: This is an edited version of a paper published by the ICRC in June 2019.

Type
Reports and documents
Copyright
Copyright © ICRC 2021


References

1 ICRC, “Expert Views on the Frontiers of Artificial Intelligence and Conflict”, ICRC Humanitarian Law and Policy Blog, 19 March 2019, available at: https://blogs.icrc.org/law-and-policy/2019/03/19/expert-views-frontiers-artificial-intelligence-conflict.

2 ICRC, Summary Document for UN Secretary-General's High-Level Panel on Digital Cooperation, January 2019, available at: https://digitalcooperation.org/wp-content/uploads/2019/02/ICRC-Submission-UN-Panel-Digital-Cooperation.pdf.

3 States party to Additional Protocol I to the Geneva Conventions have an obligation to conduct legal reviews of new weapons during their development and acquisition, and prior to their use in armed conflict. For other States, legal reviews are a common-sense measure to help ensure that their armed forces can conduct hostilities in accordance with their international obligations.

4 ICRC, International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, report for the 33rd International Conference of the Red Cross and Red Crescent, Geneva, October 2019 (ICRC Challenges Report 2019), pp. 18–29, available at: www.icrc.org/en/publication/4427-international-humanitarian-law-and-challenges-contemporary-armed-conflicts; ICRC, International Humanitarian Law and the Challenges of Contemporary Armed Conflicts, report for the 32nd International Conference of the Red Cross and Red Crescent, Geneva, October 2015 (ICRC Challenges Report 2015), pp. 38–47, available at: www.icrc.org/en/document/international-humanitarian-law-and-challenges-contemporary-armed-conflicts.

5 The “principles of humanity” and the “dictates of public conscience” are mentioned in Article 1(2) of Additional Protocol I and in the preamble of Additional Protocol II to the Geneva Conventions; this provision, known as the Martens Clause, is part of customary international humanitarian law.

6 ICRC, Statements to the Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems, Geneva, 25–29 March 2019, available at: https://tinyurl.com/yyeadno3.

7 ICRC Challenges Report 2019, above note 4, pp. 29–31; Neil Davison, “Autonomous Weapon Systems under International Humanitarian Law”, Perspectives on Lethal Autonomous Weapon Systems, United Nations Office for Disarmament Affairs Occasional Paper No. 30, November 2017, available at: www.icrc.org/en/document/autonomous-weapon-systems-under-international-humanitarian-law.

8 ICRC, Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?, report of an expert meeting, Geneva, 3 April 2018, available at: www.icrc.org/en/document/ethics-and-autonomous-weapon-systems-ethical-basis-human-control.

9 ICRC, ICRC Commentary on the “Guiding Principles” of the CCW GGE on “Lethal Autonomous Weapons Systems”, Geneva, July 2020, available at: https://documents.unoda.org/wp-content/uploads/2020/07/20200716-ICRC.pdf; Vincent Boulanin, Neil Davison, Netta Goussac and Moa Peldán Carlsson, Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control, ICRC and Stockholm International Peace Research Institute, June 2020, available at: www.icrc.org/en/document/limits-autonomous-weapons; ICRC, “The Element of Human Control”, UN Doc. CCW/MSP/2018/WP.3, working paper, CCW Meeting of High Contracting Parties, 20 November 2018, available at: https://tinyurl.com/y3c96aa6.

10 ICRC, Statement to the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems under Agenda Item 6(b), Geneva, 27–31 August 2018, available at: https://tinyurl.com/y4cql4to.

11 Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Future of Humanity Institute, Oxford, February 2018.

12 United Nations Institute for Disarmament Research (UNIDIR), The Weaponization of Increasingly Autonomous Technologies: Autonomous Weapon Systems and Cyber Operations, 2017.

13 By asserting that international humanitarian law applies to cyber operations, the ICRC is in no way condoning cyber warfare, nor is it condoning the militarization of cyberspace: ICRC Challenges Report 2015, above note 4, pp. 38–44.

14 ICRC, The Potential Human Cost of Cyber Operations, report of an expert meeting, Geneva, May 2019, available at: www.icrc.org/en/document/potential-human-cost-cyber-operations.

15 Steven Hill and Nadia Marsan, “Artificial Intelligence and Accountability: A Multinational Legal Perspective”, in Big Data and Artificial Intelligence for Military Decision Making, STO Meeting Proceedings STO-MP-IST-160, NATO, 2018.

16 ICRC, Symposium Report: Digital Risks in Situations of Armed Conflict, March 2019, p. 9, available at: www.icrc.org/en/event/digital-risks-symposium.

17 Dustin A. Lewis, Gabriella Blum and Naz K. Modirzadeh, War-Algorithm Accountability, Harvard Law School Program on International Law and Armed Conflict, August 2016.

18 United States, “Implementing International Humanitarian Law in the Use of Autonomy in Weapon Systems”, working paper, CCW Group of Governmental Experts, March 2019.

19 Ashley Deeks, “Predicting Enemies”, Virginia Public Law and Legal Theory Research Paper No. 2018-21, March 2018, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3152385.

20 Vincent Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Vol. 1: Euro-Atlantic Perspectives, Stockholm International Peace Research Institute, Stockholm, May 2019.

21 S. Hill and N. Marsan, above note 15.

22 Lorna McGregor, “The Need for Clear Governance Frameworks on Predictive Algorithms in Military Settings”, ICRC Humanitarian Law and Policy Blog, 28 March 2019, available at: https://blogs.icrc.org/law-and-policy/2019/03/28/need-clear-governance-frameworks-predictive-algorithms-military-settings; AI Now Institute, AI Now Report 2018, New York University, December 2018, pp. 18–22.

23 ICRC, above note 16, p. 8.

24 ICRC and Brussels Privacy Hub, Handbook on Data Protection in Humanitarian Action, 2nd ed., Geneva, May 2020, available at: www.icrc.org/en/data-protection-humanitarian-action-handbook.

25 ICRC and International Federation of Red Cross and Red Crescent Societies, The Fundamental Principles of the International Red Cross and Red Crescent Movement: Ethics and Tools for Humanitarian Action, Geneva, November 2015, available at: https://shop.icrc.org/les-principes-fondamentaux-de-la-croix-rouge-et-du-croissant-rouge-2757.html.

26 See, for example, the Partnership on AI's focus on the safety of AI and machine learning technologies as “an urgent short-term question, with applications in medicine, transportation, engineering, computer security, and other domains hinging on the ability to make AI systems behave safely despite uncertain, unanticipated, and potentially adversarial environments”. Partnership on AI, “Safety-Critical AI: Charter”, 2018, available at: www.partnershiponai.org/working-group-charters-guiding-our-exploration-of-ais-hard-questions.

27 ICRC, above note 6.

28 United Nations, Report of the 2018 session of the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, UN Doc. CCW/GGE.1/2018/3, 23 October 2018, Sections III.A.26(b), III.C.28(f), available at: http://undocs.org/en/CCW/GGE.1/2018/3.

29 See, for example, the statements delivered at the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems, Geneva, 25–29 March 2019, available at: https://tinyurl.com/yyeadno3.

30 Tess Bridgeman, “The Viability of Data-Reliant Predictive Systems in Armed Conflict Detention”, ICRC Humanitarian Law and Policy Blog, 8 April 2019, available at: https://blogs.icrc.org/law-and-policy/2019/04/08/viability-data-reliant-predictive-systems-armed-conflict-detention.

31 Future of Life Institute, “Asilomar AI Principles”, 2017, available at: https://futureoflife.org/ai-principles.

32 European Commission, Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence, 8 April 2019, pp. 15–16, available at: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

33 OECD, “Recommendation of the Council on Artificial Intelligence”, OECD/LEGAL/0449, 22 May 2019, available at: https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

34 Beijing Academy of Artificial Intelligence, “Beijing AI Principles”, 28 May 2019, available at: https://baip.baai.ac.cn/en.

35 Google, “AI at Google: Our Principles”, The Keyword, 7 June 2018, available at: www.blog.google/technology/ai/ai-principles. “We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control.”

36 Microsoft, “Microsoft AI Principles”, 2019, available at: www.microsoft.com/en-us/ai/our-approach-to-ai; Rich Sauer, “Six Principles to Guide Microsoft's Facial Recognition Work”, Microsoft Blog, 17 December 2018, available at: https://blogs.microsoft.com/on-the-issues/2018/12/17/six-principles-to-guide-microsofts-facial-recognition-work.

37 IBM, “IBM's Principles for Trust and Transparency”, THINKPolicy Blog, 30 May 2018, available at: www.ibm.com/blogs/policy/trust-principles.

38 US Department of Defense (DoD), Summary of the 2018 Department of Defense Artificial Intelligence Strategy, 2019.

39 DoD, Defense Innovation Board, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense, 31 October 2019.

40 DoD, “DOD Adopts Ethical Principles for Artificial Intelligence”, news release, 24 February 2020, available at: www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.

41 French Ministry of Defence, “Florence Parly Wants High-Performance, Robust and Properly Controlled Artificial Intelligence”, Actualités, 10 April 2019, available at: www.defense.gouv.fr/english/actualites/articles/florence-parly-souhaite-une-intelligence-artificielle-performante-robuste-et-maitrisee.

42 ICRC, ICRC Strategy 2019–2022, Geneva, 2018, p. 15, available at: www.icrc.org/en/publication/4354-icrc-strategy-2019-2022.

43 ICRC, above note 8, p. 22.

44 Google, Perspectives on Issues in AI Governance, January 2019, pp. 23–24, available at: http://ai.google/perspectives-on-issues-in-AI-governance.

45 R. Sauer, above note 36: “We will encourage and help our customers to deploy facial recognition technology in a manner that ensures an appropriate level of human control for uses that may affect people in consequential ways.”

46 ICRC, above note 8, p. 13.

47 Google, above note 44, p. 22.

48 Dario Amodei et al., Concrete Problems in AI Safety, arXiv:1606.06565, 2016, available at: https://arxiv.org/abs/1606.06565.

49 ICRC, Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, report of an expert meeting, Geneva, August 2019, available at: www.icrc.org/en/document/autonomy-artificial-intelligence-and-robotics-technical-aspects-human-control.

50 Joel Lehman et al., The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, arXiv:1803.03453, 2018, available at: https://arxiv.org/abs/1803.03453.

51 David Silver et al., “Mastering the Game of Go without Human Knowledge”, Nature, Vol. 550, No. 7676, 19 October 2017.

52 UNIDIR, Algorithmic Bias and the Weaponization of Increasingly Autonomous Technologies: A Primer, 2018.

53 Matthew Hutson, “A Turtle – or a Rifle? Hackers Easily Fool AIs into Seeing the Wrong Thing”, Science, 19 July 2018, available at: www.sciencemag.org/news/2018/07/turtle-or-rifle-hackers-easily-fool-ais-seeing-wrong-thing.

54 AI Now Institute, above note 22, pp. 15–17.

55 Ibid., pp. 18–22.

56 Arnold W. M. Smeulders et al., “Content-Based Image Retrieval at the End of the Early Years”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, 2000.

57 M. Hutson, above note 53.

58 Netta Goussac, “Safety Net or Tangled Web: Legal Reviews of AI in Weapons and War-fighting”, ICRC Humanitarian Law and Policy Blog, 18 April 2019, available at: https://blogs.icrc.org/law-and-policy/2019/04/18/safety-net-tangled-web-legal-reviews-ai-weapons-war-fighting; Dustin A. Lewis, “Legal Reviews of Weapons, Means and Methods of Warfare Involving Artificial Intelligence: 16 Elements to Consider”, ICRC Humanitarian Law and Policy Blog, 21 March 2019, available at: https://blogs.icrc.org/law-and-policy/2019/03/21/legal-reviews-weapons-means-methods-warfare-artificial-intelligence-16-elements-consider.

59 ICRC, Commentary on the “Guiding Principles”, above note 9; ICRC, “The Element of Human Control”, above note 9; V. Boulanin et al., above note 9.

60 ICRC, Statement to the CCW Group of Governmental Experts on Lethal Autonomous Weapons Systems, Geneva, 21–25 September 2020, available at: https://documents.unoda.org/wp-content/uploads/2020/09/20200921-ICRC-General-statement-CCW-GGE-LAWS-Sep-2020.pdf.