
Factors shaping the legal implications of increasingly autonomous military systems

Published online by Cambridge University Press:  08 March 2016

Abstract

This article identifies five factors that will shape the legal implications of the use of autonomous military systems. It contends that the systems which present legal challenges are those programmed to “make decisions” that are regulated by law. In doing so, such systems transfer control of, and responsibility for, those decisions away from those traditionally seen as the decision-makers and towards the persons responsible for developing and deploying the system. The article also suggests that there may be limits to the extent to which the rules of international humanitarian law can appropriately regulate the altered relationship between soldiers and their increasingly autonomous weapon systems.

Type: Looking into the future
Copyright © ICRC 2016


References

1 Lockheed Martin, “UCLASS”, available at: www.lockheedmartin.com.au/us/products/uclass.html (all internet references were accessed in May 2015).

2 Jean Kumagai, “A Robotic Sentry for Korea's Demilitarized Zone”, IEEE Spectrum, 1 March 2007, available at: http://spectrum.ieee.org/robotics/military-robots/a-robotic-sentry-for-koreas-demilitarized-zone.

3 The concern here is not with remotely operated weapons such as the unmanned aerial vehicles (UAVs), or drones, currently being employed in various conflicts. Such devices are manually controlled by human operators in respect of their critical functions. While it is true that autonomous vehicles would also generally be crewless, this article discusses only issues arising from a machine's capacity for “making decisions” autonomously.

4 Raytheon, “Phalanx Close-In Weapon System (CIWS)”, available at: www.raytheon.com/capabilities/products/phalanx/.

5 For an overview of these technologies see Advisory Group for Aerospace Research and Development, North Atlantic Treaty Organization (NATO), Precision Terminal Guidance for Munitions, Advisory Report No. AGARD-AR-342, February 1997, available at: www.cso.nato.int/Pubs/rdp.asp?RDP=AGARD-AR-342.

6 A cyber-weapon is software and/or hardware “used, designed, or intended to be used” to conduct “a cyber operation, whether offensive or defensive, that is reasonably expected to cause injury or death to persons or damage or destruction to objects”: Michael N. Schmitt (ed.), Tallinn Manual on the International Law Applicable to Cyber Warfare, Cambridge University Press, Cambridge, 2013, pp. 106 (Rule 30), 141 (Rule 41).

7 It is possible that use of an autonomous military system may also relate to a method of warfare, such as where the process leading to the decision to employ the system is at issue, or where the system has more than one mode of operation. For an example of the second case, see: ibid., p. 142, paras 4 and 5.

8 See, e.g., Protocol Additional (I) to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts, 1125 UNTS 3, 8 June 1977 (entered into force 7 December 1978), Part III.

9 There is, however, considerable disagreement about the precise definitions even in the technical community; see, e.g., the discussion in M. Shane Riza, Killing Without Heart: Limits on Robotic Warfare in an Age of Persistent Conflict, Potomac Books, Washington, DC, 2013, p. 13.

10 See, e.g., the discussion in Henry Hexmoor, Christiano Castelfranchi and Rino Falcone, “A Prospectus on Agent Autonomy”, in Henry Hexmoor, Christiano Castelfranchi and Rino Falcone (eds), Agent Autonomy, Kluwer, Boston, MA, 2003, p. 3.

11 George A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control, MIT Press, Cambridge, MA, 2005, p. 1. Similarly, “[a] system with a high level of autonomy is one that can be neglected for a long period of time without interaction”: Michael A. Goodrich and Alan C. Schultz, “Human-Robot Interaction: A Survey”, Foundations and Trends in Human-Computer Interaction, Vol. 1, No. 3, 2007, p. 217.

12 Charles François (ed.), International Encyclopedia of Systems and Cybernetics, Vol. 1, K. G. Saur, Munich, 2004, p. 51. Similarly, “[a]utonomy is a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, ‘self-governing’”: Defense Science Board, The Role of Autonomy in DoD Systems, US Department of Defense, July 2012, p. 1, available at: www.acq.osd.mil/dsb/reports/AutonomyReport.pdf.

13 The appropriateness of categorizing autonomous systems as weapons is also discussed in Hin-Yan Liu, “Categorization and Legality of Autonomous and Remote Weapons Systems”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 627.

14 See, e.g., Zdzislaw Bubnicki, Modern Control Theory, Springer, Berlin and New York, 2005, pp. 3–4.

15 For a general introduction to autonomous control systems, see Panos J. Antsaklis, Kevin M. Passino and S. J. Wang, “An Introduction to Autonomous Control Systems”, IEEE Control Systems, Vol. 11, No. 4, 1991, p. 5.

16 See, e.g., Markus Wagner, “Taking Humans Out of the Loop: Implications for International Humanitarian Law”, Journal of Law, Information and Science, Vol. 21, No. 2, 2011.

17 See, e.g., Army Capabilities Integration Center, US Army, Robotics Strategy White Paper, 19 March 2009, available at: www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA496734; US Air Force, Unmanned Aircraft Systems Flight Plan 2009–2047, 18 May 2009, available at: www.fas.org/irp/program/collect/uas_2009.pdf.

18 Peter A. Hancock and Stephen F. Scallen, “Allocating Functions in Human–Machine Systems”, in Robert R. Hoffman, Michael F. Sherrick and Joel S. Warm (eds), Viewing Psychology as a Whole: The Integrative Science of William N. Dember, American Psychological Association, Washington, DC, 1998, p. 521.

19 “Decide” in this case does not imply a human-like decision-making capability; it simply means the action is initiated by the computer according to its programming rather than in response to a command from the human operator.
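By way of illustration only, the following minimal sketch (all names hypothetical) contrasts an action taken in response to an operator's command with an action "decided" by the system in the limited sense used here, i.e. initiated by its own programmed rule.

```python
# Hypothetical sketch: operator-commanded vs. computer-initiated ("decided") action.

def operator_commanded_engage(operator_command: bool) -> bool:
    # The action occurs only when a human operator issues the command.
    return operator_command

def computer_initiated_engage(sensor_reading: float, threshold: float = 0.9) -> bool:
    # The action is initiated by the system whenever its programmed condition
    # is met; no human-like judgement is implied.
    return sensor_reading >= threshold

if __name__ == "__main__":
    print(operator_commanded_engage(operator_command=False))  # False: no command, no action
    print(computer_initiated_engage(sensor_reading=0.95))     # True: the programmed rule triggers the action
```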

20 NIST Engineering Laboratory, National Institute of Standards and Technology, Autonomy Levels for Unmanned Systems, 16 June 2010, available at: www.nist.gov/el/isd/ks/autonomy_levels.cfm.

21 Ryan W. Proud, Jeremy J. Hart and Richard B. Mrozinski, Methods for Determining the Level of Autonomy to Design into a Human Spaceflight Vehicle: A Function Specific Approach, NASA Johnson Space Center, September 2003, p. 4, available at: www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA515467.

22 US Army Science Board, Ad-hoc Study on Human Robot Interface Issues, September 2002, p. 16, available at: www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA411834.

23 For an overview, see M. Goodrich and A. Schultz, above note 11.

24 Jean Scholtz, “Theory and Evaluation of Human Robot Interactions”, in Proceedings of the 36th Annual Hawaii International Conference on System Sciences, Hawaii, 6–9 January 2003.

25 Marti A. Hearst, “Trends & Controversies: Mixed-Initiative Interaction”, IEEE Intelligent Systems, Vol. 14, No. 5, 1999, p. 14.

26 Ian Henderson, The Contemporary Law of Targeting, Martinus Nijhoff, Boston, MA, and Leiden, 2009, p. 237.

27 For a more detailed discussion of how a balance between human control and autonomous operation may be achieved at different stages of the targeting process, see Mark Roorda, “NATO's Targeting Process: Ensuring Human Control Over and Lawful Use of ‘Autonomous’ Weapons”, Amsterdam Center for International Law Research Paper No. 2015-06, 2015, available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2593697.

28 Defense Science Board, above note 12, p. 27.

29 Ibid., p. 24.

30 An actuator is simply a device through which a controller controls a plant. One example would be an electric motor which pivots a gun turret based on a signal from the turret's software-based control system.
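A minimal sketch of the controller–actuator–plant relationship described above (all class and function names hypothetical): a software controller computes a signal, the actuator (here an electric motor) applies it, and the plant (the turret) changes state accordingly.

```python
# Hypothetical controller -> actuator -> plant example.

class Turret:                 # the plant being controlled
    def __init__(self):
        self.angle = 0.0

class TurretMotor:            # the actuator
    def apply(self, turret, signal):
        turret.angle += signal        # the motor pivots the turret by the commanded amount

def proportional_controller(target, current, gain=0.5):
    return gain * (target - current)  # control signal from a simple software controller

turret, motor = Turret(), TurretMotor()
for _ in range(20):
    motor.apply(turret, proportional_controller(target=90.0, current=turret.angle))
print(round(turret.angle, 2))         # the turret angle approaches the 90-degree target
```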

31 “Sense-think-act” refers to the continuous process by which a robot perceives its environment, uses that information to “make decisions” according to its programming, and acts on those decisions; see, e.g., G. A. Bekey, above note 11, p. 2.
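The cycle can be pictured with the following toy sketch (hypothetical sensor and actuator stubs): the system perceives its environment, applies a programmed rule to the percept, and acts on the result, then repeats.

```python
import random

def sense():
    # Perceive the environment (stubbed with a random reading here).
    return {"obstacle_distance": random.uniform(0.0, 5.0)}

def think(percept):
    # "Make a decision" by applying a programmed rule to the percept.
    return "stop" if percept["obstacle_distance"] < 1.0 else "advance"

def act(decision):
    # Drive the actuators accordingly (stubbed as a print statement).
    print(f"executing: {decision}")

for _ in range(3):   # in a real system this sense-think-act cycle runs continuously
    act(think(sense()))
```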

32 See, e.g., US Air Force, above note 17, p. 16. The OODA loop may be seen as another expression of the “sense-think-act” loop, and some literature on human–machine interaction also refers to an OODA-like loop to describe four types of functions that may be performed by an autonomous system: information acquisition, information analysis, decision selection and action implementation. See, e.g., Raja Parasuraman, Thomas B. Sheridan and Christopher D. Wickens, “A Model for Types and Levels of Human Interaction with Automation”, IEEE Transactions on Systems, Man and Cybernetics, Vol. 30, No. 3, 2000, p. 288.
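A rough sketch of the idea, following the four function types listed above: a given configuration allocates each stage either to the human operator or to the machine. The particular allocation shown is purely illustrative.

```python
# Hypothetical allocation of the four function types between human and machine.
ALLOCATION = {
    "information acquisition": "machine",   # sensors gather raw data
    "information analysis":    "machine",   # software filters and fuses the data
    "decision selection":      "human",     # the operator chooses the response
    "action implementation":   "machine",   # the system executes the chosen action
}

def automated_stages(allocation):
    # Stages performed without the human, i.e. where the operator is out of that part of the loop.
    return [stage for stage, actor in allocation.items() if actor == "machine"]

print(automated_stages(ALLOCATION))
```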

33 M. Wagner, above note 16, p. 159.

34 Gary E. Marchant et al., “International Governance of Autonomous Military Robots”, Columbia Science and Technology Law Review, Vol. 12, 2011, p. 273.

35 William Boothby, “How Far Will the Law Allow Unmanned Targeting to Go?” in Dan Saxon (ed.), International Humanitarian Law and the Changing Technology of War, Martinus Nijhoff, Boston, MA, and Leiden, 2013, p. 56.

36 William Aspray, “The Stored Program Concept”, IEEE Spectrum, Vol. 27, No. 9, 1990, p. 51.

37 See, e.g., Chantal Grut, “The Challenge of Autonomous Lethal Robotics to International Humanitarian Law”, Journal of Conflict and Security Law, Vol. 18, No. 1, 2013, p. 5.

38 Training, in this context, means exposing an artificially intelligent system to sets of example data representing the tasks it will face and the correct responses, in an effort to induce behaviours which produce optimal outcomes at those tasks. Training is essentially inductive in nature: the “real” situations encountered by a trained system will inevitably differ in some ways from the examples it was trained on, so the process is inherently error-prone.
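The inductive character of training can be illustrated with a deliberately simple sketch (synthetic data, hypothetical task): the system derives a rule from labelled examples, and its output for inputs unlike anything in the training set is unreliable.

```python
# Toy nearest-centroid classifier trained on synthetic labelled examples.
training_examples = [
    ((0.9, 0.8), "target"), ((0.8, 0.9), "target"),
    ((0.1, 0.2), "non-target"), ((0.2, 0.1), "non-target"),
]
labels = ["target", "non-target"]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# "Training": summarize the examples seen for each label.
centroids = {label: centroid([x for x, l in training_examples if l == label]) for label in labels}

def classify(point):
    # Assign the label of the nearest training centroid (a rule induced from examples).
    return min(centroids, key=lambda l: sum((a - b) ** 2 for a, b in zip(point, centroids[l])))

print(classify((0.85, 0.9)))   # resembles the training data, so the induced rule works well
print(classify((0.9, 0.15)))   # unlike anything seen in training; the output here is unreliable
```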

39 For a general overview see, e.g., Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, Englewood Cliffs, NJ, 2009.

40 Department of Defense, Autonomy Research Pilot Initiative (ARPI) Invitation for Proposals, November 2012, p. 1, available at: www.auvac.org/uploads/publication_pdf/Autonomy%20Research%20Pilot%20Initiative.pdf.

41 Ibid., p. 4.

43 Strategic Technology Office, DARPA, “Military Imaging and Surveillance Technology – Long Range (MIST-LR) Phase 2”, Broad Agency Announcement No. DARPA-BAA-13-27, 12 March 2013, p. 6, available at: www.fbo.gov/index?s=opportunity&mode=form&id=78b0ddbf382678fa9ace985380108f89&tab=core&_cview=0.

44 Department of Defense, above note 40, p. 4.

45 Office of Naval Research, “Autonomous Aerial Cargo/Utility System Program”, 27 September 2013, available at: www.onr.navy.mil/en/Science-Technology/Departments/Code-35/All-Programs/aerospace-research-351/Autonomous-Aerial-Cargo-Utility-AACUS.aspx.

46 Lockheed Martin Corporation, “K-MAX”, available at: www.lockheedmartin.com/us/products/kmax.html.

47 Mary Cummings and Angelo Collins, Autonomous Aerial Cargo/Utility System (AACUS): Concept of Operations, Office of Naval Research, p. 2, available at: www.onr.navy.mil/~/media/Files/Funding-Announcements/BAA/2012/12-004-CONOPS.ashx.

49 Anthony Finn and Steve Scheding, Developments and Challenges for Autonomous Unmanned Vehicles: A Compendium, Springer, Berlin, 2010, p. 156.

50 M. Cummings and A. Collins, above note 47, p. 2.

51 See, e.g., US Department of Defense, Unmanned Systems Integrated Roadmap FY2011-2036, No. 11-S-3613, 2011, pp. 49–50.

52 US Army Research Laboratory, “Micro Autonomous Systems and Technology (MAST)”, 25 February 2011, available at: www.arl.army.mil/www/default.cfm?page=332.

53 MAST, “Research Thrusts”, available at: www.mast-cta.org.

54 Peter W. Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century, Penguin Press, New York, 2009, p. 229.

55 Federal Business Opportunities, “Hydra”, 22 August 2013, available at: www.fbo.gov/index?s=opportunity&mode=form&id=4cc32f06144bd6f3eba18655135d6155&tab=core&_cview=1.

56 DARPA Tactical Technology Office, “Hydra”, available at: www.darpa.mil/Our_Work/TTO/Programs/Hydra.aspx.

57 John Keller, “DARPA Considers Unmanned Submersible Mothership Designed to Deploy UAVs and UUVs”, Military & Aerospace Electronics, 23 July 2013, available at: www.militaryaerospace.com/articles/2013/07/darpa-uuv-mothership.html.

58 See, generally, David S. Alberts, John J. Garstka and Frederick P. Stein, Network Centric Warfare, 2nd revised ed., Department of Defense Command and Control Research Program, 1999.

59 NATO, “NATO Network Enabled Capability”, 27 October 2010, available at: www.nato.int/cps/de/SID-815535E4-57782C82/natolive/topics_54644.htm.

60 M. P. Fewell and Mark G. Hazen, Network-Centric Warfare: Its Nature and Modelling, Defence Science and Technology Organisation, September 2003, available at: http://dspace.dsto.defence.gov.au/dspace/bitstream/1947/3310/1/DSTO-RR-0262%20PR.pdf.

62 D. S. Alberts, J. J. Garstka and F. P. Stein, above note 58, p. 175; for a fuller discussion of the status of self-synchronization, see B. J. A. van Bezooijen, P. J. M. D. Essens and A. L. W. Vogelaar, “Military Self-Synchronization: An Exploration of the Concept”, in Proceedings of the 11th International Command and Control Research and Technology Symposium, Cambridge, 26–28 September 2006, available at: www.dodccrp.org/events/11th_ICCRTS/html/papers/065.pdf.

63 Sharon Gaudin, “U.S. Military May Have 10 Robots per Soldier by 2023”, Computerworld, 14 November 2013, available at: www.computerworld.com/s/article/9244060/U.S._military_may_have_10_robots_per_soldier_by_2023.

64 It is understood that this point may raise further questions about matters such as what constitutes a “decision” for the purposes of legal analysis, and what conditions must be met for a decision to be considered within the immediate control of an autonomous system.

65 Armin Krishnan, Killer Robots: Legality and Ethicality of Autonomous Weapons, Ashgate, Farnham, 2009, p. 33.

66 Raytheon, “Patriot”, available at: www.raytheon.com/capabilities/products/patriot/.

67 That is, machines are not subjects of IHL and, it is suggested, the possibility that they may become so in the future is remote.

68 For a discussion of one such question, see Tim McFarland and Tim McCormack, “Mind the Gap: Can Developers of Autonomous Weapons Systems be Liable for War Crimes?”, U.S. Naval War College International Law Studies, Vol. 90, 2014, p. 361, available at: www.usnwc.edu/getattachment/ed8e80ad-b622-4fad-9a36-9bedd71afebe/Mind-the-Gap--Can-Developers-of-Autonomous-Weapons.aspx.

69 ICRC, “War and International Humanitarian Law”, Geneva, 2010, available at: www.icrc.org/eng/war-and-law/overview-war-and-law.htm.

70 Convention for the Amelioration of the Condition of the Wounded in Armies in the Field, 22 August 1864 (entered into force 22 June 1865), Preamble.

71 P. W. Singer, above note 54, p. 203.

72 Additional Protocol (IV) to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (Protocol on Blinding Laser Weapons), 1380 UNTS 370, 13 October 1995 (entered into force 30 July 1998).

73 See, e.g., C. Grut, above note 37; Peter Asaro, “On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 687; M. Wagner, above note 16.

74 See, e.g., Benjamin Kastan, “Autonomous Weapons Systems: A Coming Legal ‘Singularity’?”, Journal of Law, Technology and Policy, No. 1, 2013, p. 60.

75 M. S. Riza, above note 9, pp. 129–132.

76 See, e.g., Noel E. Sharkey, “The Evitability of Autonomous Robot Warfare”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 789.

77 P. Asaro, above note 73, p. 692.

78 A. Finn and S. Scheding, above note 49, p. 36: “As the degree of autonomy increases, so it becomes increasingly difficult to predict the sum state of the system.” Also see the discussion of developer accountability on p. 183.