The prospect of “killer robots” may sound like science fiction. However, the attention given to the operations of remotely piloted drones in recent years has also spotlighted the amount of research that is being conducted on weaponized robots that can select and attack targets without the direct oversight of a human operator. Where the armed services of the major industrialized countries were once quick to distance themselves from the use of autonomous weapons, there is increasing speculation within military and policy circles—and within the U.S. military in particular—that the future of armed conflict is likely to include extensive deployment of autonomous weapon systems (AWS).Footnote 1 A critical 2012 report on AWS by Human Rights WatchFootnote 2 and the 2013 launch of an NGO-led campaign for a treaty prohibiting their development and useFootnote 3 have intensified the ongoing ethical debate about them.Footnote 4
My aim in this article is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have typically acknowledged.Footnote 5 Second, I will attempt to defend the intuition that, even if this is so, there is something ethically problematic about such targeting. Given the extent of my ambitions, the dialectic that follows is somewhat complicated, and for this reason it will be useful to briefly sketch an outline of the argument here.
My argument proceeds in three parts. In the first section I introduce a working definition of “autonomous” weapons and describe the military dynamics driving the development of these systems. In the second section I survey and evaluate the existing literature on the ethics of AWS. The bulk of this discussion is framed as an account of two “rounds” of debate between an influential advocate for AWS, Ron Arkin, and his critics. In the third and final section I turn to a deeper investigation of the philosophical foundations of the just war doctrine of jus in bello in order to develop a new account of the origins and force of the intuition that the use of killer robots would necessarily be morally problematic. I conclude that although the theoretical foundations of the idea that AWS are weapons that are evil in themselves are weaker than critics have sometimes maintained, they are nonetheless strong enough to support the demand for a prohibition of the development and deployment of such weapons.
The Military Case for Autonomy
Defining Autonomous Weapon Systems
Any sensible discussion of autonomous weapons must begin by clarifying what the author understands by “autonomy.” The difficulties involved in providing a definition of autonomy broad enough to capture what people take to be (alternatively) exciting and/or problematic about these systems, without begging central questions in the debate about the ethics of their use, go a long way toward explaining why the literature on the ethics of AWS is so vexed.Footnote 6 A minimal definition of autonomy vis-à-vis a weapon or weapon system is that it must be capable of some significant operation without direct human oversight. Perhaps the strongest definition would posit that a system must be morally autonomous—that is to say, be a moral agent with free will and be responsible for its own actions—for it to truly possess autonomy.Footnote 7
Thus, as a number of authors have suggested, it is helpful to think about lethal autonomous operations as situated on a spectrum, with, for instance, antipersonnel mines—which “decide” when to explode on the basis of input from a pressure sensor—at one end, and human beings or (theoretical) strong artificial intelligence at the other.Footnote 8 Of course, if one models the operations of autonomous weapons on the assumption that they are merely sophisticated landmines, then it may be hard to see what all the fuss is about. Alternatively, if a weapon must have the capacities of a human being to be autonomous, then it may appear that we have nothing to worry about, as we are a long way from knowing how to design such systems.
Where the debate about lethal autonomous operations gets interesting is somewhere in the middle, wherein the weapon system possesses a complexity that problematizes understanding it as merely a complex sort of landmine, but where it is not so sophisticated as to require strong artificial intelligence (AI). Questions about the appropriate way to allocate responsibility for the consequences of the operations of such systems arise at the upper end of this range of autonomy.Footnote 9 However, even where these questions do not arise, many people have the intuition that there is something morally problematic about robots killing people.
For the purpose of this article, and in order to avoid prejudicing my discussion of the larger literature by insisting on a more precise—and, therefore, inevitably more controversial—definition, I understand an “autonomous” weapon as one that is capable of being tasked with identifying possible targets and choosing which to attack without human oversight, and that is sufficiently complex such that, even when it is functioning perfectly, there remains some uncertainty about which objects and/or persons it will attack and why. This admittedly rough-and-ready definition represents my attempt to single out an interesting category of systems while avoiding entering into an extended and difficult argument about the precise nature of machine autonomy.Footnote 10 While the first part of the foregoing accords with the U.S. Department of Defense's influential definition of autonomy in weapon systems,Footnote 11 the second part is intended to help distinguish between automatic systems, such as the Phalanx Close-in Weapon System,Footnote 12 and potentially more complex systems that may require us to hypothesize on a case-by-case basis about the “reasons” for “its” actions.Footnote 13 Perhaps the paradigmatic case of the latter would be an autonomous weapon wherein genetic algorithms or machine learning played a central role in determining its behavior.Footnote 14 However, I also intend this definition to capture robots of sufficient complexity that do not rely on these mechanisms.
An Arms Race to Autonomous Operations?
Many speculate that the perceived success of remotely piloted drones and other unmanned systems in recent military conflicts means that the development of AWS is more or less inevitable.Footnote 15 There are at least three military and/or technological logics that drive powerfully in the direction of the development of weapon systems that can operate—and kill people—autonomously.
First, the communications infrastructure that enables the operation of existing remotely piloted weapons, such as the United States’ Predator and Reaper drones, places significant limitations on the operations of these systems. The operation of long-range Unmanned Aerial Vehicles (UAVs) requires the transmission of large amounts of data by military satellite and radio systems. This places an upper limit on the number of UAVs that can be fielded at any point in any given theater of operations. It also restricts the capacity to field long-range UAVs to those few states that have the ability to launch and operate communication satellites (or that are able to access bandwidth provided by their allies). The need to sustain regular communication with human operators also effectively rules out a major role for remotely operated submersibles in future naval combat, given that submarines must operate in a communications blackout in order to avoid detection and destruction by enemy forces. The communication systems necessary to allow remote piloting of unmanned systems are also vulnerable to electronic countermeasures and/or kinetic attacks on the physical infrastructure that sustains them. In any future large-scale conflict involving major industrialized powers, military communication satellites would be among the first targets of attack. Developing and deploying AWS would therefore allow more weapons to be fielded and for the systems to be more survivable.Footnote 16
Second, a number of technological factors have combined to greatly increase the tempo of battle over recent decades, especially in air-to-air combat. In conflicts involving modern high-technology military systems, victory may depend upon decisions that must be made in a matter of seconds and that require integrating information from multiple sources. The demands of modern combat already push the limits of what the human nervous system is capable of. In the future only AWS may be capable of reacting within the time frame necessary to facilitate survival in a hostile environment.Footnote 17
Finally, a number of other features of the operations of UAVs and other unmanned weapons systems suggest that it would be preferable to remove human beings from their operations. For example, the routine operation of UAVs such as Predator and Reaper drones is, by all accounts, extremely boring for the vast majority of the time they are in theater. Consequently, pilot fatigue and error remain a significant cause of accidents involving these systems. Autonomous systems might be less prone to mishaps at launch and recovery, and while traveling to the battlespace, than those controlled by human operators. Moreover, training costs, salaries, and medical benefits for the operators of remote systems are significant. Just as unmanned systems have been touted as cheaper than the manned systems they replace, autonomous systems may eventually become less expensive to operate than remotely piloted ones.Footnote 18
The Ethical Case for Autonomy
The development of autonomous weapons might be desirable on grounds unrelated to these military and technological logics. Alternatively, even if the development of these systems is more or less inevitable, it may still be the case that we should resist them on ethical grounds. Indeed, given that military competition between states is driving the rise of AWS, a global arms control treaty prohibiting autonomous weapons may represent the only way to prevent their being developed and fielded.Footnote 19 We must therefore consider the ethical case for (and, later, against) the development and deployment of autonomous weapons.
Arkin and His Critics: Round I
Ronald Arkin is perhaps the most vocal and enthusiastic advocate for developing AWS writing today. He is also actively involved in their development.Footnote 20 In his influential essay “The Case for Ethical Autonomy in Unmanned Systems,” Arkin adduces a number of arguments in favor of autonomous operations.Footnote 21 He argues that, in the future, AWS may be better able to meet the requirements of ethical conduct in war than humans because robots can be designed to accept higher risks in the pursuit of confidence in targeting decisions; will have better sensors; will not be swayed by emotions, such as fear or anger, which often prompt humans to act unethically; need not suffer from cognitive biases that afflict human decision-making; and will be better able to integrate information quickly from a wide variety of sources.Footnote 22 As I will discuss further below, his identification of the standard achieved by human warfighters as the relevant benchmark of distinction against which the ethical use of AWS should be measured is also a crucial intellectual move in the debate about the ethics of autonomous weapons.
Difficulties with Discrimination
Critics of Arkin's proposal have been quick to point out just how far existing robots are from being able to outperform human beings when it comes to adherence to the requirements of jus in bello.Footnote 23 In particular, Arkin systematically underestimates the extent of the challenges involved in designing robots that can reliably distinguish legitimate from illegitimate targets in war.
Despite many decades of research—and much progress in recent years—perception remains one of the “hard problems” of engineering. It is notoriously difficult for a computer to reliably identify objects of interest within a given environment and to distinguish different classes of objects. This is even more the case in crowded and complex unstructured environments and when the environment and the sensor are in motion relative to each other. In order for AWS to be able to identify, track, and target armed men, for instance, they would need to be able to distinguish between a person carrying an assault rifle and a person carrying a metal tube or a folded umbrella. Moreover, in order to be able to assess the likelihood of collateral damage and thus the extent to which a particular attack would satisfy the jus in bello requirement of proportionality, autonomous weapons will need to be able to identify and enumerate civilian targets reliably, as well as potential military targets. Thus, it will not be sufficient for AWS simply to identify and track armed persons (by recognizing the LIDARFootnote 24 signature of an AK-47, for instance)—they must also be able to identify and track unarmed persons, including children, in order to refrain from attacks on military targets that would involve an unacceptably high number of civilian casualties. Weapons intended to destroy armored vehicles must be capable of distinguishing them from among the almost countless different cars and trucks manufactured around the world; autonomous submarines must be able to distinguish warships from merchant vessels, and so on. Moreover, AWS must be capable of achieving these tasks while their sensors are in motion, from a wide range of viewing angles in visually cluttered environments and in a variety of lighting conditions.
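To make this last point concrete, consider the following minimal sketch. It assumes an entirely hypothetical perception interface—the detect() function and its class labels are inventions for illustration, not any existing system—and shows why a weapon that merely recognizes armed persons cannot even begin a proportionality calculation: counting the noncombatants at risk presupposes reliable detection and classification of unarmed persons, including children.

```python
# A schematic, hypothetical illustration only: estimating proportionality
# requires counting everyone at risk, not just armed persons. detect() and
# the class labels below are placeholders, not a real computer-vision API.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g., "armed_adult", "unarmed_adult", "child"
    distance_m: float  # distance from the proposed aim point

def detect(scene) -> List[Detection]:
    """Placeholder for a perception pipeline; the hard engineering problems
    discussed above live entirely inside this function."""
    raise NotImplementedError

def noncombatants_at_risk(scene, blast_radius_m: float) -> int:
    # Without this count, no assessment of proportionality can even be attempted.
    return sum(
        1
        for d in detect(scene)
        if d.label in {"unarmed_adult", "child"} and d.distance_m <= blast_radius_m
    )
```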
These problems may be more tractable in some domains than others, but they are all formidable challenges to the development of AWS. In fact, the problem of discriminating between legitimate and illegitimate targets is even more difficult than the foregoing demonstrates. For instance, not every person carrying a weapon is directly engaged in armed conflict (in many parts of the world carrying a weapon is a matter of male honor); with prior approval, foreign warships can pass through the territorial waters of another state; neutral troops or peacekeeping forces are sometimes present in areas in which legitimate targets are located; and children sometimes climb on decommissioned tanks placed in playgrounds. Thus, in order to discriminate between combatants and noncombatants, it is not sufficient to be able to detect whether someone (or something) is carrying a weapon. Discrimination is a matter of context, and often of political context. It will be extremely difficult to program robots to be able to make this kind of judgment.Footnote 25
Even if a weapon system could reliably distinguish combatants from noncombatants, this is not the same as being able to distinguish between legitimate and illegitimate targets. According to jus in bello conventions, attacks on combatants may be illegitimate in at least three sorts of circumstances: first, where such attacks may be expected to cause a disproportionate number of civilian casualties (Additional Protocol I to the Geneva Conventions, Article 57);Footnote 26 second, where they would constitute an unnecessarily destructive and excessive use of force;Footnote 27 and third, where the target has indicated a desire to surrender or is otherwise hors de combat (Additional Protocol I to the Geneva Conventions, Article 41).Footnote 28 Before it would be ethical to deploy AWS, then, the systems will need to be capable of making these sorts of discriminations, all of which involve reasoning at a high level of abstraction.
Thus, for instance, how many noncombatant deaths it would be permissible to knowingly cause in the course of an attack on a legitimate military target depends on the military advantage that the destruction of the target is intended to serve; the availability of alternative means of attacking the target; the consequences of not attacking the target at that time (which in turn is partially a function of the likelihood that an opportunity to attack the target will arise again); the availability of alternative means of achieving the desired military objective; and the weaponry available to conduct the attack. Similarly, whether an attack would constitute an unnecessarily destructive use of force (which it may, even where there is no risk of killing noncombatants) is a function of the nature of the military object being targeted; the extent of the military advantage the attack is intended to secure; and the availability of alternative, less destructive, means of achieving this advantage.
Assessing these matters requires extensive knowledge and understanding of the world, including the capacity to interpret and predict the actions of human beings. In particular, assessing the extent to which an attack will achieve a definite military advantage requires an understanding of the balance and disposition of forces in the battlespace, the capacity to anticipate the probable responses of the enemy to various threats and circumstances, and an awareness of wider strategic and political considerations.Footnote 29 It is difficult to imagine how any computer could make these sorts of judgments short of the development of a human-level general intelligence—that is, “strong” AI.Footnote 30
Identifying when enemy forces have surrendered or are otherwise hors de combat is also a profound challenge for any autonomous system.Footnote 31 Perhaps it will be possible to program AWS to recognize the white flag of surrender or to promulgate a convention that all combatants will carry a “surrender beacon” that indicates when they are no longer participating in hostilities.Footnote 32 Yet these measures would not resolve the problem of identifying those who are hors de combat. A gravely wounded soldier separated from his comrades is not a legitimate target even if he has not indicated the desire to surrender (indeed, he may have had no opportunity to do so), but it may be extremely hard for a robot to distinguish such a person from one lying in ambush. Similarly, a ship that has had its guns destroyed or that has been holed below the waterline so that all hands are required just to remain afloat—and is therefore no military threat—will not always have a different radar or infrared profile from a functioning warship. Human beings can often—if not always—recognize such situations with reference to context and expectations about how people will behave in various circumstances. Again, short of possessing a human-level general intelligence, it is difficult to imagine how a computer could make these discriminations.
Possible Solutions? “Ethical” Robots and Human Oversight
Arkin has offered two responses to these sorts of criticisms. I believe both are inadequate.
First, Arkin has suggested that it should be possible to build into the weapon system the capacity to comply with the relevant ethical imperatives through what he calls an “ethical governor.”Footnote 33 This will not, of course, address the problems of identifying and classifying objects in complex environments, although it is possible that improvements in computer vision technology will reduce these problems to a manageable level. More fundamentally, it presumes an impoverished account of ethics as a system of clearly defined rules with a clearly defined hierarchy for resolving clashes between them.
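Arkin's published architecture is considerably more elaborate, but a deliberately simplified sketch of a rule-based veto layer helps make explicit the conception of ethics at issue. Everything below—the names, the inputs, and the thresholds—is hypothetical and serves only to illustrate the picture of ethical compliance as rule checking that the following paragraphs criticize.

```python
# A hypothetical, minimal sketch of a rule-based "ethical governor": a veto
# layer that suppresses a proposed engagement unless every hard-coded
# constraint is satisfied. This is not Arkin's implementation; it illustrates
# the assumption that ethical compliance can be reduced to checking rules.

from dataclasses import dataclass

@dataclass
class ProposedEngagement:
    target_class: str             # e.g., "armored_vehicle", "person"
    estimated_civilian_harm: int  # expected noncombatant casualties
    military_advantage: float     # scalar estimate supplied upstream
    inside_authorized_zone: bool

MILITARY_TARGET_CLASSES = {"armored_vehicle", "artillery", "warship"}
MAX_ACCEPTABLE_HARM = 0           # proportionality reduced to a fixed threshold

def governor_permits(e: ProposedEngagement) -> bool:
    """Return True only if every rule is satisfied; otherwise veto the attack."""
    if e.target_class not in MILITARY_TARGET_CLASSES:
        return False              # distinction, reduced to a class label
    if not e.inside_authorized_zone:
        return False              # a geographic restriction
    if e.estimated_civilian_harm > MAX_ACCEPTABLE_HARM:
        return False              # proportionality, reduced to a number
    return e.military_advantage > 0.0  # "necessity," reduced to a number
```

Note that everything morally contentious—how civilian harm is estimated, what counts as military advantage, which classes of object are legitimate targets—is presupposed by the inputs; the governor itself merely checks thresholds.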
The sketches of deontological or utilitarian systems of ethics that philosophers have developed are just that—sketches. The task of ethical theory is to try to explain and systematize the ethical intuitions that properly situated and adequately informed persons evince when confronted with various ethical dilemmas. These intuitions are extremely complex and context dependent, which is why philosophers are still arguing about whether they are primarily deontological or consequentialist or perhaps virtue-theoretical. It is these—still poorly understood and often highly contested—intuitions that a machine would need to be capable of replicating in order for it to “do” ethics. Moreover, even the schematized accounts of some subsets of these intuitions that philosophers have developed require agents to reason at a high level of abstraction and to be able to make complex contextual judgments for their application. For instance, consequentialist agents must be capable of predicting the effects of their actions in the real world, making a judgment about when this attempt to track consequences—which are, after all, essentially infinite—may reasonably be curtailed, and assessing the relative value of different states of the world. It is unclear whether even human beings can do this reliably (which itself is a reason to be cautious about embracing consequentialism), but it seems highly unlikely that, short of achieving human-level general intelligence, machines will ever be able to do so. Similarly, Kantian ethics requires agents to identify the moral principles relevant to their circumstances and resolve any clashes between them—again a task that requires a high degree of critical intelligence.Footnote 34
However, the most fundamental barrier to building an “ethical robot” is that ethics is a realm of meanings. That is to say, understanding the nature of our actions—what they mean—is fundamental to ethical reasoning and behavior.Footnote 35 For instance, most of the time intentionally killing a human being is murder—but not during a declared armed conflict, when both the killer and the victim are combatants; or in situations of self-defense; or when it has been mandated by the state after a fair criminal trial. Thus, in order to be able to judge whether a particular killing is murder or not, one must be able to track reliably the application of concepts like intention, rights, legitimacy, and justice—a task that seems likely to remain well beyond the capacity of any computer for the foreseeable future. Perhaps more importantly, the meaning of murder—why it is a great evil—is not captured by any set of rules that distinguishes murder from other forms of killing, but only by its place within a wider network of moral and emotional responses. The idea that a properly programmed machine could behave ethically, short of becoming a full moral agent, only makes sense in the context of a deep-seated behaviorism of the sort that has haunted computer science and cognitive science for decades.
Arkin's second suggestion is that weaponized robots could be designed to allow a human operator to monitor the ethical reasoning of the robot. The operator could then intervene whenever she anticipates that the robot is about to do something unethical.Footnote 36 Other authors have suggested that AWS could be designed to contact and await instruction from a human operator whenever they encounter a situation their own programming is unable to resolve.Footnote 37
This is problematic for two reasons. First, the need to “phone home” for ethical reassurance would undercut two of the main military advantages of autonomous weapons: their capacity to make decisions more rapidly than human beings,Footnote 38 and their ability to operate in environments where it is difficult to establish and maintain reliable communications with a human pilot.Footnote 39 If an “autonomous” weapon has to rely on human supervision to attack targets in complex environments, it would be, at most, semi-autonomous.Footnote 40 Second, it presumes that the problem of accurately identifying the ethical questions at stake and/or determining when the ethics of an attack is uncertain is more tractable than resolving uncertainty about the ethics of a given action. However, the capacity of AWS to assess their own ability to answer an ethical question would itself require the capacity for ethical deliberation at the same level of complexity needed to answer the original ethical question. Thus, if we cannot trust a machine to make ethical judgments reliably, we cannot trust it to identify when its judgments themselves might be unreliable.
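The second difficulty can be made vivid with another hypothetical sketch (all names here are invented). The “phone home” design amounts to deferring to a human operator whenever the system's own estimate of ethical uncertainty exceeds a threshold; but the function that produces that estimate would itself have to embody the very ethical competence whose absence is the reason for deferring.

```python
# A hypothetical sketch of the "phone home" design: defer to a human operator
# whenever the machine's own estimate of ethical uncertainty is too high. The
# circularity identified above lives in ethical_uncertainty(), which would
# itself require the same contextual ethical judgment it is meant to flag.

def ethical_uncertainty(situation) -> float:
    """Placeholder: scoring how ethically uncertain an engagement is would
    demand the same ethical competence as deciding the engagement itself."""
    raise NotImplementedError

def decide(situation, operator, weapon, threshold: float = 0.1):
    if ethical_uncertainty(situation) > threshold:
        return operator.request_decision(situation)  # human-in-the-loop path
    return weapon.engage(situation)                  # autonomous path
```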
Arkin and His Critics: Round II
For these reasons, I believe that Arkin's critics are correct in arguing that the difficulties of reliably distinguishing between legitimate and illegitimate targets in complex environments probably rules out the ethical use of AWS in many roles for the foreseeable future.Footnote 41 However, Arkin does have available to him two replies to these sorts of criticisms that are more compelling. First, the problem of discriminating between legitimate and illegitimate targets is much more tractable in specific, restricted domains than Arkin's critics—and the arguments above—suggest. Second, Arkin has argued that the relevant standard of reliability in discriminating between legitimate and illegitimate targets, which robots would need to attain in order for their use to be ethical, is that achieved by human warfighters, which is much lower than might first appear. If AWS would kill fewer noncombatants than human troops, this establishes a strong consequentialist case for their deployment, regardless of other ethical concerns about them. As I will discuss below, a number of other advocates for autonomous weapons have also made a nonconsequentialist argument for the ethical use of autonomous weapons from their putative reliability compared to human warfighters.
One possible solution to the problems discussed above would be to constrain the spatial domain of the operations of AWS and/or the sorts of systems they are tasked with destroying.Footnote 42 How difficult it is to distinguish between military and nonmilitary objects depends on their specific features as well as the sensors available to the AWS; how difficult it is to avoid collateral damage depends upon the relative number of legitimate and illegitimate targets within an area of operations. In anti-submarine warfare, for instance, there are few civilian targets.Footnote 43 Similarly, in air-to-air combat, counter-artillery operations, or the suppression of enemy air defenses it is relatively straightforward to distinguish military from nonmilitary systems.Footnote 44 Tanks and mechanized artillery, and—to a lesser extent—naval assets, also have, for the most part, distinctive visual silhouettes and radar and infrared signatures that distinguish them from the nonmilitary objects (cars, trucks, merchant ships, and so on) among which they might be found. When potential targets are mechanized and combat is confined to a distinct theater of operations, it is much more plausible to hold that autonomous weapons will be capable of reliably identifying potential military targets and distinguishing combatants from noncombatants. Indeed, existing target identification systems are already capable of reliably distinguishing between military and civilian systems in these domains.Footnote 45 The claim that autonomous weapons will never be capable of reliably distinguishing between military and nonmilitary targets therefore appears incorrect.
Nevertheless, not every military target is a legitimate one. Even in these restricted domains, then, the challenge of discriminating between legitimate and illegitimate targets is harder than first appears. In order to be able to avoid causing disproportionate civilian casualties, AWS must be capable not only of identifying potential military targets but also of detecting the presence of civilians in the target area; in addition, proportionality requires them to be able to identify the civilian objects (churches, hospitals, and such) that are relevant to this calculation. They must also be able to determine when attacks on military targets are justified by the principle of necessity and will secure a definite military advantage.Footnote 46 Yet when combat is occurring in a discrete geographical area, especially in the air, in outer space, or underwater—or, more controversially, when civilians have been given sufficient warning to vacate an area in which hostilities are about to commence—and when victory in this context would advance an important military objective, it might prove possible to guarantee that the destruction of any of the military objects present would be justified. It is less clear, however, that the problem of identifying forces that have surrendered or are otherwise hors de combat is any more tractable simply because AWS will be restricted to a specific geographical area. The idea of “surrender beacons” is, perhaps, more practicable when the forces engaged are military assets rather than personnel. Yet the problem of identifying when enemy forces are illegitimate targets by virtue of being so incapacitated by wounds or damage that it is no longer reasonable to consider them as constituting a military threat remains profound. Nevertheless, it seems likely that by confining the operations of AWS to a carefully delineated “kill box,” it might be possible to greatly reduce the risk of attacks on illegitimate targets.
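A hypothetical sketch may help fix ideas about what such a restriction would amount to in practice. The target classes, confidence threshold, and bounding-box representation below are all invented for illustration; note, too, that nothing in such a check can detect surrender or hors de combat status, which remains the residual problem just identified.

```python
# A hypothetical illustration of the "kill box" idea: authorization is limited
# to a bounded geographic area and a short whitelist of mechanized target
# classes. All names and thresholds are invented; crucially, nothing here can
# recognize that a target has surrendered or is otherwise hors de combat.

AUTHORIZED_TARGET_CLASSES = {"tank", "self_propelled_artillery", "warship", "combat_aircraft"}

def inside_kill_box(lat: float, lon: float, box: tuple) -> bool:
    """box = (lat_min, lat_max, lon_min, lon_max) delimiting the authorized theater."""
    lat_min, lat_max, lon_min, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def engagement_authorized(target_class: str, lat: float, lon: float,
                          classifier_confidence: float, box: tuple) -> bool:
    return (target_class in AUTHORIZED_TARGET_CLASSES
            and classifier_confidence >= 0.99   # an arbitrary illustrative threshold
            and inside_kill_box(lat, lon, box))
```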
The Consequentialist Case for Autonomy
At this point, Arkin has one further counterargument. That is, while he concedes that the task of designing AWS capable of distinguishing between legitimate and illegitimate targets is difficult, he claims that it is conceivable that the judgment of AWS may someday be superior to that of humans.Footnote 47 Indeed, by highlighting the real-world attitudes and behaviors of U.S. soldiers deployed in Iraq, Arkin has argued persuasively that human warfighters are actually quite bad at behaving ethically during wartime.Footnote 48
However, this is arguably the wrong standard to set when considering whether the use of a weapon system would be ethical.Footnote 49 Given that what is at stake is the value of an innocent human life, when it comes to protecting noncombatants from deliberate (or negligent) attack it might be argued that the relevant ethical standard is perfection. For instance, one could not justify deliberately or negligently killing two civilians for every ten combatants by pointing out that other warfighters typically kill three civilians for every ten combatants in similar circumstances. It is reasonable to expect human warfighters not to deliberately target noncombatants or use disproportionate force because they have both the power and the freedom not to. Thus, it is reasonable to expect perfect ethical compliance from humans, even if they seldom achieve it.Footnote 50 Putting AWS into combat when it would be unreasonable to expect that they will not violate the requirements of distinction and proportionality could only be defensible if one believes that the only relevant consideration is the eventual number of civilian casualties. Arkin's argument about the benefits of autonomous targeting therefore depends on adopting a consequentialist ethical framework that is concerned only with the reduction of civilian casualties—a controversial position, especially in the ethics of warfare.
There is, however, another version of this argument, which buttresses the claim about the relative effectiveness of weaponized robots with an appeal to the agency of those their operations might threaten. Thus, Brian Williams, for instance, has argued that the civilian population in the autonomous tribal areas of Pakistan actually prefer operations against al-Qaeda militants to be conducted via drone attacks because the alternative—antiterrorist operations by the Pakistani armed forces—is so much more destructive.Footnote 51 By appealing to the consent of the civilian population, this argument in support of AWS appears to mobilize powerful deontological intuitions.Footnote 52 Yet, on closer inspection, it is a red herring. Consider the nature of the circumstances in which civilians in Pakistan or Sudan might say that they prefer the operations of AWS to the deployment of human warfighters in the areas where they live. These civilians likely bear no responsibility for the events that led to the current conflict there, yet they face the choice of being threatened with death as a result of the operations of poorly trained and often terrified human beings or of autonomous weapons. This is a less than ideal circumstance in which to try to secure meaningful consent, to say the least. Indeed, in many ways one would have to say that this “consent” is coerced. It is like saying to the civilian population in the theater of operations, “Let us risk your life with AWS; otherwise we will threaten you with our (human) armed forces.” While it may be rational for them to prefer AWS to human warfighters, the fact that they do so hardly justifies their use.
The Prospects for Ethical Autonomous Targeting Thus Far
The prospects for ethical autonomous targeting are, therefore, according to my investigation here, mixed. Critics of AWS are correct in holding that the difficulties involved in operating in accordance with the principles of jus in bello are profound and unlikely to be resolvable in urban environments for the foreseeable future. On the other hand, in specific limited domains—and, in particular, in operations against naval assets, tanks, self-propelled artillery, and/or aircraft in a given geographical area—it may be possible for robots to distinguish between legitimate and illegitimate targets with a high degree of reliability. Indeed, in this context AWS might prove even more reliable than human beings, as Arkin has argued. At the very least, the possibility of deploying AWS in this fashion establishes that they are not, as some have suggested, inherently indiscriminate weapons.
At this stage, then, it would be premature to conclude that any of the ethical arguments I have surveyed thus far stand as an insurmountable barrier to the ethical operations of AWS. If we are to explain the widespread ethical intuition that there is something profoundly disturbing about the prospect of “killer robots” we must delve deeper into the philosophical foundations of just war theory.
Robots and Respect
There is, as I have argued elsewhere, a case to be made against developing and deploying robotic weapons in general—both tele-operated and autonomous weapon systems—on the basis of the doctrine of jus ad bellum.Footnote 53 The fact that robotic weapons hold out the prospect of the use of force without risk to one's troops and the likelihood that such systems will be used in more aggressive postures during peacetime—again, due to the lack of threat to the life of the “pilot”—suggest that these systems will lower the threshold of conflict and make war more likely.Footnote 54 Furthermore, as Paul Kahn has argued, the pursuit of risk-free warfare problematizes the justification of wars of humanitarian intervention by juxtaposing the high value placed on the lives of our own military personnel against the lower value placed on the lives of those in the theater of conflict, whose rights and welfare are supposed to justify the intervention, but who are placed at higher risk of death as a result of the use of robotic weapons.Footnote 55 However, these sorts of concerns are not specific to AWS and have force against a wider range of means of long-distance warfighting.Footnote 56
AWS and Jus in Bello
If there is going to be anything uniquely morally problematic about AWS, then, the explanation will need to be located within the doctrine of jus in bello. In an influential article on the moral foundations of the principles of jus in bello, Thomas Nagel argues that the force of these principles can only be explained by the idea that they are founded in absolutist moral reasoning.Footnote 57 Nagel develops an essentially Kantian account of the key injunctions of jus in bello by way of a principle of respect for the moral humanity of those involved in war. He argues that even during wartime it is essential that we acknowledge the personhood of those with whom we interact and that
whatever one does to another person intentionally must be aimed at him as a subject, with the intention that he receive it as a subject. It should manifest an attitude to him rather than just to the situation, and he should be able to recognize it and identify himself as its object.Footnote 58
Another way of putting this is that we must maintain an “interpersonal” relationship with other human beings, even during wartime. Obviously, if this principle is to serve as a guide to the ethics of war—rather than as a prohibition against war—the decision to take another person's life must be compatible with such a relationship.Footnote 59
Thus, on Nagel's account, applying the principles of distinction and proportionality involves establishing this interpersonal relationship with those who are the targets of a lethal attack—or who might be killed as a result of an attack targeting another—and acknowledging the morally relevant features that render them combatants or otherwise legitimately subjected to a risk of being killed. In particular, in granting the possibility that they might have a right not to be subject to direct attack by virtue of being a noncombatant, one is acknowledging their humanity.Footnote 60 This relationship is fundamentally a relationship between agents—indeed, between members of the Kantian “kingdom of ends.” Immediately, then, we can see why AWS might be thought to be morally problematic, regardless of how reliable they might be at distinguishing between legitimate and illegitimate targets.Footnote 61 When AWS decide to launch an attack the relevant interpersonal relationship is missing.Footnote 62 Indeed, in some fundamental sense there is no one who decides whether the target of the attack should live or die. The absence of human intention here appears profoundly disrespectful.
“Killer Robots” or “Robots for Killing”?
I believe this intuition is central to popular concerns about “killer robots.” However, importantly, this way of understanding the ethics of AWS treats a robot as though “it” were doing the killing. Short of the development of artificial intelligences that are actually moral agents, this seems problematic. We might equally well think of a robot as a tool by which one person attempts to kill another—albeit an indeterminate other.Footnote 63 The relevant interpersonal relationship would then be that between the officer who authorizes the release of the weapon and those the officer intends to kill. Neither the fact that the person who authorizes the launch does not know precisely who she is killing when she sends an AWS into action nor the fact that the identity of those persons may be objectively indeterminate at the point of launch seems to rule out the possibility of the appropriate sort of relationship of respect.
When a missile officer launches a cruise missile to strike a set of GPS coordinates 1,000 kilometers away, it is highly unlikely that she knows the identity of those she intends to kill.Footnote 64 Similarly, mines and improvised explosive devices (IEDs) kill anyone who happens to trigger them and thus attack persons whose identity is actually indeterminate and not merely contingently unknown. If an interpersonal relationship is possible while using these weapons, it is not clear why there could not be an interpersonal relationship between the commanding officer launching AWS and the people these weapons kill. Thus, neither of these features of AWS would appear to function as an absolute barrier to the existence of the appropriate relationship of respect. That said, however, it is important to note that this comparison is not entirely favorable to either AWS or these other sorts of weapons. People often do feel uneasy about the ethics of anonymous long-range killing and also—perhaps especially—about landmines and IEDs.Footnote 65 Highlighting the analogies with AWS might even render people more uncomfortable with these more familiar weapons. Nevertheless, insofar as contemporary thinking about jus in bello has yet decisively to reject other sorts of weapons that kill persons whose identity is unknown or actually indeterminate without risk to the user, it might appear illogical to reject AWS on these grounds.
It is also worth noting that the language of machine autonomy sits uneasily alongside the claim that autonomous systems are properly thought of merely as tools to realize the intentions of those who wield them.Footnote 66 The more advocates of robotic weapons laud their capacity to make complex decisions without input from a human operator, the more difficult it is to believe that AWS connect the killer and the killed directly enough to sustain the interpersonal relationship that Nagel argues is essential to the principle of distinction. That is to say, even if the machine is not a full moral agent, it is tempting to think that it might be an “artificial agent” with sufficient agency, or a simulacrum of such, to problematize the “transmission” of intention. This is why I have argued elsewhere that the use of such systems may render the attribution of responsibility for the actions of AWS to their operators problematic.Footnote 67 As Heather Roff has put it,Footnote 68 drawing on the work of Andreas Matthias,Footnote 69 the use of autonomous weapons seems to risk a “responsibility gap”; and where this gap exists, it will not be plausible to hold that when a commander sends AWS into action he or she is acknowledging the humanity of those the machines eventually kill.
However, this argument about responsibility has been controversial and ultimately, I suspect, turns upon an understanding of autonomy that is richer and more demanding than that which I have assumed here.Footnote 70 At least some of the “autonomous” weapons currently in early development seem likely to possess no agency whatsoever and thus arguably should be thought of as transmitting the intentions of those who command their use.
What the Use of AWS Says About Our Attitude Toward Our Enemies
Yet this does not exhaust the implications of a concern for respect when it comes to the ethics of AWS. As Nagel acknowledges, there is a conventional element to our understanding of the requirements of respect.Footnote 71 What counts as the humane or inhumane treatment of a prisoner, for instance, or as the desecration of a corpse, is partially a function of contemporary social understandings. Thus, certain restrictions on the treatment of enemy combatants during wartime have ethical force simply by virtue of being widely shared. Moreover, there is ample evidence that existing social understandings concerning the respectful treatment of human beings tell against the claim that the use of AWS would be ethical. A recent public opinion survey, for example, found high levels of hostility to the prospect of robots being licensed to kill.Footnote 72 Most people already feel strongly that sending a robot to kill would express a profound disrespect of the value of an individual human life.Footnote 73
Evidence that what we express through our treatment of our enemies is sometimes crucial to the morality of warfare is provided by the widely shared intuition that the mutilation and mistreatment of corpses is a war crime. Such desecration does not inflict “unnecessary suffering” on the enemy; rather, it is wrong precisely because and insofar as it expresses a profound disrespect for their humanity. Importantly, while the content of what counts as “mistreatment” or “mutilation” is conventional and may change over time, the intuition that we are obligated to treat even the corpses of our enemies with respect is deeper and much less susceptible to revision.
The ethical principles of jus in bello allow that we may permissibly attempt to kill our enemy, even using means that will inevitably leave them dying horribly. Yet these principles also place restrictions on the means we may use and on our treatment of the enemy more generally. I have argued—following Nagel—that this treatment should be compatible with respect for the humanity of our enemy and that the content of this concept is partially determined by shared social understandings regarding what counts as respectful treatment. Furthermore, I have suggested that widespread public revulsion at the idea of autonomous weapons should be interpreted as conveying the belief that the use of AWS is incompatible with such respect. If I am correct in this, then even if an interpersonal relationship may be held to exist between the commanding officer who orders the launch of an autonomous weapon system and the individuals killed by that system, it should be characterized as one of disrespect.
Interestingly, conceiving of AWS simply as the means whereby the person who authorizes the launch of the robot attempts to kill the intended targets vitiates an influential criticism of the campaign to ban these systems.Footnote 74 Defenders of AWS have suggested that robotic weapons could not be morally problematic “in themselves” because those who might be killed by robots would die as a result of the effects of weapons that are—understood in a more narrow sense—identical to those that a human might use. In conventional military terminology, Predator drones—and by extension, perhaps, future AWS—would ordinarily be understood as platforms from which a weapon (such as a Hellfire missile) may be delivered.Footnote 75 Correspondingly, defenders of AWS claim that it could make no difference to the suffering or the nature of the death of those killed whether the Hellfire missile was fired from an AWS, from a (remotely piloted) Predator drone, or from a (manned) Apache helicopter. Yet if, when it comes to the question of the presence or absence of an “interpersonal” relationship, we are going to understand AWS as a means of attacking targets, we must also understand them as the means the user employs to kill others when it comes to the evaluation of the nature of that means. Indeed, it is quite clear that a combatant who launches AWS is not herself launching Hellfire missiles. Consequently, there is nothing especially problematic with the idea that AWS might be an illegitimate means of killing by virtue of being profoundly disrespectful of the humanity of our enemy.
The Case for Banning AWS
I believe that the contemporary campaign to ban autonomous weapons should be understood as an attempt to entrench a powerful intuitive objection to the prospect of a disturbing new class of weapons in international law: AWS should be acknowledged as mala in se by virtue of the extent to which they violate the requirement of respect for the humanity of our enemies, which underlies the principles of jus in bello.Footnote 76 That the boundaries of such respect are sometimes—as in this case—determined by convention (in the sense of shared social understandings rather than formal rules) does not detract from the fact that it is fundamental to the ethics of war.
A number of critics of the campaign to ban AWS have objected that this proposal is premature and that until we have seen robot weapons in action, we cannot judge whether they would be any better or worse, morally speaking, than existing weapons systems.Footnote 77 Yet insofar as a ban on AWS is intended to acknowledge that the use (rather than the effects) of robotic weapons disrespects the humanity of their targets, this objection has little force.
There is, of course, something more than a little intellectually unsettling about the attempt to place a class of weapons in the category of mala in se through legislation or (legal) convention: what is malum in se should ideally be recognized independently of positive law. Yet if we are honest about the matter, we will admit that there has always been controversy about the extent of this class of weapons, and that some weapons now held to be evil in themselves were once widely believed to be legitimate means of waging war. Only after a period of contestation and moral argument were technologies such as chemical and nuclear weapons acknowledged as prohibited.Footnote 78 The current situation regarding the campaign against AWS is therefore analogous to the way in which the campaigns against the use of chemical weapons at the beginning of the twentieth century and against the use of cluster munitions in the 1990s proceeded.Footnote 79 Should this campaign ultimately prove successful, we will understand it to have recognized truths about these weapons that existed independently of—and prior to—the resulting prohibition.Footnote 80 In the meantime, the strength and popular currency of the intuition that the use of AWS would profoundly disrespect the humanity of those they are tasked to kill is sufficient justification to try to establish such a prohibition.
Conclusion
The prospect of AWS being capable of meeting the jus in bello requirements of distinction and proportionality in the context of counterinsurgency warfare and/or complex urban environments remains remote. However, in particular limited domains, the major barrier to AWS being able to reliably distinguish legitimate from illegitimate targets would appear to be the difficulty of detecting when enemy forces have surrendered or are otherwise hors de combat. If these difficulties can be overcome, then concerns about the capacity of AWS to identify and attack only the appropriate targets are unlikely to rule out the ethical use of these systems.
The strength of the case for autonomous weapons will also depend on how we assess the relative weight of consequentialist and deontological considerations in the ethics of war. If our main consideration is to reduce the number of noncombatant deaths, it becomes easier to imagine AWS being ethical: they would simply have to be better than human beings at distinguishing between legitimate and illegitimate targets in some given domain.Footnote 81 However, if we are concerned with what we owe noncombatants and others who are not legitimately subject to lethal force, then the merely statistical form of discrimination achievable by robots may be insufficient.
The deeper issue regarding the ethics of AWS, though, concerns whether the use of these weapons is compatible with the requirement of respect for the humanity of our enemies, which underpins the principles of jus in bello. If we understand AWS as “artificial agents” that choose which targets to attack and when, it is likely that the necessary relationship of respect is absent and, therefore, that their use would be unethical. Yet in many cases it may in fact be more plausible to consider AWS as the means whereby the person who is responsible for their launch kills those that the AWS are tasked to attack. However, this means of killing may itself be unethical insofar as it expresses a profound disrespect for the humanity of our enemies. Because this argument relies on a reference to conventions—that is, to social expectations that acquire normative force simply by virtue of being widely shared—to settle the question of what respect requires, the case against AWS is much weaker than critics might prefer. Nevertheless, the line of argument I have developed here is still equal to the task of justifying an international treaty prohibiting the development and deployment of AWS on the grounds that such weapons are “evil in themselves.”
There are, of course, further questions about whether it is realistic to imagine such a prohibition coming into force, let alone being effective in preventing (or at least significantly delaying) the deployment of AWS.Footnote 82 States that have the capacity to develop or field such weapons will also have to confront the question of whether the ethical case for any such treaty is worth whatever sacrifice of military advantage might result from signing it.Footnote 83 These are matters for further discussion and argument—and ones to which, moreover, philosophers may have little to contribute.Footnote 84 What I have shown here is that there is an ethical case to be made for working toward such a treaty.