
I Introduction
In March 2018, a Volvo XC90 vehicle that was being used to test Uber’s emerging automated vehicle technology killed a pedestrian crossing a road in Tempe, Arizona.Footnote 1 At the time of the incident, the vehicle was in “autonomous mode” and the vehicle’s safety driver, Rafaela Vasquez, was allegedly streaming television onto their mobile device.Footnote 2 In November 2019, the National Transportation Safety Board found that many factors contributed to the fatal incident, including failings from both the vehicle’s safety driver and the programmer of the autonomous system, Uber.Footnote 3 Despite Vasquez later being charged with negligent manslaughter in relation to the incident,Footnote 4 criminal investigations into Uber were discontinued in March 2019.Footnote 5 This instance is particularly emblematic of the current tendency to consider responsibility for actions and decisions of autonomous vehicles (AVs) as lying primarily with users of these systems, and not programmers or developers.Footnote 6
In the military realm, similar issues have arisen. For example, it is alleged that in 2020 an autonomous drone system, the STM Kargu-2, may have been used during active hostilities in Libya,Footnote 7 and that such autonomous weapons (AWs) were programmed to attack targets without requiring data connectivity between the operator and the use of force.Footnote 8 Although AW technologies have not yet been widely used by militaries, for several years, governments, civil society, and academics have debated their legal position, highlighting the importance of retaining “meaningful human control” (MHC) in decision-making processes to prevent potential “responsibility gaps.”Footnote 9 When debating MHC over AWs as well as responsibility issues, users or deployers are more often scrutinized than programmers,Footnote 10 the latter being considered too far removed from the effects of AWs. However, programmers’ responsibility increasingly features in policy and legal discussions, leaving many interpretative questions open.Footnote 11
To fill this gap in the current debates, this chapter seeks to clarify the role of programmers, understood simply here as a person who writes programmes that give instructions to computers, in crimes committed with and not by AVs and AWs (“AV- and AW-related crimes”). As artificial intelligence (AI) systems cannot provide the elements required by criminal law, i.e. the mens rea, the mental element, and the actus reus, the conduct element, including its causally connected consequence,Footnote 12 the criminal responsibility of programmers will be considered in terms of direct responsibility for commission of crimes, i.e., as perpetrators or co-perpetrators,Footnote 13 rather than vicarious or joint responsibility for crimes committed by AI. Programmers could, e.g., be held responsible on the basis of participatory modes of responsibility, such as aiding or assisting users in perpetrating a crime. Despite their potential relevance, participatory modes of responsibility under national and international criminal law (ICL) are not analyzed in this chapter, as that would require a separate analysis of their actus reus and mens rea standards. Finally, it must be acknowledged that as used in this chapter, the term “programmer” is a simplification. The development of AVs and AWs entails the involvement of numerous actors, internal and external to tech companies, such as developers, programmers, data labelers, component manufacturers, software developers, and manufacturers. These distinctions might entail difficulties in individualizing responsibility and/or a distribution of criminal responsibility, which could be captured by participatory modes of responsibility.Footnote 14
This chapter will examine the criminal responsibility of programmers through two examples, AVs and AWs. While there are some fundamental differences between AVs and AWs, there are also striking similarities. Regarding differences, AVs are a means of transport, implying the presence of people onboard, which will not necessarily be a feature of AWs. As for similarities, both AVs and AWs depend on object recognition technology.Footnote 15 Central to this chapter is the point that both AVs and AWs can be the source of incidents resulting in harm to individuals; AWs are intended to kill, are inherently dangerous, and can miss their intended target, and while AVs are not designed to kill, they can cause death by accident. Both may unintentionally result in unlawful harmful incidents.
The legal focus regarding the use of AVs is on crimes against persons under national criminal law, e.g., manslaughter and negligent homicide, and regarding the use of AWs, on crimes against persons under ICL, i.e., war crimes against civilians, such as those found in the Rome Statute of the International Criminal Court (“Rome Statute”)Footnote 16 and in the First Additional Protocol to the Geneva Conventions (AP I).Footnote 17 A core issue is whether programmers could fulfil the actus reus, including the requirement of causation, of these crimes. Given the temporal and spatial gap between programmer conduct and the injury, as well as other possibly intervening causes, a core challenge in ascribing criminal responsibility lies in determining a causal link between programmers’ conduct and AV- and AW-related crimes. To determine causation, it is necessary to delve into the technical aspects of AVs and AWs, and consider when and which of their associated risks can or cannot be, in principle, imputable to a programmer.Footnote 18 Adopting a preliminary categorization of AV- and AW-related risks based on programmers’ alleged control or lack of it over the behavior and/or effects of AVs and AWs, Sections II and III consider the different risks and incidents entailed by the use of AVs and AWs. Section IV turns to the elements of AV- and AW-related crimes, focusing on causation tests and touching on mens rea. Drawing from this analysis, Section V turns to a notion of MHC over AVs and AWs that incorporates requirements for the ascription of criminal responsibility and, in particular, causation criteria to determine under which conditions programmers exercise causal control over the unlawful behavior and/or effects of AVs and AWs.
II Risks Posed by AVs and Programmer Control
Without seeking to identify all possible causes of AV-related incidents, Section II begins by identifying several risks associated with AVs: algorithms, data, users, vehicular communication technology, hacking, and the behavior of bystanders. Some of these risks are also applicable to AWs.Footnote 19
In order to demarcate a programmer’s criminal responsibility, it is crucial to determine whether they ultimately had control over relevant behavior and effects, e.g., navigation and possible consequences of AVs. Thus, the following sections make a preliminary classification of risks on the basis of the programmers’ alleged control over them. While a notion of MHC encompassing the requirement of causality in criminal law will be developed in Section V, it is important to anticipate that a fundamental threshold for establishing the required causal nexus between conduct and harm is whether a programmer could understand and foresee a certain risk, and whether the risk that materialized was within the scope of the programmer’s “functional obligations.”Footnote 20
II.A Are Programmers in Control of Algorithm and Data-Related Risks in AVs?
Before turning to the risks and failures that might lie in algorithm design and thus potentially under programmer control, this section describes the tasks required when producing an AV, and then reviews some of the rules that need to be coded to achieve this end.
The main task of AVs is navigation, which can be understood as the AV’s behavior as well as the algorithm’s effect. Navigation on roads is mostly premised on rules-based behavior requiring knowledge of traffic rules and the ability to interpret and react to uncertainty. In AVs, automated tasks include the identification and classification of objects usually encountered while driving, such as vehicles, traffic signs, traffic lights, and road lining.Footnote 21 Furthermore, “situational awareness and interpretation”Footnote 22 is also being automated. AVs should be able “to distinguish between ordinary pedestrians (merely to be avoided) and police officers giving direction,” and conform to social habits and rules by, e.g., “interpret[ing] gestures by or eye contact with human traffic participants.”Footnote 23 Finally, there is an element of prediction: AVs should have the capability to anticipate the behavior of human traffic participants.Footnote 24
In AV design, the question of whether traffic rules can be accurately embedded in algorithms, and if so who is responsible for translating these rules into algorithms, becomes relevant in determining the accuracy of the algorithm design as well as attributing potential criminal responsibility. For example, are only programmers involved, or are lawyers and/or manufacturers also involved? While some traffic rules are relatively precise and consist of specific obligations, e.g., a speed limit represents an obligation not to exceed that speed,Footnote 25 there are also several open-textured and context-dependent traffic norms, e.g., regulations requiring drivers to drive carefully.Footnote 26
AV incidents might stem from a failure of the AI to identify objects or correctly classify them. For example, the first widely reported incident involving an AV in May 2016 was allegedly caused by the vehicle sensor system’s failure to distinguish a large white truck crossing the road from the bright spring sky.Footnote 27 Incidents may also arise due to failures to correctly interpret or predict the behavior of others or traffic conditions, which may sometimes be interlinked with or compounded by problems of detection and sensing.Footnote 28 In turn, mistakes in both object identification and prediction might occur as a result of faulty algorithm design and/or derived from flawed data. In the former case, prima vista, if mistakes in object identification and/or prediction occur due to an inadequate algorithm design, the criminal responsibility of programmers could be engaged.
In relation to the latter, the increasing and almost dominant use of machine learning (ML) algorithms in AVsFootnote 29 means that issues of algorithms and data are interrelated. The performance of algorithms has become heavily dependent on the quality of data. A multitude of different algorithms are used in AVs for different purposes, with supervised and unsupervised learning-based algorithms often complementing one another. Supervised learning, in which an algorithm is fed instructions on how to interpret the input data, relies on a fully labeled dataset. Within AVs, the supervised learning models are usually: (1) “classification” or “pattern recognition algorithms,” which process a given set of data into classes and help to recognize categories of objects in real time, such as street signs; and (2) “regression,” which is usually employed for predicting events.Footnote 30 In cases of supervised learning, mistakes can arise from incorrect data annotation instead of a faulty algorithm design per se. If incidents do occur,Footnote 31 the programmer arguably could not have foreseen those risks and thus could not be considered in control of the subsequent navigation decisions.
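The dependence of supervised learning on correct labels can be illustrated schematically. The following sketch uses an invented, toy nearest-neighbor classifier with made-up feature values; actual AV perception relies on deep neural networks processing sensor data, but the point about annotation errors carries over: a single mislabeled training example can flip the classification of a nearby object.

```python
# Toy 1-nearest-neighbor classifier illustrating how supervised learning
# depends on correct labels. Feature vectors and labels are invented for
# illustration; real AV perception uses deep networks over sensor data.

def nearest_neighbor_predict(training_set, sample):
    """Return the label of the training example closest to `sample`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_set, key=lambda ex: dist(ex[0], sample))[1]

# (width, height) feature pairs standing in for extracted object features
correct_labels = [((0.3, 0.6), "pedestrian"), ((2.0, 1.5), "vehicle"),
                  ((0.4, 0.7), "pedestrian"), ((2.2, 1.6), "vehicle")]

sample = (0.32, 0.62)  # pedestrian-like object
print(nearest_neighbor_predict(correct_labels, sample))  # pedestrian

# A single annotation error: the nearest pedestrian-shaped training
# example has been mislabeled as "vehicle" by a data annotator.
noisy_labels = [((0.3, 0.6), "vehicle")] + correct_labels[1:]
print(nearest_neighbor_predict(noisy_labels, sample))  # vehicle
```

On this simplified picture, the algorithm design is identical in both runs; only the annotation differs, which is why such errors may lie with data labelers rather than with the programmer of the classifier.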
Other issues may arise with unsupervised learningFootnote 32 where an ML algorithm receives unlabeled data and programmers “describe the desired behaviour and teach the system to perform well and generalise to new environments through learning.”Footnote 33 Data can be provided in the phase of simulating and testing, but also during the use itself by the end-user. Within such methods, “deep learning” is increasingly used to improve navigation in AVs. Deep learning is a form of unsupervised learning that “automatically extracts features and patterns from raw data [such as real-time data] and predicts or acts based on some reward function.”Footnote 34 When an incident occurs due to deep learning techniques using real data, it must be assessed whether the programmer could have foreseen that specific risk and the resulting harm, or whether it derived, e.g., from an unforeseeable interaction with the environment.
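The foreseeability problem raised by learning from interaction can be rendered in a deliberately simplified, tabular sketch of reward-driven learning. The states, actions, and reward values below are invented, and real systems use deep networks rather than a lookup table; the point is that behavior in situations never encountered during training is effectively undefined.

```python
# Minimal sketch of reward-driven learning (tabular, not deep),
# illustrating the foreseeability problem: behavior in states never
# seen during training is not learned but falls to a default.
import random

random.seed(0)
q_values = {}            # (state, action) -> learned value
actions = ["brake", "continue"]

def choose(state):
    # Prefer the action with the highest learned value; states never
    # seen in training have no learned values at all.
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))

def reward(state, action):
    # The training environment only ever produces these two states.
    if state == "pedestrian_ahead":
        return 1.0 if action == "brake" else -1.0
    return 1.0 if action == "continue" else -0.2   # "clear_road"

# Learning loop over the states present in the training environment
for _ in range(200):
    state = random.choice(["pedestrian_ahead", "clear_road"])
    action = random.choice(actions)
    key = (state, action)
    q_values[key] = q_values.get(key, 0.0) + 0.1 * (reward(state, action) - q_values.get(key, 0.0))

print(choose("pedestrian_ahead"))   # "brake" -- learned from experience
print(choose("fallen_pedestrian"))  # unseen state: arbitrary tie-break default
```

In the last call, both candidate values are zero and the outcome is an artifact of tie-breaking, not of anything the programmer specified, which is precisely the kind of unforeseeable interaction with the environment referred to above.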
II.B Programmer or User: Who Is in Control of AVs?
As shown in the March 2018 Uber incident,Footnote 35 incidents can also derive from failures of the user to regain control of the AV, with some AV manufacturers attempting to shift the responsibility for ultimately failing to avoid collisions onto the AVs’ occupants.Footnote 36 However, there are serious concerns as to whether an AV’s user, who depending on the level of automation is essentially in an oversight role, is cognitively in the position to regain control of the vehicle. This problem is also known as automation bias,Footnote 37 a cognitive phenomenon in human–machine interaction, in which complacency, decrease of attention, and overreliance on the technology might impair the human ability to oversee, intervene, and override the system if needed.
Faulty human–machine interface (HMI) design, i.e., the technology which connects an autonomous system to the human, such as a dashboard or interface, could cause the inaction of the driver in the first place. In these instances, the driver could be relieved from criminal responsibility. Arguably, HMIs do not belong to programmers’ functional obligations and therefore fall outside of a programmer’s control.
There are phases other than actual driving where a user could gain control of an AV’s decisions. Introducing ethics settings into the design of AVs may ensure control over a range of morally significant outcomes, including trolley-problem-like decisions.Footnote 38 Such settings may be mandatorily introduced by manufacturers with no possibility for users to intervene and/or customize them, or they may be customizable by users.Footnote 39 Customizable ethics settings allow users “to manage different forms of failure by making autonomous vehicles follow [their] decisions” and their intention.Footnote 40
II.C Are Some AV-Related Risks Out of Programmer Control?
A group of risks and failures could be considered outside of programmer control. These include communications failures, hacking of the AV by outside parties, and unforeseeable bystander behavior. One of the next steps predicted in the field of vehicle automation is the development of software enabling AVs to communicate with one another and to share real-time data gathered from their sensors and computer systems.Footnote 41 This means that a single AV “will no longer make decisions based on information from just its own sensors and cameras, but it will also have information from other cars.”Footnote 42 Failures in vehicular communication technologiesFootnote 43 or inaccurate data collected by other AVs cannot be attributed to a single programmer, as they might fall beyond their responsibilities and functions, and also beyond their control.
Hacking could also cause AV incidents. For example, “placing stickers on traffic signs and street surfaces can cause self-driving cars to ignore speed restrictions and swerve headlong into oncoming traffic.”Footnote 44 Here, the criminal responsibility of a programmer could depend on whether the attack could have been foreseen and whether the programmer should have created safeguards against it. However, the complexity of AI systems could make them more difficult to defend from attacks and more vulnerable to interference.Footnote 45
Finally, imagine an AV that correctly follows traffic rules, but hits a pedestrian who unforeseeably slipped and fell onto the road. Such unforeseeable behavior of a bystander is relevant in criminal law cases on vehicular homicide, as it will break the causal nexus between the programmer and the harmful outcome.Footnote 46 In the present case, it must be determined which unusual behavior should be foreseen at the stage of programming, and whether standards of foreseeability in AVs should be higher for human victims.
III Risks Posed by AWs and Programmer Control
While not providing a comprehensive overview of the risks inherent in AWs, Section III follows the structure of Section II by addressing some risks, including algorithms, data, users, communication technology, hacking and interference, and the unforeseeable behavior of individuals in war, and by distinguishing risks based on their causes and programmers’ level of control over them. While some risks cannot be predicted, the “development of the weapon, the testing and legal review of that weapon, and th[e] system’s previous track record”Footnote 47 could provide information about the risks involved in the deployment of AWs. Some risks could be understood and foreseen by the programmer and therefore be considered under their control.
III.A Are Programmers in Control of Algorithm and Data-Related Risks in AWs?
Autonomous drones provide an example of one of the most likely applications of autonomy within the military domain,Footnote 48 and this example will be used to highlight the increasingly autonomous tasks in AWs. This section will address the rules to be programmed, and identify where some risks might lie in the phase of algorithm design.
The two main tasks being automated in autonomous drones are: (1) navigation, which is less problematic than on roads and a relatively straightforward rule-based behavior, i.e., they must simply avoid obstacles while in flight; and (2) weapon release, which is much more complex as “ambiguity and uncertainty are high when it comes to the use of force and weapon release, bringing this task in the realm of expertise-based behaviours.”Footnote 49 Within the latter, target identification is the most important function because it is crucial to ensure compliance with the international humanitarian law (IHL) principle of distinction, the violation of which could also cause individual criminal responsibility for war crimes. The principle of distinction establishes that belligerents and those executing attacks must distinguish at all times between civilians and combatants, and not target civilians.Footnote 50 In target identification, the two main automated tasks are: (1) object identification and classification on the basis of pattern recognition;Footnote 51 and (2) prediction, e.g., predicting that someone is surrendering, or based on the analysis of patterns of behavior, predicting that someone is a lawful target.Footnote 52
Some of the problems in the algorithm design phase may derive from translating the open-textured and context-dependentFootnote 53 rules of IHL,Footnote 54 such as the principle of distinction, into algorithms, and from incorporating programmer knowledge and expert-based rules,Footnote 55 such as those needed to analyze patterns of behavior in targeted strikes and translate them into code.
There are some differences compared with the algorithm design phase in AVs. Due to the relatively niche and context-specific nature of IHL, compared to traffic law which is more widely understood by programmers, programming IHL might require a stronger collaboration with outside expertise, i.e., military lawyers and operators.
However, similar observations to AVs can be made in relation to supervised and unsupervised learning algorithms. Prima vista, if harm results from mistakes in object identification and prediction based on an inadequate algorithm design, the criminal responsibility of the programmer(s) could be engaged. Conversely, where such mistakes stem from data failures that the programmer could not foresee, or from data labeling performed by third parties, criminal responsibility might not be attributable to programmers. Also similar to AVs, the increasing use of deep learning methods in AWs makes the performance of algorithms dependent on both the availability and accuracy of data. Low quality and incorrect data, missing data, and/or discrepancies between real and training data may be conducive to the misidentification of targets.Footnote 56 When unsupervised learning is used in algorithm design, environmental conditions and armed conflict-related conditions, e.g., smoke, camouflage, and concealment, may inhibit the collection of accurate data.Footnote 57 As with AVs, programmers of AWs may at some point gain sufficient knowledge and experience regarding the robustness of data and unsupervised machine learning that would subject them to due diligence obligations, but the chapter assumes that programmers have not reached that stage yet. In the case of supervised learning, errors in data may lie in a human-generated data feed,Footnote 58 and incorrect data labeling could lead to mistakes and incidents that might be attributable to someone, but not to programmers.
III.B Programmer or User: Who Is in Control of AWs?
The relationship between programmers and users of AWs presents challenges different from those in the AV context. In light of current trends in AW development, arguably toward human–machine interaction rather than full autonomy of the weapons system, the debate has focused on the degree of control that militaries must retain over the weapon release functions of AWs.
However, control can be shared and distributed among programmers and users in different phases, from the design phase to deployment. As noted above, AI engineering in the military domain might require a strong collaboration between programmers and military lawyers in order to accurately code IHL rules in algorithms.Footnote 59 Proponents of the contested introduction of ethics settings in AWs maintain that ethics settings would “enable humans to exert more control over the outcomes of weapon use [and] make the distribution of responsibilities [between manufacturers and users] more transparent.”Footnote 60
Finally, given their complexity, programmers of AWs might be more involved than programmers of AVs in the use of AWs and in the targeting process, e.g., being required to update the system or implement some modifications to the weapon target parameters before or during the operation.Footnote 61 In these situations, it must be evaluated to what extent a programmer could foresee a certain risk entailed in the deployment and use of an AW in relation to a specific attack rather than just its use in the abstract.
III.C Are Some AW-Related Risks Out of Programmer Control?
In the context of armed conflict, it is highly likely that AWs will be subject to interference and attacks by enemy forces. A UN Institute for Disarmament Research (UNIDIR) report lists several pertinent examples: (1) signal jamming could “block systems from receiving certain data inputs (especially navigation data)”; (2) hacking, such as “spoofing” attacks, might “replace an autonomous system’s real incoming data feed with a fake feed containing incorrect or false data”; (3) “input” attacks could “change a sensed object or data source in such a way as to generate a failure,” e.g., enemy forces “may seek to confound an autonomous system by disguising a target”; and (4) “adversarial examples” or “evasion,” which are attacks that “involve adding subtle artefacts to an input datum that result in catastrophic interpretation error by the machine.”Footnote 62 In such situations, the issue of criminal responsibility for programmers will depend on the modalities of the adversarial interference, whether it could have been foreseen, and whether the AW could have been protected from foreseeable types of attacks.
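The “adversarial example” or “evasion” attack described in point (4) can be sketched in simplified form. The linear classifier, weights, and feature values below are invented for illustration; real attacks perturb high-dimensional sensor inputs fed to deep networks, but the mechanism is analogous: a small change aligned against the model’s learned weights flips its decision.

```python
# Toy illustration of an "evasion"/adversarial-example attack: a small,
# targeted change to the input flips a classifier's decision. The linear
# classifier and feature values are invented for illustration only.

weights = [0.9, -0.4, 0.2]   # notional learned weights
threshold = 0.5

def classify(features):
    score = sum(w * x for w, x in zip(weights, features))
    return "military_objective" if score > threshold else "civilian"

target = [0.7, 0.3, 0.3]           # score = 0.63 - 0.12 + 0.06 = 0.57
print(classify(target))             # "military_objective"

# Adversarial perturbation: subtle changes aligned against the weights,
# e.g., a disguised target designed to be read as civilian.
perturbed = [x - 0.1 * w for x, w in zip(target, weights)]
print(classify(perturbed))          # "civilian"
```

Whether a programmer could be expected to harden a system against such manipulation then becomes a question of the foreseeability of the attack modality, as discussed above.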
Similar to the AV context, failures of communication technology, caused by signal jamming or by failures of communication systems between a human operator and the AI system or among connected AI systems, may lead to incidents that could not be imputed to a programmer.
Finally, conflict environments are likely to drift constantly as “[g]roups engage in unpredictable behaviour to deceive or surprise the adversary and continually adjust (and sometimes radically overhaul) their tactics and strategies to gain an edge.”Footnote 63 The continuously changing and unforeseeable behavior of opposing belligerents and the tactics of enemy forces can lead to “data drift,” whereby changes that are difficult to foresee can lead to a weapon system’s failure without it being imputable to a programmer.Footnote 64
IV AV-Related Crimes on the Road and AW-Related War Crimes on the Battlefield
The following section will distil the legal ingredients of crimes against persons resulting from failures in the use of AVs and AWs. The key question is whether the actus reus, i.e., the prohibited conduct, including its resulting harm, could ever be performed by programmers of AVs and AWs. The analysis suggests that save for war crimes under the Rome Statute, which prohibit conduct as such, the crimes under examination on the road and the battlefield are currently formulated as result crimes, in that they require the causation of harm such as death or injuries. In relation to crimes of conduct, the central question is whether programmers controlled the behavior of an AV or an AW, e.g., the AW’s launching of an indiscriminate attack against civilians. In relation to crimes of result, the central question is whether programmers exercise causal control over a chain of events leading to a prohibited result, e.g., death, that must occur in addition to the prohibited conduct. Do programmers exercise causal control over the behavior and the effects of AVs and AWs? Establishing causation of crimes of conduct presents differences compared with crimes of result in light of the causal gap that characterizes the latter.Footnote 65 However, this difference is irrelevant in the context of crimes committed with the intermediation of AI since, be they of conduct or result, they always present a causal gap between a programmer’s conduct and the unlawful behavior or effect of an AV and AW. Thus, the issue is whether a causal nexus exists between a programmer’s conduct and either the behavior (in the case of crimes of conduct) or the effects (in the case of crimes of result) of AVs and AWs. Sections IV.A and IV.B will describe the actus reus of AV- and AW-related crimes, while Section IV.C will turn to the question of causation.
While the central question of this chapter concerns the actus reus, at the end of this section, I will also make some remarks on mens rea and the relevance of risk-taking and negligence in this debate.
IV.A Actus Reus in AV-Related Crimes
This section focuses on the domestic criminal offenses of negligent homicide and manslaughter in order to assess whether the actus reus of AV-related crimes could be performed by a programmer. It does not address traffic and road violations generally,Footnote 66 nor the specific offense of vehicular homicide.Footnote 67
Given the increasing use of AVs and pending AV-related criminal cases in the United States,Footnote 68 it seems appropriate to take the Model Penal Code (MPC) as an example of common law legislation.Footnote 69 According to the MPC, the actus reus of manslaughter consists of “killing for which the person is reckless about causing death.”Footnote 70 Negligent homicide concerns instances where a “person is not aware of a substantial risk that a death will result from his or her conduct, but should have been aware of such a risk.”Footnote 71
While national criminal law frameworks differ considerably, there are similarities regarding causation which are relevant here. Taking Germany as a representative example of civil law traditions, the Strafgesetzbuch (German Criminal Code) (StGB) distinguishes two forms of intentional homicide: murderFootnote 72 and manslaughter.Footnote 73 Willingly taking the risk of causing death is sufficient for manslaughter.Footnote 74 Negligent homicide is proscribed separately,Footnote 75 and the actus reus consists of causing the death of a person through negligence.Footnote 76
These are crimes of result, where the harm consists of the death of a person. While programmer conduct may be remote with regard to AV incidents, some decisions taken by AV programmers at an early stage of development could decisively impact the navigation behavior of an AV that results in a death. In other words, it is conceivable that a faulty algorithm designed by a programmer could cause a fatal road accident. The question then becomes what is the threshold of causal control exercised by programmers over an AV’s unlawful behavior of navigation and its unlawful effects such as a human death.
IV.B Actus Reus in AW-Related War Crimes
This section addresses AW-related war crimes and whether programmers could perform the required actus reus. Since the actus reus would most likely stem from an AW’s failure to distinguish between civilian and military targets, the war crime of indiscriminate attacks, which criminalizes violations of the aforementioned IHL rule of distinction,Footnote 77 takes on central importance.Footnote 78 The war crime of indiscriminate attacks refers inter alia to an attack that strikes military objectives and civilians or civilian objects without distinction. This can occur as a result of the use of weapons that are incapable of being directed at a specific military objective or accurately distinguishing between civilians and civilian objects and military objectives; these weapons are known as inherently indiscriminate weapons.Footnote 79
While this war crime is neither specifically codified in the Rome Statute nor in AP I, it has been subsumedFootnote 80 under the war crime of directing attacks against civilians. Under AP I, the actus reus of the crime is defined in terms of causing death or injury.Footnote 81 In crimes of result with AWs, a causal nexus between the effects resulting from the deployment of an AW and a programmer’s conduct must be established. Under the Rome Statute, the war crime is formulated as a conduct crime, proscribing the actus reus as the “directing of an attack” against civilians.Footnote 82 A causal nexus must be established between the unlawful AW’s behavior and/or the attack and the programmer’s conduct.Footnote 83 Under both frameworks, the question is whether programmers exercised causal control over the behavior and/or effects, e.g., death or attack, of an AW.
A final issue relates to the required nexus with an armed conflict. The Rome Statute requires that the conduct must take place “in the context of and was associated with” an armed conflict.Footnote 84 However, while undoubtedly there is a temporal and physical distance between programmer conduct and the armed conflict, it is conceivable that programmers may program AW software or upgrade it during an armed conflict. In certain instances, it could be argued that programmer control continues even after the completion of the act of programming, when the effects of their decisions materialize in the behavior and/or effects of AWs in armed conflict. Programmers can be said to exercise a form of control over the behavior and/or effects of AWs that begins with the act of programming and continues thereafter.
IV.C The Causal Nexus between Programming and AV- and AW-Related Crimes
A crucial aspect of programmer criminal responsibility is the causal control they exercise over the behavior and/or effects of AVs and AWs. The assessment of causation refers to the conditions under which an AV’s and AW’s unlawful behavior and/or effects should be deemed the result of programmer conduct for purposes of holding them criminally responsible.
Causality is a complex topic. In common law and civil law countries, several tests to establish causation have been put forward. Due to difficulties in establishing a uniform test for causation, it has been argued that determining the conditions for causation is “ultimately a matter of legal policy.”Footnote 85 But this does not render the formulation of causality tests in the relevant criminal provisions completely beyond reach. While a comprehensive analysis of these theories is beyond the scope of this chapter, for the purposes of establishing when programmers exercise causal control, some theories are more aligned with the policy objectives pursued by the suppression of AV- and AW-related crimes.
First, in common law and civil law countries, the “but-for”/conditio sine qua non test is the dominant test for establishing physical causation, and it is intended as a relationship of physical cause and effect.Footnote 86 In the language of MPC §2.03(1)(a), the conduct must be “an antecedent but for which the result in question would not have occurred.” The “but-for” test works satisfactorily in cases of straightforward cause and effect, e.g., pointing a loaded gun at the chest of another person and pulling the trigger. However, AV- and AW-related crimes are characterized by a temporal and physical gap between programmer conduct and the behavior and effects of AVs and AWs. They involve complex interactions between AVs and AWs and humans, including programmers, data providers and labelers, users, etc. AI itself is also a factor that could intervene in the causal chain. The problem of causation in these cases must thus be framed in a way that reflects the relevance of intervening and superseding causal forces, which may break the causal nexus between a programmer’s conduct and an AV- or AW-related crime.
Both civil law and common law systems have adopted several theories to overcome the shortcomingsFootnote 87 and correct the potential over-inclusivenessFootnote 88 of the “but-for” test in complex cases involving numerous necessary conditions. Some of these theories include elements of foreseeability in the causality test.
The MPC adopts the “proximate cause test,” which “differentiates among the many possible ‘but for’ causal forces, identifying some as ‘necessary conditions’ – necessary for the result to occur but not its direct ‘cause’ – and recognising others as the ‘direct’ or ‘proximate’ cause of the result.”Footnote 89 The relationship is “direct” when the result is foreseeable and as such “this theory introduces an element of culpability into the law of causation.”Footnote 90
German adequacy theories assert that whether a certain factor can be considered a cause of a certain effect depends on “whether conditions of that type do, generally, in the light of experience, produce effects of that nature.”Footnote 91 These theories, which are not applied in their pure form in criminal law, include assessments that resemble culpability assessments. They bring elements of foreseeability and culpability into the causality test, in particular a probability and possibility judgment regarding the actions of the accused.Footnote 92 However, these theories leave unresolved the knowledge perspective, i.e., objective, subjective, or mixed, on which the foreseeability assessment is to be based.Footnote 93
Other causation theories include an element of understandability, awareness, or foreseeability of risks. In the MPC, the “harm-within-the risk” theory considers that causation in reckless and negligent crimes is in principle established when the result was within the “risk of which the actor is aware or … of which he should be aware.”Footnote 94 In German criminal law, some theories describe causation in terms of the creation or aggravation of risk and limit causation to the unlawful risks that the violated criminal law provision intended to prevent.Footnote 95
In response to the drawbacks of these theories, the teleological theory of causation holds that in all cases involving a so-called intervening independent causal force, the criterion should be whether the intervening causal force was “produced by ‘chance’ or was rather imputable to the criminal act in issue.”Footnote 96 Someone would be responsible for the result if their actions contributed in any manner to the intervening factor. What matters is the accused’s control over the criminal conduct and whether the intervening factor was connected in a but-for sense to their criminal act,Footnote 97 thus falling within their control.
In ICL, a conceptualization of causation that goes beyond the physical relation between acts and effects is more embryonic. However, it has been suggested that theories drawn from national criminal law systems, such as risk-taking and linking causation to culpability, and thus to foreseeability, should inform a theory of causation in ICL.Footnote 98 It has also been suggested that causality should entail an evaluation of the functional obligations of an actor and their area of operation in the economic sphere. According to this theory, causation is “connected to an individual’s control and scope of influence” and is limited to “dangers that he creates through his activity and has the power to avoid.”Footnote 99 Since they have been applied to international crimes, which have a collective dimension, these theories could usefully be employed for AV and AW development, which is collective by nature and characterized by a distribution of responsibilities.
Programmers in some instances will cause harm through omission, notably by failing to avert a particular harmful risk when they are under a legal duty to prevent harmful events of that type (“commission by omission”).Footnote 100 In these cases, the establishment of causation will be hypothetical, as there is no physical cause-effect relationship between an omission and the proscribed result.Footnote 101 Other instances concern whether programmer negligence, e.g., a lack of instructions and warnings, has contributed to and caused the user’s omission, i.e., their failure to intervene. Such omissions amount to negligence, i.e., violations of positive duties of care,Footnote 102 and since negligence belongs to mens rea, they will be addressed in the following section.
IV.D Criminal Negligence: Programming AVs and AWs
In light of the integration of culpability assessments into causation tests, an assessment of programmers’ criminal responsibility would be incomplete without addressing mens rea issues. While intentionally and knowingly programming an AV or AW to commit crimes falls squarely under these prohibitions, the most likely and problematic issue in both contexts is the unintended commission of such crimes, i.e., cases in which the programmer did not design the AI system to commit an offense, but harm nevertheless arises during its use.Footnote 103 In such situations, programmers had no intention to commit an offense, but might still incur criminal liability for risks that they should have known and foreseen. To define the scope of criminal responsibility for unintended harm, it is crucial to determine which risks can be known and foreseen by an AV or AW programmer.
There are important differences in the mens rea requirements of AV- and AW-related crimes. Under domestic criminal law, the standards of recklessness and negligence apply to the AV-related crimes of manslaughter and negligent homicide: “[a] person acts ‘recklessly’ with regard to a result if he or she consciously disregards a substantial risk that his or her conduct will cause the result; he or she acts only ‘negligently’ if he or she is unaware of the substantial risk but should have perceived it.”Footnote 104 The MPC provides that “criminal homicide constitutes manslaughter when it is committed recklessly.”Footnote 105 In the StGB, dolus eventualis, i.e., willingly taking the risk of causing death, would encompass situations covered by recklessness and is sufficient for manslaughter.Footnote 106 For negligent homicide,Footnote 107 one of the prerequisites is that the perpetrator can foresee the risk to a protected interest.Footnote 108
Risk-based mentes reae are subject to more dispute in ICL. The International Criminal Tribunal for the former Yugoslavia accepted that recklessness could be a sufficient mens rea for the war crime of indiscriminate attacks under Article 85(3)(a) of AP I.Footnote 109 However, whether recklessness and dolus eventualis could be sufficient to ascribe criminal responsibility for war crimes within the framework of the Rome Statute remains debated.Footnote 110
Unlike incidents with AVs, incidents in war resulting from a programmer’s negligence cannot give rise to their criminal responsibility. Where applicable, recklessness and dolus eventualis, which entail the understandability and foreseeability of the risks of developing inherently indiscriminate AWs, become crucial for attributing responsibility to programmers in scenarios where they foresaw and took certain risks. Excluding these mental elements would rule out the criminal responsibility of programmers in the most likely instances of war crimes.
V Developing an International Criminal Law-Infused Notion of Meaningful Human Control over AVs and AWs that Incorporates Mens Rea and Causation Requirements
This section considers a notion of MHC applicable to AVs and AWs that is based on criminal law and that could function as a criminal responsibility “anchor” or “attractor.”Footnote 111 This is not the first attempt to develop a conception of control applicable to both AVs and AWs. Studies on MHC over AWs and on moral responsibility for AWsFootnote 112 have been extended to AVs.Footnote 113 In this view, MHC should include an element of traceability, requiring that “one human agent in the design history or use context involved in designing, programming, operating and deploying the autonomous system … understands or is in the position to understand the possible effects in the world of the use of this system.”Footnote 114 Traceability thus requires that someone involved in the design or use of the AI system understands its capabilities and effects.
In line with these studies, it is argued here that programmers may decide and control how both traffic law and IHL are embedded in the respective algorithms, how AI systems see and move, and how they react to changes in the environment. McFarland and McCormack affirm that programmers may exercise control not only over an abstract range of behavior, but also over the specific behavior and effects of AWs.Footnote 115 Against this background, this chapter contends that programmer control begins at the initial stage of the AI development process and continues into the use phase, extending to the behavior and effects of AVs and AWs.
Assuming programmer control over certain AV- and AW-related unlawful behavior and effects, how can MHC be conceptualized so as to ensure that criminal responsibility is traced back to programmers when warranted? The foregoing discussion of causality in the context of AV- and AW-related crimes suggests that theories of causation that go beyond deterministic cause-and-effect assessments are particularly amenable to developing a theory of MHC that could ensure responsibility. These theories either link causation to mens rea standards or describe it in terms of the aggravation of risk. In either case, they require the ability to understand the capabilities of AI systems and their effects, as well as the foreseeability of risks. Considering these theories of causation in view of recent studies on MHC over AVs and AWs, MHC’s requirement of traceability arguably translates into a requirement of foreseeability of risks.Footnote 116 Because of the distribution of responsibilities in the context of AV and AW programming, causation theories introducing the notion of function-related risks are needed to limit programmers’ criminal responsibility to those risks within their respective obligations and thus their sphere of influence and control. According to these theories, the risks that a programmer is obliged to prevent and that relate to their functional obligations, i.e., their function-related risks, could in principle be considered causally imputable.Footnote 117
VI Conclusion
AVs and AWs are complex systems. Their programming implies a distribution of responsibilities and obligations within tech companies, and between them and manufacturers, third parties, and users, which makes it difficult to identify who may be responsible for harm stemming from their use. Despite the temporal and spatial gap between the programming phase and the commission of crimes, the responsibility of programmers should not be discarded. Indeed, crucial decisions on the behavior and effects of AVs and AWs are taken in the programming phase. While a more detailed case-by-case analysis is needed, this chapter has mapped out how programmers of AVs and AWs might be in control of certain AV- and AW-related risks and therefore criminally responsible for AV- and AW-related crimes.
This chapter has shown that the assessment of causation as a threshold for establishing whether an actus reus was committed may converge on the criteria of understandability and foreseeability of risks of unlawful behavior and/or effects of AVs and AWs. Those risks which fall within programmers’ functional obligations and sphere of influence can be considered under their control and imputable to them.
Following this analysis, a notion of MHC applicable to programmers of AVs and AWs can be developed based on the requirements for the imputation of criminal responsibility. It may function as a responsibility anchor insofar as it helps trace responsibility back to the individuals who could understand and foresee the risk of a crime being committed with an AV or AW.