A. Introduction
Mobility is a central element of human coexistence, and mobility revolutions have marked decisive turning points in recent human evolution. Footnote 1 Thus, the invention of the disc wheel around 3,500 BC—a special testimony to human ingenuity, with no direct model in nature Footnote 2 —was a milestone that considerably facilitated the transportation of goods and, later, of people. Footnote 3 Another great achievement was the spoked wheel, invented in the Bronze Age around 2,000 BC. Footnote 4 Fast forward to the modern age, and the patent registered by Karl Benz in 1886—the “Benz Patent-Motorwagen Nummer 1,” in plain language, the car, which was fitted with wire-spoked wheels with hard rubber tires and an internal combustion engine Footnote 5 —was another tremendous invention that, again, contributed to a mobility revolution. Footnote 6 Whereas until then carts and carriages had had to be pulled by draft animals, the so-called automobile released propulsion technology from that dependence. Currently, we are witnessing the next great leap in human mobility: The “auto-auto,” as a playful German nickname Footnote 7 has it, is one designation for an autonomous, in other words, self-driving, car that is both propelled and controlled without human intervention. Footnote 8
While the period between the invention of the wheel and its enhanced version—the spoked wheel—was about 1,500 years, the interval between the independence of propulsion technology and its full enhancement now—also including the automation of control technology—was ten times shorter, lasting only about 150 years. Although the invention and improvement of the wheel both represent milestones in mobility, they were presumably not accompanied by legal standards in any significant way. Beginning in the second half of the 19th century, however, technology law is credited with a process of institutional consolidation that has also affected the regulation of cars. Footnote 9 Law, thus, is playing a crucial role in the current revolution of human mobility. One might even go so far as to say that law shares responsibility for the success or failure of modern mobility technology. Footnote 10 Many important areas, such as traffic law, data protection law, liability and criminal law, as well as public law, are affected by auto-autos, and striking the right balance between regulation of and freedom for the development of the technology is no easy task. Footnote 11
The German legislature was the first worldwide to dare to face this difficult task comprehensively. Footnote 12 Section 1a, which made the “operation of a motor vehicle by means of a highly or fully automated driving function […] permissible if the function is used as intended,” paved the way; it was inserted into the Road Traffic Act (StVG) as early as 2017. Footnote 13 In July 2016, an Ethics Commission was tasked with delivering a report on ethical and legal questions regarding the introduction of automated vehicles, which was also published in 2017. Footnote 14 This high-level report was the basis for the draft act on autonomous driving published in February 2021, which—for the first time worldwide Footnote 15 —aimed at a comprehensive regulation of self-driving cars on public roads. Footnote 16 The draft then entered into force, almost unchanged, as the Autonomous Driving Act on July 27, 2021. Footnote 17 Almost a year later, a decree on the approval and operation of vehicles with autonomous driving functions was adopted on June 24, 2022. Footnote 18 The path for the use of highly and fully automated vehicles has thus already been prepared in Germany; further regulatory steps in other countries are to be expected. Among many other things, the German act stipulates in Section 1e paragraph 2 that motor vehicles with an autonomous driving function must have a specific accident-prevention system, or “System der Unfallvermeidung.” The concrete regulation of moral dilemmas in connection with self-driving cars is—needless to say—of great importance but also very controversial. This is no wonder, as many lives are at stake.
This Article focuses on the question of how to regulate such moral dilemmas involving self-driving cars by law, and it first holds that self-driving cars should be allowed on the roads. If self-driving cars work as promised, allowing them is a legal duty flowing from the positive dimension of the right to life as guaranteed by Article 2 of the European Convention on Human Rights (ECHR)—see Section C. Footnote 19 Nevertheless, the most recent mobility revolution and all its promises come with a significant burden: Self-driving cars will still be involved in accidents, and the technology includes the possibility to program how these accidents should take place. For the first time in the history of traffic law, we have to make decisions about moral dilemmas, about life and death, in cold blood. This situation is unprecedented in its extent and a great challenge for society; likewise, it constitutes a veritable challenge for the law, for ethicists, and for lawyers—see Section D. Footnote 20 This Article argues in the following that this “possibility” must be faced by the law—not by private companies or individuals—see Section E. Footnote 21
After the stage has been set, the central interest of this Article comes into play, namely the regulation of so-called moral dilemmas, in other words, situations in which, under all available options, comparable harm occurs—for example, either a group of two or a group of three people is killed because it is impossible to prevent both outcomes. The German act, which includes a provision on an accident-avoidance system and thereby regulates moral dilemmas to some extent, will be analyzed in order to clarify whether it might indeed serve as a role model beyond Germany. For this purpose, the Article will also look at the report of the German Ethics Commission, whose “rules” constitute the basis of the act. To understand the position taken by the Ethics Commission, the Article revisits the so-called “trolley problem,” which prominently arose out of the debate between the Oxford philosopher Philippa Foot Footnote 22 and the American philosopher Judith Jarvis Thomson. Footnote 23 Looking back at this problem, and at the related but importantly different trolley cases constructed by German criminal lawyers, we come to understand that the current discussion suffers from a conflation of hypothetical scenarios drawn from different debates in philosophy, empirical studies, and the law—see Section F. Footnote 24 This insight is important when finally discussing the accident regulation of self-driving cars facing moral dilemmas in Europe, the US, and beyond. The positive obligation of states under Article 2 ECHR to take measures to save lives surely includes a prominent role for the “minimize harm” principle, potentially even to the extent that human lives have to be offset against one another—see Section G. Footnote 25 In the end, we will see that the 2021 German Act on Autonomous Driving provides some important elements for collision-avoidance systems in self-driving cars but falls short of being a role model for Europe or the US due to its reluctance to regulate moral dilemmas comprehensively.
B. What is in a Name? The Designation of Different Automation Levels
Similar to the development of the automobile—which did not happen all at once but owed its existence to numerous, sometimes parallel, technological innovations—the self-driving car will not be ready for use tomorrow or the day after without precursors. There are many developments and different levels of automation, and these are typically described in six stages according to the classification of the Society of Automotive Engineers (SAE) International, a non-profit association of automotive engineers concerned with technical standards. Footnote 26 The lowest level comprises vehicles without any automation—Level 0: No driving automation. The next level comprises well-known and already approved driver-assistance systems such as the Anti-lock Braking System (ABS) and the Electronic Stability Program (ESP), or, more recently, lane departure warning—Level 1: Driver assistance. Partially automated vehicles promise to take over certain activities, such as parking independently or maneuvering a vehicle in a traffic jam—Level 2: Partial automation. Conditionally automated vehicles are, to a certain extent, “autonomous” but still require the possibility of human intervention—Level 3: Conditional automation. Footnote 27 At the next, highly automated level, human intervention is no longer required but remains possible—Level 4: High automation. Footnote 28 In the final stage, human drivers are not only obsolete but can no longer intervene at all—Level 5: Full driving automation. This distinction between automated and fully self-driving, in other words potentially driverless, vehicles in road traffic is important not only from a technical perspective but also from a legal one, as it entails different regulatory requirements.
It is worth pointing out that the classification of Level 5 no longer includes the designation—widely used and formerly also employed by the SAE—of such vehicles as “autonomous.” Footnote 29 This is to be welcomed, since the term autonomy is typically used in law and philosophy for self-determination or self-governance. Footnote 30 This does not apply to self-driving cars. In the legal and philosophical sense, only those vehicles could be called autonomous which do not merely move without a driver on the basis of pre-programmed decisions but which actually make autonomous decisions, for example, with the help of machine learning. Footnote 31 Nevertheless, for the purposes of this Article, we will speak of self-driving cars throughout, without meaning to exclude lower levels of automation, since moral dilemmas—albeit in somewhat modified form—can already occur with driver-assistance systems.
C. Why Self-Driving Cars Should be Allowed
The introduction of the automobile initially claimed a comparatively large number of lives: “In the first four years after Armistice Day more Americans were killed in automobile accidents than had died in battle in France.” Footnote 32 Even today, the number of traffic fatalities is by no means insignificant, and the primary cause of accidents is clearly human error. Footnote 33 This is not only relevant for drivers but also for those otherwise involved in road traffic, such as cyclists or pedestrians, who are often affected.
Generally speaking, the law of technology is potentially both technology-preventing and technology-enabling. Footnote 34 In the light of traffic fatalities, the use of self-driving cars is promising. After all, it is quite conceivable that self-driving cars can be designed and programmed in such a way that many (fatal) accidents will actually be avoided. This is true despite the current public debate on accidents involving, for example, various Tesla models in May 2016 and April 2021, both in the US, which were caused by automated vehicles in test mode or by so-called “autopilots” that were not adequately monitored by humans. Footnote 35 A central promise that speaks in favor of the approval of functioning self-driving cars is—despite these accidents—the resulting improvement in traffic safety. To put it bluntly, self-driving cars do not speed, make phone calls, drive under the influence of alcohol or drugs, or fall asleep at the wheel. Footnote 36
Everyone’s right to life as enshrined in Article 2 ECHR, Paragraph 1, covers not only the fundamental prohibition on the state intentionally ending human life but also the obligation to take precautionary measures to prevent dangerous situations. Footnote 37 The state thus has a duty to protect individuals from threats, including threats from other individuals. However, this duty to protect is not easy to grasp. Footnote 38 The case law of the European Court of Human Rights (ECtHR) in this regard is quite casuistic. Footnote 39 For example, it includes protective measures for individuals whose lives are threatened by environmental hazards and dangerous activities. Footnote 40 However, there are limits to this duty to protect. The state does not have to prohibit road traffic, for example, simply because it is dangerous. A certain degree of risk is simply part of life. Footnote 41 In any case, however, the state must take legal measures to regulate dangers emanating from road traffic. Footnote 42 The establishment of appropriate and effective traffic regulations, such as a blood alcohol limit, is accordingly necessary to protect individuals against particular dangers. Footnote 43 All in all, it is important to bear in mind that the state has to make great efforts to protect life: “When there is a risk of serious and lethal accidents of which the state has—or ought to have—knowledge, the state may be obliged to take and enforce reasonable precautionary measures.” Footnote 44 Insofar as self-driving cars function and fulfill their promise of significantly increased road safety, it can be assumed that the approval of self-driving cars falls under the state’s duty to protect life under Article 2 ECHR. Footnote 45
In this vein, the German Ethics Commission also postulated in Rule 6 that “[t]he introduction of more highly automated driving systems, especially with the option of automated collision prevention, may be socially and ethically mandated if it can unlock existing potential for damage limitation.” Footnote 46
D. Why Moral Dilemmas Involving Self-Driving Cars Are New
In connection with the promise of increased safety, however, there is a great deal of uncertainty. How should self-driving cars behave when all available options cause harm? Self-driving cars will change road traffic massively and thus the street scene and everyday lives of almost everyone. Footnote 47 Therefore, the ethical and legal risks and problems associated with them must be clearly regulated.
One of the main problems is deciding how self-driving cars should be programmed for moral dilemmas, in other words, for situations in which, under all available options for action, comparable harm occurs—for example, either a group of two persons or a group of three persons is seriously injured or killed because neither scenario can be prevented. Footnote 48 This is not to say that self-driving cars will constantly be confronted with trolley-problem-like situations, which have been criticized as too fictitious for real-world regulatory problems. Footnote 49 However, deciding what counts as an acceptable probability of collision when initiating a particular maneuver, and how that probability relates to the expected harm in situations where every option involves at least a possibility of collision and harm, is a difficult ethical question that we did not have to answer in advance before the advent of self-driving cars. Furthermore, the millions of cars and kilometers traveled will increase the likelihood of even the most extraordinary scenarios, all of which can be regulated in advance. The legal question of how to deal with such dilemmas in road traffic is rather new in that, to date, similar accident constellations have always had to be accepted as fate, so to speak, because subconscious human decisions determined the way accidents happened. Footnote 50 Reactions in a fraction of a second cannot be compared with conscious, reflected decisions. This new possibility is a gift and a burden at the same time. Self-driving cars, for instance, offer the chance to save more lives; however, the price is high, as the decision has to be made that someone will die to save other lives. This is a tremendously difficult question that most legal orders have declined to answer so far, at least in a state of normalcy. Yet ignoring the technical possibility of saving as many lives as possible also means letting people die. Either way, a solid justification is necessary.
Other scenarios, like planes hijacked by terrorists, are fundamentally different and, therefore, only of little argumentative help. Footnote 51
E. Moral Dilemmas Must be Regulated by the Law
The question of how to make decisions on moral dilemmas involving self-driving cars must be answered by the law. The state has a legal duty to ensure that fatal risks are diminished (Article 2 ECHR). It cannot be left up to car manufacturers or private individuals to decide whether and how to program—or which programming to choose for—self-driving cars in moral dilemmas, since companies and private individuals are not allowed to make life-or-death decisions except in emergency situations, such as cases of emergency aid when their own lives are at stake. Footnote 52 Thus, announcements such as the one made by a Mercedes representative—who said that self-driving Mercedes cars will be programmed in crash scenarios to prioritize the safety of their owners over that of, for instance, pedestrians, thereby putting a price tag on weighty decisions about life and death in road traffic—must be stopped. Footnote 53
F. How to Inform the Regulation of Moral Dilemmas Involving Self-Driving Cars?
It is the very definition of a moral dilemma that there is no easy solution. Hence, if we agree that self-driving cars can cause or might be involved in situations which constitute a moral dilemma, there is no easy answer. Current attempts to argue for specific regulations of moral dilemmas for self-driving cars might be inclined to inform decisions by referring to legal scholars, mostly criminal lawyers, who have also discussed so-called “trolley cases.” In those debates, however, the criminal lawyers were usually not concerned with adequate laws but with the question whether it would be right to punish individuals in specific situations. Moral dilemmas have plagued many philosophers, too. A famous discussion in philosophy and psychology focused on the so-called “trolley problem.” The scenarios discussed there are easily adapted into hypotheses about how self-driving cars should be programmed for accidents with unavoidable fatalities. The trolley problem and similar scenarios, however, are no easy fit for the question of how to regulate moral dilemmas involving self-driving cars. The debate is huge and complex. Footnote 54 Quite apart from this complexity, a major reason for caution is the fact that the debate around trolley problems in philosophy originally pursued quite different goals from what currently seems to be at center stage. Recent empirical studies have also aimed to inform the regulation of moral dilemmas with self-driving cars. It is, however, a difficult and potentially misleading task to simply ask laypeople in studies what they think would be the right thing to do. None of these debates is meaningless when discussing how to make decisions on moral dilemmas involving self-driving cars. Yet it is important not to conflate the different starting points and goals of these debates when we aim to inform the current regulation of moral dilemmas with self-driving cars.
This will be demonstrated taking the German Ethics Commission and the 2021 German Act on Autonomous Driving as a basis.
I. Trolley Cases in Criminal Law
Constructed cases are lawyers’ bread and butter, at least in the classroom. From this point of view, it is not surprising that the jurist Josef Kohler offered a hypothetical scenario entitled Autolenker-Fall, the “case of the car driver,” as early as 1915. Footnote 55 Kohler proposed to:
Consider the fact that a car can no longer be brought to a standstill over a short distance but that it is still possible to steer it so that instead of going straight ahead, it goes right or left. If there are now persons straight ahead, to the right and left who can no longer avoid being killed, the driver is not in a position to avoid killing people, but he can steer the car to one side or the other by moving the steering wheel. Can we punish him here for causing the death of A, whereas if the car had continued in a straight line without being steered, B or C would have perished? Footnote 56
With this, Kohler formulated a decision problem which lawyers still ponder today. It is, however, important to understand why. As is often the case, we learn a great deal about his intentions when we closely read the question at the end of the scenario. His intention as a criminal lawyer was to discuss whether the action chosen by the car driver is punishable under criminal law, or whether the emergency law—“Notrecht” in German—prevents the punishment of an individual who had no choice but to kill someone. There is another scenario, again proposed by a German criminal lawyer, which is strikingly similar to the trolley cases discussed in philosophy. In 1951, Hans Welzel described the following scenario:
On a steep mountain track, a freight car has broken loose and is hurtling down the valley at full speed towards a small station where a passenger train is currently standing. If the freight car were to continue racing along that track, it would hit the passenger train and kill a large number of people. A railroad official, seeing the disaster coming, changes the points at the last minute, which directs the freight car onto the only siding where some workers are unloading a freight car. The impact, as the official anticipated, kills three workers. Footnote 57
Due to the similarity of this scenario, the German Ethics Commission considered it to be “familiar in a legal context as the ‘trolley problem’.” Footnote 58 This is misguided, however, as it glosses over important differences between the intentions of Kohler and Welzel on the one hand and the discussion of the “trolley problem” by Philippa Foot and Judith Jarvis Thomson on the other. Footnote 59 Welzel’s argument, in broad strokes, was meant to demonstrate that the railroad official is not culpable for having redirected the freight car. Despite their similarity to currently suggested crash scenarios involving self-driving cars, simply adopting these examples is misguided: When we discuss the ethics of self-driving cars and the question of how to legally prescribe the programming of these cars for dilemma situations, we must not conflate this with the criminal-law justification of personal actions in terms of emergency aid or personal culpability. While in Kohler’s and Welzel’s cases alike the discussion revolves around the action of an individual and the question whether this individual—ex post—should be punished or not, regulating self-driving cars is a societal question to be answered ex ante by the lawmaker or an Ethics Commission. Neither the opinion of Kohler, who answered his own question straightforwardly with “certainly not,” nor that of Welzel, who also considered the railroad official not culpable, must lead directly to the conclusion that self-driving cars ought to be programmed to take a different path in such scenarios. It is not criminal law that is—alone—responsible for the question of how to decide such moral dilemmas. Still, even though this debate is at its core about something else, it does not exclude the possibility that the solutions to such and similar scenarios discussed by many criminal lawyers might be informative for the debate on self-driving cars and moral dilemmas. Footnote 60
II. The Traditional “Trolley Problem” in Philosophy
The traditional “trolley problem” in philosophy became famous through a debate between Philippa Foot and Judith Jarvis Thomson. The Oxford philosopher Philippa Foot published an article in 1967 which had an enormous impact on philosophical debates in the following decades worldwide. Footnote 61 Her intention was to discuss the ethics of abortion by making an argument for the distinction between positive and negative rights, in contrast to the doctrine of double effect. To her, negative duties, namely what we owe to other persons in terms of non-interference, were stronger than positive duties, namely what we owe to other persons in the form of aid. Footnote 62
To illustrate her point, she, too, constructed hypothetical scenarios. One scenario, which later became famous, goes like this:
Someone is the driver of a runaway tram which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. Footnote 63
Her scenario, too, was designed to put the driver of the tram in a conflict, namely a conflict between two negative duties instead of one negative and one positive duty. Taking either track, the tram driver would kill an innocent person. Because both duties were negative duties, she argued that the tram driver might steer the tram so as to save five persons, not because of the doctrine of double effect but because the distinction between positive and negative duties was decisive.
This argument was challenged by Judith Jarvis Thomson some ten years later. She coined the term “trolley problem, in honor of Mrs. Foot’s example.” Footnote 64 To Thomson, after having considered further hypothetical scenarios, it seemed that the distinction between positive and negative duties does not always guide us in a morally acceptable way. To her, rather, “what matters in these cases in which a threat is to be distributed is whether the agent distributes it by doing something to it, or whether he distributes it by doing something to a person.” Footnote 65 To make her point, she changed Foot’s scenario slightly. In Thomson’s case it is not the action of the driver of the tram but the action of a bystander—who might change the points in order to redirect the trolley—that we have to evaluate. Footnote 66 As Thomson supposes, having also asked several colleagues for their opinions, most persons consider it morally acceptable for the bystander to change the points in order to save five persons. In this case, however, the bystander violates a negative duty—not to kill the one person on the other track—in order to fulfill a positive duty—to aid the five persons who would die if the bystander did not act. Footnote 67 This, she states, “is serious trouble for Mrs. Foot’s thesis.” Footnote 68
Judith Jarvis Thomson then goes on to discuss further scenarios, defining the “trolley problem” as the difficulty of explaining why the bystander may redirect the trolley but we must not push a fat man off a bridge in order to stop a trolley—killing one person, the fat man, in order to save five others. For her, “‘kill’ and ‘let die’ are too blunt to be useful tools for the solving of this problem,” Footnote 69 but an “appeal to the concept of a right” could suffice. Footnote 70 If someone must infringe a stringent right of an individual in order to get something that threatens five to threaten this individual instead, then, according to Thomson, he may not proceed. Footnote 71
The problem is that in both cases we are dealing with negative and positive rights and duties in a similar way, but morally it seems that this should not be decisive: The bystander should redirect the trolley, but the fat man should not be killed in order to save five other persons endangered by the trolley. The “trolley problem,” therefore, at least in the debate between Philippa Foot and Judith Jarvis Thomson, is not about how to solve such moral dilemmas. Footnote 72 On the contrary, the right thing to do, morally speaking, is stipulated in all of the scenarios they discuss. It is rather about the perplexity of how to explain the apparently different moral judgments in rather similar, even almost identical, scenarios. Because this is difficult, and has remained difficult to this day, it has been labeled a “problem.”
Hence, the important point for the current debate is that, when considering the programming of self-driving cars in such and similar situations, conclusions should not be drawn lightly from the apparent fact that the bystander should redirect the trolley in order to save five people by killing one person. This insight often seems to be neglected, however, at least in non-academic magazines when someone makes the case for the relevance of the “trolley problem” for moral dilemmas involving self-driving cars. Footnote 73 For this and other reasons, most moral philosophers do not consider the trolley-problem debate to be particularly useful for the discussion of the ethics of self-driving cars. Footnote 74
III. Empirical Studies on Moral Dilemmas Involving Self-Driving Cars
What is important to note for our purpose, thus, is that the hypothetical trolley cases were originally designed for quite specific purposes: In the case of Kohler and Welzel for discussing intricate issues of criminal law and culpability in emergency situations, and in the case of Foot and Thomson in order to discover moral principles or rather philosophical arguments aimed at justifying and explaining quite perplexing but nevertheless strong moral intuitions in seemingly only slightly different scenarios. The moral intuitions in all of these exercises were presupposed. In the application of such trolley cases over time and especially in relation to the ethics of self-driving cars, something changed.
These days, trolley cases seem to serve rather as an inspiration for finding out what the right thing to do would be and, thus, how self-driving cars should be programmed for moral dilemmas. This was impressively demonstrated by the large “Moral Machine Experiment,” Footnote 75 which asked over two million persons online for their opinions on various scenarios in order to find out what laypersons thought was the right thing to do in a moral dilemma situation involving self-driving cars. These scenarios included characters as diverse as children and the elderly, doctors and the homeless, and different group sizes, to name just a few examples, and to give an idea of the breadth of this and similar efforts to decipher lay moral preferences in the context of self-driving cars. Footnote 76
An important finding reported in the experiment by Edmond Awad and his colleagues, and in many similar studies, is that most people think that, given a moral dilemma, self-driving cars should be programmed to save five people even if one person has to die as a result. Footnote 77 Yet this does not mean that we can decide on the programming of moral dilemmas involving self-driving cars on the basis of empirical studies alone. Determining the right programming is, for instance, a much more complex question than simply evaluating trolley-case-like experiments. Despite this caveat, it would be throwing the baby out with the bathwater to ignore such experiments if they validly show a significant tendency in laypersons. Footnote 78 The criticism that morality cannot be established in an experiment like the Moral Machine Experiment Footnote 79 hinges as much on the question of what public morality is as does the experiment by Awad and his colleagues itself. Is morality only to be found in the “ivory tower” of ethical theory building, or is it also connected to what the majority of laypersons consider to be the right thing to do? If the latter plays a role, the study design for finding morally relevant intuitions becomes crucial. Footnote 80
Just one example shall illustrate why too simple a study design might be problematic. In the discussion between Philippa Foot and Judith Jarvis Thomson, a striking and recurring issue was the trouble that numbers alone “won’t do.” They were puzzled by the finding that in one set of circumstances it seems morally acceptable to save five persons over one, but that after a—sometimes only slight—change in the circumstances the verdict seems to change too, and it is no longer morally acceptable to save five persons over one. To ignore this important element of the discussion—which was, in fact, the whole point of the discussion—and to feed the “morally acceptable” findings of surveys or experiments into the real-life debate on how to regulate moral dilemmas caused by self-driving cars is dangerous. Footnote 81
After having clarified potentially misinformed starting points or conflations of various quite different debates, we will take a look at the rules suggested by the German Ethics Commission, which are the basis for the 2021 German Act on Autonomous Driving.
IV. The Rules of the German Ethics Commission on Automated and Connected Driving
In 2017, an Ethics Commission set up by the German Federal Minister of Transport and Digital Infrastructure, including, among others, lawyers and philosophers and chaired by Udo Di Fabio, a former judge of the Federal Constitutional Court of Germany, delivered a Report on “Automated and Connected Driving.” Footnote 82 This report, published not only in German but also in English, was more than a simple report, as it came up with “[e]thical rules for automated and connected vehicular traffic.” Footnote 83 Specifically for “[s]ituations involving unavoidable harm,” working group 1, chaired by Eric Hilgendorf, was set up. Footnote 84 While the rules of the Commission deal with various topics, Footnote 85 this Article focuses on the rules concerning moral dilemmas. Rule 1 of the Commission captures the insight that “[t]echnological development obeys the principle of personal autonomy, which means that individuals enjoy freedom of action for which they themselves are responsible.” Footnote 86 Rule 2 holds that “[t]he protection of individuals takes precedence over all other utilitarian considerations. The objective is to reduce the level of harm until it is completely prevented.” Footnote 87 Both “rules” will very likely meet with wide acceptance. The same holds true for Rule 7, stating that “[i]n hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests.” Footnote 88 Hence, “damage to animals or property” must never override the protection of humans. It will be hard to find anyone arguing to the contrary in relation to this rule either. Footnote 89
Rule 8 is a caveat: “Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation […]. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable.” Footnote 90 The Commission added:
It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot be readily transformed into abstract/general ex ante appraisals and thus not into corresponding programming activities either. Footnote 91
This is a warning that the scenarios presented by the criminal lawyers above must not be taken as direct role models.
After having emphasized the uncertain grounds for the regulation of moral dilemmas, the Ethics Commission holds, in Rule 9, that “[i]n the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” Footnote 92 Rule 9 also provides that “[i]t is also prohibited to offset victims against one another.” Footnote 93 While the “[g]eneral programming to reduce the number of personal injuries may be justifiable,” the Commission added that “[t]hose parties involved in the generation of mobility risks must not sacrifice non-involved parties.” Footnote 94 Precisely this addition to Rule 9 is crucial for our purpose.
Interestingly, the members of the Ethics Commission could not agree on a consistent explanation for this rule. Footnote 95 On the one hand, the Ethics Commission refers to a judgment by the Federal Constitutional Court of Germany concerning a hijacked airplane Footnote 96 stating that “the sacrifice of innocent people in favor of other potential victims is impermissible, because the innocent parties would be degraded to mere instruments and deprived of the quality as a subject.” Footnote 97 On the other hand, the Commission states in relation to self-driving cars that the “identity of the injured or killed parties is not yet known,” which distinguishes self-driving car scenarios from the trolley cases. If the programming, furthermore, “reduced the risk to every single road user in equal measure,” then “it was also in the interests of those sacrificed before they were identifiable as such in a specific situation,” to program self-driving cars in order to “minimize the number of victims.” Footnote 98 This “could thus be justified, at any rate without breaching Article 1(1) of the [German] Basic Law.” Footnote 99 The confusion is made complete by the succeeding paragraph, which reads as follows:
However, the Ethics Commission refuses to infer from this that the lives of humans can be “offset” against those of other humans in emergency situations so that it could be permissible to sacrifice one person in order to save several others. It classifies the killing of or the infliction of serious injuries on persons by autonomous vehicles systems as being wrong without exception. Thus, even in an emergency, human lives must not be “offset” against each other. According to this position, the individual is to be regarded as “sacrosanct.” No obligations of solidarity must be imposed on individuals requiring them to sacrifice themselves for others, even if this is the only way to save other people. Footnote 100
The verdict in this paragraph is then called into question by the following paragraph on constellations in which “several lives are already imminently threatened.” Footnote 101 Hence, in scenarios as put forward, for example, by Kohler above, where a car would kill a group of three persons but a redirection might avoid killing all three and hit only one, or two, the Commission found—despite openly admitting disagreement among the experts—that minimizing harm in this case might be permissible. Footnote 102
However, what is of particular importance for our purpose is a discussion not only of moral principles and rules but of legal solutions. Due to technological progress, we must not only think hypothetically about such scenarios but implement laws which regulate the programming of self-driving cars, also including moral dilemmas they might cause or be involved in. Before we enter the discussion of the controversy on “offsetting” human lives, we will see what has been included in the German act.
V. The 2021 German Act on Autonomous Driving
The provision related to dilemma problems of self-driving cars in the 2021 German Act on Autonomous Driving, Section 1e paragraph 2 No. 2 provides that motor vehicles with an autonomous driving function must have an accident-avoidance system that:
(a) [I]s designed to avoid and reduce harm,
(b) in the event of unavoidable alternative harm to different legal interests, takes into account the importance of the legal interests, with the protection of human life having the highest priority; and
(c) in the case of unavoidable alternative harm to human life, does not provide for further weighting on the basis of personal characteristics. Footnote 103
Rules 2 (lit. a) and 7 (lit. b) of the Ethics Commission were thus directly implemented in the law. However, Rule 9 of the Ethics Commission was not fully included. While the prohibition of taking personal characteristics into account was enshrined in the law, Footnote 104 there is no general legal prohibition concerning the “offsetting” of human victims against one another. Nor is there an explicit obligation to protect as many persons as possible even at the risk of directly harming fewer persons.
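Read together, the statutory requirements (a) to (c) can be understood as a lexically ordered decision procedure. The following sketch is only a minimal illustration of that reading; every name, ranking, and number in it is a hypothetical assumption, not a description of any actual system:

```python
# Hypothetical sketch: Section 1e para. 2 No. 2 read as a lexical filter.
# Legal interests are ranked (lit. b), with human life highest, so an
# option endangering human life is penalized most heavily; within one
# rank, expected harm is minimized (lit. a). All values are illustrative.
PRIORITY = {"property": 0, "animal_life": 1, "human_life": 2}

def choose_maneuver(options):
    """Pick the option endangering the lowest-ranked legal interest;
    break ties by minimizing expected harm. Requirement (c) is honored
    structurally: personal characteristics are simply absent from the
    data model and thus can never be weighted."""
    return min(
        options,
        key=lambda o: (PRIORITY[o["endangered_interest"]], o["expected_harm"]),
    )

options = [
    {"name": "brake_straight", "endangered_interest": "human_life",
     "expected_harm": 0.3},
    {"name": "swerve_into_fence", "endangered_interest": "property",
     "expected_harm": 0.9},
]

print(choose_maneuver(options)["name"])  # prints "swerve_into_fence"
```

The decisive point of the sketch is that the statute’s silence on the one-versus-many constellation reappears in the code: among options that all endanger human life, only the generic harm term decides, and nothing determines how numbers of victims may be compared.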
The 2022 Autonomous Driving Decree, which gives concrete form to the 2021 Autonomous Driving Act, did not close this lacuna. However, in response to the aforementioned statement by the Mercedes official, a so-called “Mercedes rule” has been established as a “functional requirement for motor vehicles with autonomous driving function”: Footnote 105
If, in order to avoid endangering the lives of the occupants of the motor vehicle with autonomous driving function, a collision can only be avoided by endangering the lives of other participants in the surrounding traffic or uninvolved third parties (unavoidable alternative endangerment of human life), the protection of the other participants in the surrounding traffic and uninvolved third parties must not be subordinated to the protection of the occupants of the autonomously driving motor vehicle.
Beyond these rules, no other regulation of moral dilemmas has been introduced in the law or the decree. This non-regulation is a central issue concerning the role-model potential of this regulation beyond Germany, as empirical studies, as briefly referred to above, found a strong moral preference in laypersons to protect the greater number of persons in cases of moral dilemmas. Footnote 106 As we have seen, it was difficult for the German Ethics Commission to come up with a straightforward proposal, so the German legislature—maybe due to the controversy in the Ethics Commission—refrained from regulating moral dilemmas in detail. This is, furthermore, understandable considering German history, the Kantian tradition, and, last but not least, human dignity as enshrined in Article 1(1) of the German Basic Law. However, these conditions are not given—at least not explicitly or to the same extent—beyond Germany, either in the rest of Europe or internationally. In the following sections, therefore, this Article will investigate whether there might be good reasons for doubting that a so-called “utilitarian programming” of self-driving cars—a programming that, in the case of moral dilemmas, would protect the greatest number of persons, even at the expense of killing someone, or some smaller number of persons—actually could be a role model for European and international regulations.
G. Potential “Utilitarian Programming” of Moral Dilemmas Involving Self-Driving Cars in Europe and the US
For the sake of simplicity, the following discussion is based on a specific example of such moral dilemmas. The case assumed is similar to the classical trolley problem described by Foot: a car can no longer be brought to a standstill. Without intervention, the car would steer straight into a group of five persons; only intervention and redirection would avoid a collision with this group, but the car would then kill another person instead. The question is how to program self-driving cars for such a dilemma.
I. The Rejection of Utilitarian Programming by the Legal Literature
The position in the German legal literature is rather straightforward. According to Article 1(1) of the Basic Law of the Federal Republic of Germany, “[h]uman dignity shall be inviolable.” Most legal scholars interpret this as a strict prohibition to “offset” human lives. Footnote 107
As far as can be seen, Iris Eisenberger was the first voice in the Austrian legal literature to take a position on the potential programming of moral dilemmas caused by self-driving cars. Footnote 108 She, too, rejected a utilitarian option for action, taking into account fundamental rights and especially Article 2 ECHR, the right to life. According to her, in a case of conflict that cannot be avoided, it is inadmissible to inscribe in an algorithm the option of action according to which the smallest number of persons would be killed. Footnote 109 To her, the “offsetting” of human lives would violate human dignity in the context of the Austrian legal order as well. Footnote 110 This position meets with opposition in Austria, however. Footnote 111 Beyond Austria and Germany, it is especially empirical studies which suggest that a large number of laypersons would actually favor minimizing harm—even at the cost of killing someone innocent.
II. When the Legal Literature Faces Opposition from Empirical Studies
The silence of the German act concerning moral dilemmas faces strong opposition from empirical studies investigating the moral preferences of laypersons. Footnote 112 These studies show a strong preference for minimizing harm even to the extent that in the case of moral dilemmas, the largest number of persons possible should be saved at the cost of the smaller number of human lives. Footnote 113 Empirical studies, hence, suggest that the moral intuitions of laypersons at least feed a certain skepticism concerning the strict reading of human dignity by German legal scholars, the Ethics Commission, and, finally, the German act. This is relevant, among other things, because the programming of self-driving cars should also meet with the acceptance of most persons. Tracing these intuitions, this Article questions the position taken by the German act outlined above. This critique is fed by Article 2 ECHR, which, arguably, is the criterion for a European regulation. Article 2 ECHR, in contrast to Article 1(1) of the German Basic Law, at least does not prohibit a “utilitarian” programming of self-driving cars. Footnote 114
III. A Technological Caveat: Damage Avoidance Probability and Estimated Harm Instead of Targeted Killing
First of all, it is important to emphasize that the general discussion so far has been clouded by a conflation of different scenarios. Indeed, the starting point of many considerations is a classic scenario where the offsetting of human lives is actually more realistic than in the trolley case: A plane hijacked by terrorists and the question as to what means the state may use to save the lives of innocent persons. In this consideration, it is assumed that shooting down the hijacked plane—to save threatened persons, for example, in a crowded stadium where the plane is heading—necessarily entails the targeted killing of the innocent passengers on board. Footnote 115
In road traffic scenarios, however, the almost certain death of innocent persons is not a prerequisite for the discussion on regulating crashes with self-driving cars. In the vast majority of situations, car accidents do not involve targeted killings—excluding terrorist activities, which might actually be prevented with well-designed self-driving cars (in this case they probably even deserve to be called autonomous). Clear predictions about the probability of a collision along certain trajectories and about survival in the event of a collision (estimated harm) are rather difficult to make. This is also relevant for the discussion of moral dilemmas, as programming might involve the killing of the smaller number of persons to save the larger number; ex ante, however, such an outcome is uncertain. It might thus be that, concerning crashes with self-driving cars in the case of a moral dilemma, ex ante, we do not talk about purposefully killing one person in order to save two lives. Rather, we are “only” dealing with the question of harm-avoidance probabilities, which, nevertheless, might include fatalities.
The technology for accurately predicting collisions that will affect specific persons, for determining the damage caused by such a collision, and for possible distinctions based on personal criteria such as age, profession, or origin is a distant dream—or rather nightmare. Footnote 116 Current automatic object recognition is plagued by more banal mishaps: a well-known example is an image classifier that, shown a green apple with a sticker reading “i pod,” immediately identifies it as an iPod manufactured by Apple. A legal evaluation of any accident-avoidance system must thus be based on the technological possibilities. Footnote 117 The old principle that “ought” implies “can” seems to hold true for self-driving cars as well. For the regulation of self-driving cars, this means that those conflict scenarios should be discussed first which could actually arise on the roads, given the technology available in the near or medium term. For example, in a situation in which personal injury can no longer be avoided, an algorithm might well be able to detect the number of persons potentially threatened in two different scenarios. However, whether and with what probability those persons will suffer life-threatening injuries after the collision probably already goes far beyond the technological achievements that can be hoped for in the near future. The programming of crash regulation for self-driving cars, thus, must not be discussed predominantly against a false background. Fictitious terrorist scenarios in which innocent persons are deliberately killed are a misleading foil for collision-avoidance systems. Footnote 118 Equating probabilities of harm with targeted killing fails to recognize the difference between a terrorist scenario and a car accident. To put it bluntly, often it is not lives that are “offset,” but probabilities of damage avoidance that are compared.
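The comparison of damage-avoidance probabilities rather than lives can be made concrete with a minimal sketch. All probabilities and names below are hypothetical assumptions for illustration, not real sensor outputs or an actual collision-avoidance implementation:

```python
# Hypothetical sketch: ex ante, a collision-avoidance system compares
# expected harm across candidate trajectories rather than "offsetting"
# identified lives. All values are illustrative assumptions.

def expected_harm(trajectory):
    """Sum, over potentially affected persons, of
    P(collision) * P(serious injury | collision)."""
    return sum(p_collision * p_injury
               for p_collision, p_injury in trajectory["risks"])

def least_harm_trajectory(trajectories):
    """Choose the trajectory with the lowest expected harm."""
    return min(trajectories, key=expected_harm)

trajectories = [
    # straight ahead: three persons, each at high risk
    {"name": "straight", "risks": [(0.9, 0.8), (0.9, 0.8), (0.9, 0.8)]},
    # swerve: one person, lower but non-zero risk
    {"name": "swerve", "risks": [(0.6, 0.7)]},
]

print(least_harm_trajectory(trajectories)["name"])  # prints "swerve"
```

Nothing in such a procedure identifies, let alone targets, a particular person; only aggregate harm-avoidance probabilities are compared, although fatalities may still result.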
Having said that, it is also important that a law aimed at regulating self-driving cars is designed to speak to such cars. Their programming requires much more precise regulation than human behavior does. While human behavior can be addressed by general rules, self-driving cars require very specific programming. Footnote 119 In other words, to make the law talk to the code, we have to put numbers in the law. Footnote 120 In particular, the regulation on collision-avoidance systems is not comprehensive and works with very general rules such as avoiding and reducing harm. In the end, this leaves car manufacturers a lot of freedom in programming self-driving cars to deal with moral dilemmas. They must follow rules such as prioritizing humans over non-humans, not distinguishing between humans on the basis of personal characteristics, and not subordinating humans outside of self-driving cars to their passengers. But once these rules are followed, all that remains is a generalized harm-reduction rule that can lead to many different outcomes.
Following the example of the German Ethics Commission, a commission of experts appointed by the European Commission published a report on the “Ethics of Connected and Automated Vehicles” in September 2020. Footnote 121 With regard to the regulation of moral dilemmas, the guidelines put forward in this report—similarly to the comments made above on the probability of avoiding harm—point out that it could be difficult for self-driving cars to switch from a generally appropriate risk-minimization program to a dilemma situation. This commission therefore refrained from providing a concrete solution to all possible dilemmas and emphasized the need to involve the public in solving such scenarios. Footnote 122 And yet, despite the technological caveat mentioned above, we will still face actual moral dilemmas. Footnote 123 Therefore, we must not shy away from the question as to whether the law should prescribe the minimization of harm even if someone ends up being killed. Nevertheless, it is important to mind the correct context, the likelihood of the occurrence of various scenarios, and the technological possibilities, which will likely guide and limit the potential solutions.
IV. Article 2 ECHR, the “Offsetting” of Human Lives and a European-Wide Regulation of Moral Dilemmas Involving Self-Driving Cars
Insofar as—as previously claimed—the state must regulate moral dilemmas according to Article 2 ECHR, the question arises as to whether Article 2 ECHR also includes specifications for the design of the regulation. The scope of protection of Article 2 ECHR is the right to life. An interference with the scope of protection occurs through an intentional killing or a life-endangering threat. Examples include state-practiced euthanasia, a killing or disproportionate use of force in the course of official police action—for example, by neglecting a person under state supervision—or by forcing someone to engage in an activity dangerous to life. Footnote 124 These examples are obviously not directly applicable to the scenario discussed here—state vehicles excepted. However, the right to life can also be violated by omission. In this regard, too, state custody is usually a prerequisite, when, for example, the state must not let prisoners die of thirst. Footnote 125 This does not apply to our situation either. Nevertheless, it cannot be denied that a state regulation which prescribes a probability of avoiding harm in such a way that, in a moral dilemma, the group with the larger number of persons should be protected and the group with the smaller number of persons should be sacrificed, represents an encroachment on the scope of protection of Article 2 ECHR. Footnote 126 In principle, an interference with the right to life under Article 2 paragraph 2 ECHR is justified only to “(a) defend someone against unlawful violence; (b) lawfully arrest someone or prevent someone lawfully deprived of his liberty from escaping; (c) lawfully put down a riot or insurrection.” Footnote 127
In Germany, the right to life is linked to the guarantee of human dignity, and an “offsetting” of lives is strictly rejected. Footnote 128 This position is strongly colored by Immanuel Kant. Footnote 129 It is questionable whether human dignity, which is also guaranteed in principle in the Austrian and other European constitutions as well as in the European Convention on Human Rights, entails similarly strong specifications beyond Germany. Footnote 130 A well-known difference already lies in the fact that in Austria and several other jurisdictions, in contrast to Germany, the guarantee of human dignity is not explicitly anchored in the constitution. Despite manifold specific, concrete references, there is no “comprehensive justiciable right to a dignified life” in Austria. Footnote 131 Consequently, the constellation that leads the prevailing opinion in German jurisprudence and, finally, also the German act to deny “utilitarian” solutions to moral dilemmas cannot be assumed to exist in Austria. The same holds true for other European legal orders. Nor is human dignity explicitly enshrined in the ECHR. Footnote 132
The argumentation of the ECtHR in the Finogenov case is instructive for utilitarian programming, including a reference to the ruling of the German Federal Constitutional Court on the Air Security Act. Footnote 133 The use of gas in rescuing hostages in the Dubrovka Theatre in Moscow, which was directed against the terrorists but also affected the hostages, did not constitute a violation of the right to life for the ECtHR, because the potentially lethal force was justified in view of the threat situation. The decisive factor was the hostages’ real chance of survival. Footnote 134 This is particularly relevant for the probability of avoiding harm discussed above, although it must be conceded that the measure which affected the hostages was also intended to protect them.
Moreover, it must be conceded that only a ruling by the ECtHR will probably provide clarity on how these new types of moral dilemmas, which are likely to occur in the near future, can be resolved in conformity with the Convention. After all, Article 2 ECHR was not adopted with an awareness of the new moral dilemmas that will arise as a result of self-driving cars. The potential justification of limitations to the right to life according to Article 2 paragraph 2 ECHR, such as the defense against unlawful violence, does not apply to moral dilemmas in connection with self-driving cars.
However, the ECtHR recognizes in principle, albeit in the context of exceptional circumstances, that according to Article 2 paragraph 2 ECHR, in the case of absolute necessity, the use of force resulting in death can also be justified. Footnote 135 For the scenario discussed here, it makes a considerable difference whether the state bows to the perfidious logic of terrorism and agrees, in mostly hypothetical and extremely unlikely situations, to deliberately kill innocent passengers in order to ensure the survival of a larger number of other persons, or whether the state regulates road traffic—which, despite measures to increase road safety, will inevitably involve dangerous situations day in and day out—in a way that minimizes risk. Distinguishing between the scenarios is important Footnote 136 because persons who use roads face a similar risk. For example, all persons engaging in road traffic—unlike plane passengers and stadium visitors in the case of a hijacked plane being directed into a crowded stadium—could be assigned to a hazard community. Footnote 137 It may be instructive here to think of all persons involved in road traffic as being behind a Rawlsian veil of ignorance Footnote 138 and then to present the options for action to these persons in such a way that they do not know whether they belong to the threatened group with fewer or more persons. Footnote 139 An ex ante preference for the larger group of persons from the initially anonymous hazard community of all road users could thus be justified in advance and is to be judged differently than an ex post decision in an individual case. Footnote 140 While an ex ante decision is rather a statistical one, an ex post decision makes it almost impossible to exclude personal stories from its justification.
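In the five-versus-one scenario used as the running example above, this veil-of-ignorance reasoning can be made precise in a stylized calculation, assuming, purely for illustration, that each member of the hazard community is ex ante equally likely to occupy any of the six threatened positions:

```latex
P(\text{survive} \mid \text{victim-minimizing program}) = \frac{5}{6}
\quad > \quad
P(\text{survive} \mid \text{non-intervention}) = \frac{1}{6}.
```

Behind the veil, every anonymous road user would therefore prefer the victim-minimizing program ex ante, which is precisely the sense in which the Ethics Commission could say that such programming “was also in the interests of those sacrificed” before they were identifiable.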
Generally speaking, it can be stated in support of this argument that, taken to the extreme, the assertion that one life is worth the same as several would even lead to the extinction of humankind, because one life is not capable of reproducing itself. The law, moreover, does know how to differentiate. Footnote 141 Murder is punished differently than genocide. In international humanitarian law, too, human victims may be accepted as collateral damage: an attack can be lawful despite civilian victims as long as their number is proportionate to the military advantage. Footnote 142 Thus, Eric Hilgendorf also rightly suspects that “[t]he evaluation that one human life ‘weighs’ as much as several, indeed as infinitely many human lives, may only be sustained if such decisions remain textbook fictions.” Footnote 143
Article 2 ECHR arguably neither prohibits nor prescribes a “utilitarian” programming for moral dilemmas involving self-driving cars, and the so-called “margin of appreciation” conceded by the ECtHR to the Member States might also play a role. Footnote 144 Yet, a decision must be taken by the law, and a justification of this decision is necessary. Either way, an interference with Article 2 ECHR will occur. If a “utilitarian” programming were to be prohibited, the state would let the larger number of persons die even though they could have been saved. If a “utilitarian” programming were to be prescribed, the state would be responsible for the deaths of the smaller group of persons. If no programming were to be mandated, as in the German act, the decision would be left to private actors. Footnote 145 No-action programming—in other words, in our scenario, not redirecting the car—would abandon the larger number of persons to their fate. Random programming would ultimately, albeit less often, still cost more lives in the aggregate than a “utilitarian” programming. Footnote 146 This would not meet the state’s duty to protect under Article 2 ECHR either; in other words, the abandonment of the larger number of persons would again be in need of justification.
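The claim about random programming can be illustrated with simple arithmetic in the five-versus-one scenario, assuming, for the sake of the illustration, that each of the two options is chosen with probability one half:

```latex
E[\text{fatalities}_{\text{random}}]
= \tfrac{1}{2}\cdot 5 + \tfrac{1}{2}\cdot 1
= 3
\;>\; 1
= E[\text{fatalities}_{\text{utilitarian}}].
```

In the aggregate, random programming would thus be expected to cost two additional lives per dilemma compared to a “utilitarian” programming.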
The recently amended General Safety Regulation of the European Union aims to place the EU in the avant-garde by allowing so-called level 4 self-driving cars on European roads. Footnote 147 This regulation, too, has to stand the test of the arguments made here. Whether the German act, at least concerning the solution of moral dilemmas, is an ideal role model is questionable, however. Technically, Commission Implementing Regulation (EU) 2022/1426 of 5 August 2022 lays down rules for the application of Regulation (EU) 2019/2144 of the European Parliament and of the Council as regards uniform procedures and technical specifications for the type-approval of the automated driving system (ADS) of fully automated vehicles. Footnote 148 Particularly interesting for our purpose is Annex II of this Regulation on performance requirements. In relation to Dynamic Driving Tasks (DDT) under critical traffic scenarios—emergency operation—it reads as follows:
2.1 The ADS [Automated Driving System] shall be able to perform the DDT for all reasonably foreseeable critical traffic scenarios in the ODD [Operational Design Domain].
2.1.1. The ADS shall be able to detect the risk of collision with other road users, or a suddenly appearing obstacle (debris, lost load) and shall be able to automatically perform appropriate emergency operation (braking, evasive steering) to avoid reasonably foreseeable collisions and minimise risks to safety of the vehicle occupants and other road users.
2.1.1.1. In the event of an unavoidable alternative risk to human life, the ADS shall not provide for any weighting on the basis of personal characteristics of humans.
2.1.1.2. The protection of other human life outside the fully automated vehicle shall not be subordinated to the protection of human life inside the fully automated vehicle.
2.1.2. The vulnerability of road users involved should be taken into account by the avoidance/mitigation strategy.
These rules are in many ways similar to those of the German act presented before. For instance, in this regulation, too, “weighting on the basis of personal characteristics of humans” is clearly forbidden. However, on the moral dilemma discussed here more extensively—the constellation of deciding between a larger and a smaller group of people—these rules remain as silent as the German act. Much hinges upon the interpretation of the passage prescribing that “reasonably foreseeable collisions” be avoided and that risks to the safety of the vehicle occupants and other road users be minimized. Depending on the interpretation, minimizing risk may or may not mean directing the car toward a smaller group of persons in order to save a larger group. Footnote 149
V. The Regulation of Moral Dilemmas Involving Self-Driving Cars in the US
Self-driving cars—albeit so far mostly in test mode—are no novelty on roads in the US. Accordingly, accidents happen as well. As of July 1, 2022, the Department of Motor Vehicles (DMV) in the US state of California, for instance, had received 486 Autonomous Vehicle Collision Reports. Footnote 150 The regulation of self-driving cars in the US is divided between the federal level and the individual states. Footnote 151 How individual states regulate self-driving cars varies significantly. Forerunner states in terms of technological development likely also lead in the regulation of this technology. Despite that, in relation to moral dilemma situations, which will become a practical problem as self-driving cars become more frequent, there is simply no regulation in the US at either the federal or the state level.
Take the regulation in Arizona, for example. In March 2021, the Arizona Legislature enacted HB 2813, Footnote 152 thereby establishing standards for driverless vehicles. The law does not distinguish between test mode and the normal operation of self-driving cars. Commercial services like passenger transportation, freight transportation, and delivery operations can thus be offered. In order to use a self-driving car without a human driver, however, the operator must first submit a law enforcement interaction plan according to Section 28-9602. The car must follow federal laws and standards, comply with all traffic and vehicle safety laws and, most interestingly for our case, according to Section 28-9602 C. 1. (B), in case of failure of the automated driving system, “achieve a minimal risk condition.” Footnote 153 This condition is defined as “a condition to which a human driver or an automated driving system may bring a vehicle in order to reduce the risk of a crash when a given trip cannot or should not be completed.” Footnote 154
While these requirements are very understandable, moral dilemmas are simply not addressed. This is a severe shortcoming, which might impede trust in self-driving cars once they become more frequent on public roads. A regulation is therefore necessary. Car manufacturers will not make the first move. So far, they are interested in safety questions and do address extreme situations to some extent, yet they do not include moral dilemmas in their considerations. Footnote 155 Who would blame them? The nature of a dilemma is that either way, the outcome involves trouble. Such negative consequences are hard to accept and likely do not contribute to selling cars. Hence, if regulation does not force producers to deal with this issue, they will likely avoid facing it. So far in the US, the states have taken the primary role in regulating self-driving cars. This might change, however. If a federal law were enacted in the future, Footnote 156 or if further state laws were adopted, it could be wise to include a provision on moral dilemmas. Important decisions about life and death would thereby be made not by car producers but by the legislator. The provision in the German act, even though very likely not as a detailed blueprint, might serve as a role model insofar as this act is the first one worldwide to actually contain a regulation of moral dilemmas. Its failure to address the dilemma discussed above, however, significantly reduces the German act’s role-model potential.
H. Conclusion
Decisions about life and death are always difficult, bringing us to the edge of what the law can legitimately provide, and possibly beyond it. Footnote 157 In particular, moral dilemmas as discussed here represent a problem at the intersection of ethics, technology, and the law, Footnote 158 which makes regulation even more difficult. However, lumping different scenarios of moral dilemmas together under one constitutional umbrella is not a successful strategy. It is precisely the sensitive subject matter and the threat to fundamental legal rights and to human life that demand an ethical and legal assessment appropriate to the situation in question. Footnote 159 Frightening terrorist scenarios, pandemic-related triage decisions, questions of organ transplantation or, precisely, the regulation of moral dilemmas arising through self-driving cars all touch upon fundamental aspects of human life but nevertheless exhibit important differences.
Self-driving cars, insofar as they increase road safety, should be allowed on European roads due to the state’s duty to protect human life enshrined in Article 2 ECHR. Decisions about moral dilemmas in connection with self-driving cars should not, according to the argumentation in this Article, be left to car manufacturers. The 2021 German Act on Autonomous Driving basically provides a solid role model for further European-wide regulations of self-driving cars, including their collision-avoidance systems. However, since crashes involving self-driving cars mostly do not present themselves in advance as decisions about life and death but as probabilities of avoiding harm, the state’s duty to protect life has the consequence that, in situations in which comparable harm occurs under all available options, the protection of a larger group of persons might have to be preferred over the protection of a smaller group as a minimization of risks. While this might be debated, what seems clear is that, for true moral dilemmas, the law needs to provide a regulation. This contrasts with the partial silence of the German act concerning moral dilemmas. To put it bluntly, lives should not be “offset” against one another; rather, the probability of avoiding damage should be increased. The state’s duty to protect life, the positive obligations stemming from Article 2 ECHR, might well speak in favor of preferring the protection of the larger group even in a case where the probability of killing the persons in the smaller group borders on certainty. These considerations are relevant for future adaptations of regulations under European and US law, which will continue to be a great challenge due to the sensitive issues involved and the different positions taken in these legal orders.
Acknowledgements
I thank Norbert Paulo, the participants of the Panel “Regulating Digital Technologies” at the Biennial Conference of the Standing Group on Regulatory Governance of the European Consortium of Political Research at the University of Antwerp in July 2023, and especially the organizers as well as the discussant Anne Meuwese and an anonymous reviewer for helpful comments, criticism, and discussions on earlier versions. Furthermore, I would like to thank Helen Heaney for stylistic advice, Alexander Berghaus and Jan-Hendrik Grünzel for research assistance, and the student editors of the GLJ for their valuable and detailed work in preparing the manuscript for publication.
Competing Interests
The author declares none.
Funding Statement
This research is part of the project EMERGENCY-VRD and is funded by dtec.bw – Digitalization and Technology Research Center of the Bundeswehr, which we gratefully acknowledge. dtec.bw is funded by the European Union – NextGenerationEU.