Published online by Cambridge University Press: 17 June 2021
The issue of super-intelligent artificial intelligence (AI) has begun to attract ever more attention in economics, law, sociology and philosophy. A new industrial revolution is being unleashed, and it is vital that lawmakers address the systemic challenges it brings while regulating its economic and social consequences. This paper sets out recommendations to ensure informed regulatory intervention covering potential, as yet uncontemplated, AI-related risks. If AI evolves in ways unintended by its designers, the judgment-proof problem of existing legal persons engaged with AI might undermine the deterrence and insurance goals of classic tort law, which consequently might fail to ensure optimal risk internalisation and precaution. This paper also argues that, due to identified shortcomings, the debate on the different approaches to controlling hazardous activities boils down to a question of efficient ex ante safety regulation. In addition, it is suggested that it is better to place AI within existing legal categories than to create a new electronic legal personality.
The author would like to thank Roger van den Bergh, Gerrit De Geest, Ben Depoorter, Matthew Dyson, Michael Faure, Paula Giliker, Paul Heald, Eric Helland, Jonathan Klick, Anne Lafarre, Alain Marciano, Philip Morgan, Jens Prüfer, Giovanni Ramello, Wolf-Georg Ringe, Hans-Bernd Schäfer, Ann-Sophie Vandenberghe, Bruce Wardhaugh, the participants of the IMA Workshop, University of York, 2020, the workshop session at the European Master in Law and Economics (EMLE) Midterm Meeting, Hamburg, 2019, the 110th Society of Legal Scholars Annual Conference at the University of Central Lancashire, Preston, 2019 and the participants of the AGCOM workshop on “Law and economics of big data and artificial intelligence”, Rome, 2018 for their thoughtful comments, suggestions and advice. Funding received from: Slovenian Research Agency (Javna Agencija za Raziskovalno dejavnost Republike Slovenije, ARRS); name of the research project: Challenges of inclusive sustainable development in the predominant paradigm of economic and business sciences, grant no.: P5-0128.
1 S Russell and P Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Upper Saddle River, NJ, Pearson 2016) pp 2–5.
2 RJ Sawyer, “Robot ethics” (2007) 318(5853) Science 1037; and P Lin, K Abney and GA Bekey, Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, MA, MIT Press 2011).
3 D Weinbaum and V Veitas, “Open ended intelligence: the individuation of intelligent agents” (2017) 29(2) Journal of Experimental and Theoretical Artificial Intelligence 371–96.
4 In this paper, the term “artificial intelligence” denotes autonomous AI that is independent and has the capacity to self-learn, interact, take autonomous decisions, develop emergent properties and adapt its behaviour/actions to the environment and has no life in a biological sense. In other words, AI’s “behaviour” is determined by computer code that allows some room for “decision-making” by the machine itself, and the AI’s behaviour is not entirely under the control of human actors. See, eg, Russell and Norvig, supra, note 1, 23–28; D McAllester and D Rosenblitt, “Systematic nonlinear planning” (1991) 2 AAAI-91 634–39; EJ Horvitz, JS Breese and M Henrion, “Decision theory in expert systems and artificial intelligence” (1988) 2 International Journal of Approximate Reasoning 247–302; and I Horswill, “Functional programming of behaviour-based systems” (2000) 9 Autonomous Robots 83–93. See also P Stone, R Brooks, E Brynjolfsson, R Calo, O Etzioni, G Hager et al, Artificial Intelligence and Life in 2030 (Report of the 2015 study panel 50, Stanford University 2016); P McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (Chico, CA, AK Press 2004) p 133.
5 S Russell, Human Compatible: Artificial Intelligence and the Problem of Control (London, Allen Lane 2019) p 4.
6 ibid.
7 J Turner, Robot Rules: Regulating Artificial Intelligence (London, Palgrave Macmillan 2019) pp 81–86.
8 J Buyers, Artificial Intelligence: The Practical Legal Issues (Minehead, Law Brief Publishing 2018) pp 21–35.
9 G Teubner, “Digital personhood? The status of autonomous software agents in private law” (2018) Ancilla Iuris. See also A Koch, “Liability for emerging digital technologies: an overview” (2020) 11(2) Journal of European Tort Law 115–36.
10 See, eg, Russell and Norvig, supra, note 1; T Simonite, “AI software learns to make AI software” (2017) MIT Technology Review; Y Wilks, Artificial Intelligence: Modern Magic or Dangerous Future? (London, Icon Books 2019); HJ Kim, MI Jordan and S Sastry, “Autonomous helicopter flight via reinforcement learning” (2004) Advances in Neural Information Processing Systems 16 (NIPS 2003); and M Minsky, The Emotion Machine: Common-Sense Thinking, Artificial Intelligence, and the Future of the Human Mind (New York, Simon & Schuster 2006).
11 Calvano et al show that the algorithms consistently learn to charge supra-competitive prices without communicating with each other; E Calvano, G Calzolari, V Denicolò and S Pastorello, “Algorithmic pricing: what implications for competition policy?” (2019) 55(2) Review of Industrial Organization 155–71; and E Calvano, G Calzolari, V Denicolò and S Pastorello, “Artificial intelligence, algorithmic pricing, and collusion” (2020) 110(10) American Economic Review 3267–97. See also JE Harrington, “Developing competition law for collusion by autonomous artificial agents” (2018) 14(3) Journal of Competition Law & Economics 331–63.
12 See, eg, Resolution on the Civil Law Rules on Robotics of the European Parliament, P8-TA (2017)0051.
13 EU Commission, COM (2018) 237 final.
14 ibid.
15 For a synthesis, see S Lohsse, R Schulze and D Staudenmayer, “Liability for artificial intelligence”, in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Baden-Baden, Nomos Verlagsgesellschaft 2018) p 11; J De Bruyne and C Vanleenhove, Artificial Intelligence and the Law (Cambridge, Intersentia 2021); and A Koch, “Liability for emerging digital technologies: an overview” (2020) 11(2) Journal of European Tort Law 115–36.
16 See, eg, A Galasso and H Luo, “Punishing robots: issues in the economics of tort liability and innovation in artificial intelligence” in NBER Chapters, The Economics of Artificial Intelligence: An Agenda (Cambridge, MA, National Bureau of Economic Research 2018) pp 493–504; HB Schäfer and C Ott, The Economic Analysis of Civil Law (Cheltenham, Edward Elgar 2004) pp 107–273; HB Schäfer, “Tort law: general” in B Bouckaert and G De Geest (eds), Encyclopedia of Law and Economics (Cheltenham, Edward Elgar 2000) pp 569–96; W Emons and J Sobel, “On the effectiveness of liability rules when agents are not identical” (1991) 58(2) Review of Economic Studies 375–90; S Shavell, Economic Analysis of Accident Law (Cambridge, MA, Harvard University Press 1987); AM Polinsky and WP Rogerson, “Product liability, consumer misperceptions and market power” (1983) 14(1) Bell Journal of Economics 581–89; S Shavell, “Strict liability versus negligence” (1980) 9(1) Journal of Legal Studies 1–25; RA Posner, “A theory of negligence” (1972) 1(1) Journal of Legal Studies 29–96; G Calabresi, “Some thoughts on risk distribution and the law of torts” (1961) 70(4) Yale Law Journal 499–553; G Calabresi, The Costs of Accidents: A Legal and Economic Analysis (New Haven, CT, Yale University Press 1970).
17 M Kovac, Judgement-Proof Robots and Artificial Intelligence. A Comparative Law and Economics Approach (London, Palgrave Macmillan 2020).
18 See R Van den Bergh, The Roundabouts of European Law and Economics (The Hague, Eleven International Publishing 2018) pp 21–28; and RA Posner, Economic Analysis of Law (9th edn, Alphen aan den Rijn, Wolters Kluwer Law Publishers 2014).
19 See, eg, J McCarthy, “From here to human-level AI” (2007) 171(18) Artificial Intelligence 1174–82; GF Luger, Computation and Intelligence: Collected Readings (Palo Alto, CA, AAAI Press 1995); J McCarthy, ML Minsky, N Rochester and CE Shannon, Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (Hanover, NH, Dartmouth College, tech. rep. 1955); and NJ Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge, Cambridge University Press 2009).
20 See, eg, P Smolensky, “On the proper treatment of connectionism” (1988) 11(1) Behavioral and Brain Sciences 1–74.
21 Russell and Norvig, supra, note 1, at 26.
22 ibid, at 27.
23 See, eg, McCarthy, supra, note 19; ML Minsky, P Singh and A Sloman, “Designing architectures for human-level intelligence” (2004) 25(2) AI Magazine 113–254; ML Minsky, The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind (New York, Simon & Schuster 2007); N Nilsson, “Human-level artificial intelligence?” (2005) 26(4) AI Magazine 68–75; J Beal and PH Winston, “The new frontier of human-level artificial intelligence” (2009) 24(4) IEEE Intelligent Systems 21–23; and NJ Nilsson, Artificial Intelligence: A New Synthesis (Burlington, MA, Morgan Kaufmann 1998).
24 See, eg, B Goertzel and C Pennachin, Artificial General Intelligence (Berlin, Springer 2007); E Yudkowsky, “Artificial intelligence as a positive and negative factor in global risk,” in N Bostrom and M Cirkovic (eds), Global Catastrophic Risk (Oxford, Oxford University Press 2008); and S Omohundro, “The basic AI drives” (2008) AGI-08 Workshop on the Sociocultural, Ethical and Futurological Implications of Artificial Intelligence.
25 See Weinbaum and Veitas, supra, note 3. However, see also M Boden, AI: Its Nature and Future (Oxford, Oxford University Press 2016) 119; and W Wallach and C Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford, Oxford University Press 2009) p 68.
26 Weinbaum and Veitas, supra, note 3.
27 It combines the advantages of semantic reasoning and neural networks; M Acosta, P Cudré-Mauroux, M Maleshkova, T Pellegrini, H Sack and Y Sure-Vetter (eds), Semantic Systems. The Power of AI and Knowledge Graphs (Berlin, Springer 2019).
28 T Poonam, TV Prasad and M Singh, “Comparative study of three declarative knowledge representation techniques” (2010) 2(7) International Journal of Advanced Trends in Computer Science and Engineering 2274. See also Nilsson, supra, note 19.
29 Data-based AI actually solves the “knowledge bottleneck” in AI (the problem of how to express all of the knowledge that a system needs): A Halevy, P Norvig and F Pereira, “The unreasonable effectiveness of data” (2009) 24(2) IEEE Intelligent Systems 8–12; R Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York, Viking Press 2005); and M Banko and E Brill, “Scaling to very very large corpora for natural language disambiguation” (2001) ACL-01: Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, 26–33.
30 See J Kaplan, Artificial Intelligence: What Everyone Needs to Know (Oxford, Oxford University Press 2016); and P McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (Chico, CA, AK Press 2004) p 133.
31 See, eg, R Leenes and F Lucivero, “Laws on robots, laws by robots, laws in robots: regulating robot behaviour by design” (2014) 6 Law, Innovation and Technology 193; U Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Berlin, Springer 2013); PM Asaro, “A body to kick, but still no soul to damn: legal perspectives on robotics”, in P Lin (ed.), Robot Ethics: The Ethical and Social Implications of Robotics (Cambridge, MA, MIT Press 2012) p 169; FP Hubbard, “‘Sophisticated robots’: balancing liability, regulation, and innovation”, (2015) 66 Florida Law Review 1803; R de Bruin, “Autonomous intelligent cars on the European intersection of liability and privacy” (2016) 7(3) European Journal of Risk Regulation 485–501; MF Lohmann, “Liability issues concerning self-driving vehicles” (2016) 7(2) European Journal of Risk Regulation 335–40; and S Lohsse, R Schulze and D Staudenmayer, “Liability for artificial intelligence”, in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Baden-Baden, Nomos 2018) p 11.
32 S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (Ann Arbor, MI, University of Michigan Press 2011); T Schulz, Verantwortlichkeit bei autonom agierenden Systemen (Baden-Baden, Nomos 2014); and EAR Dahiyat, “Towards new recognition of liability in the digital world: should we be more creative?” (2011) 19(3) International Journal of Law and Information Technology 224–42.
33 See, eg, E Palmerini and A Bertolini, “Liability and risk management in robotics,” in R Schulze and D Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Baden-Baden, Nomos 2016) p 225; and E Tjong Tjin Tai, “Aansprakelijkheid voor robots en algoritmes” (2017) Nederlands Tijdschrift voor Handelsrecht 123.
34 See, eg, T Schulz, Verantwortlichkeit bei autonom agierenden Systemen (Baden-Baden, Nomos 2014); J Hanisch, Haftung für Automation (Göttingen, Cuvillier 2010); and S Gless and K Seelmann (eds), Intelligente Agenten und das Recht (Baden-Baden, Nomos 2016).
35 Tjong Tjin Tai argues that strict liability for robots (and possibly algorithms) would have to be adopted (via specific statute) and imposed on the owner and/or user. He also suggests that product liability could be extended to algorithms (via statute); E Tjong Tjin Tai, “Liability for (semi)autonomous systems: robots and algorithms” in V Mak, E Tjong Tjin Tai and A Berlee (eds), Research Handbook in Data Science and Law (Cheltenham, Edward Elgar 2018) pp 55–82.
36 G Wagner, “Robot liability” in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things: Munster Colloquia on EU Law and the Digital Economy IV (Baden-Baden, Nomos 2019).
37 Thus, as Wagner suggests, “in the interest of meaningful incentives of the manufacturer to employ available safety measures and to balance their costs and benefits, manufacturer liability is essential”; ibid.
38 R Abbott and A Sarch, “Punishing artificial intelligence: legal fiction or science fiction” (2019) 53(1) UC Davis Law Review 323–84.
39 O Rachum-Twaig, “Whose robot is it anyway?: liability for artificial-intelligence-based robots” (2020) 2020(4) University of Illinois Law Review 1141–76.
40 ibid.
41 ibid. However, De Bruyne and Vanleenhove argue that in relation to AI the existing rules of jurisdiction and applicable law do not pose particular problems when applied to self-driving cars; J De Bruyne and C Vanleenhove, “The rise of self-driving cars: is the private international law framework for non-contractual obligations posing a bump in the road?” (2018) 5(1) IALS Student Law Review 14–26.
42 H Eidenmüller, “Machine performance and human failure: how shall we regulate autonomous machines?” (2019) 15(1) Journal of Business & Technology Law 109–33. See also E Karner, “Liability for robotics: current rules, challenges, and the need for innovative concepts,” in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Baden-Baden, Nomos 2018) p 117.
43 Borghetti also argues that fault is not a relevant concept when algorithms are at stake, and establishing an algorithm’s defect will probably be too difficult in most cases; JS Borghetti, “Civil liability for artificial intelligence: what should its basis be?” (2019) 17 Revue des Juristes de Sciences Po 94–102. See also JS Borghetti, “How can artificial intelligence be defective?” in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Baden-Baden, Nomos 2018) p 63.
44 Borghetti (2018), supra, note 43. See also P Machnikowski, “Producers’ liability in the EC Expert Group report on liability for AI” (2020) 11(2) Journal of European Tort Law 137–49.
45 C Wendehorst, “Strict liability for AI and other emerging technologies” (2020) 11(2) Journal of European Tort Law 150–80.
46 TS Cabral, “Liability and artificial intelligence in the EU: assessing the adequacy of the current Product Liability Directive” (2020) 27(5) Maastricht Journal of European and Comparative Law 615–35.
47 G Calabresi and AD Melamed, “Property rules, liability rules and inalienability: one view of the cathedral” (1972) 85(6) Harvard Law Review 1089–128.
48 See, eg, R Cooter and T Ulen, Law and Economics (6th edn, Boston, MA, Addison-Wesley 2016) pp 287–373; Posner, supra, note 16, at 14; HB Schäfer and C Ott, The Economic Analysis of Civil Law (Cheltenham, Edward Elgar 2004) pp 107–273; and E Mackaay, Law and Economics for Civil Law Systems (Cheltenham, Edward Elgar 2015).
49 G De Geest, “Who should be immune from tort liability?” (2012) 41(2) Journal of Legal Studies 291–319.
50 S Shavell, “Liability for accidents” in MA Polinsky and S Shavell (eds), Handbook of Law and Economics (Vol. 1, Amsterdam, North Holland 2007) pp 139–83.
51 DN Dewees, D Duff and MJ Trebilcock, Exploring the Domain of Accident Law: Taking the Facts Seriously (Oxford, Oxford University Press 1996) p 452.
52 For an overview of his contributions, see S Shavell, Economic Analysis of Accident Law (Cambridge, MA, Harvard University Press 2007).
53 S Shavell, “Liability for harm versus regulation of safety” (1984) 13(2) Journal of Legal Studies 357–74.
54 ibid.
55 ibid.
56 ibid. Also see RA Epstein, “The principles of environmental protection: the case of Superfund” (1982) 2(1) Cato Journal 9–53.
57 S Rose-Ackerman, “Tort law as a regulatory system” (1991) 81(2) AEA Papers and Proceedings.
58 PW Schmitz, “On the joint use of liability and safety regulation” (2000) 20(3) International Review of Law and Economics 371–82.
59 J Summers, “The case of the disappearing defendant: an economic analysis” (1983) 132 University of Pennsylvania Law Review 145–85; and S Shavell, “The judgement proof problem” (1986) 6(1) International Review of Law and Economics 45–58.
60 Summers, supra, note 59.
61 Shavell, supra, note 59. Also see JJ Ganuza and F Gomez, “Being soft on tort. Optimal negligence rule under limited liability” (2005) UPF Working paper; and J Boyd and DE Ingberman, “Noncompensatory damages and potential insolvency” (1994) 23(2) Journal of Legal Studies 895–910.
62 Shavell, supra, note 50, at 148.
63 Shavell offers an example of the injurer’s problem of choosing care x under strict liability, when their assets are y < h and where the injurer’s problem is formulated as minimising x + p(x)y; where the injurer chooses x(y) determined by –p’(x)y = 1 instead of –p’(x)h = 1, so that x(y) < x* (and the lower is y, the lower is x(y)). In this instance, the injurer’s wealth after spending on care would be y – x, and only this amount would be left to be paid in a judgment; Shavell, supra, note 50, at 148.
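The care-choice comparison in this footnote can be restated compactly. The following is a sketch of the cited model in the footnote’s own notation, where p(x) denotes the accident probability, decreasing and convex in care x:

```latex
% Care choice under strict liability (notation of note 63):
% x = spending on care, p(x) = accident probability (p' < 0, p'' > 0),
% h = harm if an accident occurs, y < h = injurer's assets.
\[
  x^* = \arg\min_x \,\bigl[x + p(x)h\bigr]
  \quad\Longrightarrow\quad -p'(x^*)\,h = 1
\]
\[
  x(y) = \arg\min_x \,\bigl[x + p(x)y\bigr]
  \quad\Longrightarrow\quad -p'\bigl(x(y)\bigr)\,y = 1
\]
% Since -p'(x) falls as x rises and y < h, the second first-order
% condition is satisfied at a lower level of care: x(y) < x*, and
% the lower the assets y, the lower the chosen care x(y).
```

After spending x(y) on care, only y − x(y) of the injurer’s wealth remains available to satisfy a judgment, which is the sense in which the injurer is judgment-proof.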
64 G Huberman, D Mayers and C Smith, “Optimal insurance policy indemnity schedules” (1983) 14(2) Bell Journal of Economics 415–26. Also see WR Keeton and E Kwerel, “Externalities in automobile insurance and the underinsured driver problem” (1984) 27(3) Journal of Law and Economics 149–79; and Shavell, supra, note 59.
65 Shavell, supra, note 50, at 180.
66 In addition, the problem of excessive engagement in risky activities is mitigated to the extent that liability insurance is purchased, but the problem of suboptimal levels of care could be exacerbated if the insurers’ ability to monitor care is imperfect; see Shavell, supra, note 50, at 180.
67 Shavell, supra, note 59, at 58.
68 Moreover, such liability might result in over-deterrence of AI data providers, operators or software engineers and may be detrimental to innovation. See, eg, M Porter, The Competitive Advantage of Nations (New York, Free Press 1990); WP Viscusi and MJ Moore, “Product liability, research and development, and innovation” (1993) 101(1) Journal of Political Economy 161–84; J Pelkmans and A Renda, “Does EU regulation hinder or stimulate innovation?” (2014) Centre for European Policy Studies, Special report No. 26; and A Galasso and H Luo, “Risk-mitigating technologies: the case of radiation diagnostic devices” (2020) Management Science 1–19.
69 See, eg, G Huberman et al, supra, note 64; and Keeton and Kwerel, supra, note 64.
70 See S Shavell, Foundations of Economic Analysis of Law (Cambridge, MA, Harvard University Press 2004) pp 175–289; R Pitchford, “Judgement-proofness” in P Newman (ed.), The New Palgrave Dictionary of Economics and the Law (London, Palgrave Macmillan 1998) pp 380–83; and AH Ringleb and SN Wiggins, “Liability and large-scale, long-term hazards” (1990) 98(3) Journal of Political Economy 574–95.
71 See, eg, G Corfield, “Tesla death smash probe: neither driver nor autopilot saw the truck” (2017) The Register; and S Levin and JC Wong, “Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian” (2018) The Guardian.
72 Evidently, autonomous systems are expected to decrease the number and severity of accidents dramatically, but accidents will continue to occur. The critical point is that the pool of accidents that an autonomous system still causes will not be the same as the pool of accidents a reasonable driver is unable to avoid. However, as Wagner points out, “AI might fail to observe and account for a freak event that any human would have recognized and adapted his or her behaviour to”; Wagner, supra, note 36.
73 KD Logue, “Solving the judgement-proof problem” (1994) 72 Texas Law Review 1375–94.
74 Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for Artificial Intelligence and Other Emerging Technologies”, European Union, 2019.
75 ibid.
76 One has to note that civil law countries and the EU Member States operate their own liability systems, and the differences among these systems are manifold. Yet generally where an actor fails to take due care and this negligence causes harm to another or where a wrongdoer causes such harm intentionally, this actor is liable to compensate the victim. “The principle of fault-based liability covers harm done to a set of fundamental interests of the person, i.e. life, health, bodily integrity, freedom of movement, and private property; in some legal systems the list of protected interests also includes purely economic interests and human dignity”; Wagner, supra, note 36. For thorough analyses, see C von Bar, The Common European Law of Torts (Vol. 1, Munich, C.H. Beck 1998); and Koch, supra, note 15.
77 Turner, supra, note 7. See also Expert Group on Liability and New Technologies – New Technologies Formation, supra, note 74, at 22–27; and M Infantino and E Zervogianni, “The European ways to causation,” in M Infantino and E Zervogianni (eds), Causation in European Tort Law (Cambridge, Cambridge University Press 2017) pp 604–05.
78 M Alfonseca, M Cebrian, AF Anta, L Coviello, A Abeliuk and I Rahwan, “Superintelligence cannot be contained: lessons from computability theory” (2021) 70 Journal of Artificial Intelligence Research 65–76. See also V Mnih, K Kavukcuoglu, D Silver, AA Rusu, J Veness, MG Bellemare, A Graves et al, “Human-level control through deep reinforcement learning” (2015) 518(7540) Nature 529–33.
79 Alfonseca et al, supra, note 78.
80 ibid. See also N Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, Oxford University Press 2014).
81 I Rahwan, M Cebrian, N Obradovich, J Bongard, J-F Bonnefon, C Breazeal et al, “Machine behaviour” (2019) 568(7753) Nature 477–86.
82 While AI is the product of human creation, today the production process is so complicated that the producer or creator may be unable to predict the way in which the algorithm may respond to all possible input conditions; see Tjong Tjin Tai, supra, note 35.
83 CEA Karnow, “The application of traditional tort theory to embodied machine intelligence,” in R Calo, M Froomkin and I Kerr (eds), Robot Law (Cheltenham, Edward Elgar 2015).
84 MU Scherer, “Regulating artificial intelligence systems: risks, challenges, competencies, and strategies” (2016) 29(2) Harvard Journal of Law & Technology 353, at 363.
85 M Martin-Casals, “Causation and scope of liability in the Internet of Things,” in S Lohsse, R Schulze and D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Baden-Baden, Nomos 2018) p 223.
86 See, eg, Infantino and Zervogianni, supra, note 77, at 606; and H Kötz and G Wagner, Deliktsrecht (Berlin, Franz Vahlen 2016) p 94.
87 For thorough analyses, see Wagner, supra, note 36; Koch, supra, note 15; De Bruyne and Vanleenhove, supra, note 15; Palmerini and Bertolini, supra, note 33; Borghetti (2018), supra, note 43; and Lohsse et al, supra, note 15.
88 M Shifton, “The Restatement (Third) of Torts: Products Liability – the ALI’s cure for prescription drug design liability” (2001) 29(6) Fordham Urban Law Journal 2343–86.
89 For a thorough discussion on whether a piece of software is a product, see Wagner, supra, note 36.
90 See Wagner, supra, note 36. See also Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of the Council Directive on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), COM(2018) 246 final, 8 f.
91 Expert Group on Liability and New Technologies – New Technologies Formation, supra, note 74.
92 ibid. However, such duties are contained in the general safety regulation and sector-specific legislation that is relevant within an AI context (see, eg, Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices). See also Wagner, supra, note 36; and Koch, supra, note 15.
93 ibid.
94 See Turner, supra, note 7, at 98. See also L Griffiths, P de Val and RJ Dormer, “Developments in English product liability law: a comparison with the American system” (1988) 62 Tulane Law Review 353, at 383–85.
95 Namely, since all new technology in essence presents a conceptual challenge to the existing jurisprudence, efficient legal institutions react and generally address such issues by, for example, applying legal standards of reasonableness, duty of care or good faith. See, eg, Guille v. Swan, Supreme Court of New York 1822; and Rylands v. Fletcher (1868) LR 3 HL 330.
96 See OJ Erdelyi and J Goldsmith, “Regulating artificial intelligence: proposal for a global solution” (2018) AIES 95–101; and V Wadhwa, “Laws and ethics can’t keep pace with technology” (2014) 15 Massachusetts Institute of Technology: Technology Review.
97 PW Schmitz, supra, note 58. See also A Agrawal, J Gans and A Goldfarb, “Prediction, judgment, and complexity: a theory of decision-making and artificial intelligence,” in A Agrawal, J Gans and A Goldfarb (eds), The Economics of Artificial Intelligence: An Agenda (Chicago, IL, University of Chicago Press 2019).
98 For syntheses, see A Sykes, “The economics of vicarious liability” (1984) 93 Yale Law Journal 168–206; and RH Kraakman, “Vicarious and corporate civil liability,” in G De Geest and B Bouckaert (eds), Encyclopedia of Law and Economics (Vol. II, Civil Law and Economics, Cheltenham, Edward Elgar 2000).
99 Shavell, supra, note 59. Also see Shavell, supra, note 52.
100 Turner, supra, note 7.
101 ibid.
102 Shavell, supra, note 50, at 180.
103 Expert Group on Liability and New Technologies – New Technologies Formation, supra, note 74, at 45–46.
104 See, eg, Shavell, supra, note 70; and Shavell, supra, note 50, at 139–83.
105 Similar to the required minimum starting capital for corporations.
106 Shavell shows that principals will engage in the activity if and only if their benefits would exceed the expected harm caused; and if they engage in the activity, they will choose the optimal level of care. If individuals’ assets are less than the potential harm, however, they will engage too often in the harmful activity, as they will not then face (effective) expected liability equal to the expected harm, and they will similarly lack incentives to take optimal care; see Shavell, supra, note 50.
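The activity-level distortion described here can be sketched in the notation of note 63, with b denoting the injurer’s benefit from the activity (a restatement of the argument, not Shavell’s exact formulation):

```latex
% Activity decision under limited assets (notation of note 63,
% b = private benefit from engaging in the activity):
% Socially desirable:      engage  iff  b >= x* + p(x*)h
% Judgment-proof injurer:  engages iff  b >= x(y) + p(x(y))y
\[
  b \;\ge\; x^* + p(x^*)\,h
  \qquad\text{versus}\qquad
  b \;\ge\; x(y) + p\bigl(x(y)\bigr)\,y
\]
% Because x(y) minimises x + p(x)y and y < h,
%   x(y) + p(x(y))y  <=  x* + p(x*)y  <  x* + p(x*)h,
% so the private threshold lies below the social one: parties with
% assets below the potential harm engage in the activity too often
% and, per note 63, also take too little care.
```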
107 R Pitchford, “How liable should a lender be? The case of judgement-proof firms and environmental risk” (1995) 85 American Economic Review 1171–86.
108 Namely, although their assets are low and their care would be inadequate, their benefits might still exceed the expected harm that they create; see Shavell, supra, note 50, at 170. Shavell also suggests that minimum asset requirements are somewhat blunt instruments for alleviating the incentive problems; see S Shavell, “Minimum asset requirements and compulsory liability insurance as solutions to the judgement-proof problem” (2005) 36(1) Rand Journal of Economics 63–77.
109 In fact, the European Parliament has already advised the European Commission to consider and adopt a mandatory insurance scheme with respect to robotics and AI; European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
110 Potential injurers may make superior decisions as to whether to engage in an activity and, if they do so, may have stronger incentives to reduce risk when they have at stake at least the required level of assets and/or liability insurance coverage if they are sued for causing harm; ibid. Moreover, a party with assets less than the possible harm can pay at most their assets and thus faces a commensurately low expected liability. But, as Shavell suggests, if the party must purchase liability insurance in order to engage in the activity, they will bear a higher expected liability, and this may improve their decisions as to whether to participate in the activity; ibid. See also PJ Jost, “Limited liability and the requirement to purchase insurance” (1996) 16 International Review of Law and Economics 259–76.
111 Karnow suggests that risk would be assessed along a spectrum of automation: the higher the intelligence, the higher the risk, and thus the higher the premium, and vice versa. If third parties declined to deal with uncertified programs, the system would become self-fulfilling and self-policing. Sites should be sufficiently concerned to wish to deal only with certified agents. Programmers (or others with an interest in using, licensing or selling the agent) would in effect be required to secure a Turing certification, pay the premium and thereby secure protection for sites at which AI agents are employed; CEA Karnow, “Liability for distributed artificial intelligences” (1996) 11 Berkeley Technology Law Journal 147, at 193–94. Interestingly, such a system was already put forth back in the days of slavery to account for the autonomous acts of slaves – admittedly a discomforting comparison; JB Wahl, “Legal constraints on slave masters: the problem of social cost” (1997) 41(1) American Journal of Legal History 1–24.
112 Shavell, supra, note 108.
113 ibid.
114 Forbidding the purchase of liability insurance can then improve incentives to take care if, without a prohibition, AI users or developers would have purchased positive coverage and insurers cannot observe the injurer’s level of care; ibid. See also MK Polborn, “Mandatory insurance and the judgement-proof problem” (1998) 18(2) International Review of Law and Economics 141–46.
115 Shavell points out that such direct regulation – safety standards – will help to form incentives for the principals and the manufacturer to ex ante reduce risk as a precondition for engaging in an activity; see Shavell, supra, note 108. See also BW Smith, “Automated driving and product liability” (2017) 2017(1) Michigan State Law Review 1–74; and KS Abraham and RL Rabin, “Automated vehicles and manufacturer responsibility for accidents: a new legal regime for a new era” (2019) 105(1) Virginia Law Review 127–71.
116 Yet such an intervention might simply forgo the enormous potential benefits of AI. See J Babcock, J Kramár and RV Yampolskiy, The AI Containment Problem (Berlin, Springer 2016) pp 53–63.
117 Kaplow suggests that the basic trade-offs depend on factors including the frequency and the degree of heterogeneity of adverse events, as well as the relative costs of individuals in learning and applying the law; L Kaplow, “Rules versus standards: an economic analysis” (1992) 42 Duke Law Journal 557–629.
118 Galasso and Luo, supra, note 16.
119 Scherer suggests establishing a regulatory authority dedicated to regulating and governing the development of AI; see Scherer, supra, note 84.
120 This also implies the establishment of a specialised superhuman AI regulator, an agency encompassing all superhuman AI-related activities (similar to the US Food and Drug Administration (FDA)).
121 See, eg, Directive 2006/42/EC on machinery; Directive 2014/53/EU on radio equipment; Directive 2001/95/EC on general product safety.
122 EU Commission, COM (2018) 237 final.
123 Such regulation would actually maintain AI user–consumer liability to the extent that users of AI technologies have sufficient incentives to take precautions and invest in training, thus internalising potential harm to others; see Galasso and Luo, supra, note 16, at 499. See also B Hay and K Spier, “Manufacturer liability for harms caused by consumers to others” (2005) 95 American Economic Review 1700–11.
124 Galasso and Luo, supra, note 16, at 499. See also B O’Reilly, “Patents running out: time to take stock of robotic surgery” (2014) 25 International Urogynecology Journal 711–13.
125 See E Von Hippel, Democratizing Innovation (Cambridge, MA, MIT Press 2005); Hay and Spier, supra, note 123; and Galasso and Luo, supra, note 16.
126 Yet it has to be emphasised that such regulation may involve inefficiency because of regulators’ limited knowledge of risk and of the cost and ability to reduce it; see Shavell, supra, note 50, at 171.
127 Abbot and Sarch argue that existing criminal law coverage will likely fall short in cases of hard AI crimes, and that additional AI-related offenses must be created to adequately deter novel crimes implemented with the use of AI; Abbot and Sarch, supra, note 38. Moreover, Hallevy explains that it “seems legally suitable for situations in which an AI entity committed an offense, while the programmer or user had no knowledge of it, had not intended it, and had not participated in it”; see G Hallevy, Liability for Crimes Involving Artificial Intelligence Systems (Berlin, Springer 2015).
128 See, eg, People v Davis 958 P.2d 1083 (Cal. 1998). For a restatement of such a liability with regards to joint enterprise criminal liability in the UK, see R v Jogee; Ruddock v The Queen [2016] UKSC 8; [2016] UKPC 7.
129 Shavell illustrates this principle with an example of a pollution release that would cause harm of $1 million with a 1% probability – an expected harm of $10,000. A firm with only $100,000 of assets could pay at most one-tenth of the actual $1 million harm it might generate, so it would bear an expected liability of only $1,000 rather than the full $10,000, and its incentives to reduce risk would therefore be much too low under the liability system; see Shavell, supra, note 50, at 171.
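The figures in Shavell's example can be restated as a simple expected-liability calculation (a worked restatement of the numbers cited above, not part of the original source): with probability of harm $p = 0.01$, harm $H = \$1{,}000{,}000$ and firm assets $A = \$100{,}000$,

$$
E[\text{liability}] = p \cdot \min(A, H) = 0.01 \times 100{,}000 = \$1{,}000,
$$

only one-tenth of the expected harm $p \cdot H = 0.01 \times 1{,}000{,}000 = \$10{,}000$; the judgment-proof firm's incentive to take precaution is diluted by the same factor.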
130 Abbot and Sarch, supra, note 38.
131 Giuffrida et al suggest that this could be modelled, for instance, on the International Oil Pollution Compensation Funds, created under the auspices of the International Maritime Organization pursuant to the 1992 International Convention on Civil Liability for Oil Pollution Damage and the 1992 International Convention on the Establishment of an International Fund for Compensation for Oil Pollution Damage; see I Giuffrida, F Lederer and N Vermeys, “A legal perspective on the trials and tribulations of AI: how artificial intelligence, the Internet of Things, smart contracts, and other technologies will affect the law” (2018) 68(3) Case Western Reserve Law Review 747–81.
132 An AI compensation fund could, for example, operate like the National Vaccine Injury Compensation Program (VICP). Namely, vaccines create widespread social benefits but are known in rare cases to cause serious medical problems. VICP is a no-fault alternative to traditional tort liability that compensates individuals injured by a VICP-covered vaccine. It is funded by a tax on vaccines that is paid by users. See National Vaccine Injury Compensation Program <https://www.hrsa.gov/vaccine-compensation/index.html> (last accessed 8 February 2021).
133 Moreover, New Zealand has replaced tort law with a publicly funded insurance scheme to compensate victims of accidents. See, eg, PH Schuck, “Tort reform, Kiwi-style” (2008) 27(1) Yale Law & Policy Review 187–90.
134 The Price–Anderson Act: background information (Center for Nuclear Science & Technology Information, La Grange Park, IL, November 2005). See also Abbot and Sarch, supra, note 38.
135 G De Geest and G Dari-Mattiacci, “Soft regulators, tough judges” (2007) 15(2) Supreme Court Economic Review 119–40.
136 Schmitz, supra, note 58.
137 S Rose-Ackerman, “Regulation and the law of torts” (1991) 81 American Economic Review 54–58.
138 De Geest and Dari-Mattiacci, supra, note 135.
139 European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, P8_TA (2017) 0051.
140 LB Solum, “Legal personhood for artificial intelligences” (1992) 70 North Carolina Law Review 1231.
141 GR Wright, “The pale cast of thought: on the legal status of sophisticated androids” (2001) 25 Legal Studies Forum 297, at 297.
142 G Teubner, “Rights of non-humans? Electronic agents and animals as new actors in politics and law” (2007) Lecture delivered on 17 January 2007, Max Weber Lecture Series MWP 2007/04. Also see Teubner, supra, note 9.
143 BJ Koops, M Hildebrandt and D-O Jaquet-Chiffelle, “Bridging the accountability gap: rights for new entities in the information society?” (2010) 11(2) Minnesota Journal of Law, Science & Technology 497–561.
144 T Allen and R Widdison, “Can computers make contracts?” (1996) 9(1) Harvard Journal of Law & Technology 25–52.
145 Teubner, supra, note 9. See also Expert Group on Liability and New Technologies and New Technologies Formation, supra, note 74.
146 ibid.