Introduction
In recent years, hardly any topic in legal scholarship has attracted as much attention as artificial intelligence (AI). This has to do with a whole array of doctrinal legal issues, ethical challenges, socio-technical expectations, and politically charged industrial policy. The discussions about regulating AI are usually an amalgam of legal doctrines, legal methods and author-specific aspirations about what AI may mean for humans and the future applicability of law. The level of sophistication in some of these discussions has been quite high. At the same time, the debates are sometimes quite controversial. On the one hand, it is argued that AI is only another new technology that may create some challenges but will eventually be integrated into and handled by the canon of incumbent law.Footnote 1 On the other hand, it is contended that AI disrupts not only technology but also established legal routines, and that new legal designs are hence needed for a successful integration of AI into law.Footnote 2 That the latter set of questions is on the doorstep has only recently started to become apparent. Although, as engineers point out, autonomous decisions by AI might not yet be the norm, legal and ethical questions can rapidly materialise due to the constant advancement and improvement of AI. It is in this context that opening up a debate on how law can address the challenges posed by AI turns out to be a beneficial process.
Putting all specificities and branchings of the legal discourse aside, the current debate on AI rests on two pillars. A first complex of questions can be traced back to what has been called the ‘responsibility gap’ or ‘accountability gap’.Footnote 3 The accountability gap refers to the problem of allocating responsibility to AI. If an AI entity takes autonomous decisions, then the AI may also be responsible for those decisions. But how can an AI become responsible in a system of legal obligations that is tailor-made for humans and for corporate actors that presuppose human decision-makers? Can AI be liable for its own decisions? If so, what would such an allocation of liability look like? An even stronger case can be made for AIs that communicate with each other and coordinate their decisions, as is the case, for example, with algorithmic collusion. Would those networks create a separate legal entity that can be held accountable and that creates legal consequences for its owners? These looming questions will not be easy to address, since they challenge the incumbent legal system. Admittedly, most AI systems currently have a human in the loop – the technology not yet being mature enough for autonomous decision making. The number of AI systems that may qualify for fully autonomous decision making is, however, likely to increase. One need only think of the rising number of autonomous robots in assistance and care.Footnote 4 Thus, legal scholars have begun to investigate how to address that accountability gap.
The second pillar of the legal debate concerns the consequences that different legal designs for AI may have. These consequences are either ethical or economic, or a mix of both. This is clearly apparent in the EU Commission White Paper on Artificial Intelligence and the Proposal for a Regulation laying down harmonised rules on artificial intelligence, both endorsing an AI approach that is human-centric (ie fulfilling certain societal values) and acts as a catalyst for economic growth (ie aiming to raise per capita income in the EU).Footnote 5 This approach is also mirrored in the recent EU Commission proposal for an update of the Product Liability Directive, aiming at the coverage of AI-related harms.Footnote 6 In that sense, the rules governing AI are discussed not from the doctrinal angle of consistency within a system of norms, but as socio-technological tools to achieve certain ends.Footnote 7 In the case of the EU, the aim is to catch up with the US and China by providing a legal framework for AI that facilitates EU-based business models.
This contribution deals with the accountability gap and the associated legal challenges arising from the deployment of AI by drawing inspiration from a particular instance in Roman law. In fact, an analogy has occasionally been drawn in the literature between how the ancient Romans regulated slaves and how AI might be regulated.Footnote 8 Slaves were allowed and expected to take autonomous decisions up to a certain degree, which implied that those decisions might entail failure and damage. This made it necessary for the law to balance the risks between the master, on the one hand, and third contracting parties, on the other. An effective governance of the contractual relations of slaves was necessary to realise the economic potential of slaves for their masters and to ensure relational trust for third parties.
In this paper, we are not arguing that Roman law provides the blueprint for dealing with today's AI problems, or that it assists in the definition of legal personhood for robots. That would be far-fetched for a couple of reasons. First, slaves were actual persons endowed with the thinking (and sentient) capacities of any human being – something that AI entities lack. Secondly, and relatedly, the range of activities that slaves could carry out was far broader than the range that an AI system can currently perform autonomously. However, Roman law provides a stock of knowledge that can be helpful for sorting out certain challenges that the deployment of AI systems has started to pose and will continue to pose. In other words, our contribution aims to give guidance on the direction in which solutions for the agency problem of AI can be found, bearing in mind that technological progress is a gradual process and that the accountability gap is still nascent, as the series of attempts by today's law-makers to address it shows.
The paper is organised as follows. In Section 1, we explain how the autonomy, association, and network risks of autonomous decision making can lead to an accountability gap in contemporary law. Section 2 delves into Roman law and explores how it dealt with the autonomous decision making of slaves. It will become apparent that there are striking parallels between the legal problems that had to be solved then and those needing to be solved now. It will also become clear that the accumulated knowledge embodied in Roman law offers interesting suggestions on how to shape legal designs aimed at closing the accountability gap of AI. Section 3 puts the autonomous decision making of machines in a wider context by stressing that the law's function of solving conflicts and facilitating cooperation is intrinsically linked with how the allocation of risks is balanced among different stakeholders. The paper ends with a brief conclusion.
1. The triple-helix of the accountability gap
Damages, losses, and disappointed expectations cannot be avoided in a world of uncertainty and fallible knowledge. Neither law nor any other institution can simply rule out losses and misfortune. A trivial example is traffic: terrible accidents can happen at sea, in the streets or in the air, but one would hardly conclude from this that traffic and transportation should be stopped. The typical answer to risk is rather to identify the decision-making entity and to constrain its sphere of activity to a degree that is in accordance with societal standards; this may also include an obligation to compensate victims. Hence, property, contract, and tort law are about identifying responsibility and agency, thereby facilitating human action and trade to the benefit of the parties involved. Where necessary, the public regulation of specific activities complements private law.
Private and public law aim at the same target from different angles: the resolution of conflict by identifying the accountable agent(s).Footnote 9 Conflict resolution should be efficient in the sense that the purposes of all agents affected by a conflict are considered. That means the conflict-resolution mechanisms provided by law should be informed and purposeful, and should prevent strategic action to the disadvantage of third parties. There should be no accountability gap. There might be dissent over the exact meaning of ‘informed’, ‘purposeful’ and ‘strategic’, but the root problem of the accountability gap is straightforward: it refers to a missing link between a law or regulation, on the one hand, and a responsible decision or action, on the other.
The accountability gap is not a severe problem when there are appropriate tools to repair it.Footnote 10 Judges often repair smaller accountability gaps by interpreting an existing law. But there are also larger accountability gaps that cannot easily be bridged by expanding an established law, because the result would not only be a doctrinal ‘overstretch’: the deficient legal design would also lead to dysfunctional decisions and actions.Footnote 11 In these latter cases, new doctrinal solutions and tools are necessary that lead to socially meaningful results. Such paradigmatic shifts in law have happened in the past and are in principle not a new phenomenon. Examples include the invention of the modern limited liability company as a reaction to the new capital-intensive production possibilities of the industrial revolution,Footnote 12 the legal definitions of and ways of dealing with electricity as a sort of intangible good,Footnote 13 and the emergence of enterprise liability.Footnote 14 A similar turning point is being reached with the advent of AIs and robots. Autonomous decision making seems destined to bring doctrinal routine to its limits, whether in automated contracts, the liability of surgery robots in hospitals or algorithmic collusion creating hardcore cartels.
To better understand what principal legal problems would be involved if machines were to take decisions autonomously, it is worthwhile distinguishing between three different types of risk: (1) the autonomy risk; (2) the association risk; and (3) the network risk. These three risks constitute the triple-helix of the accountability gap and may require a recalibration of responsibility between human and artificial decision makers.Footnote 15
The autonomy risk. This sort of risk may emerge when AI entities have leeway to take their own decisions based on what they have learned from (big) data. It is this type of machine autonomy that we often have in mind when we think about robots doing the job of humans. For example, it is not unrealistic to imagine that an AI could, in the future, independently formulate the terms of a contract and sign it.Footnote 16 By doing so, the AI would create a valid obligation vis-à-vis the contractual partner. This is not to say that this scenario is already happening, nor that the AI would automatically become a self-standing legal person. However, this situation would make the AI identifiable as a distinctive entity (legal representative) in the process of contracting, in which the owner (employer) of the AI might be the ultimate principal vouching for the fulfilment of the contractual obligations as well as for any possible damages. An even more common example is extra-contractual liability arising from autonomous healthcare robots, where one would reasonably ask for responsibility on the part of the AI and compensation of victims. In this regard, one should note that contemporary operator liability denies compensation if the operator has maintained the AI according to state-of-the-art safety standards.Footnote 17 Moreover, it is not yet clear whether the software code that constitutes an algorithm falls under the European Product Liability Directive.Footnote 18 While it can be assumed that, at the moment, consumers are still sufficiently protected by legal interpretations of liability laws, sector-specific regulations and insurance schemes (eg car liability insurance), the progress of AI technology is likely to lead to more legal inconsistencies. In addition, this growing inconsistency in legal design would have the side effect of thwarting the incentive to control the developmental risk of the AI, with detrimental effects on the usage of those advanced systems. A concrete example is autonomous vehicles, since technical experts hold out the prospect of full driving automation with no need for a human driver – the so-called levels 4 and 5.Footnote 19 To counteract this scenario, one may argue for a clearer attribution of responsibility. Similarly, and as discussed below, the introduction of the corporate form in the seventeenth century made it easier to find the locus of responsibility, allowing for a more rapid advancement of the industrial revolution.
That does not mean that AIs and robots should be treated legally like humans simply because they create and sign contracts. The machines come into the world as distinctive legal entities because humans would attribute decision-making power to them for pragmatic reasons. Accordingly, the deliberate attribution of decision-making power may create a distinct locus of responsibility that is not fully covered by human oversight, although a human owner might be in the background as the principal.Footnote 20 This mismatch between responsibility and decision making comes strikingly to the fore in academic and policy discussions when ‘explainability’ of algorithmic decision making is demanded.Footnote 21 But, at the same time, it is a core feature of machine learning that the exact reasons leading to a decision remain in a black box. That makes it a deliberate and consequentialist decision of humans to attribute responsibility to AIs for the risks that they may cause, because this legal design yields advantages for society over designs that would simply stretch incumbent legal categories. That does not mean that AIs’ autonomy would be unrestrained or that responsibility would become a shallow category. On the contrary, it means that a socially advantageous legal design becomes integrated into the conflict-resolution mechanisms of doctrinal law.
To underscore the last point, it is worth remembering the introduction of the limited liability company some 200 years ago. There, too, it was not a human but a corporate actor with its own legal personality that was invented, against the background of colonial trade and the need to raise financial capital for the new production possibilities of the industrial revolution. Hence, the introduction of legal personhood for companies was a deliberate act to reap the benefits of technological progress and the exploration of new parts of the world.Footnote 22 The introduction of new corporate forms was thus not ad hoc, but a process of legal experimentation that continued until adequate risk allocations between a company's stakeholders had been found. Moreover, the vast literature on regulatory competition between company laws indicates that legal experimentation in search of the best legal designs never comes to an end.Footnote 23 In addition, the history of company law teaches us that there is a need not for one, but for very different corporate forms with very different levels of sophistication – a point to which this paper returns in the final section.
In summary, the autonomy risk may emerge when decision-making power is delegated to AIs. This delegation happens for good reasons, because otherwise the benefits of AI cannot be reaped. But it may bring with it a need to recalibrate the accountability between a human principal and the AI as agent. This recalibration must close the accountability gap in order to resolve conflicts in cases of AI failure as well as to re-establish doctrinal consistency. Moreover, the accountability gap must be closed in a smart way, meaning that the legal design must fulfil its purpose effectively and should facilitate the application of algorithmic decision making.
The association risk. This type of risk may materialise in man-machine associations, that is, when humans and AIs collaborate and form an entity which interacts with other entities. An illustration is a surgeon who collaborates with a surgery robot to get the best result for a patient. This can be the case of an outside medical specialist who supervises the operation of the Smart Tissue Autonomous Robot – an AI that can autonomously perform laparoscopic surgery – owned and operated by the hospital.Footnote 24 This scenario makes it difficult to allocate responsibilities for compensation purposes – eg whether the doctor should be considered an operator or a user.Footnote 25 Another example is the decision on a mortgage for a family house, made by a bank employee in conjunction with predictive analytics software that scores the couple applying for the mortgage as a high default risk due to a bias in the model used.Footnote 26 In this scenario, it is a tall order to prove voluntary or involuntary discrimination by the bank, which has partly relied on (opaque) AI technology.Footnote 27 In man-machine associations, man and machine bring in their comparative advantages, which meld into one service. In the case of misfortune or damage, it is barely possible to trace back sequentially all the decisions made by either the machine or the human and to allocate responsibility accordingly.Footnote 28 Therefore, those associations of man and machine may be regarded as a symbiosis that creates its own legal entity, at least as a locus of responsibility in cases of contractual and non-contractual liability.Footnote 29 This would still leave the ethical obligation with the human but recognise that the decisions have been made in conjunction with a machine. Any regulations or legal obligations would then be targeted at the hybrid and not only at the human(s) involved.Footnote 30 This yields the advantage that potential victims of the hybrid know exactly whom to approach in cases of damage or malperformance.
The network risk. This risk type points to a scenario in which decision making is located in a network of AIs. AIs in a network learn from each other and can coordinate their decisions. Such networked AIs can do a whole range of things. Surgery robots may learn from each other around the world and boost their capabilities.Footnote 31 That is especially relevant for complex surgery that does not happen very often at a single hospital, or for the gene sequencing for vaccines, which is largely done by AIs.Footnote 32 In a pandemic, networked AIs learn from each other worldwide. But networked AIs also analyse stock markets and may increase the correlation of risk and decrease diversification, thereby contributing to the worsening of systemic events and financial crises.Footnote 33 Networked AIs are also able to collude with each other and to execute cartel strategies not seen before; one only has to think of sophisticated price discrimination strategies in flight or hotel booking systems. Networked AIs open the door to a new world of possibilities in all aspects of life – health, business, education, sustainability, or policing – for better or for worse.
The most important feature of AI networks is that they take decisions without human interference. This implies that there is basically no human who could be held accountable and to whom a decision could be traced back. A poignant example of the doctrinal problems which emerge is algorithmic collusion.Footnote 34 Think, for example, of flight booking systems which learn from each other how to coordinate price-discriminatory tactics. Those systems can coordinate with each other, using collusive tactics better than any human could, because quantities, qualities and prices are automatically documented in the big data. Maintaining cartel stability is also less of a problem for AIs, because relational trust is not a valid category for a machine. The networked AIs simply keep to the collusive tactics learned by their algorithms. As a result, consumers and the public may suffer considerable damage from networked AIs. Hence, public authorities will certainly stop those activities when they detect them, possibly by simply pulling the plug. That means the public attaches a consequence to behaviour that is not in the public interest and is regarded as illegitimate. For economic and ethical reasons, society does not allow algorithmic collusion.
The problem with networked AI is, however, that traditional legal doctrine has major difficulties in solving the resulting challenges within a consistent system of legal reasoning. This has to do not only with the lack of human responsibility in AI networks, but also with the lack of human moral judgement that could be addressed by legal norms. In other words, legal doctrine runs into problems because there is no human to whom its routines could be addressed. This becomes clear when one looks specifically at the case of algorithmic collusion.
Collusion through networked AIs has the evident effect of an anticompetitive agreement. But an agreement requires at least a meeting of minds – the will of someone to make an offer to collude or to accept such an offer. This implies that there is some sort of communication and intent underlying any agreement. This carries even more weight if a legal order attaches criminal sanctions to collusive tactics and charges them with moral sentiment. It is therefore implicitly assumed that there is a human who is responsible and morally in charge of the collusion. Typically, this is the company involved in the collusion and its management. But with networked AI there is no human who could be morally targeted, or who would be deterred by the threat of a criminal sanction. Moreover, the responsibility of a human for the actions of the AI cannot easily be demonstrated when there is no evidence of collusive intent and no documentation or communication about it.Footnote 35 The AI remains a black box, although the call for ‘explainability’ is becoming louder.
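How collusive outcomes can arise without any meeting of minds can be illustrated with a minimal, self-contained simulation. The following Python sketch is a stylised toy under stated assumptions – the price grid, the demand function and the learning parameters are all hypothetical, and it models no real booking system. Two independent Q-learning agents set prices in a repeated duopoly; no message ever passes between them, and each agent observes only its rival's last price and its own profit.

```python
import random

# Stylised repeated-pricing duopoly: two independent Q-learning agents.
# No message ever passes between them - any coordination that emerges
# is learned from profits, not agreed.

PRICES = [1, 2, 3, 4]          # hypothetical price levels (1 = competitive, 4 = monopoly)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def profit(own: int, rival: int) -> float:
    """Toy demand: the cheaper firm serves the market; ties split it."""
    if own < rival:
        return float(own)
    if own == rival:
        return own * 0.5
    return 0.0

# State for agent i = rival's last price; Q[i][state][price] = learned value.
Q = [{s: {p: 0.0 for p in PRICES} for s in PRICES} for _ in range(2)]
last = [random.choice(PRICES), random.choice(PRICES)]

for _ in range(200_000):
    acts = []
    for i in (0, 1):
        state = last[1 - i]
        if random.random() < EPS:                               # explore
            acts.append(random.choice(PRICES))
        else:                                                   # exploit
            acts.append(max(PRICES, key=lambda p: Q[i][state][p]))
    for i in (0, 1):
        s, a = last[1 - i], acts[i]
        reward = profit(acts[i], acts[1 - i])
        next_state = acts[1 - i]
        best_next = max(Q[i][next_state].values())
        Q[i][s][a] += ALPHA * (reward + GAMMA * best_next - Q[i][s][a])
    last = acts

print("final prices:", last)   # may settle above the competitive price, depending on parameters
```

In experiments of this kind reported in the economics literature, such agents frequently settle at supra-competitive prices and punish deviations, even though nothing resembling an offer or an acceptance ever occurs – which is precisely why the doctrinal category of ‘agreement’ fails to attach.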
In the end, it is the lack of legal personhood that makes it impossible to integrate the case of networked AIs into the incumbent doctrinal conflict-resolution system. Incumbent legal doctrine presupposes that decision making is at least somehow anchored with humans. Networked AIs fail in this respect: there is no human in the loop who could be held accountable without overstretching the incumbent law and running into doctrinal inconsistencies. Therefore, it is reasonable to conceive of networked AIs and their actions as separate legal entities that create specific risks for which they are accountable. Those risk pools are identifiable and can be regulated as well as obliged to pay compensation.
The incumbent legal system is not fully equipped to close the accountability gap that can emerge from the three identified risks of AI. While some attempts have been successful, a general framework that would cover all possible instances is yet to be found. It is in this context that many scholars have started discussing possible alternatives. However, this is not an entirely novel problem in legal history. There have been other instances in which the law had to address lacunae in accountability. One such historical occurrence is the emergence of slave-run business models in ancient Rome. The expansion of social and economic activities through slaves led the praetors, the Roman magistrates with responsibility for litigation, to introduce new legal remedies – the actiones adiecticiae qualitatis. This ‘legal invention’ allowed the establishment of a sort of indirect agency for entities which did not have legal personality and were thus subject to others’ legal authority (alieni iuris). The paper now turns to this legal invention of the praetors and relates it to today's legal problems of conceiving AIs as legal entities.
2. Mind the gap: how Romans closed the accountability gap
(a) The slaves-AI analogy
The literature on AI has occasionally looked at how the ancient Romans dealt with the accountability gap created by assigning business activities to slaves.Footnote 36 In both AI and slave-run businesses, the underlying problem is one of (indirect) agency. Just as the user or operator of AI cannot fully predict or control how the AI will behave and decide, so the master did not know how his slave would behave. Of course, the slave was a human, unlike AI. This implies that slaves potentially had full freedom and autonomy in carrying out any (business) activity – something that is presently beyond the abilities of AI entities. However, what is remarkably interesting for the present contribution is how Roman law dealt with a scenario in which the slave, who was not granted legal personality, could take autonomous decisions that had effects on the master. In other words, slave-run business in Roman times involved a sort of agency under structural uncertainty, given that micro-management of the slave by the master was either impossible or unreasonable. Hence, the fact that the slave was human may matter for the detailed incentives of the governance system, but it is less relevant for solving the structural problem of agency under uncertainty. It is on this latter aspect that the present contribution, adopting a future-oriented outlook, focuses its attention.
The agency problem between master and slave emerged after the second century BC, when ancient Rome was in the early days of becoming a hegemonic power in the Mediterranean. Military success led to a sharp increase in the number of slaves. The traditional familia expanded, containing a relatively high number of slaves. Relatedly, the pater familias tended to delegate business activities to his slaves (and/or other persons-in-power such as filii).Footnote 37 Hence, the number of slaves who acted as managers of the family business, carrying out transactions and negotiating binding contracts on behalf of their masters, increased considerably.Footnote 38
This shift in ancient management practice created a new problem for the Roman regulatory framework: how to deal with the accountability gap? According to the ius civile in force at the time, masters did not have to answer for their slaves’ business activities vis-à-vis third parties, ie suppliers and customers. The guiding principle was ‘alteri stipulari nemo potest’: obligations would bind only the parties which entered directly into an agreement, and not third parties – the so-called privity of contract.Footnote 39 This regulatory approach granted considerable protection to the pater familias, who could benefit from the slaves’ business activities without being accountable for their actions – the only exception being delicts committed by slaves, which rendered their master noxally liable.Footnote 40 On the other hand, third contracting parties were in a weak position: slaves, who did not have legal personality, could not be brought to court, so the contractors of slaves would end up with insufficient compensation even though the slaves were contractually liable. This situation from the early days of Roman slavery seems to mirror today's situation, in which employing AIs under the EU Product Liability Directive creates legal inconsistencies and produces economically wrong incentives to employ AI.Footnote 41 It is therefore no wonder that the EU, confronted with this problem, has initiated a debate about an adaptation of the Product Liability Directive and a more coherent integration of AI into private law.
In ancient Rome, the accountability gap made the risk allocation between the parties directly and indirectly involved so asymmetric, and the incentives for efficient contractual outcomes so weak, that the incumbent regulatory framework could hardly be a sustainable long-term solution. Contracting third parties were simply reluctant to do business with other masters’ slaves, given that there was no legal certainty that a master would honour the terms of the contract.Footnote 42 Hence, a change in the regulatory framework was necessary.
The so-called actiones adiecticiae qualitatis were progressively introduced.Footnote 43 These were a set of remedies granted by the praetor to contracting third parties to seek legal protection against the master of a slave with whom they carried out business transactions. One may understand this as a sort of ‘piercing the corporate veil’ from the slave through to the legal entity of the master. The aim of these legal remedies was to ensure some additional responsibility on the part of the master and, indirectly, to give him some incentive to oversee what his slaves were doing.Footnote 44
When looking at the Roman regulatory framework, however, the part that attracts most attention from scholars is the creation of a sort of corporate limited liability through the peculium and its associated actio de peculio.Footnote 45 The peculium was a notionally separate asset pool, distinct from the property owned by the master (res domini). Within the financial parameters of the peculium, the slave independently administered his business transactions. In other words, the slave was assigned a maximum capital that vouched for his transactions. Based on this historical experience, Pagallo considered the creation of a digital peculium for AI applications.Footnote 46 Whether this already entails the necessity of creating legal personhood for AI in a strict sense is a doctrinal question that need not be answered here. Drawing a parallel between the peculium and the liability of AI is a fascinating proposal. But one must acknowledge that the establishment of a peculium and its associated actio de peculio represents only a part of the more composite regulatory landscape offered by Roman law. Other legal solutions came into play and complemented the actio de peculio.
There were in fact six legal remedies (the actiones adiecticiae qualitatis) offered by the praetors. These remedies can be distinguished according to whether they established unlimited or limited liability for the master regarding the slave's business transactions vis-à-vis contracting third parties. As discussed further below, one can conceive of this as a direct consequence of the more differentiated legal needs of consumers and businesses in a growing society. The actio exercitoria, actio institoria, and actio quod iussu belong to the remedies granting unlimited liability. The actio de peculio, actio de in rem verso, and actio tributoria are, conversely, the legal remedies that ensured the master's limited liability. The paper now turns to review these six legal remedies and uses the resulting accumulated knowledge to reflect on contemporary discussions of AI. While some more specific points for today's legal issues are raised in the following subsection, the next main section adopts a more encompassing view.
(b) The specific legal remedies in Roman law
The actio exercitoria and the actio institoria were two similar remedies aimed at protecting contracting third parties which had business transactions with a slave acting as a maritime or commercial entrepreneur. The actio exercitoria was used whenever an exercitor (either the owner of a ship or the one who rented it)Footnote 47 entrusted the management of the ship to his slave, so that the latter became shipmaster (magister navis) and could purchase equipment or goods.Footnote 48 Evidently, the actio exercitoria was a kind of insurance that allowed the remote contractors of slaves to trust in the cooperation of the master, even though the ship might be hundreds of miles away from him. The actio institoria, on the other hand, referred to the institor,Footnote 49 who was the administrator of any commercial activity.Footnote 50 As Paulus defines it, ‘A manager is a person who is appointed to buy or sell in a shop or in some other place or even without any place being specified’.Footnote 51 Thus, the actio exercitoria and the actio institoria allowed contracting third parties to sue the master, who was called upon to fulfil the obligations undertaken by the slave.Footnote 52
In both legal remedies, the master's responsibility was limited only by the praepositio, an explicit authorisation by the master for his slave to perform (only) certain activities.Footnote 53 Hence, the master would incur unlimited liability only for transactions falling within the scope of the activities mentioned in the praepositio. Transferring this idea to the employment of AI would mean making the owner of the AI accountable only for the tasks that the AI is supposed to perform within the activities that characterise the business of its owner. In other cases, the owner would not be held accountable and, at most, liability could be shifted to the producer or programmer of the AI. What the actual allocation of responsibilities across the value chain (eg producers, operator, owner) would look like in practice would depend on different factors, such as the level of automation or the specific sector involved. However, Roman law shows that the potential of private law does not yet seem exhausted in the contemporary proposals for regulating AI. Moreover, it hints that legal solutions more closely tailored to the challenges arising from an accountability gap are possible, as called for by current scholarship.Footnote 54
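To make the scope-based idea concrete, consider the following minimal sketch. It is purely illustrative – the task names, the binary in/out-of-scope test and the fallback to the producer or programmer are hypothetical assumptions, not a rule found in any legislation or in the Roman sources – but it shows how a praepositio-style authorisation could anchor liability.

```python
# Purely illustrative sketch: liability attaches to the AI's owner only for
# tasks inside an explicitly authorised scope, mirroring the praepositio;
# outside that scope, liability shifts up the value chain.
# All task names are hypothetical.

AUTHORISED_TASKS = {"book_flights", "issue_refunds"}   # the owner's 'praepositio'

def liable_party(task: str) -> str:
    """Return who answers for a transaction the AI concluded."""
    if task in AUTHORISED_TASKS:
        return "owner (unlimited liability within the praepositio)"
    return "producer/programmer (outside the authorised scope)"

print(liable_party("book_flights"))       # within scope  -> owner
print(liable_party("trade_derivatives"))  # outside scope -> producer/programmer
```

The design choice mirrors the Roman rule: within the authorised scope the owner answers fully; outside it, the third party must look elsewhere.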
The third (and last) legal remedy establishing the master's unlimited liability was the actio quod iussu.Footnote 55 This remedy aimed to provide contracting third parties with legal protection for business transactions concluded with a slaveFootnote 56 whom the master had instructed (quod iussu) to fulfil that specific transaction.Footnote 57 In addition, this remedy could also be brought against a master who ratified what his slave had done without authorising him beforehand.Footnote 58 The appointment by command (iussum) had more formal requirements than the praepositio: it could only occur before witnesses, by letter, on oath, or through a messenger.Footnote 59 The extent of the activities encompassed by these two types of authorisation also differed: while the iussum could be limited to a specific act, the praepositio embraced several activities. This distinction has led the literature to argue that recourse to the legal remedies varied depending on the specific context.Footnote 60 The actio exercitoria and the actio institoria were usually applicable where the slave acted as a ‘manager’, whereas the actio quod iussu was usually used for slaves who performed a single order of the master.Footnote 61
The distinction between the praepositio and the iussum might appear at first glance to be only a procedural clarification between a general and a specific rule. In fact, the main difference is that each legal remedy confined the owner's liability to a specific facet of the slave's autonomy. Looked at this way, it is possible to find another parallel to recent debates. For example, in the EU there is an ongoing debate about whether to regulate AI according to a general standard, applicable to all industries indifferently, or according to the sectoral and technological specificities of AIs that create certain risk levels.Footnote 62
Roman law makes us aware that the latter solution is possible. In other words, a regulatory framework can accommodate a series of remedies, each one confining liability to specific functions of AI's autonomous nature. That way, it would be possible to develop a sort of regulatory experimentation, whereby different AI entities may be subject to different liability schemes, so that the rules for AI would better align the needs of business with those of society.Footnote 63 In its proposed regulatory framework for AI, the EU foresees, at least, so-called regulatory sandboxes that will allow regulatory opt-outs for certain AI applications for a limited time.
As previously mentioned, Roman law foresaw not only cases in which the master incurred unlimited liability. Other remedies provided for the master's limited liability. Here, the peculium played a decisive role, because it was the only source from which contracting parties could satisfy their claims against the slave.
The most prominent legal remedy was the actio de peculio, which allowed a party to receive legal protection for business transactions contracted with the slave (or any other person in power).Footnote 64 The master would guarantee the contract within the limits of the peculium originally granted to the slave.Footnote 65 According to Roman law, the grant of free administration of the peculium (concessio liberae administrationis)Footnote 66 was equivalent to a general authorisation for the slave to do business within the parameters of the peculium. This legal design strongly supported the entrepreneurial activities of the slave and reduced the need for those activities to be monitored by the master. Because advanced AIs will become more entrepreneurial in the future and may conclude contracts that have not been foreseen, the legal design of the peculium may become an interesting starting point for a better integration of AI into private law.Footnote 67 Regulations which simply suppress the entrepreneurial activities of AI clearly lead to economic disadvantages by foreclosing many welfare-increasing opportunities. Therefore, identifying AIs as legal entities with a specified autonomy and with liability capped at an amount specified beforehand is a sensible proposal. This would not exclude accompanying liability insurance coming into play to compensate extra-contractual damages.
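The core mechanics of such a ‘digital peculium’ can be sketched in a few lines. The following fragment is a deliberately simplified illustration under stated assumptions – the class name, the single fund and the figures are all hypothetical – showing only the essential rule: creditors of the AI recover from a capped, earmarked fund, never from the owner's remaining assets.

```python
from dataclasses import dataclass

# Illustrative sketch of a 'digital peculium' (after Pagallo): the AI entity
# contracts autonomously, but claims against it are satisfied only out of a
# capped fund endowed by the owner. Names and amounts are hypothetical.

@dataclass
class DigitalPeculium:
    fund: float                      # capital the owner has set aside

    def settle_claim(self, claim: float) -> float:
        """Pay a creditor out of the fund; recovery is capped at what is left."""
        paid = min(claim, self.fund)
        self.fund -= paid
        return paid

peculium = DigitalPeculium(fund=10_000.0)
print(peculium.settle_claim(4_000.0))   # -> 4000.0 (fully satisfied)
print(peculium.settle_claim(8_000.0))   # -> 6000.0 (capped: fund exhausted)
print(peculium.settle_claim(1_000.0))   # -> 0.0 (owner not liable beyond the fund)
```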
Another remedy offered by Roman law to protect contracting parties was the actio de in rem verso.Footnote 68 This remedy was applicable whenever the benefits arising from a contract concluded by the slave were incorporated into the master's assets.Footnote 69 In other words, a master who enjoyed the benefits of the slave's transaction implicitly assumed the obligation vis-à-vis the third party.Footnote 70 Because of this reciprocity, some scholars posit that the actio de in rem verso was usually applicable in contexts where slaves were not business managers ‘by profession’.Footnote 71 In those cases, contracting third parties would be more likely to have recourse to the actio de peculio. In addition, it is noteworthy that the main distinction between the actio de in rem verso and the actio quod iussu is that the former was applicable whenever the slave performed a business transaction that was useful to the master, but without his actual knowledge.Footnote 72
The actio de in rem verso can trigger complex liability cascades and therefore plays only a niche role in today's civil laws. However, it offers an interesting perspective on the regulation of the association risk, where a human co-works with an AI. There, the AI typically works for the financial interest of its master. At the same time, the collaboration might be so close and intertwined that it is not possible to decipher whether the AI or the human is accountable for a certain action. In those cases, the actio de in rem verso gives a clear hint: make the master of the AI contractually liable if she enjoyed the benefits of the commercial collaboration. In turn, the master may seek financial relief from the producer or programmer of the AI. In any case, an injured third party could demand compensation from the owner of the AI if the latter enjoyed benefits from the human-AI association, even where it is not possible to identify who caused the breach of obligations. A similar approach could be advanced for the network risk: if the owners of an AI enjoy the benefits from a network of AIs, they will be obliged to compensate victims. That way, the owners of an AI have a strong incentive to oversee the behaviour of AIs in forming algorithmic collusions.
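The decision rule just described can be stated compactly. The sketch below is again hypothetical – the function name and the two boolean tests are illustrative assumptions, not doctrine – but it captures the benefit-based cascade: the third party recovers from the owner whenever she benefited, and recourse then flows upstream.

```python
# Illustrative sketch of the actio de in rem verso logic for human-AI
# associations: benefit to the owner grounds liability even where the
# cause of the breach cannot be attributed to human or machine.

def compensation_chain(owner_benefited: bool, cause_identifiable: bool) -> list[str]:
    if owner_benefited:
        # Third party recovers from the owner regardless of who caused the
        # breach; the owner may then seek relief from producer or programmer.
        return ["third party <- owner", "owner <- producer/programmer (recourse)"]
    if cause_identifiable:
        return ["third party <- identified wrongdoer"]
    return []  # residual accountability gap: no benefit, no identifiable cause

print(compensation_chain(owner_benefited=True, cause_identifiable=False))
```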
Finally, the last remedy offered by the praetor was the actio tributoria.Footnote 73 With this legal remedy, it was possible to ensure a par condicio creditorum between contracting third parties and the slave's master over the assets belonging to the peculium.Footnote 74 Traditionally, the contracting third parties’ receivables were paid only after those of the master had been deducted.Footnote 75 In this way, the master could allow the slave (or another person in power)Footnote 76 to carry on several business transactions in parallel without worrying about repaying all the receivables, even within the peculium. As a result, there was a chance of over-indebtedness and default when liquidity was lacking. The actio tributoria was therefore introduced to prevent this behaviour by the master. The master, being aware of the various debts incurred by his slave, would become liable and be treated on the same footing as contracting third parties in the distribution of the stock of the peculium (merx peculiaris).Footnote 77 As Albanese points out, Roman law could have treated the master's knowledge and approval of a transaction with the merx peculiaris in the same mould as a praepositio.Footnote 78 However, one must note that there is a strand of scholarship which disputes whether this remedy actually belongs to the actiones adiecticiae qualitatis.Footnote 79
From the actio tributoria, too, we can learn something for today's AI regulation. The owner of an AI may be negligent in the sense that she lets an AI perform too many and/or too risky business transactions (eg financial risks), whereby her gains would be secured while the claims of the whole pool of third parties would not be. An example is civil law liability in the case of algorithmic collusion between two or more AIs, where individual AIs not only perform their primary task but also interact with each other to gain further benefits by coordinating their actions. Today, it is not self-evident that a doctrinal link can be made between the collusion of AIs and the owners of the AIs.Footnote 80 Within the logic of the actio tributoria, the masters of all colluding AIs would be identified because of the benefits from collusion. A financial pool would be created from which the creditors are compensated according to the quotas decided by the court. In this way, Roman law may give a fresh idea of how to deal with the network risk of AI.
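The distributional core of the actio tributoria – no priority for the owner's own claims within a limited pool – reduces to simple pro-rata arithmetic. The following sketch is illustrative only; the creditor names, claim amounts and pool size are hypothetical.

```python
# Illustrative sketch of the actio tributoria logic: all claimants, including
# the AI's owner, share a limited compensation pool pro rata, instead of the
# owner's claims being deducted first. Figures are hypothetical.

def distribute_pro_rata(pool: float, claims: dict[str, float]) -> dict[str, float]:
    """Pay each creditor the same quota of their claim out of the pool."""
    total = sum(claims.values())
    quota = min(1.0, pool / total)   # fraction of each claim that can be met
    return {creditor: claim * quota for creditor, claim in claims.items()}

claims = {"owner": 5_000.0, "third_party_A": 3_000.0, "third_party_B": 2_000.0}
print(distribute_pro_rata(pool=6_000.0, claims=claims))
# -> each creditor recovers 60% of their claim; the owner enjoys no priority
```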
3. Back to the future – legal differentiation and the timing of legal innovation
In Section 2 a link was made between how Roman law regulated the relations between a master, a slave and a third party in contract law and what can be learned from this for today's challenges of AI regulation. Central to Roman law is the master's consent to the transactions of the slave. Hence, legal protection for third contracting parties was based either on the master's explicit authorisation (praepositio and iussum) or on the establishment of a peculium. The peculium can be considered an implicit authorisation for the slave to perform autonomous business transactions for the master.
Moreover, the type of the master's authorisation played a prominent role in determining the kind of liability masters incurred. For example, Miceli claims that unlimited liability was based on the existence of an explicit authorisation, owing to the stable and continued cooperation between master and slave.Footnote 81 The lack of an explicit authorisation, conversely, could have been the reason why the master incurred only limited liability for the slaves’ transaction activities.Footnote 82
Roman law thus foresaw context-specific ways of closing the accountability gap between masters and slaves, depending on the kind of business, the frequency of business and the experience of the slave. And this is exactly what can be learned for closing the accountability gap that can emerge between the owner of an AI, the AI and contractual third parties: attention to the specific contexts in which AIs contract and to how these contexts place obligations on the owner and on third parties. To put it differently, it is doubtful whether simple extensions of incumbent private law will be sufficient to fully realise the economic potential of AI. The Roman law experience would lead one to expect a much more differentiated menu of legal options. However, legal differentiation is not the only lesson to be learned from Roman law. By delving into the academic controversy over the chronological order in which the legal remedies were introduced, it is possible to infer other related observations that may also become relevant for today's AI problems.
It has already been argued that the possibility of establishing a peculium and its associated actio de peculio can be interpreted as a proto-limited liability scheme. The peculium has therefore been applauded as the zenith of Roman law-making. But this overlooks the fact that the actiones adiecticiae qualitatis were not granted all at once by the praetor, but were introduced consecutively over time as adaptations to the legal needs of Roman businessmen in a prospering society.Footnote 83 The legal development of the actiones adiecticiae qualitatis was a gradual process.
According to the Institutiones of Gaius, the order of the actiones adiecticiae qualitatis was the following: actio quod iussu, actio exercitoria, actio institoria, actio de peculio and actio de in rem verso. The same order can also be found in the Digest reporting the praetorian edict, except that the actio quod iussu comes last (and not first). Hence, most Romanists believe that the legal remedies establishing the master's limited liability were the last to be introduced.Footnote 84 This conventional view can be further divided into two camps. In fact, some authors argue that the correct order was the one reported in the Institutiones.Footnote 85 A second strand of scholarship believes that it was the Digest which reported the accurate chronological order by which legal remedies emerged over time.Footnote 86
However, de Ligt suggests that the master's limited liability was only an intermediary stage, before Roman law provided legal remedies that established unlimited liability for the master.Footnote 87 This is quite an interesting observation, because it takes into account that, when the Romans started having recourse to slaves for commercial transactions, the masters’ activities were strictly separated legally from those of the slaves, and it seems unreasonable to assume that Roman law immediately established a system of unlimited liability for masters.Footnote 88 Accordingly, the praetors would at first have devoted more attention to the needs of the pater familias (ie the master), and only later shifted attention to contracting third parties, making it necessary to arrive at a more elaborate liability regime.Footnote 89
This alternative interpretation is particularly relevant because it shows that the accountability gap is not a merely technical problem but depends on which factors the legislator deems more relevant. If the praetor thinks in terms of the pater familias, a limited liability scheme is the logical starting point: it would be unreasonable to believe that Roman law would immediately establish unlimited liability for the pater familias. But if the praetor thinks instead in terms of the problems created by the slaves’ lack of legal personality and the consequent reluctance of third parties to contract, then an unlimited liability scheme is the logical starting point for law-making.
The gist of this debate is the question of how the risks among the various parties involved in slaves’ business activities should be allocated and which incentives this allocation of risks sets for doing business. Adopting the conventional view means that a limited liability system was introduced only relatively late in Roman history, when the praetor had realised that business activities were being inhibited by quasi-unlimited liability.Footnote 90 Adopting de Ligt's alternative view means, on the other hand, that the limited liability scheme granted by the actiones adiecticiae qualitatis was introduced relatively early in Roman law as an attempt to balance the master's interests with the opposing interests of third parties.Footnote 91 Only later was unlimited liability permitted, when the master gave explicit authorisations to slaves with professional business experience (ie iussum and praepositio).
The lack of sufficient evidence to corroborate one interpretation over the other makes this interpretative exercise, to a certain extent, speculative. However, regardless of which interpretation is historically correct, the controversy shows two important aspects which seem valuable for today's legal assessment of AI. First, the ancient Romans did not resort to only one legal solution to address the accountability gap. Rather, they offered a series of different regulatory solutions depending on the contextual needs that emerged at specific points in time. The legal remedies adopted at a later stage did not change the incumbent legal system but complemented it.
Secondly, the accountability gap problem, together with the pursuit of different societal goals, is essentially a matter of allocating risks among different stakeholders and choosing a starting point for legal development. If the regulator prefers the master's view, then a limited liability system will be preferred as the starting point. The master can ‘experiment’ with new business models and technologies and learn how to deal with completely new and uncertain situations, without fearing immediate bankruptcy. If the regulator instead adopts the third parties’ view, then legal evolution would start from unlimited liability and move towards limited liability. In this scenario, society would prioritise the legitimate interests of third parties over the business interests of the master. Only when the need to innovate and to boost business activities becomes stronger over time will there be a shift to a limited liability system.
These two observations have concrete policy implications when contextualised to AI. For example, they suggest that fitting AIs with limited liability – thereby facilitating entrepreneurial ventures, even at the cost of inhibiting more balanced and complex transactions – is not as fanciful as one may think at first glance. The more sophisticated liability regimes might be saved for a future in which AIs have many more faculties and have become more established in society. Moreover, they suggest that initially opting for a certain regulatory scheme would not necessarily foreclose other possible legal solutions, especially when certain needs materialise at a later stage and create a demand for change. Hence, a more heterogeneous legal framework, in which stakeholders can have recourse to multiple legal solutions and choose the one that comes closest to their interests, seems the more sensible solution, given the inherently dynamic nature of AI technology. This more open approach makes it possible for the most effective legal solution to emerge over time, rather than one specific route of legal development being set in stone.Footnote 92
Conclusion
This paper dealt with the accountability gap that may arise from the full deployment of AI. It argued that the technical advancement of AI creates new challenges for legal scholarship, which are likely to expand further as autonomous decision making plays an increasing role, giving rise to the autonomy risk, the association risk, and the network risk. Incumbent law does not always seem fit to address the accountability problem without overstretching existing doctrine. A somewhat similar problem existed in ancient Rome. At that time, the emergence of slave-run business models required regulatory action by the praetors and the establishment of new legal routines. The regulatory response of Roman law was context-specific and geared towards the actual needs of the stakeholders, ie the pater familias and contracting third parties. Relatedly, Roman law did not establish a single, exclusive legal solution: the praetors allowed for the use of several legal remedies (the actiones adiecticiae qualitatis) which the concerned stakeholders could choose between depending on their needs.
Admittedly, the accountability gap posed by the deployment of AI in contemporary societies has some intrinsic features that complicate the comparison with the use of slaves in ancient Rome. For instance, slaves had thinking and sentient capacities, which AI entities lack. Hence, Roman praetors could shape the legal remedies with their possible incentive effects on slaves’ behaviour in mind. Although the incentive effects of legal remedies can be discussed for producers or programmers of AI applications as a means of deterring machine failures, they would be of no use for AI entities themselves. Indeed, whether an AI system has a sensory output which can imitate a human, or whether it is fundamentally different and what that may mean, is an epistemological question that has not yet been answered and possibly never can be.Footnote 93 On the other hand, the invention of the corporate form testifies that an effective allocation of risks and responsibilities is not bound to the human physis. Furthermore, masters were legally liable only insofar as the actions of their slaves had generated either contractual or delictual liability. In other words, under Roman law the existence of fault on the part of the slave had to be proven. This legal requirement cannot easily be fulfilled when it comes to holding AI entities liable. Nonetheless, while the concept of fault may be far-fetched for AI applications, it may be possible to refer to other terms such as ‘mistakes’ or ‘unpredictable behaviour’. Lastly, while the increasing use of slavery likely led to ‘technological stagnation’ in ancient Rome,Footnote 94 the increasing use of AI would have the opposite effect in present times, owing to the self-learning capabilities of AI systems.
Bearing in mind these caveats, two important lessons can be drawn from the study of Roman law for shaping today's legal design for AI entities. First, the coexistence of multiple remedies to deal with the accountability gap is preferable, because it addresses context-specific issues more effectively. This consideration becomes even more relevant given that AI is a progressively developing technology characterised by a rising degree of autonomy. There will consequently be a continuous need for a flexible regulatory framework, since it might not always be possible to anticipate the most suitable legal solution. Accordingly, today's regulatory discussions should be focused not on finding the one and only optimal solution for closing the accountability gap, but on devising a more heterogeneous framework in which different legal solutions coexist.Footnote 95
Secondly, the analysis of Roman law showed us that multiple regulatory solutions are the outcome of a continuous and gradual process in which the functionalities of a new law unfold over time, based on the actual needs with which society is faced. Unlike in Roman times, however, today's legislator, academia and other stakeholders seem to have a sufficiently clear picture of the various interests at stake. This makes it easier to develop – in the first instance – multiple legal solutions from which the concerned parties could choose in most cases. The actual regulatory choice would then create a path for learning and legal development. In summary, this contribution has demonstrated that it is possible to draw lessons from legal history for the future design of law. This does not necessarily imply the re-enactment of old legal solutions, but simply conceiving of past experiences as a source of guidance and inspiration for modelling the regulation of artificial intelligence.