Enthusiasm abounds about the potential of artificial intelligence to automate public decision-making. The rise of machine learning and computational text analysis, together with the proliferation of digital platforms, has raised the prospect of “robo-judging” and “robo-administrators.” From a human rights perspective, the reaction has been mixed, and on balance negative. Optimists herald the possibilities of democratizing legal services and making decision-making more predictable and efficient.Footnote 1 Critics warn, however, of the specter of new forms of social control, arbitrariness, and inequality.Footnote 2 This essay examines the concerns over the turn to automation from the perspective of two international human rights: the rights to social securityFootnote 3 and a fair trial. It argues that while the critiques deserve a full hearing, they should be evidence-based, informed by an understanding of “technological systems,” and cognizant of the trade-offs between human and machine failure.
The Long Road to Automation
The dream of automating judicial and administrative processes is not new. It dates to at least the first wave of law and artificial intelligence in the 1970s. Drawing on the similarity between the deductive logic of law and that of computer programming, scholars and others in the “expert design” movement developed a range of prototypes and rudimentary applications.Footnote 4 For example, Sergot and colleagues “automated” the British Nationality Act by guiding users through an ordered set of questions to the correct legal result.Footnote 5 And already in 1972, Norway was using “fully automated legal decision-making” to calculate benefits under housing laws.Footnote 6
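The mechanics of such systems are easy to illustrate. Below is a minimal sketch, in Python, of the “expert design” style: an ordered set of yes/no questions that encode statutory conditions and lead the user to a determination. The questions and rules are simplified inventions for illustration, not the actual statute or Sergot's program.

```python
# Minimal sketch of the "expert design" approach: an ordered set of
# yes/no questions encoding simplified, hypothetical eligibility rules.
# The rules below are illustrative only, not the actual British
# Nationality Act or Sergot's logic program.

def ask(question: str) -> bool:
    """Prompt the user for a yes/no answer."""
    return input(question + " [y/n] ").strip().lower().startswith("y")

def citizenship_by_birth() -> str:
    """Walk the user through ordered questions to a legal result."""
    if not ask("Was the person born in the UK?"):
        return "Not eligible under this (simplified) birth rule."
    if ask("Was a parent a citizen at the time of birth?"):
        return "Eligible: citizenship by birth (simplified rule)."
    if ask("Was a parent lawfully settled at the time of birth?"):
        return "Eligible: citizenship by birth (simplified rule)."
    return "Not eligible under this (simplified) birth rule."

if __name__ == "__main__":
    print(citizenship_by_birth())
```

The design choice is the point: every rule must be spelled out in advance by a programmer, which is why the approach struggled with the bespoke and open-textured parts of law.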
However, progress was dampened in the 1990s by the onset of the so-called winter in artificial intelligence and law. The complexity and bespoke nature of law challenged the programming paradigm, while application was hampered by an absence of digital platforms and financial investment. These constraints have since loosened. In the private sector today, a US$20 billion legal technology market is fueling a range of software applications, automating to varying degrees many aspects of lawyering.Footnote 7 Advances in machine learning permit, for example, automated legal research in some fields, the drafting of text for new contracts, and the identification of documents for discovery requests.
In the public sector, automation is equally central in legal technology discourse. Government departments, international organizations, and judicial bodies are increasingly moving from mere digitization to experiments with automation.Footnote 8 This is accompanied by a growing research literature that pilots data-driven techniques to predict judicial and administrative decision-making, potentially paving the way for more ambitious future applications of artificial intelligence.Footnote 9
Yet the enthusiasm is not shared by all. It is important to ask: what are the current and future implications for human rights, and should we be worried? Consider two examples.
The Digital Welfare State and the Right to Social Security
In late 2019, Philip Alston, the UN Special Rapporteur for Extreme Poverty and Human Rights, announced that the world was “stumbling zombie-like into a digital welfare dystopia.”Footnote 10 Pushing back against the “cheerleaders” of digitalization and promises of improved access and transparency, Alston reported that, in partnership with the private sector, governments are digitalizing the welfare state to “automate, predict, identify, surveil.”Footnote 11 He called for a “sober reflection on the downsides” of the transformation of social protection and assistance.Footnote 12
The report outlined a range of concerns with the rise of automated eligibility assessments, calculation of benefits, fraud detection, and risk scoring. First, Alston pointed to the lack of accuracy. He catalogued numerous scandals, from 1,132 eligibility errors in Ontario affecting US$101 million worth of payments, to the automatic issuing in Australia of half a million flawed debt notices, worth some US$0.85 billion, to social security beneficiaries.Footnote 13 Second, Alston highlighted that these technologies overlook structural disadvantages based on inequality, poverty, and racism.Footnote 14 An individual's rights may be determined on the basis of predictions derived from the behavior of a general population group, a problem exacerbated by secret algorithmic processing, risk scoring, and need categorization.
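The structure of this second concern can be made concrete with a toy example. The sketch below, with invented data, weights, and field names, shows how a fraud “risk score” that mixes group-level statistics (such as a neighborhood's past fraud rate) with individual features can trigger a binding decision against a claimant partly on the basis of other people's behavior.

```python
# Illustrative sketch (hypothetical data, weights, and threshold) of a
# welfare fraud "risk score" driving an individual determination.
# Scoring a person partly by the past behavior of their postcode or
# demographic group, rather than their own conduct, is the structural
# concern raised in the text.

claimant = {"postcode_fraud_rate": 0.08,  # group-level feature
            "prior_flags": 0,             # individual history
            "income_variability": 0.3}    # individual feature

weights = {"postcode_fraud_rate": 5.0,
           "prior_flags": 2.0,
           "income_variability": 1.0}

score = sum(weights[k] * claimant[k] for k in weights)

# A hard threshold converts an opaque score into a binding decision:
# the claimant is routed to review or sanction without reasons given.
FLAG_THRESHOLD = 0.5
print("flagged for review" if score > FLAG_THRESHOLD else "paid as claimed")
```

In this invented example the claimant is flagged even with no prior flags of their own, because the group-level feature dominates the score.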
Finally, he warned of ideological appropriation. The digital welfare state, unwittingly or not, provided a useful “neutral” cover for long-standing neoliberal policies that challenged the right to social security, whether by reducing welfare budgets, narrowing the beneficiary pool, or enhancing sanctions.Footnote 15 He described digitalization as reversing the “traditional notion that the State should be accountable to the individual”Footnote 16 because it makes the individual transparent to the state instead.Footnote 17
Indeed, this latter point is an important recognition that technology possesses constitutive power. In 1978, Carolyn Miller argued that technology, not unlike law, creates its “own forms of consciousness,” making us view it as “truer, or more transparent, or more objective than others.”Footnote 18 Technology begins as an instrumental means and becomes an inevitable end,Footnote 19 reshaping how we see the world. It privileges “linear, incremental, causal forms of thought” in understanding social phenomena and legitimates “efficiency” narratives.Footnote 20
Alston's critique is comprehensive but not new. While the early adoption of algorithmic governance in policing and security has garnered the most attention,Footnote 21 its arrival in the welfare state has not gone unnoticed.Footnote 22 Harlow and Rawlings worry that “the good governance triad of transparency, accountability and participation may be restricted, even reversed,” especially through the loss of reason-giving and discretion;Footnote 23 Larkin argues that the absence of digital literacy can hinder access to social services;Footnote 24 Burton demonstrates that face-to-face and telephonic services may be more appropriate for serious and urgent cases;Footnote 25 and Tomlinson indicates how the digitalization of appeals may be transforming administrative process into formal adjudication.Footnote 26
The Digital Rule of Law and the Right to a Fair Trial
Scholars have raised similar concerns about automation's effect on civil rights, such as the right to a fair trial. Digitalization and automation are reshaping legal proceedings. A growing number of countries have digitized aspects of formal dispute resolution, with an increasing use of video, online portals, and e-documentary systems—a process only likely to be accelerated by COVID-19-related restrictions that have illustrated the contingency of physical proceedings. In the private sector, online dispute resolution (ODR) platforms have grown and increasingly attracted public interest. A public-private partnership in the Netherlands provided ODR for divorce and housing cases until 2017,Footnote 27 and many predict that digital resolution of disputes will become common.Footnote 28
Further, many see prospects for automated judging. Using machine learning methods on past jurisprudence, researchers have been able to predict outcomes of judgments with increasing confidence.Footnote 29 In the United States, many courts use COMPAS, a proprietary risk-assessment tool that appears to rely on machine learning, to predict recidivism when imposing criminal sentences.Footnote 30 In New Zealand, a computer-based prediction model helps handle claims and profile claimants under the country's accident compensation scheme.Footnote 31 Others eye the potential for digital-friendly legislation and institutional reform that would permit greater automated decision-making through “expert design.” These court-centric developments are likely to be complemented by attempts by litigating parties to gain an advantage through data-driven legal research and prediction.Footnote 32
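A minimal sketch conveys how this predictive research typically works: past case texts are converted into numerical features, and a classifier is trained on known outcomes. The pipeline below uses scikit-learn; the four training “cases” and their outcomes are invented placeholders, whereas published studies train on thousands of judgments.

```python
# A minimal sketch of judgment-outcome prediction: a bag-of-words
# classifier trained on past case texts. The tiny "corpus" here is a
# toy placeholder, not real jurisprudence.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: facts sections of past cases and outcomes.
texts = [
    "applicant detained without judicial review for months",
    "claim filed outside the statutory time limit",
    "prolonged detention with no access to a lawyer",
    "procedural deadline missed by the applicant",
]
outcomes = ["violation", "no violation", "violation", "no violation"]

# Convert text to TF-IDF features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)

# Predict the outcome of an unseen (invented) case description.
new_case = ["applicant held in detention with no court hearing"]
print(model.predict(new_case))
```

Note that such a model learns correlations in the training corpus, not legal doctrine, which is one source of the accuracy and transparency concerns discussed below.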
Rights-based concerns about automated judging are growing, and new civil society organizations such as the Algorithmic Justice League are on the rise. The critiques fall into four main categories and mirror many of the critiques of the digital welfare state. First, there is the potential for arbitrariness and discrimination. For example, while the literature is divided, there is some evidence that the COMPAS algorithm discriminates against African-American defendants by using structural background data.Footnote 33 Second, there are concerns about legal accuracy. Many doubt that either expert design or machine learning can master the bespoke and complex nature of legal decision-making, and worry about the rush to simplify law to reduce this obstacle.Footnote 34 Third, there is a lack of transparency over algorithm-based methods. Litigants may be deprived of reasons in automated decision-making, and the algorithms in some software may be inaccessible due to intellectual property restrictions. Fourth, there may be an increase in the justice divide inter partes. If some litigants are better able to game or predict automated decision-making, they may obtain an unfair advantage in legal systems already plagued by strong disparities among parties.
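The first critique, disparate error rates, rests on a simple calculation that is worth spelling out. The sketch below, using invented records rather than the actual COMPAS data, computes the false positive rate per group, that is, how often non-reoffenders are flagged as high risk; a gap between groups on this measure is the kind of disparity reported in the literature.

```python
# Sketch of the disparity analysis behind the COMPAS critique:
# comparing false positive rates across groups. The records are
# invented for illustration; real analyses used court records.

from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", False, False), ("B", False, False),
]

false_pos = defaultdict(int)  # flagged high risk but did not reoffend
negatives = defaultdict(int)  # all who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if flagged:
            false_pos[group] += 1

for group in sorted(negatives):
    rate = false_pos[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.2f}")
```

On these invented records, group A's non-reoffenders are flagged at a higher rate than group B's, even if the tool's overall accuracy is identical across groups, which is precisely why the literature is divided over which fairness metric should govern.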
Grounded Technological Critique
In the move to automated legal decision-making, the critical reflex of human rights (and often doctrinal) scholars is strong. There is a legitimate concern that, in a Gramscian manner, new digital hegemonies create new subalterns and threaten old ones.Footnote 35 However, in the spirit of critical empiricism, it is worth reflecting on how to critique the march of new digital technologies.Footnote 36
The first question concerns how we frame technology in general and automation in particular. Many critics adopt an “artifact”-centric approach, focused on digital techniques and methods. Yet, in science and technology studies, technology is commonly understood as the complex assemblages of components and know-how that make up “technological systems.”Footnote 37 Thus, frames such as the “digital” welfare state or “robo”-judging obscure as much as they enlighten.
Modern states have long been based on “systems” and various forms of “automation.” Indeed, a distinctive aspect of General Comment No. 19 on the Right to Social Security under the International Covenant on Economic, Social and Cultural Rights is that the first element of the right is access to a “system” that provides coverage for predetermined risks.Footnote 38 Likewise, the right to a fair trial in a court process rests on a complex combination of actors, processes, and rules. Yet, at the same time, these bureaucratic and judicial systems have long been tools for control, exclusion, informal punishment, and surveillance. As Duncan Kennedy has observed, it is this dark side of the welfare state that helped spur the turn to rights across the political spectrum from the 1970s.Footnote 39
If we retain an artifact-centric conception of technology, we risk reifying and romanticizing the imaginary of a “human/e state.” Public administration should itself be understood as a form of technology: a complex and hierarchical amalgam of rules, algorithms, institutions, and spaces that can both liberate and repress. Humans in their physical, affective, and cognitive states are just one element of this system. It is thus essential that the emerging critiques of digital welfare states not succumb to an atemporal reflex, but rather be viewed in the longue durée. In this respect, Fleur Johns's approach, which highlights the fusion of old and new technologies—the “list-as-algorithm”—is helpful in identifying the transformation, rather than the arrival, of new forms of technological power.Footnote 40 The same can be said of Alston's linking of neoliberal forms of governance and new control-oriented technologies. The key question for advocates concerned with international human rights law is thus how digital technologies strengthen or relieve the long-standing abusive aspects of the governmental state.
The second question concerns evidence. What do we know of the ills of the digital and automated state? One perennial challenge in critical theory and human rights fact-finding is the preference for the anecdotal and qualitative. While various studies do support several of Alston's conclusions, we should guard against cherry-picking. For example, the eligibility errors in the Ontario software were clear in 2015, but it is difficult to find evidence of the same problem in later Auditor-General reports. Did the Ontario government fix the automation errors after an experimental phase?
This potential slippage goes to the heart of the debate on the uptake of automation technologies. By what metrics should we evaluate its upsides and downsides? We have long known that there is a “black box” in human decision-making: administrative and judicial cognition is inflected and shaped by implicit bias, racial animus, arbitrariness and custom, laziness, and error. This is sometimes lost in discussions of the dangers of the computational “black box,” with its structural bias that is compounded by the determinism and atheorism of data-driven approaches and the bluntness and inflexibility of “expert design” programming. Discussions of automation and digitalization should be guided by a logic of minimizing danger, regardless of whether its origin is machine or human.
The third question is how to effectively regulate the dark sides of automated decision-making. Alston sets out a classic human rights approach, a Pareto-optimization logic in which no individual is made worse off through efficiency improvements. In the case of welfare, he argues that states must ensure that there is a legal basis for digital welfare reforms; promote digital literacy and non-digital access;Footnote 41 maintain eligibility fairness and human dignity in procedures; protect civil rights through privacy constraints and limits on use of data to harass and surveil; democratize policy-making on digitalization; and hold both public and private actors accountable. Similar demands are made elsewhere on the emerging automation of judging, although with a greater emphasis on accuracy.
Is this enough to address the dark sides of digitalization? In my view, holding the digital Leviathan to account will also require new digital tools. Indeed, the robo-debt scandal in Australia canvassed by Alston was tackled by advocacy groups through a website for automating complaints, which also served as a platform for digital mobilization. The legal tech movement is slowly developing public interest technologies,Footnote 42 and tools like the new JustBot application help individuals in Europe apply more easily to the European Court of Human Rights and potentially avoid customary summary rejection.
In sum, the human rights community must not only ready itself to challenge digital developments, but must also develop digital weapons that match new forms of bureaucratic and judicial power. Yet despite the promise of a legal assistance revolution through technology, legal technology is rarely directed towards public interest, rights-enhancing projects.Footnote 43 Alston is right when he observes that the automation agenda is mostly one of cost savings and efficiency.Footnote 44 Public and private investment in digital accountability will therefore be crucial in ensuring that automation advances rather than retards international human rights.