This symposium explores ways in which developments in artificial intelligence (AI) will affect the substance of international law, and, conversely, how international law may guide states' decisions to deploy AI. Today, governments and private sector actors are using AI tools to improve medical diagnoses; enable greater autonomy in daily activities such as driving; enhance the accuracy of facial recognition software; and improve judicial decision-making. Even as AI increasingly penetrates commercial, military, and scientific arenas, states have been slow to create new international agreements, or to amend existing ones, to catch up with these technological developments. Nevertheless, AI is certain to produce changes in the areas of human rights, the use of force, transnational law enforcement, global health, intellectual property regimes, and international labor law, among others.
Consider, for example, that various governments have started to use AI-driven facial recognition software not only to identify criminal suspects but also to record patterns of life and to monitor people quarantined with the coronavirus. This use implicates international human rights rules related to privacy, freedom of association, and freedom of expression. AI tools also have the potential to complicate transnational law enforcement cooperation and compliance, such as where a state makes an extradition request that rests entirely on probable cause determinations informed by opaque algorithms. Further, states are developing self-driving cars and ships that ultimately may operate transnationally. Yet the international community generally, and states specifically, have not reached a common understanding about the extent to which existing international law suffices to regulate these developments. The most intensive AI-related international negotiations to date have focused on whether to ban lethal autonomous weapons systems, but those discussions have thus far proven complicated, contentious, and inconclusive.
This symposium presents an opportunity for international legal experts to contemplate how AI will affect—and be affected by—substantive rules of international law. But it is worth noting that AI may intersect with international law in two other ways as well. First, states may begin to deploy machine learning tools to help position themselves procedurally for treaty negotiations or international adjudication.[1] States might, for instance, use machine learning or computational text analysis to identify patterns within large numbers of statements made by their negotiating partners to the UN General Assembly and shape their negotiating positions accordingly. In addition, states might use AI tools to improve the way they approach dispute resolution, by more quickly and thoroughly processing information about arbitrators or by uncovering hidden patterns within arbitral or judicial decisions.[2]
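To make the computational text analysis idea concrete, the following is a minimal, purely illustrative sketch in Python. It groups a handful of invented diplomatic statements by vocabulary using TF-IDF features and k-means clustering, a common pattern-finding technique; the statements and the cluster count are assumptions chosen for demonstration, not a description of any state's actual practice.

```python
# Purely illustrative: cluster hypothetical diplomatic statements by
# vocabulary using TF-IDF features and k-means. The statements below are
# invented placeholders, not real UN General Assembly records.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

statements = [
    "We support a binding moratorium on lethal autonomous weapons systems.",
    "Existing humanitarian law adequately governs autonomous weapons.",
    "Cross-border data flows require a new multilateral privacy framework.",
    "Facial recognition surveillance must respect privacy obligations.",
]

# Represent each statement as a TF-IDF vector, ignoring common stop words.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(statements)

# Group the statements into two clusters of lexically similar positions.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for label, statement in zip(clusters, statements):
    print(label, statement)
```

A real application would of course require a large corpus of official records, careful preprocessing, and human analysts to judge whether the resulting clusters reflect genuine negotiating positions.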
Second, AI tools may help states enforce international law.[3] A state could, for instance, develop and deploy sensors to detect violations of weapons treaties and use AI to monitor the sensors. International criminal lawyers might use AI tools to help identify evidence of war crimes—or evidence that helps prove their clients' innocence.[4] But these procedural and enforcement tools largely lie ahead of us; the substantive developments that this symposium's authors discuss are in the here and now.
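As a similarly hedged illustration of the treaty-monitoring idea, the sketch below applies an off-the-shelf anomaly detector (an isolation forest) to simulated sensor readings and flags outliers for human review. The data, the sensor semantics, and the contamination rate are all invented for the example.

```python
# Purely illustrative: flag anomalous readings from a simulated
# treaty-monitoring sensor using an isolation forest. The data,
# sensor semantics, and contamination rate are all invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated baseline readings (e.g., routine seismic magnitudes) ...
baseline = rng.normal(loc=2.0, scale=0.3, size=(500, 1))
# ... plus two spikes that a treaty party might want analysts to review.
spikes = np.array([[5.1], [4.8]])
readings = np.vstack([baseline, spikes])

# Fit the detector and mark outliers (-1) for human follow-up;
# the model only triages readings, it does not determine violations.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(readings)
print(f"Flagged {int((flags == -1).sum())} of {len(readings)} readings for review")
```

The design point is that AI here serves as a triage layer: any flagged reading would still require legal and technical assessment before a state could allege a treaty violation.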
The essays collectively take a traditional approach to international lawyering: they explore how states may, could, and should adjust the application of existing international law to new facts and scenarios posed by AI-driven technologies, and they consider how existing law should shape the way states develop and deploy these technologies. The essays are grounded in the real world, offering pragmatic suggestions for states as they collectively transition to a world in which AI will play a critical role. In short, these essays get down to the hard work of assessing the gaps, ambiguities, and guidance in existing treaties and customary rules and begin to sketch a road map forward. In so doing, the essays offer international lawyers an important lesson: those who work for governments and those who wish to influence state decisions from the outside alike must understand the basics of these technologies.
The symposium opens with an essay by Malcolm Langford of the University of Oslo.[5] Langford explores the potential human rights impact of governments' use of automated decision-making and machine learning. Using the “right to social security” and the “right to a fair trial” as examples, he surveys existing arguments that automating these processes will undercut the substance of the rights. But Langford challenges critics to push past easy assumptions about the new “digital wave.” He urges them to view the technologies in a way that avoids romanticizing the complex (human-driven) systems and processes that currently exist in the welfare state, and to base their judgments on empirical evidence rather than on cherry-picked examples of salient technological failures. He concludes by arguing that a key way to protect human rights is to fight fire with fire: there is already evidence that new technologies themselves can help ensure the rights compliance of other new technologies.
Steven Hill, who until recently served as the Legal Adviser to NATO, provides a government-focused perspective on AI. Hill's essay considers the challenges that AI poses to a powerful military alliance whose members share common values but are subject to different international and domestic law obligations.[6] Hill explains how NATO has deployed AI to enhance its members' situational awareness and to conduct cyber defense by, for example, identifying trends in cyber threats. But he also argues that it is important for NATO to bring all of the allies on board with the various uses of AI in order to avoid a backlash against these technologies. This includes achieving allied agreement on the ownership, sharing, and use of data—the key fuel for AI, and a challenge in an international organization whose members hold differing views on data regulation and privacy. Hill suggests that a good way to establish stability and trust in this area is to foster ongoing legal and ethical dialogues, drawing on NATO's tradition of seeking “legal interoperability” among its members.
Bryant Walker Smith of the University of South Carolina Law School considers a different multilateral challenge posed by AI: How should states adapt existing treaties to new technologies?[7] States parties to the 1949 and 1968 Conventions on Road Traffic are currently wrestling with precisely this question as they seek to update those treaties to reflect the new reality of automated driving. Smith's analysis takes a deep dive into the potential ambiguities embedded in the treaties' command that every vehicle shall have a driver. He explores why different states may take different approaches to interpreting this phrase, and why states may favor more or less formal approaches to clarifying the treaty language. Stepping back from the automated driving example, Smith's essay surfaces the types of questions that states will face across a wide range of existing treaties and new technologies. Finally, the essay identifies the important role of companies in an international law ecosystem that is forced to wrestle with changes wrought by AI.
Like Langford, Daragh Murray of the University of Essex considers the complex interplay between AI and human rights, but Murray focuses on human rights law as an ex ante guide for state decision-making.[8] Murray argues that states should use a well-established European approach to human rights as an organizing framework through which to analyze and shape their choices about AI before they deploy it. Using live facial recognition as an example, Murray illustrates why states must assess whether deploying this type of AI-driven tool is “necessary in a democratic society,” a requirement drawn from European Court of Human Rights judgments. Murray carefully explains that this test requires states to identify the objective underpinning the deployment, demonstrate why AI is necessary to achieve that objective, and specify how the state will deploy the technology. Murray's essay takes a “stitch-in-time” approach, arguing that states will be better placed to respond to future legal challenges if they heed human rights baselines when making AI deployment decisions today.
In all, these essays invite us to think hard—and with specificity—about other substantive areas of international law that will intersect with states' deployment of AI technologies. International lawyers face a difficult but exciting challenge in helping to ensure the responsible use of AI by states and by individuals in the years ahead.