I. Introduction: Knowledge Is Power
The conjecture ‘that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’Footnote 1 has motivated scientists for more than half a century, but only recently attracted serious attention from political decision-makers and the general public. This relative lack of attention is perhaps due to the long gestation of the technology necessary for that initial conjecture to become a practical reality. For decades, AI was merely an aspiration among a small, highly skilled circle engaged in basic research; only in the past few years has it emerged as a dynamic, economically and intellectually vibrant field.
From the beginning, national security needs drove the development of Artificial Intelligence (AI). These needs arose in part from surveillance, especially code-breaking, and in part from weapons development, in particular nuclear test simulation. While the utilisation of some machine intelligence has been part of national security for decades, the recent explosive growth in machine capability is likely to transform national and international security, consequently raising important regulatory questions.
Thanks to the confluence of at least five factors – the increase in computational capacity; the availability of data and big data; the revolution in algorithm and software development; the explosion in our knowledge of the human brain; and the existence of an affluent, risk-tolerant technology industry – the initial conjecture is no longer aspirational but has become a reality.Footnote 2 The resulting capabilities cannot be ignored by states in a competitive, anarchic international system.Footnote 3 As AI becomes a practical reality, it affects national defensive and offensive capabilities,Footnote 4 as well as general technological and economic competitiveness.Footnote 5
There is a tendency to describe intelligence in an anthropomorphic fashion that conflates it with emotion, will, conscience, and other human qualities. While such portrayals make for good television, especially in the field of national security,Footnote 6 they are a poor analytical or regulatory guide.Footnote 7 For these purposes, a less anthropocentric definition is preferable, such as the one suggested by Nils Nilsson:
For me, artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment. According to that definition, lots of things – humans, animals, and some machines – are intelligent. Machines, such as ‘smart cameras,’ and many animals are at the primitive end of the extended continuum along which entities with various degrees of intelligence are arrayed. At the other end are humans, who are able to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories. Because ‘functioning appropriately and with foresight’ requires so many different capabilities, depending on the environment, we actually have several continua of intelligences with no particularly sharp discontinuities in any of them. For these reasons, I take a rather generous view of what constitutes AI.Footnote 8
The influential Stanford 100 Year Study on Artificial Intelligence explicitly endorses this broad approach, stressing that human intelligence has been merely the inspiration for an endeavour that is unlikely actually to replicate the brain. It appears that the difference between intelligences – whether human, animal, or machineFootnote 9 – is not necessarily one of kind, but ultimately a question of degree – of speed, capability, and adaptability:
Artificial Intelligence (AI) is a science and a set of computational technologies that are inspired by – but typically operate quite differently from – the ways people use their nervous systems and bodies to sense, learn, reason, and take action. … According to this view, the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality. The same factors can be used to evaluate every other instance of intelligence – speech recognition software, animal brains, cruise-control systems in cars, Go-playing programs, thermostats – and to place them at some appropriate location in the spectrum.Footnote 10
At its most basic, AI means making sense of data, and can thus be differentiated from cyberspace, which primarily concerns the transmission of data. Collecting data is fairly inconsequential without someone to analyse and make sense of it.Footnote 11 If the purpose of a thought or action can be expressed numerically, it can be turned into coded instructions and thereby cause a machine to achieve that purpose. In order to understand the relationship better, it is helpful to differentiate between data, information, knowledge, and intelligence.
Data is raw, unorganised, factual, sensory observation, collected in either analog or digital form, with single data points unrelated to each other. Already in this raw form, data can be used by simple machines to achieve a purpose, for instance temperature or water pressure readings by a thermostat switching a heater on or off, or a torpedo’s depth sensor guiding its steering system. Observed and recorded facts can take many forms, such as statistics, satellite surveillance photographs, dialled phone numbers, etc. Such data, whether qualitative or quantitative, stands on its own and is not related to external signifiers. In this form, it is not very informative and fairly meaningless. Whereas analog storage is logistically limited, the recording of observational data in electronic, machine-readable form faces no comparable physical constraints.
Information, by contrast, depends on an external mental model through which data acquires meaning, context, and significance. Data becomes information through analysis and categorisation; it acquires significance only through the imposition of order and structure. Information is, therefore, data that has been processed, organised according to meaningful criteria, given context, and thereby made useful towards achieving outcomes according to predetermined needs. This process depends on the existence of conceptual models created in response to those needs.Footnote 12 Significance, meaning, and usefulness are, therefore, qualities not inherent in the data but external impositions used to sift, categorise, and ‘clean’ data of extraneous ‘noise’. Data that has been transformed into information has had its ‘useless’ elements removed and has been given context and significance according to an external yardstick of ‘usefulness’. To follow the earlier example, linking temperature readings in different rooms at different times with occupancy readings and fluctuating electricity prices could enable a ‘smart’ thermostat to make ‘intelligent’ heating choices.
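The distinction can be illustrated with a minimal sketch. The thresholds, parameter names, and decision rules below are hypothetical, chosen only to show how the same raw reading yields different decisions once context is imposed:

```python
# Illustrative only: a bare temperature reading is mere data; linking it
# to occupancy and electricity prices (the external model) turns it into
# information on which a 'smart' thermostat can act.

def decide_heating(temp_c, occupied, price_per_kwh,
                   comfort_c=20.0, max_price=0.30):
    """Turn a raw sensor reading into a contextualised heating decision."""
    if not occupied:
        return "off"   # nobody benefits from heating an empty room
    if temp_c >= comfort_c:
        return "off"   # already comfortable
    if price_per_kwh > max_price:
        return "eco"   # heat, but sparingly, while electricity is dear
    return "on"

# The same datum yields different decisions under different contexts:
print(decide_heating(17.4, occupied=True,  price_per_kwh=0.20))  # on
print(decide_heating(17.4, occupied=False, price_per_kwh=0.20))  # off
print(decide_heating(17.4, occupied=True,  price_per_kwh=0.45))  # eco
```

The reading `17.4` never changes; only the external model of occupancy and price gives it significance.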
Knowledge means making sense of information: being aware of the limitations of the underlying data and of the theoretical models used to classify it, and being able to place that information into a wider context of meaning, purpose, and dynamic interactions, involving experience, prediction, and the malleability of both purpose and model. Knowledge refers to the ability to understand a phenomenon, theoretically or practically, and to use such understanding for a deliberate purpose. It can be defined as ‘justified true belief’.Footnote 13 This process complements available information with inferences from past experience and intuition, and responds to feedback, including sensory, cognitive, and evaluative.
Intelligence refers to the ability to ‘function appropriately and with foresight’; AI thus presumes that the act of thinking that turns (sensory) data into information, then into knowledge, and finally into purposeful action is not unique to humans or animals. It posits that the underlying computational process is formally deducible and can be scientifically studied and replicated in a digital computer. Once this is achieved, all the inherent advantages of the computer come to bear: speed, objectivity (absence of bias, emotion, preconceptions, etc.), scalability, permanent operation, etc. In the national security field, some have compared this promise to the mythical figure of the Centaur, who combined the intelligence of man with the speed and strength of the horse.Footnote 14
The development of the Internet concerned the distribution of data and information between human and machine users.Footnote 15 AI, by contrast, does not primarily refer to the transmission of raw or processed data, the exchange of ideas, or the remote control of machinery (Internet of things, military command and control, etc.), but the ability to detect patterns in data, process data into information, and classify that information in order to predict outcomes and make decisions. Darrell M. West and John R. Allen suggest three differentiating characteristics of such systems: intentionality, intelligence, and adaptability.Footnote 16
The Internet has already transformed our lives, but the enormous changes portended by AI are just beginning to dawn on us. The difficulty of predicting that change, however, should not serve as an excuse for what James Baker deemed ‘a dangerous nonchalance’ on the part of decision-makers tasked with managing this transformation.Footnote 17 Responsible management of national security requires an adequate and realistic assessment of the threats and opportunities presented by new technological developments, especially their effect on the relative balance of power and on global public goods, such as the mitigation of catastrophic risks, arms races, and societal dislocations. In modern administrative states, such management is inevitably done through law, both nationally and internationally.Footnote 18
In this chapter, I will begin by contrasting the challenge posed by AI with the related but distinct emergence of the cyber domain. I then outline six distinct implications for national security: doomsday scenarios, autonomous weapons, existing military capabilities, reconnaissance, economics, and foreign relations. Legal scholarship often proposes new regulation when faced with novel societal or technological challenges. But it appears unlikely that national actors will forgo the potential advantages offered by a highly dynamic field through self-restraint by international convention. Still, even if outright bans and arms control-like arrangements are unlikely, the law serves three important functions when dealing with novel challenges: first, as a repository of essential values guiding action; second, as a source of essential procedural guidance; and third, as the basis for authority, institutional mandates, and the boundaries necessary for oversight and accountability.
II. Cyberspace and AI
The purpose of this section is not to survey the large literature applying the principles of general international law, and especially the law of armed conflict, to cyber operations. Rather, it seeks to highlight the distinctive elements of the global communication infrastructure, especially how the challenges posed by AI differ from the regulatory and operationalFootnote 19 challenges that characterise cybersecurity.Footnote 20 The mental image conjured by early utopian thinkers and adopted later by realist and military policy-makers rests on the geographical metaphor of ‘cyberspace’ as a non-corporeal place of opportunity and risk.Footnote 21 This place needs to be defended and thus constitutes an appropriate area of military operations.
As technical barriers eventually fell, the complexity of the network receded behind increasingly sophisticated but simple-to-operate graphical user interfaces, making networked information-sharing first a mainstream and eventually a ubiquitous phenomenon, affecting almost all aspects of human life almost everywhere. The result has been an exponential increase in the availability of information, much of it of a sensitive nature and often voluntarily relinquished, creating a three-pronged challenge: data protection, information management, and network security.Footnote 22
Much early civilian, especially academic, thinking focused on the dynamic relationship between technology and culture, stressing the emergence of a new, virtual habitat: ‘A new universe, a parallel universe created and sustained by the world’s computers and communication lines.’Footnote 23 But as the novelty wore off while its importance grew, the Internet became ‘re-territorialised’ as nation-states asserted their jurisdiction, including in the hybrid, multi-stakeholder regulatory fora that had developed initially under American governmental patronage.Footnote 24 Perhaps more importantly, this non-corporeal realm created by connected computers, came to be seen not as a parallel universe following its own logic and laws, but as an extension of existing jurisdictions and organisational mandates:
Although it is a man-made domain, cyberspace is now as relevant a domain for DoD [Department of Defence] activities as the naturally occurring domains of land, sea, air, and space. Though the networks and systems that make up cyberspace are man-made, often privately owned, and primarily civilian in use, treating cyberspace as a domain is a critical organizing concept for DoD’s national security missions. This allows DoD to organize, train, and equip for cyberspace as we do in air, land, maritime, and space to support national security interests.Footnote 25
This is reflected in the United States (US) National Security Strategy, which observes: ‘Cybersecurity threats represent one of the most serious national security, public safety, and economic challenges we face as a nation.’Footnote 26 Other countries treat the issue with similar seriousness.Footnote 27
Common to the manner in which diverse nations envisage cybersecurity is the emphasis on information infrastructure, in other words, on the need to keep communication channels operational and protected from unwanted intrusion. This, however, is distinct from the specific challenge of AI, which concerns the creation of actionable knowledge by a machine.
The initial ideas that led to the creation of the Internet sought to solve two distinct problems: the civilian desire to use expensive time-share computing capacity at academic facilities more efficiently by distributing tasks, and the military need to establish secure command and control connections between installations, especially to remote nuclear weapons facilities.Footnote 28 In both cases, existing circuit-switched telephone connections proved unreliable. The conceptual breakthrough was the idea of packet-switched communication, which permitted existing physical networks to be joined in a non-hierarchical, decentralised architecture that is resilient, scalable, and open.Footnote 29
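The core of the packet-switching insight can be caricatured in a few lines. This toy sketch uses invented function names and omits everything real protocols such as TCP/IP provide (headers, checksums, retransmission); it shows only that a message split into independently addressed packets can traverse different routes, arrive out of order, and still be reassembled:

```python
import random

# Toy illustration of packet switching (grossly simplified).

def packetize(message, size=4):
    """Split a message into sequence-numbered packets."""
    return [{"seq": i, "payload": message[i:i + size]}
            for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore the original message regardless of arrival order."""
    return "".join(p["payload"]
                   for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("ATTACK AT DAWN")
random.shuffle(packets)      # packets may traverse different routes
print(reassemble(packets))   # ATTACK AT DAWN
```

Because no packet depends on any particular route or on a dedicated end-to-end circuit, the network as a whole tolerates the loss of individual nodes, which is precisely the resilience the military requirement demanded.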
The Internet is, therefore, not one network, but a set of protocols specifying data formats and rules of transmission, permitting local, physical networks to communicate along dynamically assigned pathways.Footnote 30 The technology, the opportunities, and the vulnerabilities it offered came to be condensed in the spatial analogy of cyberspace. This ‘foundational metaphor’ was politically consequential because the use of certain terminology implied, rather than stated outright, particular understandings of complex issues at the expense of others, thus shaping policy debates and outcomes.Footnote 31 Later dismissed by its coiner as merely an ‘effective buzzword’ chosen because ‘it seemed evocative and essentially meaningless’, William Gibson’s definition highlights the problematic yet appealing character of this spatial analogy: ‘Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation … A graphic representation of data abstracted from the banks of every computer in the human system. Unthinkable complexity. Lines of light ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.’Footnote 32 The term captured both the non-physical nature of a world dynamically created by its denizens in their collective imagination and the complex physical infrastructure on which, behind the graphical user interface, that world relies.Footnote 33 The advantages of open communications have eventually led military and civilian installations in all nations to become accessible through the Internet, creating unique vulnerabilities due to opportunity costs of communication disruption, physical damage to installations, and interruptions of critical public goods like water or electricity.Footnote 34 What the American military defines as its key challenge in this area applies likewise to most other nations:
US and international businesses trade goods and services in cyberspace, moving assets across the globe in seconds. In addition to facilitating trade in other sectors, cyberspace is itself a key sector of the global economy. Cyberspace has become an incubator for new forms of entrepreneurship, advances in technology, the spread of free speech, and new social networks that drive our economy and reflect our principles. The security and effective operation of US critical infrastructure – including energy, banking and finance, transportation, communication, and the Defense Industrial Base – rely on cyberspace, industrial control systems, and information technology that may be vulnerable to disruption or exploitation.Footnote 35
Some have questioned the definitional appropriation of ‘cyberspace’ as a ‘domain’ for military action through ‘linguistic and ideational factors [which] are largely overlooked by the prevailing approach to cybersecurity in IR [international relations], which has productively emphasized technical and strategic aspects’ at the expense of alternative ways of thinking about security in this field.Footnote 36 Without prejudice to the theoretical contributions such investigations could make to political science and international relations,Footnote 37 the legal regulation of defensive and offensive networked operations has, perhaps after a period of initial confusion,Footnote 38 found traditional concepts to be quite adequate, perhaps because the spatial analogy facilitates the application of existing legal concepts.
The central challenges posed by the increasing and unavoidable dependence on open-architecture communication are both civilian and military. They concern primarily three distinct but related operational tasks: preventing interruptions to the flow of information, especially financial transactions; preventing disruptions to the critical command and control of civilian and military infrastructure, especially energy, water, and nuclear installations; and preventing unauthorised access to trade and military secrets.Footnote 39 These vulnerabilities are, of course, matched by corresponding opportunities for obtaining strategic information, striking at long distance while maintaining ‘plausible deniability’,Footnote 40 and establishing credible deterrence.Footnote 41 Again, how the American military describes its own mandate applies in equal measure to other nations, not least its chief competitors Russia and China:
American prosperity, liberty, and security depend upon open and reliable access to information. The Internet empowers us and enriches our lives by providing ever-greater access to new knowledge, businesses, and services. Computers and network technologies underpin US military warfighting superiority by enabling the Joint Force to gain the information advantage, strike at long distance, and exercise global command and control.
The arrival of the digital age has also created challenges for the Department of Defense (DoD) and the Nation. The open, transnational, and decentralized nature of the Internet that we seek to protect creates significant vulnerabilities. Competitors deterred from engaging the US and our allies in an armed conflict are using cyberspace operations to steal our technology, disrupt our government and commerce, challenge our democratic processes, and threaten our critical infrastructure.Footnote 42
Crucially important as these vulnerabilities and opportunities are for national security, defensive and offensive operations occurring on transnational communication networks raise important regulatory questions,Footnote 43 including the applicability of the law of armed conflict to so-called cyber-operations.Footnote 44 Yoram Dinstein dismisses the need for a revolution in the law of armed conflict necessitated by the advent of cyber warfare: ‘this is by no means the first time in the history of LOAC that the introduction of a new weapon has created the misleading impression that great legal transmutations are afoot. Let me remind you of what happened upon the introduction of another new weapon, viz., the submarine.’Footnote 45 Dinstein recounts how the introduction of the submarine in World War I led to frantic calls for international legal regulation. But instead of comprehensive new conventional law, states eventually found sufficient the mere restatement that existing rules must also be observed by submarines. He concludes that were an international convention on cyber warfare to be concluded today, ‘it would similarly stipulate in an anodyne fashion that the general rules of LOAC must be conformed with.’Footnote 46 Gary Solis likewise opens the requisite chapter in his magisterial textbook by stating categorically: ‘This discussion is out of date. Cyber warfare policy and strategies evolve so rapidly that it is difficult to stay current.’ But what is changing are technologies, policies, and strategies, not the law: ‘Actually, cyber warfare issues may be resolved in terms of traditional law of war concepts, although there is scant demonstration of its application because, so far, instances of actual cyber warfare have been unusual. Although cyber questions are many, the law of war offers as many answers.’Footnote 47 Concrete answers will depend on facts that are difficult to ascertain, due to the inherent technical difficulties of forensic analysis in an extremely complex, deliberately heterogeneous network composed of a multitude of actors, both private and public, benign and malign. Legal assessments likewise turn on definitional disputes and normative interpretations that reflect shifting, often short-term, policies and strategies. Given vastly divergent national interests and capabilities, no uniform international understanding, let alone treaty regulation, has emerged.Footnote 48
In sum, while AI relies heavily on the same technical infrastructure of an open, global information network, its utilisation in the national security field poses distinct operational and legal challenges not fully encompassed by the law of ‘cyber warfare’.Footnote 49 That area of law presents the lawyer primarily with the challenge of applying traditional legal concepts to novel technical situations, especially the evidentiary challenges of defining and determining an armed attack, establishing attribution, and delimiting the scope of the right to self-defence and proportionality, as well as thorny questions concerning the treatment of non-state or quasi-state actors, the classification of conflicts, and not least the threshold of the ‘use of force’.Footnote 50 AI sharpens many of the same regulatory conundrums, while creating novel operational risks and opportunities.Footnote 51
III. Catastrophic Risk: Doomsday Machines
In a recent instalment of the popular Star Wars franchise, there is a key scene in which the capabilities of truly terrible robotic fighting machines are presented. The franchise’s new hero, the eponymous Mandalorian, manages only with considerable difficulty to defeat a single one of these robots, of which an entire battalion is waiting in the wings. The designers of the series have been praised for giving audiences ‘finally an interesting stormtrooper’, that is, a machine capable of instilling fear and respect in the viewer.Footnote 52
Whatever the cinematic value of these stormtroopers, in a remarkable coincidence a real robotics company simultaneously released a promotional video of actual robots that made these supposedly frightening machines, set in a far distant future, look like crude, unsophisticated toys. The dance video released by Boston Dynamics in early 2021 to show off several of its tactical robots jumping, dancing, and pirouetting elegantly to music put everything Hollywood had come up with to shame: these were no prototypes, but robots that had already been deployed to police departmentsFootnote 53 and the military,Footnote 54 doing things that one could previously have imagined only in computer-generated imagery.Footnote 55 Impressive and fearsome as these images are, the robots exhibit motional ‘intelligence’ only in the sense that they are able to make sense of their surroundings and act purposefully within them; they are not yet able to replicate, let alone compete with, human action.
The impressive, even elegant capabilities of these robots show that AI has made dramatic strides in recent years, reviving ominous fears. In an early paper written in 1965, the pioneering computer scientist Irving John ‘Jack’ Good, one of the British Bletchley Park cryptographers and a friend of Alan Turing, warned that an ‘ultra-intelligent machine’ would be built in the near future that could prove to be mankind’s ‘last invention’ because it would lead to an ‘intelligence explosion’, that is, an exponential increase in self-generating machine intelligence.Footnote 56 While highly agile tactical robots conjure tropes of dangerous machines enslaving humanity, the potential risk posed by the emergence of super-intelligence is unlikely to take humanoid form or follow humanoid motives; it constitutes both incredible opportunity and existential risk, as Good pointed out half a century ago:
The survival of man depends on the early construction of an ultra-intelligent machine. … Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus, the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.Footnote 57
Good would have been pleased to learn that both the promise and premonition of AI are no longer the preserve of science fiction, but taken seriously at the highest level of political decision-making. In a well-reported speech, President Vladimir Putin of Russia declared in 2017 that leadership in AI: ‘is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.’Footnote 58 Very similar statements guide official policy in all great powers, raising the spectre of what has been termed an ‘arms race’ in AI,Footnote 59 as a result of which ‘super-intelligent’ machines (i.e. those with capabilities higher than humans across the board), might endanger mankind.Footnote 60
It is interesting to note that the tone of the debate has changed significantly. Writing in a popular scientific magazine in 2013, Seth Baum asked rhetorically whether his readers should even take the topic seriously: ‘After all, it is essentially never in the news, and most AI researchers don’t even worry. (AGI today is a small branch of the broader AI field.) It’s easy to imagine this to be a fringe issue only taken seriously by a few gullible eccentrics.’Footnote 61 Today, these statements are no longer true. As Artificial General Intelligence, and thus the prospect of super-intelligence, is becoming a prominent research field, worrying about its eventual security implications is no longer the preserve of ‘a few gullible eccentrics’. Baum correctly predicted that the relative lack of public and elite attention did not mean that the issue was unimportant.
Comparing it to the issue of climate change, which likewise took several decades to evolve from a specialist concern into an all-consuming danger, he predicted that, given the exponential development of technology, the issue would soon become headline news. The same point was made at roughly the same time by Huw Price, co-founder of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. Summing up the challenge accurately, Price acknowledged that some of these concerns might seem far-fetched, the stuff of science fiction, which is exactly part of the problem:
The basic philosophy is that we should be taking seriously the fact that we are getting to the point where our technologies have the potential to threaten our own existence – in a way that they simply haven’t up to now, in human history. We should be investing a little of our intellectual resources in shifting some probability from bad outcomes to good ones. To the extent – presently poorly understood – that there are significant risks, it’s an additional danger if they remain for these sociological reasons outside the scope of ‘serious’ investigation.Footnote 62
There are two basic options: either to design safe AI with appropriate standards of transparency and ethical grounding as inherent design features, or not to design dangerous AI.Footnote 63 Given the attendant opportunities and the competitive international and commercial landscape, this latter option remains unattainable. Consequently, there has been much scientific thinking on devising ethical standards to guide responsible further technological development.Footnote 64 International legal regulation, in contrast, has so far proven elusive, and national efforts remain embryonic.Footnote 65
Some serious thinkers and entrepreneurs argue that the development of super-intelligence must be abandoned due to inherent, incalculable, and existential risks.Footnote 66 Prudence would indicate that even a remote risk of a catastrophic outcome should keep all of us vigilant. Whatever the merits of these assessments, it appears unlikely that an international ban on such research will materialise. Moreover, as Ryan Calo and others have pointed out, there is a real opportunity cost in focusing too much on such remote but highly imaginative risks.Footnote 67
While the risks of artificial super-intelligence, defined as machine intelligence that surpasses the brightest human minds, are still remote, they are real and may quickly threaten human existence by design or indifference. Likewise, general AI, or human-level machine intelligence, remains largely aspirational, referring to machines that can emulate human beings at a range of tasks, switching fluidly between them, training themselves on data and their own past performance, and re-writing their operating code. In contrast, concrete policy and regulatory challenges need to be addressed now as a result of the exponential development of the less fearsome but very real narrow AI, defined as machines that are as good as or better than humans at particular tasks, such as interpreting x-ray or satellite images.
These more mundane systems are already operational and are rapidly increasing in importance, especially in the military field. Here, perhaps even more than in purely civilian domains, Pedro Domingos’ often-quoted adage seems fitting: ‘People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.’Footnote 68 Without belittling the risk of artificial general or super-intelligence, Calo is thus correct to stress that focusing too much attention on this remote risk will divert attention from pressing societal needs and thereby risk ‘an AI Policy Winter’ in which necessary regulation limps behind rapid technical development.Footnote 69
IV. Autonomous Weapons Systems
Automated weapons have been in use for a long time; how long depends largely on the degree of automation informing one’s definition. A broad definition of a robot, under which we can subsume autonomous weapons systems, is a physical system that senses, processes, and acts upon the world. We can thus differentiate between ‘disembodied AI’, which collects, processes, and outputs data and information but whose effect in the physical world is mediated; and robotics, which leverages AI to act physically upon the world.Footnote 70
In order to ascertain the likely impact of AI on autonomous weapons systems, it is helpful to conceive of them and the regulatory challenges they pose as a spectrum of capabilities rather than sharply differentiated categories, with booby traps and mines on one end; improvised explosive devices (IEDs), torpedoes, and self-guided rockets somewhere in the middle; drones and loitering munitions further towards the other end; and automated air defence and strategic nuclear control systems at or beyond the opposite end. It appears that two qualitative elements are crucial: the degree of processing undertaken by the system,Footnote 71 and the amount of human involvement before the system acts.Footnote 72
It follows that the definition of ‘autonomous’ is not clear-cut, nor is it likely to become so. Analytically, one can distinguish four distinct levels of autonomy: human operated, human delegated, human supervised, and fully autonomous.Footnote 73 These classifications, however, erroneously ‘imply that there are discrete levels of intelligence and autonomous systems’,Footnote 74 downplaying the importance of human–machine collaboration.Footnote 75 Many militaries, most prominently that of the US, insist that a human operator must remain involved, including ‘fail safe’ security precautions:
Semi-autonomous weapons systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator. It is DoD policy that … autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.Footnote 76
In contrast to the assumptions underlying the discussion in the previous section, even fully autonomous systems currently always involve a human being who ‘makes, approves, or overrides a fire/don’t fire decision’.Footnote 77 Furthermore, such systems have been designed by humans, who have programmed them within specified parameters, which include the need to observe the existing law of armed conflict.Footnote 78 These systems are deployed into battle by human operators and their commanders,Footnote 79 who thus carry command responsibility,Footnote 80 including the possible application of strict liability standards known from civil law.Footnote 81
Given the apparent military benefits of increased automation and an extremely dynamic, easily transferable civilian field, outright bans of autonomous weapon systems, robotics, and unmanned vehicles appear ‘insupportable as a matter of law, policy, and operational good sense’.Footnote 82 To be sure, some claim that the principles of distinction, proportionality, military necessity, and the avoidance of unnecessary suffering, which form the basis of the law of armed conflict,Footnote 83 in conjunction with general human rights law,Footnote 84 somehow impose a ‘duty upon individuals and states in peacetime, as well as combatants, military organizations, and states in armed conflict situations, not to delegate to a machine or automated process the authority or capability to initiate the use of lethal force independently of human determinations of its moral and legal legitimacy in each and every case.’Footnote 85 Without restating the copious literature on this topic, it is respectfully suggested that such a duty for human determination cannot be found in existing international, and only occasionally in national,Footnote 86 law. Solis’ textbook begins discussing the war crime liability of autonomous weapons by stating the obvious truism: ‘Any lawful weapon can be employed unlawfully.’ He proceeds to devise a number of hypothetical scenarios in which autonomous weapons could indeed be used or deliberately designed unlawfully, to conclude:
The likelihood of an autonomous weapon system being unlawful in and of itself is very remote; it would not meet Article 36 testing requirements and thus would not be put into use. And the foregoing four scenarios involving possible unlawful acts by operators or manufacturers are so unlikely, so phantasmagorical, that they are easily lampooned. … While acts such as described in the four scenarios are unlikely, they are possible.Footnote 87
As stated, Article 36 of the 1977 Additional Protocol I to the Geneva Conventions imposes on the contracting parties the obligation to determine prior to the deployment of any new weapon that it conforms with the existing law of armed conflict and ‘any other rule of international law applicable’. For states developing new weapons, this obligation entails a continuous review process from conception and design, through its technological development and prototyping, to production and deployment.Footnote 88
Given the complexity and rapid continuous development of autonomous weapons systems, especially those relying on increasingly sophisticated AI, such a legally mandatory review will have to be continuous and rigorous, and will have to overcome inherent technical difficulties, given the large number of sub-systems from a large number of providers. Such complexity notwithstanding, autonomous weapons, including those relying on AI, are not unlawful in and of themselves.
In principle, the underlying ethical conundra and proportional balancing of competing values that need to inform responsible robotics generally,Footnote 89 need to inform the conception, design, deployment, and use of autonomous weapons systems, whether or not powered by AI: ‘I reject the idea that IHL [international humanitarian law] is inadequate to regulate autonomous weapons. … However far we go into the future and no matter how artificial intelligence will work, there will always be a human being at the starting point … This human being is bound by the law.’Footnote 90 The most likely use scenarios encompass so-called narrow AI, where machines have already surpassed human capabilities. The superior ability to detect patterns in vast amounts of unstructured (sensory) data has for many years proven indispensable for certain advanced automated weapons systems. Anti-missile defence systems, like the American maritime Aegis and land-based Patriot, the Russian S300 and S400, or the Israeli ‘Iron Dome’, all rely on the collection and processing of large amounts of radar and similar sensor data, and the ability to respond independently and automatically. This has created unique vulnerabilities: their susceptibility to cyber-attacks ‘blinding’ them,Footnote 91 the dramatic shortening of warning and reaction time even where human operators remain ‘in the loop’,Footnote 92 and the possibility of rendering these expensive, highly sophisticated systems economically unviable by targeting them with unconventional countermeasures, such as very cheap, fairly simple commercial drones.Footnote 93
V. Existing Military Capabilities
Irrespective of the legal and ethical questions raised, AI is having a transformative effect on the operational and economic viability of many sophisticated weapons systems. The existing military technologies perhaps most immediately affected by the rise of AI are unmanned vehicles of various kinds, so-called drones and ‘loitering munitions’.Footnote 94 Currently relying on remote guidance by human operators or relatively ‘dumb’ automation, these systems are likely to increase enormously in importance and power if combined with AI. Simultaneously, certain important legacy systems, for instance large surface ships such as aircraft carriers, can become vulnerable and perhaps obsolete due to neurally linked and (narrowly) artificially intelligent ‘swarms’ of very small robots.Footnote 95
The ready availability of capable and affordable remotely operated vehicles, together with commercial satellite imagery and similar information sources, has put long-range power-projection capabilities in the hands of a far larger group of state and non-state actors. This equalisation of relative power is further accelerated by new technology rendering existing weapon systems vulnerable or ineffective. Important examples include distributed, swarm-like attacks on ships or the penetration of expensive air defence systems with cheap, easily replaceable commercial drones.Footnote 96
The recent war over Nagorno-Karabakh exposed some of these general vulnerabilities, not least the inability of both Armenia’s and Azerbaijan’s short-range air defence (SHORAD) arsenals, which admittedly were limited in size and quality, to protect effectively against sophisticated drones. While major powers like the US, China, and Russia are developing and deploying their own drone countermeasures,Footnote 97 certain existing systems, for instance aircraft carriers, have become vulnerable. This portends potential realignments in relative power where large numbers of low-cost expendable machines can be used to overwhelm an otherwise superior adversary.Footnote 98
There has been much academic speculation about the perceived novelty of drone technology and the suggested need to update existing legal regulations.Footnote 99 It needs to be stated from the outset that remotely piloted land, air, or sea craft have been used since the 1920s,Footnote 100 and thus cannot be considered either new or unanticipated by the existing law of armed conflict.Footnote 101 Likewise, it is difficult to draw a sharp technical distinction between certain drones and some self-guided missiles, which belong to a well-established area of military operations and regulation.Footnote 102
The novelty lies less in the legal or ethical assessment, than in the operational challenge of the dispersal of a previously highly exclusive military capability. The US has twice before responded to such a loss of its superior competitive edge by embarking on an ‘offset’ strategy meant to avoid having to match capabilities, instead seeking to regain superiority through an asymmetric technological advantage.Footnote 103
The ‘First Offset’ strategy successfully sought to counter Soviet conventional superiority through the development and deployment of nuclear weapons, especially tactical ones.Footnote 104 The ‘Second Offset’ strategy was begun towards the end of the Vietnam War and reached its successful conclusion during the Iraq War of 1991. It was meant to counter the quantitative equalisation of conventional assets, especially airpower, not by increasing the number of assets but by improving their quality. The key to the strategy was to muster American socio-economic advantages in technological sophistication to develop previously unimaginable strike precision. As with any other military technology, it was anticipated that the opponent would eventually catch up, at some point neutralising this advantage. Given the economic near-collapse of the Soviet Union and its successor Russia, the slow rise of China, and the relative absence of other serious competitors, the technological superiority the US had achieved in precision strike capability surprisingly endured far longer than anticipated:
Perhaps the most striking feature of the evolution of non-nuclear (or conventional) precision strike since the Cold War ended in 1991 has been what has not happened. In the early 1990s, there was growing anticipation that for major powers such as the United States and Russia, ‘long-range precision strike’ would become ‘the dominant operational approach.’ The rate at which this transformation might occur was anyone’s guess but many American observers presumed that this emerging form of warfare would proliferate rather quickly. Not widely foreseen in the mid-1990s was that nearly two decades later long-range precision strike would still be a virtual monopoly of the US military.Footnote 105
Written in 2013, this assessment is no longer accurate. Today, a number of states have caught up and dramatically improved both the precision and range of their power projection. The gradual loss of its relative monopoly with respect to precision strike capability, remote sensing, and stealth, at a time when exclusive assets like aircraft carrier groups are becoming vulnerable, ineffective, or fiscally unsustainable,Footnote 106 led the US to declare its intention to respond with a ‘Third Offset’ strategy. It announced in 2014 that it would counter potential adversaries asymmetrically, rather than system by system:
Trying to counter emerging threats symmetrically with active defenses or competing ‘fighter for fighter’ is both impractical and unaffordable over the long run. A third offset strategy, however, could offset adversarial investments in A2/AD [anti-access/area denial] capabilities in general – and ever-expanding missile inventories in particular – by leveraging US core competencies in unmanned systems and automation, extended-range and low-observable air operations, undersea warfare, and complex system engineering and integration. A GSS [global surveillance and strike] network could take advantage of the interrelationships among these areas of enduring advantage to provide a balanced, resilient, globally responsive power projection capability.Footnote 107
The underlying developments have been apparent for some time: ‘disruptive technologies and destructive weapons once solely possessed by advanced nations’ have proliferated and are now easily and cheaply available to a large number of state and non-state opponents, threatening the effectiveness of many extremely expensive weapon systems on which power-projection by advanced nations, especially the US, had relied.Footnote 108 One of these disruptive technologies has been unmanned vehicles, especially airborne ‘drones’. While these have been used for a century and have been militarily effective for half a century,Footnote 109 the explosion in surveillance and reconnaissance capability afforded by AI, and the dramatic miniaturisation and commercialisation of many of the underlying key components, have transformed the global security landscape by making these capabilities far more accessible.Footnote 110
Drones have proven their transformative battlefield impact since the 1973 Yom Kippur War and the 1982 Israeli invasion of Lebanon.Footnote 111 Whatever their many operational and strategic benefits, unmanned aircraft were initially not cheaper to operate than conventional ones. The ‘higher costs for personnel needed to monitor and analyze data streams that do not exist on manned platforms, as well as the costs for hardware and software that go into the sensor packages’,Footnote 112 to say nothing of the considerable expense of training their pilots,Footnote 113 left drones and the long-range precision targeting capability they conferred out of the reach of most armies, owing primarily to economic costs, skilled manpower shortages, and technological complexity.
The recent conflict between Azerbaijan and Armenia has decisively shown that these conditions no longer hold. Both are relatively poor nations with fairly unsophisticated armed forces, with the crucial suppliers being the medium powers of Turkey and Israel. This highlighted the dramatic availability and affordability of such technology,Footnote 114 much of it off-the-shelf and available through a number of new entrants in the market, raising important questions of export controls and procurement.Footnote 115 Drone technology and its transformational impact on the battlefield are no longer the prerogative of rich industrial nations. While AI does not appear to have played a large role in this conflict yet,Footnote 116 the decisiveness of the precision afforded by long-range loitering munitions, unmanned vehicles, and drastically better reconnaissance,Footnote 117 has not been lost on more traditional great powers.Footnote 118
This proliferation of precision long-range weaponry portends the end of the enormous advantages enjoyed by the US as a result of its ‘Second Offset’ strategy. Following the Vietnam War, the US successfully sought to counteract the perceivedFootnote 119 numerical superiority of the Soviet UnionFootnote 120 in air and missile power by investing in superior high-precision weaponry, harnessing the country’s broad technological edge.Footnote 121 These investments paid off and conferred a surprisingly long-lasting dominance. The loss of its main adversary and the inability of other adversaries to match its technological capabilities meant that the unique advantages conferred on the US – primarily the ability to essentially eliminate risk to one’s own personnel by striking remotely and to reduce political risk from ‘collateral damage’ by striking precisely – created an enduring willingness to deploy relatively unopposed in a vast number of unconventional conflict scenarios, sometimes dubbed a ‘New American Way of War’.Footnote 122
In principle, ‘combat drones and their weapons systems are lawful weapons’.Footnote 123 Moreover, given inherent technical differences – especially their drastically greater loitering ability, lack of risk to personnel, and higher precision – drones can actually improve observance of the law of armed conflict by making it easier to distinguish targets and reduce ‘collateral damage’,Footnote 124 leading some to claim that not to use drones would actually be unethical.Footnote 125 Given vastly better target reconnaissance and the possibility for much more deliberate strike decisions, convincing arguments can be made that remotely operated combat vehicles are not only perfectly lawful weapons but have the potential to increase compliance with humanitarian objectives: ‘While you can make mistakes with drones, you can make bigger mistakes with big bombers, which can take out whole neighborhoods. A B-2 [manned bomber] pilot has no idea who he is hitting; a drone pilot should know exactly who he is targeting.’Footnote 126 These very characteristics – the absence of risk to military personnel and vastly better information about battlefield conditions – have also made drone warfare controversial, aspects that are heightened but not created by the addition of AI. The relative absence of operational and political risk led to a greater willingness to use armed force as a tool of statecraft, in the process bending or breaking traditional notions of international law and territorial integrity.Footnote 127 Some have argued that remote warfare posing little to no risk to the operator of the weapon is somehow unethical, incompatible with the warrior code of honour – concerns that should, if anything, apply even more forcefully to machines killing autonomously.Footnote 128 Whatever the merits of the notion of fairness underlying such views, such ‘romantic and unrealistic views of modern warfare’ do not reflect a legal obligation to expose oneself to risk.Footnote 129
There is a legal obligation, however, to adequately balance the risks resulting from obtaining military advantages, which include reducing the exposure of service-members to risk, against the principle of distinction meant to protect innocent civilians. Many years ago, Stanley Hoffmann denounced the perverse doctrine of ‘combatant immunity’ in the context of high altitude bombing by manned aircraft staying above the range of air defences despite the obvious costs in precision, and thus in civilian casualties, this would entail.Footnote 130 In some respects, the concerns Hoffmann expressed have been addressed by unmanned aircraft, which today permit unprecedented levels of precision, deliberation, and thus observance of the principle of distinction:
Drones are superior to manned aircraft, or artillery, in several ways. Drones can gather photographic intelligence from geographic areas too dangerous for manned aircraft. Drones carry no risk of friendly personnel death or capture. Drones have an operational reach greater than that of aircraft, allowing them to project force from afar in targets far in excess of manned aircraft. The accuracy of drone-fired munitions is greater than that of most manned aircraft, and that accuracy allows them to employ munitions with a kinetic energy far less than artillery or close air support require, thus reducing collateral damage.Footnote 131
At the same time, however, the complete removal of risk to one’s own personnel has reduced traditional inhibitions to engage in violence abroad,Footnote 132 including controversial policies of ‘targeted killings’.Footnote 133 Many of the ethical and legal conundra, as well as the operational advantages, that ensued are heightened if the capability of remotely operated vehicles is married with AI, which can improve independent or pre-authorised targeting by machines.Footnote 134
VI. Reconnaissance
The previous section showed that the rapid development of AI is transforming existing military capabilities, leading to considerable adjustments in relative strength. As in the civilian field, the main driver is the removal of a key resource constraint, namely the substitution of skilled, thus expensive and often rare, manpower by machines no longer constrained by time, availability, emotions, loyalty, alertness, etc. The area where these inherent advantages are having the largest national security impact is reconnaissance and intelligence collection.Footnote 135
It is not always easy to distinguish these activities clearly from the electronic espionage, sabotage, and intellectual property theft discussed above, but it is apparent that the capabilities conferred by automated analysis and interpretation of vast amounts of sensor data are raising important regulatory questions related to privacy, territorial integrity, and the interpretation of classical ius in bello principles of distinction, proportionality, and military necessity.
The advantages of drones outlined just aboveFootnote 136 have conferred unprecedented abilities to pierce the ‘fog of war’ by giving the entire chain of command, from platoon to commander in chief, access to information of breathtaking accuracy, granularity, and timeliness.Footnote 137 Such drone-supplied information is supplemented by enormous advances in ‘signal and electronic intelligence’, that is, eavesdropping on communication networks to obtain information relevant for tactical operations and to make strategic threat assessments. But all this available information would be meaningless without someone to make sense of it. As in civilian surveillance,Footnote 138 the limiting factor has long been the human being needed to watch and interpret the video or data feed.Footnote 139 As this limiting factor is increasingly being removed by computing power and algorithms, real-time surveillance at hitherto impractical levels becomes possible.Footnote 140
Whether the raw data is battlefield reconnaissance, satellite surveillance, signal intelligence, or similar sensor data, the functional challenge, regulatory difficulty, and corresponding strategic opportunity are the same: mere observation is relatively inconsequential – from both a regulatory and operational point of view – unless the information is recorded, classified, interpreted, and thereby made ‘useful’.Footnote 141 This reflects a basic insight made already some forty years ago by Herbert Simon:
in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.Footnote 142
In systems design, whether military or civilian, the main design problem is often seen as acquiring and presenting more information, following the traditional mental model that information scarcity is the chief constraint. As Simon and others correctly pointed out, however, these design parameters fundamentally mistake the underlying transformation brought about by technological change, namely the ever-decreasing cost of collecting and transmitting data, which leads to the potential for ‘information overload’. In other words, the real limiting factor is attention, defined as ‘focused mental engagement on a particular item of information. Items come into our awareness, we attend to a particular item, and then we decide whether to act.’Footnote 143
The true distinguishing, competitive ability is, therefore, to design systems that filter out irrelevant or unimportant information and identify, among a vast amount of data, those patterns likely to require action. AI is able to automate this difficult, taxing, and time-consuming process by spotting patterns of activity in raw data and bringing them to the attention of humans. The key to understanding the transformation wrought by AI, especially machine learning, is the revolutionary reversal of the role of information. For most of human history, information was a scarce resource, which had to be obtained and transmitted at great material and human cost. Technological advances during the latter half of the twentieth century reversed that historic trajectory, making information suddenly over-abundant. Today, the limiting factor is no longer the availability of information as such, but our ability to make sense of its sheer amount. The ability to use computing power to sift through that sudden information abundance thus becomes a chief competitive ability, in business just as on the battlefield: ‘Data mining is correctly defined as the nontrivial process of identifying valid, novel, potentially useful and ultimately understandable patterns in data.’Footnote 144 The key to performance, whether military or economic, is to derive knowledge from data, that is, the ability to search for answers in complex and dynamic environments, to spot patterns of sensitive activity among often unrelated, seemingly innocuous information, and to bring them to the attention of human decision-makers or initiate automated responses. Drastic advances in AI, made possible by the triple collapse in the price of sensor data collection, data storage, and processing power,Footnote 145 finally seem to offer a solution to the problem of information over-abundance by substituting machine attention for increasingly scarce human mental energy.
These long-gestating technological capabilities have suddenly aligned to bring about the maturation of AI. As we saw with respect to unmanned vehicles, one of their key structural advantages consists in their ability to deliver large amounts of sensor data, just like signal intelligence. Traditionally, one of the key constraints consisted in the highly skilled, thus rare and expensive, manpower necessary to make sense of that data: interpreting photographic intelligence, listening in on air control communications in foreign languages, etc.Footnote 146 Most of these tasks can already successfully be carried out by narrow AI, offering three game-changing advantages: first, the complete removal of the manpower constraint in classifying and interpreting data, detecting patterns, and predicting outcomes; second, machine intelligence is quicker than humans, does not tire, is not biased,Footnote 147 and, perhaps most importantly, can detect patterns humans would not be able to see; and third, AI allows disparate data to be fused, permitting otherwise invisible security-relevant connections to be identified.Footnote 148
VII. Foreign Relations
Perhaps more important than the ability to lift the ‘fog of war’ through better reconnaissance might be the transformation of the role of information and trust in the conduct of foreign relations. Again, this aspect of AI overlaps but is distinct from the Internet. To highlight the enormity of the challenges posed by AI, it might be useful to recall the early years of the Internet. The first time I surfed the web was in the autumn of 1995. Email was known to exist but it was not used by anyone I knew; my own first email was only sent two years later in graduate school. That autumn, I had to call and book a time-slot at the central library of the University of London, the websites I managed to find were crude, took a god-awful time to load and one had to know their addresses or look them up in a physical, printed book.Footnote 149
My conclusion after that initial experience seemed clear: this thing would not catch on. I did not use it again for several years. After all, who would want to read a newspaper on a computer, waiting forever and scrambling through terrible layout? In a now-hilarious appearance on an American late-night show that year, the Microsoft founder Bill Gates responded to the host’s thinly-disguised dismissal by giving a fairly enduring definition of that ‘internet thing’: ‘Well, it’s becoming a place where people are publishing information. … It is the big new thing.’Footnote 150 Obviously, Gates was more clairvoyant than me. Indeed, the Internet would be the new big thing, but he understood that it would take some time until normal people like me could see its value.Footnote 151
Even after search-engines made the increasingly graphical web far more user-friendly, by 2000 the internet was still not mainstream and some journalists wondered whether it was ‘just a passing fad’.Footnote 152 Like many new cultural phenomena driven by technological innovation, those ‘in the know’ enjoyed their avant-garde status, as the editor of one of the early magazines serving this new demographic stressed: ‘Internet Underground was this celebration of this relatively lawless, boundless network of ideas we call the Internet. It assumed two things about its audience: 1) You were a fan [and] 2) you knew how to use it. Otherwise, the magazine wouldn’t have made much sense to you.’Footnote 153 The removal of physical, temporal, and pecuniary barriers to the sharing of information indeed created a ‘network of ideas’, opening new vistas to collective action, new interpretations of established civil liberties, and new conceptions of geography.Footnote 154 Early generations of technophiles ‘in the know’ conjured this non-corporeal geography as a utopia of unfettered information-sharing, non-hierarchical self-regulation, and self-realisation through knowledge. Then-prevailing conceptions of ‘cyberspace’ were characterised by scepticism of both government power and commercial interests, often espousing anarchist or libertarian attitudes towards community, seeing information as a commodity for self-realisation, not profit.Footnote 155
Early utopians stressed the opportunities created by this new, non-hierarchical ‘network of ideas’, which many perceived to be some kind of ‘samizdat on steroids’, subversive to authoritarian power and its attempts to control truth:Footnote 156 ‘The design of the original Internet was biased in favor of decentralization of power and freedom to act. As a result, we benefited from an explosion of decentralized entrepreneurial activity and expressive individual work, as well as extensive participatory activity. But the design characteristics that underwrote these gains also supported cybercrime, spam, and malice.’Footnote 157 Civilian internet pioneers extrapolated from these core characteristics of decentralisation and unsupervised individual agency a libertarian utopia in the true meaning of the word, a non-place or ‘virtual reality’ consisting of and existing entirely within a ‘network of ideas’. Here, humans could express themselves freely, assume new identities and interests. Unfettered by traditional territorial regimes, new norms and social mores would govern their activities towards personal growth and non-hierarchical self-organisation. Early mainstream descriptions of the Internet compared the novelty to foreign travel, highlighting emotional, cultural, and linguistic barriers to understanding:
The Internet is the virtual equivalent of New York and Paris. It is a wondrous place full of great art and artists, stimulating coffee houses and salons, towers of commerce, screams and whispers, romantic hideaways, dangerous alleys, great libraries, chaotic traffic, rioting students and a population that is rarely characterized as warm and friendly. … First-time visitors may discover that finding the way around is an ordeal, especially if they do not speak the language.Footnote 158
As the Internet became mainstream and eventually ubiquitous, many did, in fact, learn to ‘speak its language’, however imperfectly.Footnote 159 The advent of AI can be expected to bring changes of similar magnitude, requiring individuals and our governing institutions to again ‘learn its language’. AI is altering established notions of verification and perceptions of truth. The ability to obtain actionable intelligence despite formidable cultural and organisational obstaclesFootnote 160 is accompanied by the ability to automatically generate realistic photographs, video, and text, enabling information warfare of hitherto unprecedented scale, sophistication, and deniability.Footnote 161 Interference in the electoral and other domestic processes of competing nations is not new, but the advent of increasingly sophisticated AI is permitting ‘social engineering’ in novel ways.
First, it has become possible to attack large numbers of individuals with highly tailored misinformation through automated ‘chatbots’ and similar approaches. Second, ‘deep fakes’ generated by sophisticated AI are increasingly able to deceive even aware and skilled individuals and professional gatekeepers.Footnote 162 Third, the well-known ‘Eliza effect’, whereby human beings endow inanimate objects such as computer interfaces with human emotions, that is, imbue machines with ‘social’ characteristics, permits the deployment of apparently responsive agents at scale, offering unprecedented opportunities and corresponding risks not only for ‘phishing’ and ‘honey trap’ operations,Footnote 163 but especially for circumventing an enemy government by directly targeting its population.Footnote 164
A distinct problem fueled by similar technological advances is the ability to impersonate representatives of governments, thereby undermining trust and creating cover for competing narratives to develop.Footnote 165 Just as with any other technology, it is reasonable to expect that corresponding technological advances will eventually make it possible to detect and defuse artificially created fraudulent information.Footnote 166 It is furthermore reasonable to expect that social systems will likewise adapt and create more sophisticated consumers of such information, better able to resist misinformation. Such measures have been devised during past wars and ideological conflicts, and it is therefore correct to state that ‘deep fakes don’t create new problems so much as make existing problems worse’.Footnote 167 Jessica Silbey and Woodrow Hartzog are, of course, correct that the cure to the weaponisation of misinformation lies in strengthening and creating institutions tasked with ‘gatekeeping’ and validation:
We need to find a vaccine to the deep fake, and that will start with understanding that authentication is a social process sustained by resilient and inclusive social institutions. … it should be our choice and mandate to establish standards and institutions that are resilient to the con. Transforming our education, journalism, and elections to focus on building these standards subject to collective norms of accuracy, dignity, and democracy will be a critical first step to understanding the upside of deep fakes.Footnote 168
The manner in which this is to be achieved goes beyond the scope of this chapter, but it is important to keep in mind that both accurate information and misinformation have long been part of violent and ideological conflict.Footnote 169 Their transformation by the advent of AI must, therefore, be taken into account for a holistic assessment of its impact on national security and its legal regulation. This is particularly pertinent due to the rise of legal argumentation not only as a corollary of armed conflict but as its, often asymmetric, substitute in the form of ‘lawfare’,Footnote 170 as well as the evident importance of legal standards for such societal ‘inoculation’ to be successful.Footnote 171
VIII. Economics
National security is affected by economic competitiveness, which supplies the fiscal and material needs of military defence. The impact of the ongoing revolution in AI on existing labour markets and productive patterns is likely to be transformational.Footnote 172 The current debate is reminiscent of earlier debates about the advent of robotics and automation in production. Where that earlier debate focused on the impact on the bargaining power and medium-term earning potential of blue-collar workers, AI is also threatening white-collar workers, who hitherto seemed relatively secure from cross-border wage arbitrage as well as automation.Footnote 173 In a competitive arena, whether capitalism for individual firms or anarchy for nations, the spread of innovation is not optional but a logical consequence of the ‘socialising effect’ of any competitive system:Footnote 174 ‘Machine learning is a cool new technology, but that’s not why businesses embrace it. They embrace it because they have no choice.’Footnote 175
This embrace of AI has at least three important national security implications, with corresponding regulatory challenges and opportunities. First, dislocations resulting from the substitution of machines for human labour have destabilising effects on social cohesion and political stability, both domestic and international.Footnote 176 These dislocations have to be managed, including through the use of proactive regulation meant to further positive effects while buffering negative consequences.Footnote 177 The implications of mass unemployment resulting from this new wave of automation are potentially different from those of earlier cycles of technological disruption because it could lead to permanent unemployability of large sectors of the population, rendering them uncompetitive at any price. This could spell a form of automation-induced ‘resource curse’ affecting technologically advanced economies,Footnote 178 suddenly suffering from the socio-economic-regulatory failings historically associated with underdeveloped extractive economies.Footnote 179
Second, the mastery of AI has been identified by all major economic powers as central to maintaining their relative competitive posture.Footnote 180 Consequently, the protection of intellectual property and the creation of a conducive regulatory, scientific, and investment climate to nurture the sector have themselves increasingly become key areas of competition between nations and trading blocs.Footnote 181
Third, given the large overlap between civilian and military sectors, capabilities in AI developed in one are likely to affect the nation’s position in the other.Footnote 182 Given inherent technological characteristics, especially scalability and the drastic reduction of marginal costs, and the highly disruptive effect AI can have on traditional military capabilities, the technology has the potential to drastically affect the relative military standing of nations quite independent of conventional measures such as size, population, hardware, etc.: ‘Small countries that develop a significant edge in AI technology will punch far above their weight.’Footnote 183
IX. Conclusion
Like many previous innovations, the transformational potential of AI has long been ‘hyped’ by members of the epistemic communities directly involved in its technical development. There is a tendency among such early pioneers to overstate potential, minimise risk, and alienate those not ‘in the know’ by elitist attitudes, incomprehensible jargon, and unrealistic postulations. As the comparison with cyberspace has shown, it is difficult to predict with accuracy what the likely impact of AI will be. Whatever its concrete form, AI is almost certain to transform many aspects of our lives, including national security.
This transformation will affect existing relative balances of power and modes of fighting and thereby call into question the existing normative acquis, especially regarding international humanitarian law. Given the enormous potential benefits and the highly dynamic current stage of technological innovation and intense national competition, the prospects for international regulation, let alone outright bans, are slim. This might appear to be more consequential than it is, because much of the transformation will occur in operational, tactical, and strategic areas that can be subsumed under an existing normative framework that is sufficiently adaptable and broadly adequate.
The risk of existential danger posed by the emergence of super-intelligence is real but perhaps overdrawn. It should not detract from the laborious task of applying existing international and constitutional principles to the concrete regulation of more mundane narrow AI in the national security field.