Cambridge Forum on AI: Law and Governance publishes content focused on the governance of artificial intelligence (AI), from law, rules, and regulation through to ethical behaviour, accountability, and responsible practice. It also examines the societal impact of such governance, along with how AI can be used responsibly to benefit the legal, corporate, and other sectors.
Following the emergence of generative AI and broader general-purpose AI models, there is a pressing need to clarify the role of governance, to consider mechanisms for the oversight and regulation of AI, and to discuss the interrelationships and shifting tensions between the legal and regulatory landscape, ethical implications, and evolving technologies. Cambridge Forum on AI: Law and Governance uses themed issues to bring together voices from law, business, applied ethics, computer science, and many other disciplines to explore the social, ethical, and legal impact of AI, data science, and robotics, and the governance frameworks they require.
Cambridge Forum on AI: Law and Governance is part of the Cambridge Forum journal series, which progresses cross-disciplinary conversations on issues of global importance.
The journal invites submissions for the upcoming Themed Issue: AI and the Decision to Go to War, Guest Edited by Toni Erskine (Australian National University) and Steven E. Miller (Harvard University).
The deadline for submissions is 20 January 2025.
When it comes to AI-enabled systems employed in the military domain, the attention of both policy-makers and scholars has been overwhelmingly directed towards their impact on the conduct of war. The predominant focus to date has been on the use of ‘lethal autonomous weapons systems’ (LAWS) and decision support systems that rely on big data analytics and machine learning to recommend targets – after the decision to engage in war has been taken. By contrast, this Themed Issue aims to shift our gaze from the impact of AI on the battlefield to the relatively neglected prospect of AI in the war-room. In other words, its object of analysis is the anticipated increasing influence of AI-enabled systems on state-level decision making on the resort to war.
If we look at the proliferation of AI-driven systems to aid decision making in a host of other realms, we have reason to predict that such systems will also steadily infiltrate deliberations over the very initiation of war. There are two general ways that current AI-enabled technologies could impact resort-to-force decision making: through decision-support systems that would inform (and ideally enhance) human deliberations over whether and when to wage war; and, alternatively, through automated self-defence, whereby AI-driven systems would temporarily displace human actors by both calculating and implementing decisions on the resort to force (in the context of defence against cyber attacks, for example, or, even more controversially, a nuclear response to a first strike). The purpose of this Themed Issue is: 1) to analyse risks and/or opportunities associated with employing AI-enabled systems to guide decisions on war initiation in either of these two ways; and 2) to propose approaches to mitigating these risks or enhancing these opportunities, including through policy recommendations.
In pursuing these endeavours, contributions will variously tackle questions related to: international law; the ethics of war (including jus ad bellum considerations within the just war tradition); regulation in the context of both the design and use of such AI-enabled systems; the effect on, and need to revise or reinforce, institutional norms, structures, and chains of command; what would constitute responsible practices; and judgements of the responsibilities and accountability of key actors, including with respect to the exercise of restraint.
Scholars with relevant expertise in AI or national/international security and war from any disciplinary perspective are invited to submit an article for consideration in the Themed Issue. Submissions are welcome from researchers at any career stage. In addition to the originality and significance of the argument, and the rigour of the underlying research, articles will be assessed on their clarity and accessibility to a diverse audience and, importantly, on their fit with the Themed Issue’s stated scope and purpose. On this final point, it will be important that submissions address the impact (or predicted impact) of AI-enabled systems specifically on decisions about resorting to war.
Submission guidelines
Cambridge Forum on AI: Law and Governance seeks to engage multiple subject disciplines and promote dialogue between policymakers and practitioners as well as academics. The journal therefore encourages authors to use an accessible writing style.
The short articles for this Themed Issue should each be between 4000 and 6000 words (including footnotes but excluding the bibliography). Please see the journal’s author instructions for more information.
Articles will be peer reviewed for both content and style. Accepted articles will be published digitally and open access in the journal.
All submissions should be made through the journal’s online peer review system. Authors should consult the journal’s author instructions prior to submission.
All authors will be required to declare any funding and/or competing interests upon submission. See the journal’s Publishing Ethics guidelines for more information.
Contacts
Questions regarding submission and peer review can be sent to the journal’s inbox at cfl@cambridge.org.
Specific queries regarding the scope and aims of this Themed Issue should be directed to Professor Toni Erskine, one of the two Guest Editors for this issue, at toni.erskine@anu.edu.au.