Call for Papers
Towards Trustworthy and Responsible AI in Adjudication
Cambridge Forum on AI: Law and Governance publishes content focused on the governance of artificial intelligence (AI), from law, rules and regulation through to ethical behaviour, accountability and responsible practice. It also examines the impact of such governance on society, along with how AI can be used responsibly to benefit the legal, corporate and other sectors.
Following the emergence of generative AI and broader general-purpose AI models, there is a pressing need to clarify the role of governance, to consider the mechanisms for oversight and regulation of AI, and to discuss the interrelationships and shifting tensions between the legal and regulatory landscape, ethical implications and evolving technologies. Cambridge Forum on AI: Law and Governance uses themed issues to bring together voices from law, business, applied ethics, computer science and many other disciplines to explore the social, ethical and legal impact of AI, data science and robotics, and the governance frameworks they require.
Cambridge Forum on AI: Law and Governance is part of the Cambridge Forum journal series, which progresses cross-disciplinary conversations on issues of global importance.
The journal invites submissions for the upcoming themed issue, Towards Trustworthy and Responsible AI in Adjudication, guest edited by Shu Li, Helga Molbæk-Steensig and Alberto Quintavalla. Abstracts of no more than 300 words should be emailed to li@law.eur.nl.
The deadline for abstract submissions is 1 June 2025.
Purpose and content of the themed issue
This themed issue deals with the trustworthy and responsible application of AI in adjudication. While studies on AI and adjudication have existed for decades, technological leaps in recent years have expanded the possible use cases for AI in adjudication, creating a need for both doctrinal and empirical studies of these new developments.
While some uses of AI in courts are well established and largely uncontroversial (e.g., automatic transcription, translation, anonymisation, document handling and tele-courts), ongoing projects involving AI directly in judges' decision-making raise several regulatory and ethical concerns. The new tools, which are already present in a range of jurisdictions, may aid in identifying relevant facts and legal precedents, suggesting judgment templates, determining the level of just satisfaction, and predicting recidivism risks. There are also moves towards using generative AI directly for judgment drafting. These new tools are predicted to offer substantial benefits by speeding up adjudication and addressing existing problems of human bias and discrimination. However, the use of AI in adjudication also entails risks of errors, hallucinations, algorithmic bias and security breaches, and it raises constitutional questions and concerns related to judicial independence and court legitimacy.
Building trustworthy and responsible AI has been recognised as a cornerstone of AI development in nearly all jurisdictions. How this principle is interpreted and implemented, however, largely depends on the specific application. When developing and deploying AI for the purpose of adjudication, trust and responsibility must be further embedded in the discussion of several fundamental issues, such as judicial power, the judicial system and judicial independence.
The themed issue invites contributions broadly investigating the risks and benefits of using AI in adjudication, including its implications for efficiency, human rights, democracy and the rule of law, but it particularly encourages pieces focusing on comparative and transversal issues such as:
- Constitutional law perspectives on using AI in adjudication
- The compatibility of AI with common law and civil law judicial systems
- Comparative perspectives on using AI in adjudication
- Interdisciplinary studies on trustworthy and responsible AI in adjudication
Important dates:
- Deadline for abstract submission (300 words maximum): 1 June 2025
- Notification of accepted abstract: 30 June 2025
- Symposium for accepted abstracts in Rotterdam: September / October 2025
- Deadline for final paper submission (only accepted abstracts): 1 December 2025
Submission guidelines
Cambridge Forum on AI: Law and Governance seeks to engage multiple subject disciplines and promote dialogue between policymakers and practitioners as well as academics. The journal therefore encourages authors to use an accessible writing style.
Authors have the option to submit a range of article types to the journal. Please see the journal’s author instructions for more information.
Articles will be peer reviewed for both content and style. Accepted articles will be published digitally and open access in the journal.
All submissions should be made through the journal’s online peer review system.
Authors should consult the journal’s author instructions prior to submission.
All authors will be required to declare any funding and/or competing interests upon submission. See the journal’s Publishing Ethics guidelines for more information.
Contacts
Questions regarding submission and peer review can be sent to the journal’s inbox at cfl@cambridge.org.