Experimental Regulation for AI Governance:
Closing the Legal Lag with Regulatory Sandboxes?
07 Nov 2024 to 01 Mar 2025
Cambridge Forum on AI: Law and Governance publishes content focused on the governance of artificial intelligence (AI) from law, rules, and regulation through to ethical behaviour, accountability and responsible practice. It also looks at the impact on society of such governance along with how AI can be used responsibly to benefit the legal, corporate and other sectors.

Following the emergence of generative AI and broader general-purpose AI models, there is a pressing need to clarify the role of governance, to consider the mechanisms for oversight and regulation of AI, and to discuss the interrelationships and shifting tensions between the legal and regulatory landscape, ethical implications and evolving technologies. Cambridge Forum on AI: Law and Governance uses themed issues to bring together voices from law, business, applied ethics, computer science and many other disciplines to explore the social, ethical and legal impact of AI, data science, and robotics and the governance frameworks they require.

Cambridge Forum on AI: Law and Governance is part of the Cambridge Forum journal series, which progresses cross-disciplinary conversations on issues of global importance.

The journal invites submissions for the upcoming Themed Issue: Experimental Regulation for AI Governance: Closing the Legal Lag with Regulatory Sandboxes? The issue will be Guest Edited by Hannah Ruschemeier (University of Hagen). 

The deadline for full paper submissions is 1 March 2025.

Aim and focus of the issue, selection criteria

Tech is fast – law is slow? The dynamics of digital transformation, particularly the development of AI, appear to conflict with the reactive nature of effective legal regulation, which often lags behind current developments because the democratic law-making process requires compromise and adherence to constitutional procedures. Beyond well-documented AI risks such as discrimination, bias, digital vulnerability, privacy infringements, liability gaps, epistemic concerns, and ethical dilemmas, the opacity and complexity of these technologies further complicate regulatory efforts. Moreover, the detection, analysis, and enforcement of individual and collective rights infringements continue to rely heavily on analogue methods and, on the regulators' side, on frequently outdated technologies. This regulatory lag risks leaving the law trailing technological progress, potentially impeding the enforcement of legal standards.

In addressing how law, as a regulatory tool, might respond to these challenges, the search for innovative legal instruments becomes crucial. Experimental regulation, which combines empirical evidence with legal requirements within a more flexible model than traditional legislation, is one potential solution. Regulatory sandboxes are a key tool of experimental regulation. Here, innovative products and services can be tested within a limited timeframe in close cooperation with the competent supervisory authority, often with the application of substantive legal exemptions. Regulators can use regulatory sandboxes to create a test environment for new technologies and, during this experimental phase, refrain from enforcing the original legal requirements, for example, in order to gain insights into the subject of regulation.

The aim of the themed issue is to provide an in-depth, interdisciplinary and comparative analysis of the potential of regulatory sandboxes for effective, democratic and resilient regulation under the AI Act in the EU and beyond. To that end, the issue strives to include different perspectives: first, interdisciplinary perspectives beyond law, especially from political science, economics, ethics and STS; second, a combination of case studies and research articles to provide a practical as well as an academic view on the topic; and third, a mapping of specific non-EU perspectives to establish a basis for comparing the different legal bases and enforcement mechanisms, which is also meant to promote future research on the “Brussels effect” of the AI Act.

Consequently, the underlying question is whether regulatory sandboxes are an effective measure in AI governance and a suitable tool to close the legal lag. The issue will therefore explore the concept of regulatory sandboxes from various disciplinary and legal perspectives, emphasizing cross-jurisdictional analysis.

The themed issue will follow two thematic and methodological tracks: 

Track (I) Concrete examples, case studies and areas of application of regulatory sandboxes

Potential contributions to the issue could address, but are not limited to, topics such as:

  • Legal Comparative Perspective: legal framework, implementation and governance aspects of Regulatory Sandboxes in the EU Member States
  • Case Studies from non-EU countries
  • Case Studies from non-legal perspectives


Track (II) Overarching analyses 

Due to recent legal developments, emphasis will be placed on Track I, promising up-to-date insights that extend beyond previous research. Track II will serve to complement the existing literature on regulatory sandboxes, now contextualized under the AI Act.

Potential contributions to the issue could address, but are not limited to, topics such as:

  • Regulatory Sandboxes and EU-administration law
  • Critical Perspectives on Sandboxes and AI regulation 
  • Regulatory Sandboxes and international law, especially the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Art. 13)


Submissions are welcome from researchers at any career stage. In addition to the originality and significance of the argument, and the rigour of the underlying research, articles will be assessed on their clarity and accessibility to a diverse audience and, importantly, on their fit with the Themed Issue’s stated scope and purpose.

Submission guidelines

Cambridge Forum on AI: Law and Governance seeks to engage multiple subject disciplines and promote dialogue between policymakers and practitioners as well as academics. The journal therefore encourages authors to use an accessible writing style.

Authors have the option to submit a range of article types to the journal. Please see the journal’s author instructions for more information. 

Articles will be peer reviewed for both content and style. Articles will appear digitally and open access in the journal.

All submissions should be made through the journal’s online peer review system. Authors should consult the journal’s author instructions prior to submission.

All authors will be required to declare any funding and/or competing interests upon submission. See the journal’s Publishing Ethics guidelines for more information. 

Contacts 

Questions regarding submission and peer review can be sent to the journal’s inbox at cfl@cambridge.org.