Experimental legal regimes, notably regulatory sandboxes, seek to promote technological innovation while at the same time ensuring consumer protection against unsafe or unsuitable products and services. But in doing so, they may not always be able to prevent harm to consumers. This chapter explores the relationship between regulatory sandboxes and private law. Given that within such sandboxes the participating firms may benefit from regulatory relief, it considers whether, and, if so, to what extent traditional private law nevertheless remains and should remain applicable to their activities during the experiment. It develops three models of the relationship between regulatory sandboxes and private law – separation, substitution, and complementarity – and considers their key characteristics, manifestations, and implications in the context of European private law. The chapter reveals the tension between, on the one hand, fostering technology-enabled innovation, legal certainty, and uniformity and, on the other hand, realising interpersonal justice and individual fairness while leaving room for diversity. It also assesses each model in terms of its potential to reconcile these competing considerations and draws lessons from this assessment for EU and national legislators and courts.
Like other areas of law and legal practice, the arbitration world is beginning to grapple with how to harness the potential of artificial intelligence (AI) while managing its risks. Analogizing to existing AI tools for analyzing case law and judicial behavior, as well as to algorithmic hiring applications, this chapter explores how similar technology could be used to improve the process of selecting investment arbitrators. As criticisms of investment arbitration continue to mount, a new selection tool could help to address systemic concerns about fairness, diversity, and legitimacy. Such a tool could level the playing field for parties in terms of access to information about prospective arbitrators as well as expand and diversify the pool of viable candidates. In addition to providing guidance for the parties making their own selections, the suggested tool could be used by arbitral institutions to help with appointing the tribunal president or even, with the parties’ consent, the entire panel. The chapter provides a framework for thinking through questions of design and implementation and concludes by addressing potential challenges and objections.
Concerns around misinformation and disinformation have intensified with the rise of AI tools, with many claiming this is a watershed moment for truth, accuracy and democracy. In response, numerous laws have been enacted in different jurisdictions. Addressing Misinformation and Disinformation introduces this new legal landscape and charts a path forward. The Element identifies avoidance or alleviation of harm as a central legal preoccupation, outlines technical developments associated with AI and other technologies, and highlights social approaches that can support long-term civic resilience. Offering an expansive interdisciplinary analysis that moves beyond narrow debates about definitions, Addressing Misinformation and Disinformation shows how law can work alongside other technical and social mechanisms, as part of a coherent policy response.
For far too long, tech titans peddled promises of disruptive innovation – fabricating benefits and minimizing harms. The promise of quick and easy fixes overpowered a growing chorus of critical voices, driving a sea of private and public investments into increasingly dangerous, misguided, and doomed forms of disruption, with the public paying the price. But what's the alternative? Upgrades – evidence-based, incremental change. Instead of continuing to invest in untested, high-risk innovations, constantly chasing outsized returns, upgraders seek a more proven path to proportional progress. This book dives deep into some of the most disastrous innovations of recent years – the metaverse, cryptocurrency, home surveillance, and AI, to name a few – while highlighting some of the unsung upgraders pushing real progress each day. Timely and corrective, Move Slow and Upgrade pushes us past the baseless promises of innovation, towards realistic hope.
This talk examines how corpus linguistics and artificial intelligence hold the potential to reshape contemporary language learning ecologies. It argues that the rapid normalisation of generative AI has intensified the need for pedagogical models that combine low-friction access to language support with transparent methods grounded in attested usage. Drawing on ecological perspectives and recent empirical research, the talk shows how AI-driven environments expand opportunities for language learning while creating risks related to opacity and over-reliance. Corpus linguistics, data-driven learning (DDL) and corpus literacy offer a complementary foundation by providing traceable evidence, reproducible analyses, and practices that foster learners’ critical judgement. Two convergence scenarios are proposed: AI as an extension of DDL, and corpus literacy as the operational core of critical AI literacy. Together, these scenarios illustrate how open-box pedagogies can reconcile responsiveness and accountability, ensuring that AI-mediated learning remains anchored in transparent processes and empirically grounded language knowledge.
This article critically examines the integration of artificial intelligence (AI) into nuclear decision-making processes and its implications for deterrence strategies in the Third Nuclear Age. While realist deterrence logic assumes that the threat of mutual destruction compels rational actors to act cautiously, AI disrupts this by adding speed, opacity and algorithmic biases to decision-making processes. The article focuses on the case of Russia to explore how different understandings of deterrence among nuclear powers could increase the risk of misperceptions and inadvertent escalation in an AI-influenced strategic environment. I argue that AI does not operate in a conceptual vacuum: the effects of its integration depend on the strategic assumptions guiding its use. As such, divergent interpretations of deterrence may render AI-supported decision making more unpredictable, particularly in high-stakes nuclear contexts. I also consider how these risks intersect with broader arms race dynamics. Specifically, the pursuit of AI-enabled capabilities by global powers is not only accelerating military modernisation but also intensifying the security dilemma, as each side fears falling behind. In light of these challenges, this article calls for greater attention to conceptual divergence in deterrence thinking, alongside transparency protocols and confidence-building measures aimed at mitigating misunderstandings and promoting stability in an increasingly automated military landscape.
This article investigates the profound impact of artificial intelligence (AI) and big data on political and military deliberations concerning the decision to wage war. By conceptualising AI as part of a broader, interconnected technology ecosystem – encompassing data, connectivity, energy, compute capacity and workforce – the article introduces the notion of “architectures of AI” to describe the underlying infrastructure shaping contemporary security and sovereignty. It demonstrates how these architectures concentrate power within a select number of technology companies, which increasingly function as national security actors capable of influencing state decisions on the resort to force. The article identifies three critical factors that collectively alter the calculus of war: (i) the concentration of power across the architectures of AI, (ii) the diffusion of national security decision making, and (iii) the role of AI in shaping public opinion. It argues that, as technology companies amass unprecedented control over digital infrastructure and information flows, most nation states – particularly smaller or less technologically advanced ones – experience diminished autonomy in decisions to use force. The article specifically examines how technology companies can coerce, influence or incentivise the resort-to-force decision making of smaller states, thereby challenging traditional notions of state sovereignty and international security.
As globalization spreads, English has become a lingua franca. Emerging technologies (e.g., Artificial Intelligence) now make learning English more accessible, affordable, and tailored to each learner. Social media and digital platforms immerse users in English, offering interactive, personalized, and engaging experiences that fuel Informal Digital Learning of English (IDLE). Research spanning more than ten regions has found that IDLE brings a wide range of benefits, including greater motivation, higher academic achievement, and stronger speaking skills. Today, IDLE is being woven into schools and local communities through partnerships among teachers, NGOs, and industry leaders. This volume seeks to (a) showcase the latest research on IDLE, (b) highlight examples of IDLE in educational and community settings, and (c) chart future pathways for practice, research, and collaboration.
The Council of Europe has very recently adopted the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law. This article provides an initial analysis of the CoE AI Convention. It emphasises the necessity of understanding the CoE AI Convention within the context of its adoption as an international treaty negotiated within the Council of Europe. This context has shaped the treaty’s scope, in particular how far it covers the use of AI systems by both public authorities and private actors. A detailed review of the available negotiation documents reveals that the concrete level of protection offered by the Convention has been lowered. This includes the risk-based approach, which shapes the obligations undertaken by States under the treaty. This approach is explained and contrasted with the approach under the EU AI Act. The argument that emerges is that the absence of a categorisation of risk levels in the treaty is related to its higher level of abstraction, which does not necessarily imply less robust obligations. The content of these obligations is also clarified in light of the treaty’s requirement of consistency with human rights law. An argument is advanced that the principles formulated in the treaty – human dignity and individual autonomy, transparency and oversight, accountability, non-discrimination, data protection, reliability, risk management – can offer interpretative guidance for the development of human rights standards.
The entangled relations of humanity’s natural and digital ecosystems are discussed in terms of the risk-uncertainty conundrum. The discussion focuses on global warming from the perspective of the small world of geoengineering, with a particular focus on geothermal energy, marine geoengineering, and the political economy of mitigation and adaptation (section 1). It inquires into the large world of the biosphere, Anthropocene, and uncertainties created by the overlay of human and geological time (section 2). And it scrutinizes the technosphere, consciousness, and language as humanity’s arguably most important cultural technology (section 3).
Taking the child’s perspective means looking at the world through the eyes of the infant or the child. This can help us to better understand play practices and better plan for children’s learning and development. But how do we do this in practice? In this chapter we explore these ideas and help you design programs by offering insight into the importance of documenting infants’ and young children’s perspectives on their play, and by identifying a range of practical ways to find out what those perspectives are.
If the history of human rights shows anything, it shows that claim-making has no predetermined agents, and this volume nods to the rights of the non-human in a chapter by Jim Davies, who analyzes what might be at stake in the recognition of artificial intelligence not just as an instrumental tool, but as a rights-bearing claimant in its own right. Indeed, Davies pursues this possibility through an analogy with the rise of entitlements of non-human nature, especially non-human animals.
Technological disruption leads to discontent in the law over the limited remedies available under private law. The source of the problem is a ‘private law’ model that assumes that the function of law is to correct wrongs by compensating individuals who are harmed. The model is thus based on (i) individual claimants and (ii) financial redress. If we copy this private law model into our regulatory regimes for new technologies, our governance remedies will fall short. On the one hand, a single use of AI can affect a large number of people; on the other hand, not all offences can be cured by awarding money damages. It is therefore necessary to rethink private remedies in the face of AI wrongs in order to make the law effective. To achieve this, the mantra of individual compensation has to give way to a social perspective, including the use of non-pecuniary measures to provide effective remedies for AI wrongs.
This chapter examines some ways in which human agency might be affected by a transition from legal regulation to regulation by AI. To do that, it elucidates an account of agency, distinguishing it from related notions like autonomy, and argues that this account of agency is both philosophically respectable and fits common sense. With that account of agency in hand, the chapter then examines two different ways – one beneficial, one baleful – in which agency might be impacted by regulation by AI, focussing on some agency-related costs and benefits of transforming private law from its current rule-based regulatory form to an AI-enabled form of technological management. It concludes that there are few grounds to be optimistic about the effects of such a transition and good reason to be cautious.
Being Human in the Digital World is a collection of essays by prominent scholars from various disciplines exploring the impact of digitization on culture, politics, health, work, and relationships. The volume raises important questions about the future of human existence in a world where machine readability and algorithmic prediction are increasingly prevalent and offers new conceptual frameworks and vocabularies to help readers understand and challenge emerging paradigms of what it means to be human. Being Human in the Digital World is an invaluable resource for readers interested in the cultural, economic, political, philosophical, and social conditions that are necessary for a good digital life. This title is also available as Open Access on Cambridge Core.
Interest in the use of chatbots powered by large language models (LLMs) to support women and girls in conflicts and humanitarian crises, including survivors of gender-based violence (GBV), appears to be increasing. Chatbots could offer a last-resort solution for GBV survivors who are unable or unwilling to access relevant information and support in a safe and timely manner. With the right investment and guard-rails, chatbots might also help treat some symptoms related to mental health and psychosocial conditions, extending mental health and psychosocial support (MHPSS) to crisis-affected communities. However, the use of chatbots can also increase risks for individual users – for example, generating unintended harms when a chatbot hallucinates or produces errors. In this paper, we critically examine the opportunities and limitations of using LLM-powered chatbots that provide direct care and support to women and girls in conflicts and humanitarian crises, with a specific focus on GBV survivors. We find some evidence in the global North to suggest that the use of chatbots may reduce self-reported feelings of loneliness for some individuals, but we find less evidence on the role and effectiveness of chatbots in crisis counselling and treating depression, post-traumatic or somatic symptomology, particularly as it relates to GBV in emergencies or other traumatic events that occur in armed conflicts and humanitarian crises. Drawing on key expert interviews as well as evidence and research from adjacent scholarship – such as feminist AI, trauma treatment, GBV, and MHPSS in conflicts and emergencies – we conclude that the potential benefits of GBV-related, AI-enabled talk therapy chatbots do not yet outweigh their risks, particularly when deployed in high-stakes scenarios and contexts such as armed conflicts and humanitarian crises.
Two modern trends in insurance are data-intensive underwriting and behaviour-based insurance. Data-intensive underwriting means that insurers analyse more data for estimating the claim cost of a consumer and for determining the premium based on that estimation. Insurers also offer behaviour-based insurance. For example, some car insurers use artificial intelligence (AI) to follow the driving behaviour of an individual consumer in real time and decide whether to offer that consumer a discount. In this paper, we report on a survey of the Dutch population (N = 999) in which we asked people’s opinions about examples of data-intensive underwriting and behaviour-based insurance. The main results include: (i) If survey respondents find an insurance practice unfair, they also find the practice unacceptable. (ii) Respondents find almost all modern insurance practices that we described unfair. (iii) Respondents find practices fairer when they can influence the premium themselves. (iv) If respondents find it illogical to base the premium on a certain consumer characteristic, they also find using that characteristic unfair. (v) Respondents find it unfair if an insurer offers an insurance product only to a specific group. (vi) Respondents find it unfair if an insurance practice leads to the poor paying more. We also reflect on the policy implications of the findings.
This short research article interrogates the rise of digital platforms that enable ‘synthetic afterlives’, with a focus on how deathbots – AI-driven avatar interactions grounded in personal data and recordings – reshape memory practices. Drawing on socio-technical walkthroughs of four platforms – Almaya, HereAfter, Séance AI, and You, Only Virtual – we analyse how they frame, archive, and algorithmically regenerate memories. Our findings reveal a central tension: between preserving the past as a fixed archive and continually reanimating it through generative AI. Our walkthroughs demonstrate how these services commodify remembrance, reducing memory to consumer-driven interactions designed for affective engagement while obscuring the ethical, epistemological and emotional complexities of digital commemoration. In doing so, they enact reductive forms of memory that are embedded within platform economies and algorithmic imaginaries.
Section 230 of the Communications Decency Act is often called "The Twenty-Six Words That Created the Internet." This 1996 law grants platforms broad legal immunity against claims arising from both third-party content that they host, and good-faith content moderation decisions that they make. Most observers agree that without Section 230 immunity, or some variant of it, the modern internet and social media could not exist. Nonetheless, Section 230 has been subject to vociferous criticism, with both Presidents Biden and Trump having called for its repeal. Critics claim that Section 230 lets platforms have it both ways, leaving them free to host harmful content but also to block any content they object to. This chapter argues that criticisms of Section 230 are largely unwarranted. The diversity of the modern internet, and ability of ordinary individuals to reach broad audiences on the internet, would be impossible without platform immunity. As such, calls for repeal of or major amendments to Section 230 are deeply unwise. The chapter concludes by pointing to important limits on Section 230 immunity and identifying some narrow amendments to Section 230 that may be warranted.
The integration of artificial intelligence (AI)-driven technologies into peace dialogues offers both innovative possibilities and critical challenges for contemporary peacebuilding practice. This article proposes a context-sensitive taxonomy of digital deliberation tools designed to guide the selection and adaptation of AI-assisted platforms in conflict-affected environments. Moving beyond static typologies, the framework accounts for variables such as scale, digital literacy, inclusivity, security, and the depth of AI integration. By situating digital peace dialogues within broader peacebuilding and digital democracy frameworks, the article examines how AI can enhance participation, scale deliberation, and support knowledge synthesis, while also highlighting emerging concerns around algorithmic bias, digital exclusion, and cybersecurity threats. Drawing on case studies involving the United Nations (UN) and civil society actors, the article underscores the limitations of one-size-fits-all approaches and makes the case for hybrid models that balance AI capabilities with human facilitation to foster trust, legitimacy, and context-responsive dialogue. The analysis contributes to peacebuilding scholarship by engaging with the ethics of AI, the politics of digital diplomacy, and the sustainability of technological interventions in peace processes. Ultimately, the study argues for a dynamic, adaptive approach to AI integration, continuously attuned to the ethical, political, and socio-cultural dimensions of peacebuilding practice.