Language is the natural currency of most social communication. Until the emergence of more powerful computational methods, it simply was not feasible to measure its use in mainline social psychology. We now know that language can reveal behavioral evidence of mental states and personality traits, as well as clues to the future behavior of individuals and groups. In this chapter, we first review the history of language research in social personality psychology. We then survey the main methods for deriving psychological insights from language (ranging from data-driven to theory-driven, naturalistic to experimental, qualitative to quantitative, holistic to granular, and transparent to opaque) and describe illustrative examples of findings from each approach. Finally, we present our view of the new capabilities, real-world applications, and ethical and psychometric quagmires on the horizon as language research continues to evolve in the future.
Nigeria has a significant gender gap in financial inclusion, with women disproportionately represented among the financially excluded. Artificial intelligence (AI) powered financial technologies (fintech) present distinctive advantages for enhancing women’s inclusion, including efficiency gains, reduced transaction costs, and personalized services tailored to women’s needs. Nonetheless, AI harbours a paradox: while it promises to address financial exclusion, it can also inadvertently perpetuate and amplify gender bias. The critical question is thus: how can AI effectively address the challenges of women’s financial exclusion in Nigeria? Using publicly available data, this research undertakes a qualitative analysis of AI-powered fintech services in Nigeria. Its objective is to understand how innovations in financial services correspond to the needs of potential users such as unbanked or underserved women. The research finds that introducing innovative financial services and technology is insufficient to ensure inclusion. Financial inclusion requires the availability, accessibility, affordability, appropriateness, and sustainability of services, their alignment with the needs of potential users, and policy-driven strategies that aid inclusion.
In the literature, there are polarized views regarding the capacity of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values are not attributable to inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. In response to the call to embed ethical values in technology, this article explores how AI and other adjacent technologies are designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out—respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework and validating via Focus Group Discussion, this study reveals novel findings: first, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values—respect for diversity and ethnic neutrality—help ground the pursuit of social justice.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two remain relatively understudied, as do, more importantly, the conditions under which the quality of trustworthiness might reliably lead to the placing of ‘well-directed’ trust. In this paper, we argue that this challenge for the creation of ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and suggest that its resolution can be informed by a multidisciplinary approach that draws on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling theory paradigm offers an effective solution to the TRP, one we believe will be foundational to whether and how rapidly improving technologies are integrated in the healthcare space. We suggest that solving the TRP will not be possible without an interdisciplinary approach, and we propose further avenues of inquiry that we believe will be fruitful.
The Conclusion provides a very brief recap of the issues discussed in the preceding chapters. It reflects on the larger context of regulatory change, and touches upon contemporary challenges of regulation such as the role of gender, race, sustainability, and future generations in the regulatory process.
This chapter offers an introduction to the book. It defines regulation, distinguishing it from other concepts such as governance. We define regulation as ‘intentional, organised attempts to manage or control risk or the behaviours of a different party through the exercise of authority, usually through the use of mechanisms of standard-setting, monitoring and information-gathering and behaviour modification to address a collective tension or problem’. The Introduction reflects upon the most important changes in regulation in the last two decades and the growing relevance of regulation in society. The chapter explains significant changes in the practice and context of regulation that have occurred since the first edition was published.
Technological change often prompts calls for regulation. Yet formulating regulatory policy in relation to rapidly changing technology is complex. It requires an understanding of the politics of technology, the complexity of the innovation process, and its general impact on society. Chapter 3 introduces a variety of academic literatures across the humanities, law and the social sciences that offer insights on understanding technological change with direct relevance to the challenges of regulating new and emerging technology. The chapter discusses different strands of scholarship, ranging from the history of technology and innovation studies to the growing field of law and technology, which have until now remained largely fragmented and siloed, focusing primarily on digital technologies.
This study focuses on the practicalities of establishing and maintaining AI infrastructure, as well as the considerations for responsible governance by investigating the integration of a pre-trained large language model (LLM) with an organisation’s knowledge management system via a chat interface. The research adopts the concept of “AI as a constituted system” to emphasise the social, technical, and institutional factors that contribute to AI’s governance and accountability. Through an ethnographic approach, this article details the iterative processes of negotiation, decision-making, and reflection among organisational stakeholders as they develop, implement, and manage the AI system. The findings indicate that LLMs can be effectively governed and held accountable to stakeholder interests within specific contexts, specifically, when clear institutional boundaries facilitate innovation while navigating the risks related to data privacy and AI misbehaviour. Effective constitution and use can be attributed to distinct policy creation processes to guide AI’s operation, clear lines of responsibility, and localised feedback loops to ensure accountability for actions taken. This research provides a foundational perspective to better understand algorithmic accountability and governance within organisational contexts. It also envisions a future where AI is not universally scaled but consists of localised, customised LLMs tailored to stakeholder interests.
Contemporary life relies on regulation. The quality and safety of the water we drink, the food we eat, and the social media applications we use are all governed by multiple regulatory regimes. Although rooted in law, regulation is a multidisciplinary endeavour. Debates about regulation, particularly in the face of rapid change and the emergence of new 'risks', are now commonplace. Despite extensive scholarship, regulation is often poorly understood, even by policy-makers, with unintended and even disastrous consequences. This book offers a critical introduction to core theories, concepts, methods, tools, and techniques of regulation, including regulatory policy, instruments, enforcement, compliance, accountability and legitimacy. Weaving extracts from texts drawn from many disciplines with accessible commentary, it introduces this important field to students, scholars, and practitioners in a scholarly yet accessible and engaging manner with discussion questions and additional readings for those seeking to deepen their knowledge.
This chapter introduces social scientific perspectives and methods applicable to observing the relationship between artificial intelligence (AI) and religion. It discusses the contributions that anthropological and sociological approaches can make to this entanglement of two modern social phenomena, while also drawing attention to the inherent biases and perspectives that both fields bring with them due to their histories. Examples of research on religion and AI are highlighted, especially when they demonstrate agile and new methodologies for engaging with AI in its many applications, including but not limited to online worlds, multimedia formats, games, social media and the new spaces made by technological innovation, such as the platforms underpinning the gig economy. All these AI-enabled spaces can be entangled with religious and spiritual conceptions of the world. This chapter also aims to expand upon the relationship between AI and religion as it is perceived as a general concept or object within human society and civilisation. It explains how both anthropology and sociology can provide frameworks for conceptualising that relationship and give us ways to account for our narratives of secularisation – informed by AI development – that see religion as a remnant of a prior, less rational stage of human civilisation.
Artificial intelligence (AI) as an object and term remains enmeshed in our imaginaries, narratives, institutions and aspirations. AI has that in common with the other object of discussion in this Cambridge Companion: religion. But beyond such similarities in form and reception, we can also speak to how entangled these two objects have been, and are yet still becoming, with each other. This introductory chapter explores the difficulty of definitions and the intricacies of the histories of these two domains and their entanglements. It initially explores this relationship through the religious narratives and tropes that have had a role to play in the formation of the field of AI, in its discursive modes. It examines the history of AI and religion through the language and perspectives of some of the AI technologists and philosophers who have employed the term ‘religion’ in their discussions of the technology itself. Further, this chapter helps to set the scene for the larger conversation on religion and AI of this volume by demonstrating some of the tensions and lacunae that the following chapters address in greater detail.
The global and historical entanglements between artificial intelligence (AI)/robotic technologies and Buddhism, as a lived religion and philosophical tradition, are significant. This chapter sets out three key sites of interaction between Buddhism and AI/robotics. First, Buddhism offers an ontological model of mind (and body) that describes the conditions for what constitutes artificial life. Second, Buddhism defines the boundaries of moral personhood and thus the nature of interactions between human and non-human actors. Finally, Buddhism can be used as an ethical framework to regulate and direct the development of AI/robotics technologies. The chapter argues that Buddhism provides an approach to technology that is grounded in the interdependence of all things, which gives rise to both compassion and an ethical commitment to alleviate suffering.
Technology has been an integral part of biological life since the inception of terrestrial life, and evolution is the process by which biological life seeks to transcend itself in pursuit of more robust life. This chapter examines transhumanism as the use of technological means to enhance human biological function. Transhumanists see human nature as a work in progress and suggest that, by the responsible use of science, technology and other rational means, we shall become beings with vastly greater capacities and unlimited potential. Transhumanism has religious implications.
This article is a commentary on the relationship between artificial intelligence (AI), capitalism, and memory. The political policies of neoliberalism have reduced the capacity of individuals and groups to reflect on and change the social world, while applications of AI and algorithmic technologies, rooted in the profit-seeking objectives of global capitalism, deepen this deficit. In these conditions, memory in individuals and across society is at risk of becoming myopic. In this article, I develop the concept of myopic memory with two core claims. First, I argue that AI is a technological development that cannot be divorced from the capitalist conditions from which it emerges and in whose service it is implemented. To this end, I reveal capitalism's and colonialism's historical and contemporary use of surveillance as a way to control the populations they oppress, imagining their pasts to determine their futures, and disempowering them in the process. My second core claim emphasises that this process of disempowerment is undergoing an acute realisation four decades into the period of neoliberalism. Neoliberal policies have restructured society around the individual consumer, leaving little time, space, or institutional capacity for citizens to reflect on the impact of these policies or challenge their dominance. As a result, with the growing role of AI and algorithmic technologies in shaping our engagement with society along similar lines of individualism, I conclude that the scope of memory is being reduced and constrained within the prism of capitalism, diminishing its potential and rendering it myopic.
This chapter explores issues for Islam in relation to religious themes arising from developments in artificial intelligence (AI), conceived both as a philosophical and scientific quest to understand human intelligence and as a technological enterprise to instrumentalise it for commercial or political purposes. The monotheistic teachings of Islam are outlined to identify themes in AI that relate to central questions in the Islamic context and to address nuances of Islamic belief that differentiate it from the other Abrahamic traditions in their consideration of AI. This chapter draws together the existing sparse literature on the subject, including notable applications of AI in Islamic contexts, and draws attention to the role of the Muslim world as a channel and expositor of knowledge between the ancient and modern worlds in the pre-history of AI. The chapter provides foundations for future scholarship on Islam and AI and a resource for wider scholarship on the religious, societal and cultural significance of AI.
This chapter reviews progress in the field of artificial intelligence, and considers the special case of the android: a human-like robot that people would accept as similar to humans in how they perform and behave in society. An android as considered here does not have the purpose to deceive humans into believing that the android is a human. Instead, the android self-identifies as a non-human with its own integrity as a person. To make progress on android intelligence, artificial intelligence research needs to develop computer models of how people engage in relationships, how people explain their experience in terms of stories and how people reason about the things in life that are most significant and meaningful to them. A functional capacity for religious reasoning is important because the intelligent android needs to understand its role and its relationships with other persons. Religious reasoning is taken here not to mean matters of specific confessional faith and belief according to established doctrines but about the cognitive processes involved in negotiating significant values and relationships with tangible and intangible others.
Eight major supply chains contribute to more than 50% of the global greenhouse gas emissions (GHG). These supply chains range from raw materials to end-product manufacturing. Hence, it is critical to accurately estimate the carbon footprint of these supply chains, identify GHG hotspots, explain the factors that create the hotspots, and carry out what-if analysis to reduce the carbon footprint of supply chains. Towards this, we propose an enterprise decarbonization accelerator framework with a modular structure that automates carbon footprint estimation, identification of hotspots, explainability, and what-if analysis to recommend measures to reduce the carbon footprint of supply chains. To illustrate the working of the framework, we apply it to the cradle-to-gate extent of the palm oil supply chain of a leading palm oil producer. The framework identified that the farming stage is the hotspot in the considered supply chain. As the next level of analysis, the framework identified the hotspots in the farming stage and provided explainability on factors that created hotspots. We discuss the what-if scenarios and the recommendations generated by the framework to reduce the carbon footprint of the hotspots and the resulting impact on palm oil tree yield.
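The staged workflow the abstract describes — estimate the footprint per stage, locate the hotspot, then run what-if scenarios — can be sketched in a few lines. The stage names, activity values, and emission factors below are illustrative assumptions for a cradle-to-gate palm oil chain, not the authors' data or framework code.

```python
def estimate_footprint(activity, factors):
    """Per-stage carbon footprint: activity data x emission factor (tCO2e)."""
    return {stage: activity[stage] * factors[stage] for stage in activity}

def find_hotspot(footprint):
    """Return the stage with the largest share of total emissions."""
    return max(footprint, key=footprint.get)

def what_if(activity, factors, stage, reduction):
    """Re-estimate after cutting one stage's emission factor by `reduction`."""
    adjusted = dict(factors)
    adjusted[stage] *= (1 - reduction)
    return estimate_footprint(activity, adjusted)

# Illustrative cradle-to-gate stages and values (assumed, not source data).
activity = {"farming": 1000.0, "milling": 400.0, "refining": 250.0}
factors = {"farming": 2.1, "milling": 0.6, "refining": 0.4}

footprint = estimate_footprint(activity, factors)
hotspot = find_hotspot(footprint)  # farming dominates, as in the case study
scenario = what_if(activity, factors, hotspot, reduction=0.2)
```

A fuller implementation would nest this recursively (hotspots within the farming stage) and attach an explainability layer over the contributing factors, as the framework's modular structure suggests.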
This paper explores the adaptations to the theory of administrative discretion that are necessary when using AI systems. Regulatory frameworks in the EU, US, and Spain do not prohibit the application of AI in discretionary decision-making. In particular, AI systems can be used when discretionary power involves correlations. However, to meet Rule of Law conditions, it is essential to establish adaptations and boundaries in areas such as the duty of care, reason-giving, and judicial review. These conditions should focus on the impact of decisions on the affected individuals.
This paper explores the dynamic interplay between advanced technological developments in AI and Big Data and the sustained relevance of theoretical frameworks in scientific inquiry. It asks whether the abundance of data in the AI era reduces the necessity for theory or, conversely, enhances its importance. Arguing for a synergistic approach, the paper emphasizes the need to integrate computational capabilities with theoretical insight to uncover deeper truths within extensive datasets. The discussion extends into computational social science, where elements from sociology, psychology, and economics converge. The application of these interdisciplinary theories in the context of AI is critically examined, highlighting the need for methodological diversity and addressing the ethical implications of AI-driven research. The paper concludes by identifying future trends and challenges in AI and computational social science, offering a call to action for the scientific community, policymakers, and society. Positioned at the intersection of AI, data science, and social theory, this paper illuminates the complexities of our digital era and inspires a re-evaluation of the methodologies and ethics guiding our pursuit of knowledge.