With recent leaps in large language model technology, conversational AI systems offer increasingly sophisticated interactions. But is it fair to say that they can offer authentic relationships, perhaps even assuage the loneliness epidemic? In answering this question, this essay traces the history of AI authenticity, which has been shaped by cultural imaginations of intelligent machines and human communication. The illusion of human-like interaction with AI has existed since at least the 1960s, when the “Eliza effect” was named after Eliza, the first chatbot. Termed a “crisis of authenticity” by sociologist Sherry Turkle, the Eliza effect has stood for fears that AI interactions can undermine real human connections and leave users vulnerable to manipulation. More recently, however, researchers have begun investigating less anthropomorphic definitions of authenticity. The expectation, and perhaps fantasy, of authenticity stems, in turn, from a much longer history of technologically mediated communication, dating back to the invention of the telegraph in the nineteenth century. Read through this history, the essay concludes that AI relationships need not mimic human interactions but must instead acknowledge the artifice of AI, offering a new form of companionship in our mediated, often lonely, times.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in ways that raise significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying in itself, in this book I argue that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, the book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, arguing that it is inadequate to counter this threat, and identifies new pathways forward.
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, the chapter sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights. Risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, the chapter then examines the possibilities for accessing remedies, first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the AI Act in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor, pointing out the key areas demanding further clarification in order to fill the remedial gaps.
This Element highlights the employment within archaeology of classification methods developed in the fields of chemometrics, artificial intelligence, and Bayesian statistics. These methods run in both high- and low-dimensional settings and often yield better results than traditional approaches. Instead of taking a theoretical approach, it provides examples of how to apply these methods to real data, using lithic and ceramic archaeological materials as case studies. A detailed explanation of how to process the data in R (The R Project for Statistical Computing), as well as the respective code, is also provided in this Element.
Several African countries are developing artificial intelligence (AI) strategies and ethics frameworks with the goal of accelerating responsible AI development and adoption. However, many of these governance actions are emerging without consideration of their suitability to local contexts, including whether the proposed policies are feasible to implement and what their impact may be on regulatory outcomes. In response, we suggest that there is a need for more explicit policy learning, by looking at existing governance capabilities and experiences related to algorithms, automation, data, and digital technology in other countries and in adjacent sectors. From such learning, it will be possible to identify where existing capabilities may be adapted or strengthened to address current AI-related opportunities and risks. This paper explores the potential for such learning by analysing existing policy and legislation in twelve African countries across three main areas: strategy and multi-stakeholder engagement, human dignity and autonomy, and sector-specific governance. The findings point to a variety of existing capabilities that could be relevant to responsible AI: from model management procedures used in banking and air-quality assessment, to efforts aimed at enhancing public-sector skills and transparency around public–private partnerships, to the way in which existing electronic transactions legislation addresses accountability and human oversight. All of these point to the benefit of wider engagement on how existing governance mechanisms are working, and on where AI-specific adjustments or new instruments may be needed.
This paper questions how the drive toward introducing artificial intelligence (AI) into all facets of life might endanger certain African ethical values. It argues that two primary values prized in nearly all versions of sub-Saharan African ethics (available in the literature) might sit in direct opposition to the fundamental motivation of corporate adoption of AI; these values are Afro-communitarianism grounded in relationality, and human dignity grounded in a normative conception of personhood. This paper offers a unique perspective on AI ethics from the African place, as there is little to no material in the literature that discusses the implications of AI for African ethical values. The paper is divided into two broad sections, focused on (i) describing the values at risk from AI and (ii) showing how the current use of AI undermines these values. In conclusion, I suggest how to prioritize these values in working toward the establishment of an African AI ethics framework.
Brain–computer interfaces (BCIs) exemplify a dual-use neurotechnology with significant potential in both civilian and military contexts. While BCIs hold promise for treating neurological conditions such as spinal cord injuries and amyotrophic lateral sclerosis in the future, military decision-makers in countries such as the United States and China also see their potential to enhance combat capabilities. Some predict that U.S. Special Operations Forces (SOF) will be early adopters of BCI enhancements. This article argues for a shift in focus: the U.S. Special Operations Command (SOCOM) should pursue translational research of medical BCIs for treating severely injured or ill SOF personnel. After two decades of continuous military engagement and ongoing high-risk operations, SOF personnel face unique injury patterns, both physical and psychological, which BCI technology could help address. The article identifies six key medical applications of BCIs that could benefit wounded SOF members and discusses the ethical implications of involving SOF personnel in translational research related to these applications. Ultimately, the article challenges the traditional civilian-military divide in neurotechnology, arguing that by collaborating more closely with military stakeholders, scientists can not only help individuals with medical needs, including servicemembers, but also play a role in shaping the future military applications of BCI technology.
In the literature, there are polarized views regarding the capabilities of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values cannot inhere in inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies can be designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework, and validating via Focus Group Discussion, this study revealed three main findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
The EUMigraTool (EMT) provides short-term and mid-term predictions of asylum seekers arriving in the European Union, drawing on multiple sources of public information and with a focus on human rights. After three years of development, it has been tested in real environments by 17 NGOs working with migrants in Spain, Italy, and Greece.
This paper will first describe the functionalities, models, and features of the EMT. It will then analyze the main challenges and limitations of developing a tool for non-profit organizations, focusing on issues such as (1) the validation process and accuracy, and (2) the main ethical concerns, including the challenging exploitation plan when the main target group is NGOs.
The overall purpose of this paper is to share the results and lessons learned from the creation of the EMT, and to reflect on the main elements that need to be considered when developing a predictive tool for assisting NGOs in the field of migration.
In the mid-to-late 19th century, much of Africa was under colonial rule, with the colonisers exercising power over the labour and territory of Africa. Although Africa has now largely gained independence from traditional colonial rule, another form of colonial rule still dominates the African landscape. The similarity between these forms of colonialism lies in the power dominance exhibited by Western technological corporations, much like that of the traditional colonialists. In this digital age, digital colonialism manifests in Africa through the control and ownership of critical digital infrastructure by foreign entities, leading to unequal data flows and asymmetrical power dynamics. This usually occurs under the guise of foreign corporations providing technological assistance to the continent.
By drawing references from the African continent, this article examines the manifestations of digital colonialism and the factors that aid its occurrence on the continent. It further explores the manifestations of digital colonialism in technologies such as Artificial Intelligence (AI), while analysing the occurrence of data exploitation on the continent and the need for African ownership in cultivating the digital future of the African continent. The paper also recognises the benefits linked to the use of AI and advocates a cautious approach to the deployment of AI tools in Africa. It concludes by recommending the implementation of laws, regulations, and policies that guarantee the inclusiveness, transparency, and ethical values of new technologies, with strategies toward achieving a decolonised digital future on the African continent.
Generative artificial intelligence (GenAI) has gained significant popularity in recent years. It is being integrated into a variety of sectors for its abilities in content creation, design, research, and many other functionalities. The capacity of GenAI to create new content—ranging from realistic images and videos to text and even computer code—has caught the attention of both the industry and the general public. The rise of publicly available platforms that offer these services has also made GenAI systems widely accessible, contributing to their mainstream appeal and dissemination. This article delves into the transformative potential and inherent challenges of incorporating GenAI into the domain of judicial decision-making. The article provides a critical examination of the legal and ethical implications that arise when GenAI is used in judicial rulings and their underlying rationale. While the adoption of this technology holds the promise of increased efficiency in the courtroom and expanded access to justice, it also introduces concerns regarding bias, interpretability, and accountability, thereby potentially undermining judicial discretion, the rule of law, and the safeguarding of rights. Around the world, judiciaries in different jurisdictions are taking different approaches to the use of GenAI in the courtroom. Through case studies of GenAI use by judges in jurisdictions including Colombia, Mexico, Peru, and India, this article maps out the challenges presented by integrating the technology in judicial determinations, and the risks of embracing it without proper guidelines for mitigating potential harms. Finally, this article develops a framework that promotes a more responsible and equitable use of GenAI in the judiciary, ensuring that the technology serves as a tool to protect rights, reduce risks, and ultimately, augment judicial reasoning and access to justice.
In this article, I will consider the moral issues that might arise from the possibility of creating more complex and sophisticated autonomous intelligent machines, or simply artificial intelligence (AI), that would have the human capacity for moral reasoning, judgment, and decision-making, and the possibility of humans enhancing their moral capacities beyond what is considered normal for humanity. These two possibilities raise an urgent need for ethical principles that could be used to analyze the moral consequences of the intersection of AI and transhumanism. I deploy personhood-based relational ethics grounded in Afro-communitarianism as an African ethical framework to evaluate some of the moral problems at the intersection of AI and transhumanism. In doing so, I propose some Afro-ethical principles for research and policy development in AI and transhumanism.
A detailed exploration is presented of the integration of human–machine collaboration in governance and policy decision-making, against the backdrop of increasing reliance on artificial intelligence (AI) and automation. This exploration focuses on the transformative potential of combining human cognitive strengths with machine computational capabilities, particularly emphasizing the varying levels of automation within this collaboration and their interaction with human cognitive biases. Central to the discussion is the concept of dual-process models, namely Type I and II thinking, and how these cognitive processes are influenced by the integration of AI systems in decision-making. An examination of the implications of these biases at different levels of automation is conducted, ranging from systems offering decision support to those operating fully autonomously. Challenges and opportunities presented by human–machine collaboration in governance are reviewed, with a focus on developing strategies to mitigate cognitive biases. Ultimately, a balanced approach to human–machine collaboration in governance is advocated, leveraging the strengths of both humans and machines while consciously addressing their respective limitations. This approach is vital for the development of governance systems that are both technologically advanced and cognitively attuned, leading to more informed and responsible decision-making.
This chapter uses a range of quotes and findings from the internet and the literature. The key premises of this chapter, which is illustrated with examples, are as follows. First, Big Data requires the use of algorithms. Second, algorithms can create misleading information. Third, algorithms can lead to destructive outcomes. But we should not forget that humans program algorithms. With Big Data come algorithms to run numerous, involved computations. We cannot oversee all these data ourselves, so we need the help of algorithms to make the computations for us. We might label these algorithms as Artificial Intelligence, but this might suggest that they can do things on their own. They can run massive computations, but they need to be fed with data. And this feeding is usually done by us, by humans, and we also choose the algorithms to be used.
Summary: The aging of the population poses significant challenges in healthcare, necessitating innovative approaches. Advancements in brain imaging and artificial intelligence now allow for characterizing an individual’s state through their “brain age,” derived from observable brain features. Exploring an individual’s “biological age” rather than chronological age is becoming crucial to identify relevant clinical indicators and refine risk models for age-related diseases. However, traditional brain age measurement has limitations, focusing solely on brain structure assessment while neglecting functional efficiency.
Our study focuses on developing “neurocognitive ages” specific to cognitive systems to enhance the precision of decline estimation. Leveraging international (NKI2, ADNI) and Canadian (CIMA-Q, COMPASS-ND) databases with neuroimaging and neuropsychological data from older adults [control subjects with no cognitive impairment (CON): n = 1811; people living with mild cognitive impairment (MCI): n = 1341; with Alzheimer’s disease (AD): n = 513], we predicted individual brain ages within groups. These estimations were enriched with neuropsychological data to generate specific neurocognitive ages. We used longitudinal statistical models to map evolutionary trajectories. Comparing the accuracy of neurocognitive ages to traditional brain ages involved statistical learning techniques and precision measures.
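As a purely illustrative aside, the sketch below shows the general shape of a brain-age approach like the one described here, using synthetic data and a generic regression model; the feature sets, group sizes, and model choice are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch only: a generic brain-age workflow on synthetic data.
# It is NOT the study's pipeline; features, sizes, and the model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical data: rows are participants, columns are imaging-derived features.
X_con = rng.normal(size=(500, 40))        # e.g., regional volumes/thickness (controls)
age_con = rng.uniform(60, 90, size=500)   # chronological ages of controls
np_con = rng.normal(size=(500, 5))        # neuropsychological scores (controls)
X_mci = rng.normal(size=(200, 40))        # imaging features, MCI group
np_mci = rng.normal(size=(200, 5))        # neuropsychological scores, MCI group

# 1) Train a "brain age" model on cognitively unimpaired controls only.
brain_model = RandomForestRegressor(n_estimators=300, random_state=0)
brain_model.fit(X_con, age_con)
cv_pred = cross_val_predict(brain_model, X_con, age_con, cv=5)
print("control MAE (years):", round(float(np.mean(np.abs(cv_pred - age_con))), 2))

# 2) Predict brain ages in the MCI group; the gap versus chronological age is
#    the usual marker of accelerated ageing.
brain_age_mci = brain_model.predict(X_mci)

# 3) A "neurocognitive age": the same idea, but imaging features are enriched
#    with neuropsychological scores before fitting.
neuro_model = RandomForestRegressor(n_estimators=300, random_state=0)
neuro_model.fit(np.hstack([X_con, np_con]), age_con)
neurocog_age_mci = neuro_model.predict(np.hstack([X_mci, np_mci]))
```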
The results demonstrated that neurocognitive age enhances the prediction of individual brain and cognition change trajectories related to aging and dementia. This promising approach could strengthen diagnostic reliability, facilitate early detection of at-risk profiles, and contribute to the emergence of precision gerontology/geriatrics.
Objectives: The effectiveness of psychotherapy depends on patients’ adherence to between-session homework (HW) to practice therapeutic skills. mHealth apps can offer continuing reminders, although frequent reminders can overwhelm or burden patients and are therefore ineffective. Predicting the likelihood of completing daily HW and sending contextual reminders has the potential to improve HW adherence and thereby improve symptoms.
Methods: Depressed older participants (N = 51) undergoing psychotherapy provided daily active ratings on mood, anhedonia, stress, and pain via an mHealth app. Data on activity, mobilization, sociability, and sleep were also recorded passively via device sensors (e.g., microphone, accelerometer, GPS). Using active and passive mHealth data, we developed predictive models of daily HW completion status using a naïve semi-supervised deep learning algorithm. Prediction accuracy was determined via time-dependent cross-validation.
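As a rough illustration of this kind of modelling (not the authors' actual algorithm), the sketch below trains a simple semi-supervised classifier on synthetic person-day features and evaluates it with time-ordered splits so that training data always precede test data; all variable names and data are hypothetical.

```python
# Illustrative sketch: semi-supervised prediction of daily HW completion from
# active and passive features, with time-dependent cross-validation.
# Synthetic data and a simple self-training classifier stand in for the study's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_days = 1000  # hypothetical person-days

# Columns could represent mood, stress, pain (active); steps, sleep (passive).
X = rng.normal(size=(n_days, 5))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_days) > 0).astype(int)

# Mark a fraction of days as unlabeled (-1), mimicking missing HW reports.
y_semi = y.copy()
y_semi[rng.random(n_days) < 0.3] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000))

aucs = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model.fit(X[train_idx], y_semi[train_idx])          # train only on earlier days
    prob = model.predict_proba(X[test_idx])[:, 1]       # predict later days
    aucs.append(roc_auc_score(y[test_idx], prob))       # score against true labels

print("time-dependent CV AUC:", round(float(np.mean(aucs)), 3))
```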
Results: Study participants had a mean (SD) age of 71.4 (7.76) years, a mean (SD) of 14.9 (2.93) years of education, a mean (SD) BIS/BAS total of 22.6 (3.36), and a mean (SD) MADRS total score of 20.4 (6.04); 88.2% were female, 29.4% were single, 83.8% were of non-Hispanic ethnicity, 58.8% were Caucasian, and 38.2% were Catholic. With 4,700 person-days of HW completion responses, our models showed an AUC of 84.7% (sensitivity = 76.2%; specificity = 80%) estimated by cross-validation.
Conclusions: This paper demonstrates the feasibility of predicting adherence to psychotherapy in depressed older adults using actively and passively collected mHealth data. Digital interventions based on such predictive models can potentially increase adherence to psychotherapy and thereby improve its effectiveness without increasing the user notification burden.
Artificial intelligence (AI) is presented as a portal to more liberative realities, but its broad implications for society, and for certain groups in particular, require more critical examination. This chapter takes a specifically Black theological perspective to consider the scepticism within Black communities around narrow applications of AI as well as more speculative ideas about these technologies, for example general AI. Black theology’s perpetual push towards Black liberation, combined with womanism’s invitation to participate in processes that reconstitute Black quality of life, has perfectly situated Black theological thought for discourse around artificial intelligence. Moreover, there are four particular categories where Black theologians and religious scholars have already broken ground and might be helpful to religious discourse concerning Blackness and AI. Those areas are: white supremacy, surveillance and policing, consciousness, and God. This chapter engages several scholars and perspectives within the field of Black theology and points to potential avenues for future theological areas of concern and exploration.
Recent advances in large language models (LLMs), such as GPT-4, have spurred interest in their potential applications across various fields, including actuarial work. This paper introduces the use of LLMs in actuarial and insurance-related tasks, both as direct contributors to actuarial modelling and as workflow assistants. It provides an overview of LLM concepts and their potential applications in actuarial science and insurance, examining specific areas where LLMs can be beneficial, including a detailed assessment of the claims process. Additionally, a decision framework for determining the suitability of LLMs for specific tasks is presented. Case studies with accompanying code showcase the potential of LLMs to enhance actuarial work. Overall, the results suggest that LLMs can be valuable tools for actuarial tasks involving natural language processing or structuring unstructured data and as workflow and coding assistants. However, their use in actuarial work also presents challenges, particularly regarding professionalism and ethics, for which high-level guidance is provided.
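To make the "structuring unstructured data" use case concrete, here is a hedged sketch of how a claims description might be turned into structured fields with an LLM; the call_llm() wrapper, prompt, and field names are hypothetical stand-ins, not the paper's case-study code.

```python
# Hedged, illustrative sketch: using an LLM to extract structured fields from an
# unstructured insurance claim description. call_llm() is a hypothetical wrapper
# around whichever LLM API is available; the field names are assumptions.
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns the model's text reply."""
    raise NotImplementedError("Plug in your LLM provider here.")

def structure_claim(description: str) -> dict:
    prompt = (
        "Extract the following fields from the insurance claim description and "
        "reply with JSON only: loss_date, peril, estimated_amount, injuries (bool).\n\n"
        f"Claim description: {description}"
    )
    reply = call_llm(prompt)
    return json.loads(reply)  # downstream actuarial code works with the dict

# Example (would run once call_llm is implemented):
# structure_claim("Policyholder reports hail damage to roof on 2024-03-02, "
#                 "repair quote 8,400 EUR, no injuries.")
```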
Artificial intelligence (AI)-based health technologies (AIHTs) have already been applied in clinical practice. However, there is currently no standardized framework for evaluating them based on the principles of health technology assessment (HTA).
Methods
A two-round Delphi survey was distributed to a panel of experts to determine the significance of incorporating topics outlined in the EUnetHTA Core Model and twenty additional ones identified through literature reviews. Each panelist assigned scores to each topic. Topics were categorized as critical to include (scores 7–9), important but not critical (scores 4–6), and not important (scores 1–3). A 70 percent cutoff was used to determine high agreement.
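For illustration, the scoring rule described above can be expressed in a few lines; the ratings below are simulated, not the panel's actual responses.

```python
# Minimal sketch of the Delphi scoring rule: scores 7-9 count as "critical",
# and a topic reaches high agreement when at least 70% of panelists rate it so.
# Ratings are simulated for illustration only.
import numpy as np

def critical_agreement(scores):
    """Fraction of panelists scoring the topic 7-9."""
    scores = np.asarray(scores)
    return float(np.mean((scores >= 7) & (scores <= 9)))

rng = np.random.default_rng(2)
ratings = rng.integers(1, 10, size=46)  # hypothetical 46-member panel, scores 1-9

share = critical_agreement(ratings)
print(f"critical agreement: {share:.1%}",
      "-> include" if share >= 0.70 else "-> not included")
```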
Results
Our panel of 46 experts indicated that 48 of the 65 proposed topics are critical and should be included in an HTA framework for AIHTs. Among the ten most crucial topics, the following emerged: accuracy of the AI model (97.78 percent), patient safety (95.65 percent), benefit–harm balance evaluated from an ethical standpoint (95.56 percent), and bias in data (91.30 percent). Importantly, our findings highlight that the Core Model is insufficient to capture all relevant topics for AI-based technologies, as 14 of the additional 20 topics were identified as crucial.
Conclusion
It is imperative to determine the level of agreement on AI-relevant HTA topics to establish a robust assessment framework. This framework will play a foundational role in evaluating AI tools for the early diagnosis of dementia, which is the focus of the European project AI-Mind currently being developed.