The global and historical entanglements between artificial intelligence (AI)/robotic technologies and Buddhism, as a lived religion and philosophical tradition, are significant. This chapter sets out three key sites of interaction between Buddhism and AI/robotics. First, Buddhism offers an ontological model of mind (and body) that describes the conditions for what constitutes artificial life. Second, Buddhism defines the boundaries of moral personhood and thus the nature of interactions between human and non-human actors. Finally, Buddhism can be used as an ethical framework to regulate and direct the development of AI/robotics technologies. The chapter argues that Buddhism provides an approach to technology that is grounded in the interdependence of all things, which gives rise to both compassion and an ethical commitment to alleviate suffering.
This chapter discusses the global robotics industry, specifically how key foreign nations support commercial robots while almost all of America’s vast spending on this technology goes to military and space-exploration uses.
This introduction lays out various aspects concerning robots' entanglement with substantive law, including an all-round view of the criminal liability of humans for robots, the criminal responsibility of robots themselves, self-defense against robots, and robots as victims of crime. While Janneke de Snaijer and Marta Bo in their chapter discuss specific aspects of criminal liability and exemptions therefrom, Thomas Weigend analyzes the looming “responsibility gap” and the option of expanding the idea of corporate criminal responsibility to cover harm caused by AI devices. This is one aspect of a preventive, repressive, and long-term perspective on how criminal law can shape human–robot interaction, but also possibly an example of how the wish to regulate robots could affect criminal law itself.
Social connections have a significant impact on health across age groups, including older adults. Loneliness and social isolation are known risk factors for Alzheimer’s disease and related dementias (ADRD). Yet we found no prior review of meta-analyses and systematic reviews of studies examining associations of social connections with cognitive decline, or of trials of technology-based and other social interventions to enhance social connections, in people with ADRD.
Study design:
We conducted a scoping review of 11 meta-analyses and systematic reviews of social connections as possible determinants of cognitive decline in older adults with or at risk of developing ADRD. We also examined eight systematic reviews of technology-based and other social interventions in persons with ADRD.
Study results:
The strongest evidence for an association of social connections with lower risk of cognitive decline was related to social engagement and social activities. There was also evidence linking social network size to cognitive function or cognitive decline, but it was not consistently significant. Several, though not all, studies reported a significant association of marital status with the risk of ADRD. Surprisingly, evidence that social support reduces the risk of ADRD was weak. To varying degrees, technology-based and other social interventions designed to reduce loneliness in people with ADRD improved social connections and activities as well as quality of life, but had no significant impact on cognition. We discuss the strengths and limitations of the included studies.
Conclusions:
Social engagement and social activities seem to be the most consistent components of social connections for improving cognitive health among individuals with or at risk for ADRD. Socially focused technology-based and other social interventions aid in improving social activities and connections and deserve more research.
Although research in cultural psychology has established that virtually all human behaviors and cognitions are in some ways shaped by culture, culture has been surprisingly absent from the emerging literature on the psychology of technology. In this perspective article, we first review recent findings on machine aversion versus appreciation. We then offer a cross-cultural perspective in understanding how people might react differently to machines. We propose three frameworks – historical, religious, and exposure – to explain how Asians might be more accepting of machines than their Western counterparts. We end the article by discussing three exciting human–machine applications found primarily in Asia and provide future research directions.
Does technological change fuel political disruption? Drawing on fine-grained labor market data from Germany, this paper examines how technological change affects regional electorates. We first show that the well-known decline in manufacturing and routine jobs in regions with higher robot adoption or investment in information and communication technology (ICT) was more than compensated by parallel employment growth in the service sector and cognitive non-routine occupations. This change in the regional composition of the workforce has important political implications: Workers trained for these new sectors typically hold progressive political values and support progressive pro-system parties. Overall, this composition effect dominates the politically perilous direct effect of automation-induced substitution. As a result, technology-adopting regions are unlikely to turn into populist-authoritarian strongholds.
This chapter describes five “action areas” in which politically achievable changes over the coming two decades could render humankind a lot safer than it is today. For climate change, these include urgent measures for rapid decarbonization, coupled with ramped-up research on technologies for carbon removal and for solar radiation management; new international pacts among small groups of nations for emissions reductions with mutual accountability and incentives; and pre-adaptation measures for dealing effectively with unavoidable harms caused by global warming. For nuclear weapons, these include preparing contingency plans for major or limited nuclear wars, as well as risk-reduction measures that can be implemented today. For pandemics, experts point to four sensible and affordable measures that would greatly reduce the harms of future pandemics. For AI, an immediate challenge will be to prepare for chronic mass unemployment due to rising levels of automation. Finally, the chapter proposes the creation of a new federal agency, the Office for Emerging Biotechnology, to oversee and regulate cutting-edge developments in this field.
The autonomy inherent in AI systems brings legal challenges, because it is no longer possible to predict whether and how declarations and actions emanating from AI systems originate, or whether they are attributable to the AI system or its operator. The core research question is whether the operator of an AI system is contractually liable for damage caused by its malfunctioning. Is contract law sufficiently prepared for the use of AI systems in contract performance? The answer is provided through a review of the common law, the CISG, and the German Civil Code (BGB).
From exoskeletons to lightweight robotic suits, wearable robots are changing dynamically and rapidly, challenging the timeliness of laws and regulatory standards that were not prepared for robots that would help wheelchair users walk again. In this context, equipping regulators with technical knowledge about these technologies could resolve information asymmetries between developers and policymakers and avoid the problem of regulatory disconnection. This article introduces pushing robot development for lawmaking (PROPELLING), a financial-support-to-third-parties project under the Horizon 2020 EUROBENCH project that explores how robot testing facilities could generate policy-relevant knowledge and support optimized regulations for robot technologies. With ISO 13482:2014 as a case study, PROPELLING investigates how robot testbeds could be used as data generators to improve the regulation of lower-limb exoskeletons. Specifically, the article discusses how robot testbeds could help regulators tackle hazards such as fear of falling and instability in collisions, or define safe scenarios for avoiding adverse consequences generated by abrupt protective stops. The article’s central point is that testbeds offer a promising setting for bringing policymakers closer to research and development and making policies more attuned to societal needs. In this way, this closer engagement can be harnessed to develop an optimal regulatory framework for emerging technologies, such as robots and artificial intelligence, based on science and evidence.
Robotic technologies have been shown to have clear potential to provide innovative, cost-efficient treatments and treatment modalities for various diseases and disorders that address unmet needs. However, the emergence of technology that promises to improve health outcomes raises questions about the extent to which it should be incorporated, how, to whom it should be made available, and on what basis. Since countries usually have limited resources with which to provide access to state-of-the-art technologies and must develop strategies to realize the right to health progressively, in this article we investigate whether the right to health, particularly the core obligations specified under this right, helps implement medical robots.
In robot torts, robots carry out activities that are partially controlled by a human operator. Several legal and economic scholars across the world have argued for the need to rethink legal remedies as we apply them to robot torts. Yet, to date, there exists no general formulation of liability in case of robot accidents, and the proposed solutions differ across jurisdictions. We proceed in our research with a set of two companion papers. In this paper, we present the novel problems posed by robot accidents, and assess the legal challenges and institutional prospects that policymakers face in the regulation of robot torts. In the companion paper, we build on the present analysis and use an economic model to propose a new liability regime which blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts.
Unlike privacy law discourse, which has primarily explored questions related to others’ knowledge, access, and use of information about us, commercial law’s central focus has been on issues related to trade involving persons, merchants, and entities. In the commercial law context, questions about knowledge and information are primarily connected to the exchange and disclosure of information needed to facilitate transactions between parties. This distinct historical focus has likely contributed to commercial law’s failure to adequately account for and address privacy, security, and digital domination harms. In some cases, commercial law also defers to corporate commercial practices.
The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, so that humans would have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness, and argues against granting moral status to AI on the grounds that malware authors may design malware to fake consciousness. Indeed, the article warns that malware authors have stronger incentives than authors of legitimate software to create code that passes some of the criteria. Thus, code that appears to be benign, but is in fact malware, might become the most common form of software to be treated as having moral status.
This chapter, which considers selected Atwood texts over fifty years, focuses on sexual politics in her representations of women’s attempts to define and reclaim possession of their own bodies and identities. Within a framework that includes feminist theorists Simone de Beauvoir, Luce Irigaray, Joan Riviere, Andrea Dworkin, Susan Bordo, and Wendy Harcourt, the chapter considers the psychological and sociopolitical implications of body denigration. Signaling Atwood’s enduring motif of the disappearing female body without free will, from the early “mud poem” (1974), the chapter explores varieties of women’s self-obliteration and bodily reclamation in The Edible Woman and Lady Oracle, Gilead’s patriarchal domination over female bodies in The Handmaid’s Tale, women’s often ineffectual resistance to bodily objectification in Cat’s Eye and The Blind Assassin, and disturbing futuristic speculations on the possibility of complete possession of female bodies in Oryx and Crake and The Heart Goes Last through biotechnology and robotics.
Longitudinal research finds that thriving lives result from the capacity to form and sustain interpersonal relationships. This should not be left to chance or intuition but treated as a matter of learning. The penetration of technology into every aspect of our lives has arguably put the development of empathy – the basis for good relationships – at risk. Although hyper-connected, people can feel very alone. The use of pornography is rising, and gaming addictions and cyberbullying pose risks to young people. Empathy can and should be learned. In a context where our species is ageing, older generations are becoming cut off from families as living patterns change and are suffering high levels of loneliness for longer periods. Well-designed learning experiences can close the gap between generations with reciprocal benefit. Therefore, the learning goals arising from attending to thriving at this level are: learning to develop loving and respectful relationships in diverse technologised societies, and engaging with and learning from other generations. The implication for educators is that social and emotional learning needs to be brought from the margin to the core.
The final chapter of this volume provides several points of further theoretical elaborations, which, important for our overall argument, would have unduly cluttered the various chapters. It starts by considering how challenges to common sense arise from various types of dissent or deviance including children, homecomers, newcomers, strangers, foreigners, robots or aliens. It proceeds to discuss why and how object-relations and 'inter-objectivity' thought, noted by various scholars, have not received sufficient attention in psychological scholarship and certainly not in relation to influence by artefacts. The chapter lays out the theoretical foundations for such a broadening of scope. The chapter then proceeds to discuss the historically curious dominance of dual-process models over single-process alternatives. The excursions conclude by revisiting the debates concerning the authority of science in Milgram's obedience studies in light of a broader understanding of autonomy, tyranny, argumentation, legality and violence.
If you have ever wondered about the difference between “artificial intelligence” and “machine learning,” you are in luck. The purpose of this chapter is to provide background and context on key concepts in artificial intelligence (AI) and to touch on how AI tools are used in the financial markets. In recent years, hedge funds, banks, commodity trading advisors, and numerous other financial services firms have adopted AI systems and related tools from computer science to automate numerous aspects of their operations, so understanding basic AI concepts can provide insights into how these firms operate.
Socially assistive robots have successfully been trialed in residential care facilities (RCFs) for older adults. These robots may also have potential for younger adults (i.e. under 65 years old) who live in RCFs. However, it is important to investigate staff acceptability and the ease of use of these robots. This pilot study used the Technology Acceptance Model to investigate how staff working in a specialized RCF for younger adults accepted Betty, a socially assistive robot introduced in the facility for 12 weeks. Twenty-four staff completed pre-questionnaires, reporting that they thought Betty would have the ability to engage and entertain the residents they cared for. Although only eight staff completed the post-questionnaires, there were significant improvements compared to the pre-questionnaire results in areas such as residents enjoying the contact and activities. Technical difficulties, however, impacted ease of use. Although this study had limitations and could be improved by a better response rate and by investigating the residents’ acceptability of Betty, it is one of the first to report that this novel technology may have much potential for engaging adults in RCFs.
A parallel-processing system for locating parts and controlling an industrial robot is proposed. The system employs Transputers and Occam to achieve parallelism. In conjunction with a novel vibratory sensor, the system enables a robot to determine the exact location of parts picked up from a semi-ordered workplace. A new algorithm for obtaining the coordinates of the parts from the sensed vibration and deflection signals is described. The algorithm dispenses with the lengthy and complex equation-solving procedures previously required; instead, it involves only looking up a data table and performing simple two-dimensional interpolation calculations. The design of the algorithm to ensure efficient parallel operation is described. Experimental results showing the successful implementation of the algorithm on the proposed system are presented.
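The table-lookup-and-interpolation step lends itself to a brief sketch. The Python fragment below is a minimal illustration of the general idea, not the paper's implementation (which ran in Occam on Transputers); the calibration grid, table contents, and function name are all hypothetical assumptions, since the abstract does not specify the table layout or the encoding of the sensed signals.

```python
# Minimal sketch: estimating part coordinates from sensed vibration and
# deflection signals by table lookup plus bilinear interpolation.
# The grid axes and table values here are placeholders; in practice the
# table would be filled offline from calibration measurements.
import numpy as np

# Hypothetical calibration grid of sensed-signal sample points.
vib_axis = np.linspace(0.0, 1.0, 11)    # vibration amplitude samples
defl_axis = np.linspace(0.0, 5.0, 21)   # deflection samples (mm)

# coord_table[i, j] holds the known (x, y) part coordinates measured
# when the sensor reads (vib_axis[i], defl_axis[j]).
coord_table = np.zeros((len(vib_axis), len(defl_axis), 2))

def locate_part(vib: float, defl: float) -> np.ndarray:
    """Estimate (x, y) by bilinear interpolation in the lookup table."""
    # Index of the grid cell containing the sensed signal pair.
    i = int(np.clip(np.searchsorted(vib_axis, vib) - 1, 0, len(vib_axis) - 2))
    j = int(np.clip(np.searchsorted(defl_axis, defl) - 1, 0, len(defl_axis) - 2))
    # Fractional position of the reading inside that cell.
    t = (vib - vib_axis[i]) / (vib_axis[i + 1] - vib_axis[i])
    u = (defl - defl_axis[j]) / (defl_axis[j + 1] - defl_axis[j])
    # Standard bilinear blend of the four surrounding table entries.
    return ((1 - t) * (1 - u) * coord_table[i, j]
            + t * (1 - u) * coord_table[i + 1, j]
            + (1 - t) * u * coord_table[i, j + 1]
            + t * u * coord_table[i + 1, j + 1])
```

Each lookup is independent of every other, which is consistent with the abstract's emphasis on efficient parallel operation: queries for different parts or sensor readings could be distributed across processors without coordination.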
There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have typically acknowledged. Second, I will attempt to defend the intuition that, even so, there is something ethically problematic about such targeting. I argue that an account of the nonconsequentialist foundations of the principle of distinction suggests that the use of autonomous weapon systems (AWS) is unethical by virtue of failing to show appropriate respect for the humanity of our enemies. However, the success of the strongest form of this argument depends upon understanding the robot itself as doing the killing. To the extent that we believe that, on the contrary, AWS are only a means used by combatants to kill, the idea that the use of AWS fails to respect the humanity of our enemy will turn upon an account of what is required by respect, which is essentially conventional. Thus, while the theoretical foundations of the idea that AWS are weapons that are “evil in themselves” are weaker than critics have sometimes maintained, they are nonetheless sufficient to demand a prohibition of the development and deployment of such weapons.