
25 - Human Perceptions of AI-Caused Harm

from Part III - Applications

Published online by Cambridge University Press: 17 May 2025

Kevin Tobia
Affiliation: Georgetown University, Washington DC

Summary

The complexity involved in developing and deploying artificial intelligence (AI) systems in high-stakes scenarios may result in a “liability gap,” under which it becomes unclear who is responsible when things go awry. Scholarly and policy debates about the gap and its potential solutions have largely been theoretical, with little effort devoted to understanding the general public’s views on the subject. In this chapter, we present two empirical studies exploring laypeople’s perceptions of responsibility for AI-caused harm. First, we study the proposal to grant legal personhood to AI systems and show that it may conflict with laypeople’s policy preferences. Second, we investigate how people divide legal responsibility between users and developers of machines in a variety of situations and find that, while both are expected to pay legal damages, laypeople expect developers to bear the larger share of the liability in most cases. Our examples demonstrate how empirical research can help inform future AI regulation and provide novel lines of research to ensure that this transformative technology is regulated and deployed in a more democratic manner.

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025

