
The Mandatory Ontology of Robot Responsibility

Published online by Cambridge University Press:  10 June 2021

Marc Champagne*
Affiliation:
Department of Philosophy, Kwantlen Polytechnic University, Surrey, BC, V3W 2M8, Canada

Type: Commentary

Copyright: © The Author(s), 2021. Published by Cambridge University Press


References

Notes

1. Herrmann, E, Call, J, Hernández-Lloreda, MV, Hare, B, Tomasello, M. Humans have evolved specialized skills of social cognition: The cultural intelligence hypothesis. Science 2007;317(5843):1360–6.

2. For more on morality and the propensity to project faces (called pareidolia), see Friesen, BK. Moral Systems and the Evolution of Human Rights. Dordrecht: Springer; 2015:28.

3. Stowers, K, Leyva, K, Hancock, GM, Hancock, PA. Life or death by robot? Ergonomics in Design 2016;24(3):18.

4. Bringsjord, S. What Robots Can and Can’t Be. Dordrecht: Springer; 1992:34.

5. Dalton-Brown, S. The ethics of medical AI and the physician-patient relationship. Cambridge Quarterly of Healthcare Ethics 2020;29(1):116.

6. See note 4.

7. Wallach, W, Allen, C. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press; 2009.

8. Tigard, D. Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics; available at https://doi.org/10.1017/S0963180120000985.

9. Dennett, DC. The Intentional Stance. Cambridge: MIT Press; 1987.

10. An ambitious attempt to explain the intentional stance’s predictive success can be found in Dennett, DC. Real patterns. The Journal of Philosophy 1991;88(1):27–51.

11. Fodor, JA. A Theory of Content and Other Essays. Cambridge: MIT Press; 1994:6.

12. Fake nurse in Quebec discovered and fired—after 20 years on the job. Montreal Gazette 2019 Jun 1.

13. See note 12. Interestingly, if we go back to the actual writings of Charles Sanders Peirce, the founder of pragmatism, we find that he invited us to “[c]onsider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object” (emphasis added). Peirce, CS. The Essential Peirce. Bloomington: Indiana University Press; 1992:132. Highlighting that a difference makes no practical difference is thus persuasive only if it can be shown that the difference at hand cannot possibly make any practical difference in any conceivable future. Clearly, no robotic duplicate can meet this more demanding standard.

14. See note 8.

15. See note 8.

16. Asimov, I. The Complete Robot. Garden City: Doubleday; 1982:36.

17. See note 8.

18. See note 8.

19. Champagne, M. Disjunctivism and the ethics of disbelief. Philosophical Papers 2015;44(2):139–63.

20. Strawson, PF. Freedom and Resentment and Other Essays. London: Routledge; 2008:23.

21. Himma, K. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? Ethics and Information Technology 2009;11(1):19–29.

22. See note 7.

23. See note 8.

24. Champagne, M. Consciousness and the Philosophy of Signs: How Peircean Semiotics Combines Phenomenal Qualia and Practical Effects. Cham: Springer; 2018.

25. See note 8.

26. See note 8.

27. Berman, PS. Rats, pigs, and statues on trial: The creation of cultural narratives in the prosecution of animals and inanimate objects. New York University Law Review 1994;69:288–326.

28. Sparrow, R. Killer robots. Journal of Applied Philosophy 2007;24(1):10.

29. Champagne, M, Tonkens, R. Bridging the responsibility gap in automated warfare. Philosophy and Technology 2015;28(1):126.

30. See Champagne, M. Axiomatizing umwelt normativity. Sign Systems Studies 2011;39(1):23.

31. See note 8.

32. The episode, titled “The Measure of a Man,” is the ninth episode of the series’ second season and originally aired on February 13, 1989.

33. Hart, HLA. Punishment and Responsibility: Essays in the Philosophy of Law. Oxford: Oxford University Press; 2008:210–30.

34. Shoemaker, D. Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics 2011;121(3):602–32.

35. Ortenzi, V, Controzzi, M, Cini, F, Leitner, J, Bianchi, M, Roa, MA, Corke, P. Robotic manipulation and the role of the task in the metric of success. Nature Machine Intelligence 2019;1(8):343.

36. Edmonds, M, Gao, F, Liu, H, Xie, X, Qi, S, Rothrock, B, Zhu, Y, Wu, YN, Lu, H, Zhu, SC. A tale of two explanations: Enhancing human trust by explaining robot behavior. Science Robotics 2019;4(37):eaay4663.

37. Brandom, RB. Making It Explicit: Reasoning, Representing, and Discursive Commitment. Cambridge: Harvard University Press; 1994:87.

38. Williams, G. Responsibility as a virtue. Ethical Theory and Moral Practice 2008;11(4):460.

39. See note 36, Edmonds et al. 2019:9.

40. The shorter life spans of replicants, a much more important difference, would also have to match ours.

41. I am borrowing this metaphor from Jordan Peterson. See Champagne, M. Myth, Meaning, and Antifragile Individualism: On the Ideas of Jordan Peterson. Exeter: Imprint Academic; 2020.

42. See note 8.

43. Bartels, DM, Pizarro, DA. The mismeasure of morals: Antisocial personality traits predict utilitarian responses to moral dilemmas. Cognition 2011;121(1):154–61.

44. See note 8.

45. See note 8.

46. See note 8.

47. Graham, J, Haidt, J, Koleva, S, Motyl, M, Iyer, R, Wojcik, SP, Ditto, PH. Moral foundations theory. Advances in Experimental Social Psychology 2013;47:55–130.

48. Henley, J, Grant, H, Elgot, J, McVeigh, K, O’Carroll, L. Britons rally to help people fleeing war and terror in Middle East. The Guardian; 2015 Sep 3.

49. See note 8. For instance, it is easy to see how the Flesh Fairs from the 2001 Steven Spielberg movie A.I., where robots are destroyed “in a cross between a county fair, heavy metal concert, and WWF match or monster truck rally,” could make one insensitive to violence toward humans. See Kreider, T. A.I.: Artificial Intelligence. Film Quarterly 2002;56(2):36.

50. See note 27, Berman 1994:326.