Imagine a human living on Earth 1,000 years ago, before the discovery of electricity and long before anyone could have imagined a computer, let alone an autonomous robot. Suppose this person suddenly encountered a robot transported back from the year 2022 (or, perhaps, an alien life form or probe from another planet). How would this person construe the mechanical entity? Would they treat it as a “depiction” of a real agent or simply as an agent, full stop?
Lacking any cultural, personal, or historical concept of a “robot,” it seems unlikely that an eleventh-century human would take the object before them as a human-made artifact designed to “depict” authentic agency. More likely, they would construe this unknown entity as a real agent of some kind.
Agency distinctions are not limited to historical analogies: Even well-informed individuals can perceive artifacts as having agency and intelligence. Indeed, one could imagine scenarios in which many people today, upon encountering a robot, artificial intelligence (AI), or deepfake, would take certain artifacts not as “depictions,” but as real agents. There is evidence that contemporary humans do perceive artificial agents as real. To take just one example, an AI researcher at Google was recently suspended for arguing that a program they were interacting with had achieved sentience; shortly thereafter, an MIT research professor argued that Amazon's Alexa could also become sentient (Kaplan, 2022).
Clark and Fischer (C&F) do an outstanding job outlining a particular cultural framing, or schema, of robots. Crucially, however, their theory is not and cannot be a universal theory of how all humans can, do, or will perceive and interact with artificial kinds. What is missing from C&F's theory is an anthropological viewpoint. Through such a lens, one can see that the notion of robots as “depictions” of real agents requires expectations – a mental model of what a robot is – that are not shared by all humans.
C&F cite individuals such as Danish theatergoers who bring a priori assumptions about robots from films, popular media, science, and school. Such prior expectations about robots allow people to act within a culturally delineated frame. They interact with a robot as if it had agency while knowing that the robot is a mere artifact, with no agency beyond that extended by its author.
From an anthropological perspective, it seems clear that this is a culturally provided mode of interaction, not one that has been available to all or even most humans across the span of history. Indeed, we suggest that this may not be how all or most humans currently perceive robots, or how they will perceive them in the future. Robots-as-depictions may always exist as a category, but it is unlikely to be the only category of robots or artificial agents.
In evaluating C&F's proposal, it is important to distinguish between real and perceived agency. The question of what makes something a real, or actual, agent is largely a philosophical question. The question of when people perceive, or construe, an entity as a real agent is a question for psychology and anthropology (Barrett, Todd, Miller, & Blythe, 2005; Gergely & Csibra, 2003). C&F's article is primarily concerned with the second question and assumes that robots are not real agents. However, we argue that we should not take this for granted. It is possible for artificial agents to have real agency. As the technological sophistication of robotics and AI grows, this becomes increasingly likely.
While AI is still in its infancy, we can look to how humans interact with artifacts as a guide to how we will ultimately treat artificial agency. Consider, for instance, how a blind person interacts with her cane: Studies have demonstrated that the cane is treated as a part of the body (Malafouris, 2008; Maravita & Iriki, 2004). The effect is even more pronounced with artificial limbs (van den Heiligenberg et al., 2017, 2018), and the same was likely true of stone tools as they became integral to the livelihood of prehistoric Homo (Haggard, Martin, Taylor-Clarke, Jeannerod, & Franck, 2003; Malafouris, 2020).
There is also evidence that artifacts directly affect our biology. As Homo increased its reliance on physical artifacts, our genus's bodies grew less muscular and robust (Ruff, 2005). The same may be true of cognitive tools: Homo sapiens has experienced more than a 5% reduction in brain mass over the Late Pleistocene and Holocene (Stibel, 2021), and that loss of brain mass has been linked to an increased use of cognitive tools (DeSilva et al., 2021). Modern technologies, such as the internet and cell phones, have been shown to supplant thinking more broadly (Barr, Pennycook, Stolz, & Fugelsang, 2015; Sparrow, Liu, & Wegner, 2011). Cognitive tools enable thought to move to and from the brain by offloading cognition from biological wetware to artificial hardware. Just as physical artifacts offload physical exertion, cognitive offloading may allow our expensive brain tissue to be selected against even as our intelligence increases (DeSilva et al., 2021; Stibel, 2021).
When an artifact is used in the thinking process, it is as much a part of that process as the neurons in the brain (Clark & Chalmers, 1998; Malafouris, 2020; Maravita & Iriki, 2004). In that respect, artifacts are already a part of human agency, so it seems reasonable to believe that, as AI gains in sophistication, we will perceive artificially intelligent agents as real agents rather than depictions. Part of the problem may be that the term “artificial intelligence” is loaded: The technology humans create is artificial, but the intelligence created is real; artificial minds can have real intelligence. At present, most social robots are not yet sophisticated enough to be treated as anything more than depictions, imitations of something that has agency. But as artificial agents gain in sophistication and intelligence, it is likely that humans will treat them as having real agency.
Competing interest
None.