C&F propose that humans understand social robots as depictions rather than actual agents. This perspective focuses on the psychologically under-explained duality with which people can react to entities as simultaneously (or alternately) agentic and non-agentic. We disagree that the trait attribution approach cannot handle the three questions that C&F identify. We argue that the trait attribution approach, grounded in decades of research on social cognition, can explain these issues, and variance in human–robot interaction more broadly. Moreover, this approach is parsimonious in that it assumes that the same psychological processes that guide human–human interaction also guide human–robot interaction.
Addressing the criticisms of the trait attribution approach
The first limitation, according to C&F, is that the trait attribution approach cannot explain individual differences in willingness to interact with social robots. However, this can be explained by research demonstrating individual differences in attributing human-like characteristics to other humans (Haslam & Loughnan, 2014) and to nonhumans (Waytz, Cacioppo, & Epley, 2010). The same processes that affect trait attributions to other humans can explain the willingness to interact with robots. For example, people vary in how much humanness (e.g., agency, experience) they attribute to outgroup members (e.g., Krumhuber, Swiderska, Tsankova, Kamble, & Kappas, 2015), pets (e.g., McConnell, Lloyd, & Buchanan, 2017), and fictional characters (e.g., Banks & Bowman, 2016). This variability emerges across individuals, across situations (e.g., Smith et al., 2022), and within interactions (e.g., Haslam, 2006). According to the trait attribution approach, similar individual and situational factors can predict when people respond to a robot in a human-like way.
The second issue that C&F identify is the change in the way people interact with social robots within a single interaction. But considerable research and theory in psychology suggest that the way an interaction unfolds is dynamically affected by many factors, such as perceived traits, goals, and abilities (e.g., see Freeman, Stolier, & Brooks, 2020). For example, the accessibility of stereotypes and goals can change over a relatively short amount of time (e.g., Ferguson & Bargh, 2004; Kunda, Davies, Adams, & Spencer, 2002; Melnikoff & Bailey, 2018). The inherently dynamic context of an interaction, with constantly varying types of information being introduced verbally and nonverbally, predicts changing attributions about one's interaction partner, whether human or robot. The same trait attribution principles that guide human–human interactions can thus explain the change in perspective within human–robot interactions.
The third unresolved question raised by C&F is selectivity – people notice some of a robot's capabilities but not others. The trait attribution approach aligns with work in social cognition suggesting that people are more sensitive to some kinds of information than to others, depending on individual differences and situational factors (e.g., Brewer, 1988; Fiske & Neuberg, 1990). For example, people evaluate competence in others positively, unless the other is immoral (Landy, Piazza, & Goodwin, 2016). Although much is not yet known about precisely which aspects of an interaction or agent are considered relevant, and when, we argue that these basic principles of psychology can explain the selectivity of trait inferences about social robots. Note that the social depictions approach also cannot explain exactly which aspects will be influential, when, and for whom.
Advantages and limitations of trait attribution
In addressing these three points, we suggest that the trait attribution approach can explain phenomena that C&F argue are inconsistent with it. By showing that the same principles that explain human–human interaction also explain human–robot interaction, we argue that trait attribution is a parsimonious approach to explaining human–robot interactions.
The trait attribution approach is a broad one; different lines of research focus on different types of attributions to explain human–robot interactions. Anthropomorphism, for example, affects how much people trust robots such as self-driving cars (Waytz, Heafner, & Epley, 2014). People's reliance on algorithms for tasks hinges on the perception that algorithms have human-like emotions (Castelo, Bos, & Lehmann, 2019). People's aversion to algorithms making moral decisions depends on the mind they attribute to them (Bigman & Gray, 2018), and their resistance to medical algorithms is based on attributing to algorithms an inability to appreciate human uniqueness (Longoni, Bonezzi, & Morewedge, 2019). Similarly, people's diminished outrage at discrimination by algorithms results from perceiving algorithms as less prejudiced than humans (Bigman, Gray, Waytz, Arnestad, & Wilson, 2022). The attribution approach to studying human–robot interactions thus extends beyond the strict attribution of traits alone.
One possible limitation of the trait attribution approach is that it cannot explain the apparent intentionality of the duality of some human–robot interactions. That is, people seem at times to knowingly suspend their disbelief and cycle between treating a robot as agentic versus non-agentic. Although the trait approach can potentially explain going back and forth in attributions of agency, it does not address the role of intentionality in (and awareness of) doing so, and more research on the importance of this characteristic would be helpful. Moreover, it is an open question when people will interact with robots as actual social agents rather than as depictions of social agents. We agree that people might sometimes interact with social robots as depictions, but that does not mean that they always do so. One untested possibility is that the more mind a robot is perceived as having, the less likely people are to treat it as a depiction. To the extent that robots become increasingly agentic, the trait attribution approach parsimoniously explains interactions in which people treat social robots as "real" agents rather than depictions: loving an artificial agent (Oliver, 2020), thinking an AI is sentient when it displays sophisticated language and conversation (Allyn, 2022), and feeling bad about punishing robots (Bartneck, Verbunt, Mubin, & Al Mahmud, 2007).
C&F assume that the social cognition of humans is unique and cannot be applied to nonhuman entities. We argue that social cognition is broader: humans as targets of social cognition share that space with other entities, even if humans hold a special place in it. By our account, the difference between the social cognition of humans and the social cognition of robots is mostly quantitative, not qualitative.
Financial support
This work was supported by Office of Naval Research grant N00014-19-1-2299, "Modeling and planning with human impressions of robots."
Competing interest
None.