In their article, Clark and Fischer (C&F) state: “It is one thing to tacitly distinguish the three perspectives on a robot (a matter of cognition) and quite another to answer questions about them (a matter of meta-cognition)” (target article, sect. 4.5, para. 1). In support of their theory that it may be difficult for some people to think through their own conceptualizations of social agents, C&F reference Kahn et al.'s (2012) study, in which children ages 9–15 were asked questions about a socially contingent robot called Robovie. They argue that the language used in the study may obscure Robovie's status as a depiction of a social entity and may explain why children struggled to categorize Robovie. Although we agree that prompting children to think about the ontology of social robots poses challenges, we also believe that taking a developmental perspective on social robots may lead to a different interpretation altogether: This generation of children does not view social robots as representations of social beings but rather, as Kahn et al. (2012) posited, as belonging to a new ontological category. Although C&F state that “It is an open question what children understand about social robots at each age” (target article, sect. 6.4, para. 2), we propose that recent research on children's understanding of virtual assistants provides valuable insight into how children construe social robots.
Nearly half of American parents of children under age 9 report having at least one virtual assistant in their home (Rideout & Robb, 2020), meaning that these devices are far more likely to be familiar to children than even the most popular social robots. Virtual assistants are interactive and conversational, and they behave in socially contingent ways. Recent research suggests that children as young as age 4 can interact effectively with virtual assistants (e.g., Lovato & Piper, 2015; Lovato, Piper, & Wartella, 2019; Oranç & Ruggeri, 2021; Xu & Warschauer, 2020) and that, by age 7, children view them as reliable information sources (Girouard-Hallam & Danovitch, 2022). Moreover, children ascribe both artifact and non-artifact characteristics to these devices. Children ages 6–10 attribute mental characteristics such as intelligence, social characteristics such as the capacity for friendship, and some moral standing to a familiar virtual assistant (Girouard-Hallam, Streble, & Danovitch, 2021), but they also hold that virtual assistants cannot breathe and are not alive (Girouard-Hallam & Danovitch, 2022).
Thus, like the children in Kahn et al.'s (2012) Robovie study, children treat virtual assistants neither entirely like humans nor entirely like inanimate objects. Instead, children may view them as belonging to a new ontological category that occupies its own niche between person and artifact (e.g., Kahn, Gary, & Shen, 2013; Kahn & Shen, 2017; Severson & Carlson, 2010). In a study examining children's ontological beliefs about virtual assistants, Festerling and Siraj (2020) found that 6–10-year-old children held clear ontological beliefs about humans and artifacts, but believed that virtual assistants possessed human and artifact features simultaneously. Thus, children view virtual assistants as a unique entity rather than as a mechanical depiction of a non-unique entity, such as a person. Contrary to C&F's arguments that people construe social robots as non-real facsimiles of real social agents by engaging with them and then appreciating their qualities (the dual-layer argument; target article, sect. 6.4, para. 2), and that children in particular treat robots “as interactive toys – as props in make-believe social play” (target article, sect. 6.4, para. 1), children appear to believe that virtual assistants are at once animate and inanimate, rather than separating these entities into a real structure and an imaginary depiction.
Children's para-social partnerships with virtual assistants further support the idea that children view virtual assistants as a new ontological category occupying a unique space between artifact and person. Para-social relationships are emotionally tinged and one-sided, and they commonly form between children and media characters, such as characters from popular television shows (Richards & Calvert, 2017). Parents report that their young children form para-social relationships with virtual assistants and that these relationships result from children's exposure to these socially contingent devices (Hoffman, Owen, & Calvert, 2021). Thus, the more time children spend with virtual assistants, which can respond to them and engage them in conversation, the more likely they are to believe that virtual assistants are companions that care for them and that should be cared for in turn. Similarly, there is evidence that children treat virtual assistants as trusted social partners and benefit from pedagogical exchanges with them similar to those they have with human partners (Xu et al., 2021). C&F use Fischer's (2016) hypothesis that some people are “players” and some are “non-players” to explain that “not everyone is willing to play along with a robot – or to do so all the time” (target article, sect. 7.2, para. 7). We propose that children who regularly interact with virtual assistants develop a willingness to engage as “players” with these devices, which in turn changes the way they view them and might change the way they view social robots as well.
In conclusion, as this generation of children grows up with virtual assistants and similar devices, and as virtual assistants play an increasing role in adults' day-to-day lives, it will be necessary to re-evaluate C&F's stance. Interactions with virtual assistants may reveal a more complex relationship between humans and robots than C&F claim. Rather than viewing social robots as depictions of social agents, children and adults who have experience with virtual assistants may instead view them as semi-social agents. In other words, they may view social robots not as a composite of several parts, but as a unique assemblage of human and artifact characteristics. Additional empirical research that takes a developmental approach to the conversations and interactions people have with virtual assistants could help test C&F's hypothesis that “people construe social robots not as agents per se, but as depictions of agents” (target article, sect. 1, para. 3). A developmental and ontological perspective on social robots may move the conversation beyond mere depiction toward a deeper understanding of the role social robots play in our daily lives and how we view them in turn.
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interest
None.