Smart, kind, thoughtful, creative, loving, and curious. These are not only aspirational human attributes but also the characteristics attributed to Meta AI, the new chatbot available on Facebook, Instagram, and Threads.Footnote 1 When Meta rolled out the newest addition to its line-up of social media products in April 2024, the company promoted the chatbot’s ability to “learn, get things done, create content, and connect.”Footnote 2 But as a conversational AI, it is also built to have a human-like persona.
In this, Meta AI builds on the increasing popularity of conversational AI platforms such as Replika and Character.AI, which offer a variety of bots, many of which can be customized in appearance, background, and characteristics. Meta’s not the first to release an in-app chatbot – Snapchat, for instance, released My AI in early 2023 – but its rollout is the most widespread.Footnote 3 We’ve moved beyond Alexa and Siri to AI friends.
“By the end of the decade,” Mark Zuckerberg predicted, “I think lots of people will talk to AIs frequently throughout the day.”Footnote 4 Beyond its obvious self-interest, Zuckerberg’s prediction echoes much of the research on chatbots, which claims that chatbots “can fill some of the same needs as human acquaintances.” As they become more sophisticated, work by sociologists and psychologists suggests, “there is hope that these conversational agents will soon provide personalized social support to a variety of users.”Footnote 5 According to this way of thinking, technical innovation is the only thing standing in the way of consistent, fulfilling human-AI interaction. From researchers to corporate executives, the hope – and hype – for large language models to offer companionship abounds, even going so far as to suggest that they may assuage the “loneliness epidemic” identified by Surgeon General Vivek Murthy.Footnote 6
Yet, Murthy’s report identifies online interactions as a key cause of the loneliness epidemic. Lacking meaningful in-person relationships, people turn to online interactions, which then drain their attention even further from the people around them. This type of interaction, Murthy writes, “displaces in-person engagement, monopolizes our attention, reduces the quality of our interactions, and even diminishes our self-esteem.”Footnote 7
Zuckerberg’s technosolutionist thinking imagines that technology can fix the very problems that it has created. If the chatbot is just good enough, this way of thinking goes, it will be an effective substitute for human interaction. Researchers often discuss this good-enough threshold in terms of authenticity for chatbots and the large language models they run on.Footnote 8 A 2018 study, for instance, defined authenticity for chatbots as “the interplay of learning by experience, anthropomorphizing, having a transparent purpose, showing (advanced) conversational behavior and being coherent.” With this guideline in mind, they propose, “researchers will be able to improve socio-ethics for emerging technologies.”Footnote 9 But authenticity for AI – whose artifice is in its name – is a complex question, one that has been shaped by cultural imaginations of both intelligent machines and human communication. The history of human contact with and through machines shapes the expectations we bring to those interactions today, from fear of AI to technosolutionist fantasies.
The propensity of humans to anthropomorphize their conversations with computers is as old as the oldest chatbot. The illusion of humanness that some people experience when interacting with a computer is called “the Eliza effect,” after the first chatbot, Eliza, invented by MIT professor Joseph Weizenbaum in the mid-1960s.
Sociologist Sherry Turkle called the invention of Eliza a “crisis of authenticity.”Footnote 10 Eliza ushered in new questions about authentic relationships, a term “traditionally reserved for relationships in which all parties were capable of feeling [trust, caring, empathy] – that is, where all parties were people.”Footnote 11 The new reliance on technology for interaction, combined with the illusion of mutual understanding, was tempting, Turkle argues, but ultimately empty. The lack of relational authenticity left people open, she believes, to being fooled by whoever was controlling the bot – a scammer, a corporation, a government. But more insidiously, it also hampered the ability to form meaningful connections with other humans. The Eliza effect, Turkle believes, substituted the simulation of a relationship for the real thing.
As chatbots have become more tightly woven into the fabric of everyday life, the concept of AI authenticity has become more complex. Some models for authenticity follow Turkle’s line of thinking, attempting to create a simulation so good that users will feel it is authentic. A 2018 study of customer service chatbots, for instance, argues that “a messaging agent has to anthropomorphize in order to be authentic.”Footnote 12 The authors quote interview subjects who all reach for the language of fiction: the bot should “create a human persona, which you can relate to and model for yourself,” what the authors refer to as a “character.”Footnote 13 In this way of thinking, AI authenticity is a matter of convincing a user of the chatbot’s human likeness.
With the recent explosion in large language model innovation, however, researchers are looking to refine the concept of AI authenticity away from imitation. A 2023 study by business and computer science researchers defines AI authenticity not as “human-likeness” but as the expectation that “the companion chatbot would evolve in its own unique way as a result of communicating with its human counterpart.”Footnote 14 Users will be more likely to evaluate the bot as a legitimate companion if it communicates not exactly as a human would but nonetheless provides a pleasurable interaction experience. In this way, the more recent study takes the conclusion of the 2018 study – “we do not need intelligent, but rather socially intelligent agents” – and gives it depth. Authenticity, here, is not a successful copy but a new original.Footnote 15
In both cases, though, the research looks to human users to rate AI authenticity – and users bring to that task a set of expectations shaped long before the invention of chatbots. For the Eliza effect to take hold in the 1960s, users had to already have a framework for disembodied text as a sign of humanness, devoid of the embodied touches of handwriting or voice. That experience of disembodied real-time communication originated not with Eliza in the 1960s, but in the nineteenth century with the telegraph. For the first time, simultaneous communication was possible at a distance. Two people – or more – could banter without hearing a word and flirt without a look, separated by hundreds or thousands of miles.
Newspaper accounts warned of the dangers of separating the body from language in this way, detailing what we’d now call catfishing scandals. Other accounts – usually fictional – were more romantic. Ella Cheever Thayer’s 1879 hit novel, Wired Love: A Romance in Dots and Dashes, recounts the courtship-at-a-distance of Clem and Nathalie, two young telegraph operators. The novel turns on the question of whether it’s possible to form a connection with words only. How long can they exist only in language? At what point does the body matter, if at all?
Wired Love shows how the expectation of a body on the other side endures. In the middle of the novel, Clem shows up at Nathalie’s office unannounced – oily, crooked, and musky. He struts in with an “air of cheap assurance,” flashing his trinkets, reeking of cheap cologne, his hair coated in bear grease.Footnote 16 He winks and oozes all over her, leaving Nathalie flummoxed. How could this be the person whose words she has fallen for?
It turns out that the visit was only a prank by a jealous telegraph operator on the same circuit who had been listening to their flirtation through the wire. This plot device depends on readers who, in 1879, are at once suspicious of disembodied communication and enchanted by it. And, in the end, the real Clem appears in a body as elegant as his prose. The final lines of the book appear in Morse code, a puzzle for the reader. Even as Clem and Nathalie swear off dots and dashes in favor of face-to-face interaction, the novel still captures the allure of wired love: the fantasy that real connection happens exclusively through language, without the complications of physicality, social context, or material circumstance.
Scholars have come to believe that these mediated, instantaneous conversations now define the modern experience of communication. That is, we understand our interactions with others as inherently entangled with technology, or as Laura Otis puts it, leading “users to think as though they were part of a net.”Footnote 17 Even when speaking face to face, we feel a circuitousness, the paradoxical feeling of distanced closeness engendered by the phone, the internet, or – as in Nathalie’s case – the network of telegraph wires. In other words, when our personal interactions already feel mediated, it’s even easier to imagine that a mediated conversation is personal.
Together, this history gives us two sometimes conflicting beliefs: true connection happens through language alone, which contributes to the feeling of humanness or authenticity we can get with an AI. At the same time, our in-person interactions already feel mediated, which makes it harder to connect. In addition, much of the research on AI authenticity comes out of business-oriented fields. Creating trust in chatbots, for these studies, is a way to improve sales, not solve loneliness.
These beliefs will change as the technology changes, but to understand the fantasy being peddled by Meta AI and other chatbots, we must first understand our technologically mediated histories, which shape our present-day evaluations of this technology. Authenticity in AI relationships, I believe, will not be marked by mimicking human relationships, a one-to-one replacement of humans with AI. Instead, it will be an acknowledgment of the artifice of artificial intelligence, a new kind of relationship for our mediated (and often lonely) times.
Author contribution
Conceptualization: M.W.; Data curation: M.W.; Funding acquisition: M.W.; Investigation: M.W.; Methodology: M.W.; Project administration: M.W.; Resources: M.W.; Software: M.W.; Supervision: M.W.; Validation: M.W.; Visualization: M.W.; Writing – original draft: M.W.; Writing – review & editing: M.W.
Financial support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Competing interest
The author declares none.