Introduction
Epistemic burden (Pierre et al. 2021), a concept drawn from social epistemology, black feminist thought, and other branches of social and cultural theory, describes how dominant culture actors discredit and blame the knowledge and knowledge practices of minoritized groups, further disadvantaging these already burdened groups in ways that compound their oppression. Burden is a distinctive theme across many types of governance. Generally, policy attends to burden only when it is too large or too visible to ignore; otherwise it overlooks and perpetuates it through a number of mechanisms. Elsewhere, we discussed epistemic burden in the context of collecting evidence of police brutality in the US (Paris et al. 2022). Lawmakers' expectation that more and more data is required to act on police brutality has the effect of indefinitely stalling interventions that could effectively fight it. Requiring "more data" can hardly be the solution to the problem, as statistics on police brutality are coproduced by systems of power and oppression to specific ends (Paris et al. 2022). Survivors of police brutality and their families organize to collect various forms of evidence aimed at challenging or completing official government-produced data. However, they find themselves overwhelmed by such practices and, most importantly, by the lack of recognition given to them by law enforcement agencies and the legal system. Local, community-led data collection and analysis efforts are disregarded, discredited, or not taken seriously, mainly because they start from different epistemological assumptions and, most importantly, point to the necessity of considering radical solutions that make those in power uncomfortable (i.e., the abolition of carceral structures). In this sense, the victims of police brutality shoulder the epistemic burden of making the phenomenon itself visible to people who will never fix it (Paris et al. 2022).

Figure 8.1 Visual themes from hidden virality and the everyday burden of correcting WhatsApp mis- and disinformation.
In terms of misinformation governance, burden is also largely unaddressed. Work to make online spaces better, with less harassment, hate speech, and false and misleading information, takes resources – epistemic resources like time, education, and the ability to present arguments, but also social resources like higher social profiles, which are commonly predicated on wealth, privilege, and notoriety (Fricker 2007; Paris et al. 2022; Pierre et al. 2021). Often, those with the fewest resources are the most targeted and, paradoxically, the ones who disproportionately bear the burden of making online spaces better (Citron 2016; Collins-Dexter 2020; Kuo and Marwick 2021; Noble 2013). This reality is rarely taken into consideration in conversations around misinformation and the attendant reliance on media literacy, information literacy, and fact-checking to mitigate it. Such interventions – like requests for more top-down data to prove police brutality – can hardly solve problems of false and misleading information, given that power imbalances actively discourage users from sharing verified information that goes against local norms and from correcting those who hold power. Most importantly, by failing to frame issues of misinformation within the broader cultural and economic dynamics that lead to them (tech colonialism above all), literacy and fact-checking approaches leave the public with the false and unfair impression that users are at fault for causing, and at the same time responsible for countering, the problem of misinformation, when they are only one part of the story.
In this chapter, we apply the concept of epistemic burden to describe how Meta, a powerful US-based company, shapes the technical affordances that lead to cumbersome everyday practices of correction for mis- and disinformation on WhatsApp for users in India. In Latin America, Africa, Europe, the Middle East, and Asia, private WhatsApp chats are an irreplaceable tool for the organization and maintenance of social life (Rossini et al. 2020; Wardle 2020). Multiple factors seem to have led to WhatsApp's success worldwide. It is affordable and user-friendly, and it provides users with great flexibility in terms of which format to choose for sharing content (text, audio, video, etc.), full control over the selection of who sees such content (there is no algorithmic content curation), and a sense of privacy (largely attributable to its encryption protocol). In addition, WhatsApp users tend to know each other personally, suggesting a prevalence of close contacts – a factor that further contributes to a perceived sense of privacy (Pasquetto et al. 2022). Taken together, these affordances have made WhatsApp the most popular messaging app and preferred means of everyday communication and information sharing in many countries, including India, where we conducted our research.
While information on WhatsApp can theoretically be shared freely without prejudice, empowering minoritized groups to share their experiences and amplify their voices (Lim 2020), many use WhatsApp in ways that negate, and sometimes directly inhibit, these freedoms. WhatsApp is a notorious vehicle for mis- and disinformation and for uncivil and dangerous speech. Its use has fostered pressing concerns around elections, ethnic violence, and other damaging consequences. False and misleading information spread through WhatsApp has been linked to tipping Brazilian, Indian, and Nigerian elections toward authoritarian candidates (Benghani 2019; Cheeseman et al. 2020; Garimella and Eckles 2020; Machado et al. 2019), to the genocide and forced migration of Rohingya Muslims in Myanmar (International Crisis Group 2017; Siddiquee 2020), and to deaths caused by disinformation about a global kidnapping ring (Banaji et al. 2019). During the COVID-19 outbreak, hoaxes, anonymous rumors, and conspiracy theories spread widely on mobile instant messaging platforms (Naeem and Bhatti 2020). In Lombardy – the Italian region most affected early in the pandemic – traffic on WhatsApp and Facebook Messenger quickly doubled after the pandemic's onset, and with it the viral sharing of false information and conspiracy theories (Facebook Inc. 2020). In India, where we conducted our research, mis- and disinformation on WhatsApp not only caused health concerns by promoting alternative medicines as potential cures for COVID-19, but also led to panic through unverified claims about internet shutoffs and shortages of essential commodities during the lockdown period (Khan 2020; Pal 2020).
WhatsApp is particularly concerning with respect to mis- and disinformation and uncivil and dangerous speech because it activates what is known as hidden virality, a construct originally theorized by Paris and Donovan (2019) that refers to unvetted, insular discourse on encrypted, private platforms that takes on a character of truth and remains mostly unnoticed until it causes real-world harm. In this chapter, we discuss how and why hidden virality is activated on WhatsApp. We investigate how WhatsApp's users perceive the problem of mis- and disinformation on the platform and deal with it daily, and we also discuss how WhatsApp's sociotechnical affordances (actual or perceived) can encourage users to share dangerous speech.
The chapter draws from digital ethnographic work that included semi-structured interviews and chat texting with forty self-reported WhatsApp users and group administrators who are residents of India. All who reported their location were either in or near (within fifty miles of) large cities. We conducted thirty interviews between January 2019 and March 2020. Fifteen formal interviews were conducted between January and March 2019, and fifteen more between January and March 2020. Chat conversations with ten more individuals were conducted between January and March 2020. After the official interviews ended, we maintained chat conversations with some of the users for about six months. Employing an anticolonial, feminist praxis, we actively worked to create transparent relationships with our participants over time, building reciprocal trust and respect. Foundational in establishing these relationships was the acknowledgment of our positionality as western researchers with no lived experience of Indian historical and sociocultural dynamics. Chats and video calls were conducted in English. Whenever participants shared content in other languages (Hindi, Bengali, Tamil, Malayalam), it was translated into English by research assistants from the region, who also helped to contextualize shared information culturally and socially.
We found that for Indian WhatsApp users, the burden on the powerless of correcting everyday misinformation is exacerbated by the high power distance (Fichman and Rathi 2021) between administrators and users in small private encrypted groups, especially when users are not part of the dominant hegemonic subculture but the administrator is. In these cases, individuals, specifically women, who are targeted by misinformation or harassment do not correct it and, like bystanders in other settings, struggle to intervene (Fichman and Sanfilippo 2015; Herring et al. 2002; Maltby et al. 2016; Shachaf and Hara 2010). We found that promoting information accuracy is challenging in our research context with Indian WhatsApp users, but this phenomenon and the difficulties surrounding it are unique neither to encrypted messaging, as they also occur on open social media platforms (Citron 2016), nor to the Indian context (Collins-Dexter 2020).
WhatsApp’s Initial Deployment and Usage in India
WhatsApp's 2010 introduction to India came just as 3G connectivity grew in the country (Kumar et al. 2012), causing a meteoric rise in use. Rather than having to buy a certain number of SMS messages from Indian telecommunications carriers, users could download the mobile app and pay a flat, very low fee for unlimited messaging that resembled SMS. In 2013, Facebook was underutilized by large segments of the global market (Prasad 2018). Trying to remedy this, that same year Facebook launched Free Basics, a mobile app allowing users to navigate a handful of platform-selected apps and services without using their data allowance. During its first year, Free Basics faced strong government backlash, while WhatsApp's popularity surged. WhatsApp gained 40 million users from April to September 2014, and by the end of that period, 10 percent of WhatsApp users worldwide were Indian (Diwanji 2019).
WhatsApp initially ran on a subscription model of $1/year; sometimes the first year's subscription was free, but users paid later. Facebook purchased WhatsApp for $19 billion in February 2014, and from 2016 WhatsApp was offered completely free of charge (WhatsApp 2016). That same year, the Telecom Regulatory Authority of India banned Free Basics in the country, stating that it, and similar zero-rating programs that allow the use of a narrow set of apps and services without counting against data plans, violated net neutrality (Telecom Regulatory Authority of India 2016). Facebook's WhatsApp does not offer the same infrastructure as Free Basics, but it supports voice calling and texting and is used to organize everyday social life, allowing users to pay bills, book hotels, make restaurant reservations, send greeting cards, schedule doctor's appointments, post or answer real estate ads, and share information informally, the last of which has become the topic of much research and debate.
Adding to WhatsApp's appeal in India, the app uses the Signal Protocol, developed by Open Whisper Systems (OWS), an open-source nonprofit organization now part of the Signal Foundation. OWS received funding from the Open Technology Fund, started in 2012 by US-backed Radio Free Asia; it has ties to the US Agency for Global Media (USAGM), formerly the Broadcasting Board of Governors (BBG), which encourages support of the United States' version of democracy abroad through its communication channels (BBG 2014, 2017; Roose 2018). Indeed, pro-democracy and human rights activists across the South Pacific rely on these encrypted technologies for mobilization (Lim 2020). WhatsApp's Signal Protocol provides end-to-end encryption (E2EE), in which only the sender and receiver hold the keys to encrypt and decrypt the messages they share. No one else – not even the service provider (WhatsApp) or a government – can decrypt message contents (WhatsApp 2020).
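To make the end-to-end principle concrete, the sketch below uses the PyNaCl library (chosen here purely for illustration) to show that a message encrypted for a recipient is opaque to any server relaying it. This is a minimal sketch of public-key E2EE under our own assumptions, not WhatsApp's actual implementation: the Signal Protocol adds key agreement (X3DH) and per-message key ratcheting on top of the same basic idea.

```python
# Minimal sketch of the end-to-end principle, assuming the PyNaCl library.
# Illustrative only; not the Signal Protocol WhatsApp actually runs.
from nacl.public import PrivateKey, Box

# Each endpoint generates its own key pair; private keys never leave the device.
sender_key = PrivateKey.generate()
receiver_key = PrivateKey.generate()

# The sender encrypts using their private key and the receiver's public key.
sending_box = Box(sender_key, receiver_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at the market at 6 pm")

# A relaying server sees only `ciphertext`: opaque bytes it cannot decrypt.
# Only the receiver, holding the matching private key, can recover the text.
receiving_box = Box(receiver_key, sender_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"Meet at the market at 6 pm"
```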
A “Cascade of Affordances” Locked Users In
In addition to its early adoption in the region and the use of E2EE protocols, a series of other factors have contributed to the successful adoption of WhatsApp in India and worldwide. While enabling limited broadcast communication, WhatsApp provides users with great control and flexibility over information sharing and management overall. Even groups on WhatsApp constitute highly regulated spaces, as admins can remove or ban members at any time at their discretion (Valeriani and Vaccari 2018). The admins tend to set a group's tone and, using feedback from other members, decide what is and is not allowed to be discussed within the group. Therefore, when used for political campaigning, WhatsApp groups support microsegmentation and microtargeting of audiences (Evangelista and Bruno 2019). Unlike social media platforms, WhatsApp is not subject to algorithmic curation or platform content moderation. If users want to share a particular message, they explicitly select the audience for that message. WhatsApp also provides delivery and read receipts for each message, which act as a further indicator of whether the intended audience has received or read the message. In addition to all this, users can quickly and easily share content in different modalities (text, audio, video, images). Images are often shared in the form of "forwarded messages," which carry little or no information about the source of the message or where it originated. This is unlike social media platforms such as Facebook or Twitter, where sharing or retweeting also carries information about the original post and its source, even if these can be easily manipulated (Acker and Chaiet 2020). Except for political campaigners, WhatsApp users typically know most of their contacts personally, as users need a person's phone number to add them as a contact, suggesting a prevalence of close and personal connections. One respondent shared:
WhatsApp is a good tool for communication because it is a free app that every friend of mine has in their hand at any given time. I receive messages only from people I am in touch with and not everyone who wants to contact me or send anonymous messages. Again, it is so easy to voice call, video call, and even voicemail or send and receive photos/pictures whenever … Compared to other apps, WhatsApp is easy, quick, reliable, and can be used by anyone at any time to communicate with people you know.
Ease of use seems a key factor that drove participants to the platform in the first place. Participants also reported having started to use WhatsApp for its unrestricted availability across many demographics and its inexpensive service. "Everyone already being there" plays a role in keeping them hooked on it. As participants noted, at the time we collected our interviews all their close family members and friends were on WhatsApp. Participants employed it mainly to communicate with family, friends, and colleagues in private chats and on calls or in small groups. Our conversations focused on the use of WhatsApp for one-on-one communication and small-group communication.
I’m in four or five groups total on WhatsApp. One is my family group, the immediate family, another one is for high school friends (we formed the group two years ago, 46 years after graduation), one is for my wife’s family, one for university friends (the 1974 graduating class).
I use WhatsApp to communicate with my family members and friends. We have one local group for the family, mainly used for sharing information. I ask about their difficulties when I’m not at home. I travel quite a lot. … I’m in three groups, all about family or close friends … One is family, one for my community, relatives, other extended family, cousins.
Participants were generally enthusiastic about WhatsApp and reported feeling gratitude: It enables them to talk with their loved ones at no cost, which is perceived to be a great advantage, especially by older generations.
I like WhatsApp very much. I like that I do not have to pay anything to talk with people, only my Internet bill. My wife can call my daughter in Europe every day; that would have been very expensive years ago. Life was terrible those days. … I have seen my grandson growing up on WhatsApp. It is a very useful tool. I’m glad it exists; I use it at least two or three hours per day.
Our respondents in India use WhatsApp for numerous pro-social activities, including communicating with distant family and friends. For those users, WhatsApp's accessibility, ability to interface with different people in different ways, and low cost make it the right tool for many tasks. Interview data also suggest that participants' preferred means of communication on WhatsApp is voice messages. Participants seemed to perceive audio as the fastest and easiest way of sharing information on the platform, as they can record and listen to voice messages while doing other activities. Indeed, WhatsApp allows users to lock the recording button until the recording is over, freeing their hands for other activities. Participants explained that they record and receive multiple chains of voice messages per day, which can be auto-played in sequence without interruption. They seem to perceive audio messages as an effortless way of consuming information. It might be that voice-based communication offers a more personal experience than text- or image-based content, which does not as directly involve another individual.
Individually, each one of these affordances has made WhatsApp attractive to users in India (as well as worldwide) but might not have been sufficient to ensure their long-term commitment to the platform. However, in combination, these affordances have made it costly for users to leave the platform for a new information-sharing space. In a sense, WhatsApp's affordances work as a "cascade" that keeps users from exiting its usage flow and routine information practices (Figure 8.2).

Figure 8.2 Cascading WhatsApp affordances.
WhatsApp's free-to-use services in India initially drew people to the platform. Its technical features, which connect apps and merge information with users' phone contacts, enabled them to connect widely and often with their close friends and family. Once the app had been adopted by the majority of mobile users and their close contacts, it became the place "where everyone already is" – a factor that made it costly for users to leave the platform for another service. On top of this, users' "perceived privacy" – resulting from their awareness of E2EE protocols on the platform – encouraged continued use of WhatsApp, as it promotes feelings of privacy, closeness, and ease of expression. Like a powerful cascade of water incessantly falling over a person beneath a waterfall, WhatsApp's affordances "trap" users and keep them from swimming away.
“Just-In-Case” Information Sharing on WhatsApp
While ensuring its long-term adoption, WhatsApp's affordances also set the platform up as an environment in which users feel particularly comfortable sharing information, including content they might consider controversial (such as misinformation, disinformation, and uncivil and dangerous speech) and might be afraid to share in other spaces. It has been shown that, generally speaking, great flexibility in audience selection and the possibility of retaining control over self-presentation are important motivators for sharing news online (Ihm and Kim 2018). Valeriani and Vaccari (2018) noted that "network selection and message control allowed by messaging apps facilitate the circulation of controversial information within closed circles"; as a result, users who politically censor themselves on social media are more likely to engage in political discussions on WhatsApp. Also, communication on messaging apps increases in response to distrust in mainstream media and reluctance to express oneself in online forums for fear of repercussions (Kuru 2019).
Our investigation suggests that WhatsApp’s E2EE feature, and the resulting feeling of “perceived privacy,” also played a crucial role in encouraging the sharing of controversial content. Nearly all our participants expressed feeling freer to have “more honest” conversations on WhatsApp’s various communication structures than on other platforms, most notably Facebook. The secure communications promised by WhatsApp’s E2EE play a key role in this perception. All participants were aware of the E2EE protocol and mentioned it as the main factor for liking and using the platform. They reported being particularly appreciative of the idea that WhatsApp’s conversations cannot be accessed and monitored by authorities; phone calls cannot be recorded and outsiders cannot join chat groups. However, E2EE is not nearly as impenetrable as it is marketed to be. There are several hacks and third-party apps that can be used to access, download, and analyze WhatsApp data and metadata, even when conversations have been deleted, or to record phone calls. In addition to this, groups set as “public” can in fact be joined by outsiders, including researchers (when an invite link exists on the web). None of the individuals we talked to reported being aware of any difference between “closed groups” and “open groups.” All participants reported exclusively being members of what they perceived to be closed, private groups that can only be joined with the permission of the group administrator. Given these limited capabilities of E2EE to protect WhatsApp users’ privacy, we refer to the privacy offered by E2EE as “perceived privacy.”
The fact that WhatsApp allows for audience selection and message control, in addition to users' perception that its E2EE allows free and honest conversations, manifests in various uses. Respondents most commonly reported using the app "to share jokes and fun content" that is often dark in tone or contains adult content, and is often related to current events and breaking news. When asked "How is content shared on WhatsApp different from content shared on Facebook?" they consistently reported that content on WhatsApp is funnier and more engaging than content shared on Facebook. The second most common answer was that WhatsApp content presents more useful or helpful information than Facebook content, typically related to current events and urgent situations.
While most participants claimed awareness that content circulating on WhatsApp might be inaccurate or misleading ("risky"), this did not stop them from sharing it. Our conversations with participants revealed that when exposed to an unverified rumor, they seemed to value its potential utility more than its potential inaccuracy. In other words, when they receive a piece of content on WhatsApp and believe that it is potentially useful or interesting to someone they care about, they share it despite knowing that it could be inaccurate. Accuracy is not the primary concern, especially in times of crisis. Instead, users engage in what we call just-in-case sharing, in which they place more importance on the potential benefits for friends and family should the information be true than on the social repercussions they might face if the rumor is false or inaccurate (e.g., losing credibility). Signs of this behavior emerged during multiple interactions with our study participants. During an informal chat conversation between one of the authors and a user, the user stated: "I re-share what they sent me even if it might be not accurate, just in case it is true."
Admin-led Moderation of Dangerous Speech in Small, Closed Groups
A significant amount of offensive, dangerous, and explicit content proliferates within WhatsApp’s groups. Adult content is particularly common in men-only chats. Nationalistic or religious content inciting hatred toward others is also shared quite often, and, as our participants noted, it “goes both ways,” with Hindus sharing offensive content about Muslims and vice versa. Women and members of religious minorities actively reach out to group administrators to ask for the removal of accounts that share inappropriate or offensive content. A user reported being ostracized by group members for refusing to share nationalistic and religious content that he perceived as offensive toward the Muslim minority:
To join each group, you either know the administrator or you need a recommendation from someone. You have to conform to the norm of that community. There is that expectation if you try to criticize. Once, they emotionally blackmailed me. I was ostracized from that community. “You don’t love your country,” they told me. These groups of colleagues and friends are closely guarded communities.
Everyday communications on WhatsApp happen in small groups that are perceived as "closed," meaning that they are managed by someone users personally know. Participants, regardless of age and sex, voiced concerns about unmoderated content spread in such groups. Closed WhatsApp groups' information-sharing practices operate on what we refer to as a membership model; group administrators add members to the group, controlling who is in and who is out. They set the group's tone and, sometimes using feedback from other members, decide what is and is not allowed to be posted. Group members concerned with the spread of dangerous speech observed that the issue with offensive and inappropriate content is not only the content itself but where it is shared. From this perspective, if controversial content is shared where it is allowed, WhatsApp users tolerate it.
The Burden of Correcting Everyday Mis- and Disinformation
Participants also noted that while they are willing to fact-check WhatsApp forwards for their own sake, they rarely engage in correcting other users by sharing evidence. Most users expressed the desire to correct other users on a daily basis. However, when asked to provide a concrete example of such behavior, only two respondents were able to do so. This finding may help in interpreting results from survey research based on self-reported correction behavior, which found that corrections are quite common on WhatsApp, suggesting that such pro-social behavior might be over-reported (Rossini et al. 2020). When asked to elaborate further on the challenges of correcting others, participants indicated that while they think correcting others is important, doing so might be considered impolite or rude because of cultural factors, especially if the sharer is senior or "outranks" them in terms of social status. This clearly displays the concept of power distance and how it reduces to bystanders both those targeted by misinformation or harassment and those wishing to act in solidarity with them (Fichman and Rathi 2021; Fichman and Sanfilippo 2015; Herring et al. 2002; Maltby et al. 2016). A participant clearly spelled out what seemed to be a feeling shared by most of the individuals we talked to: that the onus should not be on users to address information problems with the app but on WhatsApp itself.
I think it is WhatsApp’s responsibility to clean up these messages, not mine. Otherwise, their [WhatsApp’s] credibility will go down; people are getting tired of WhatsApp. There is no mechanism for fact-checking; it is not right.
Due to the platform's E2EE, blanket limits on sharing are the only available method of moderating speech. This is a source of frustration for the users we spoke with, including a few young participants who would like to be able to report false, offensive, and dangerous content on WhatsApp.
We are a democratic country, but our central government … has brought fights between castes and religions. … Offensive speeches and whatnot; people are dying u know in Delhi! At JNU University, people share hate speech through WhatsApp statuses … India is getting messed up 😞 I fear what’s gonna happen … I like WhatsApp because it makes my life easier. But one thing I would like to change is if the report button was there and WhatsApp could have a check when someone reports a profile (due to offensive content).
Because the platform lacks fact-checking mechanisms and content moderation strategies, participants perceive that the burden of cleaning up the everyday sharing of misinformation on the platform is on them. However, correcting others is a practice that goes against their cultural upbringing and makes them very uncomfortable.
We have already noted that the burden of keeping WhatsApp information safe in groups as well as in one-to-one conversations is on users. Minoritized individuals and those sympathetic toward them, such as women, students, and religious minorities, find themselves in charge of conducting this delicate work. But in order to have content removed or accounts blocked, these individuals have to convince the administrators of the gravity of the situation. Administrators deliberately decide what should or should not be moderated. WhatsApp groups, then, create situations in which dissent is silenced in favor of perceived group interests, because members are unable to report offensive, false, or misleading content anonymously and face the threat of being questioned, harassed, or ostracized by other members, both online and offline. In these circumstances, we suggest that Meta's WhatsApp has shaped information practices within local, and now global, contexts without much knowledge of these contexts or consideration of the risks faced by users, such as the dangerous consequences of false and dangerous speech mentioned in the introduction (Benghani 2019; Cheeseman et al. 2020; Garimella and Eckles 2020; Machado et al. 2019).
Unpacking the sociotechnical affordances of WhatsApp in India that have led to hidden virality ("perceived privacy" above all) shows how infrastructural design and deployment have allowed the company and its owners to pursue these goals while escaping accountability: entering global markets as infrastructural actors in countries such as Myanmar and India, wreaking havoc in the name of profit, and allowing disinformation to circulate unchecked until it resulted in genocide against Rohingya Muslims in Myanmar (Mozur 2018) and lynchings in rural India (Liao 2018). Attempts to mitigate the negative effects of hidden virality in India have done little to curb its spread: our respondents reported seeing no change in the amount of disinformation that crossed their feeds after the platform limited sharing, so that a user can forward a message to only five users or groups at a time. Meta's interventions to counter misinformation include labeling viral forwards and chain messages, limiting forwards, and designing on-platform tip lines and chatbots. Little evidence exists on whether any of these interventions work, or to what extent. Due to WhatsApp's encrypted nature, it is difficult for researchers to investigate the efficacy of any on-platform fact-checking efforts. Meanwhile, platforms and researchers alike have tried to shift the blame to users' misuse and lack of education (Chakrabarti, Stengel, and Solanki 2018). However, users are only part of the story; this chapter lays bare that they are engaging with the platform exactly as it was designed (Table 8.1). While the sample for the qualitative study may have been biased toward better-educated individuals living in or near large cities, the fact that our respondents feel they cannot directly influence how the platform works reveals much about structural power at work.
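Because E2EE keeps message content unreadable to WhatsApp's servers, interventions such as forward limits have to be enforced by the client on the user's device. The sketch below is a hypothetical illustration of that kind of client-side check; the names, threshold, and label are ours for illustration, not Meta's actual code.

```python
# Hypothetical sketch of a client-side forward limit of the kind described
# above. Under E2EE the server never reads message content, so any such
# check must run on the device. Names and thresholds are illustrative only.
from dataclasses import dataclass

FORWARD_LIMIT = 5      # maximum chats a message may be forwarded to at once
HIGHLY_FORWARDED = 5   # forward count after which the client attaches a label

@dataclass
class Message:
    text: str
    forward_count: int = 0  # carried as metadata alongside the ciphertext

def forward(message: Message, target_chats: list[str]) -> Message:
    """Enforce the per-forward limit and flag frequently forwarded messages."""
    if len(target_chats) > FORWARD_LIMIT:
        raise ValueError(f"A message can be forwarded to at most {FORWARD_LIMIT} chats")
    forwarded = Message(text=message.text, forward_count=message.forward_count + 1)
    if forwarded.forward_count >= HIGHLY_FORWARDED:
        print("Label attached: forwarded many times")
    return forwarded
```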
Table 8.1 Definitions of key concepts from the chapter
Concept | Definition
---|---
Hidden virality | Refers to how unvetted, insular discourse on digital media can take on a character of truth and remain unnoticed until causing real-world harm.
Epistemic burden | Describes how dominant culture actors discredit and blame the knowledge and knowledge practices of minoritized groups to further disadvantage these already burdened groups in a way that compounds their oppression.
Cascading affordances | Like a powerful cascade of water incessantly falling over a person beneath a waterfall, cascading affordances "trap" users and keep them from swimming away (i.e., moving to a new service).
Just-in-case sharing | Certain users place more importance on the potential benefits for friends and family should a piece of information be true than the repercussions they and others might face if the rumor is false or inaccurate (e.g., losing credibility). As a result, users are aware of the potential falsity of the information that they encounter online, but they share it anyway.
Possible Solutions: Changing Norms around Design
Given all these difficulties in responding to misinformation on WhatsApp, how can hidden virality be captured and addressed? Now that we have given some background and described the everyday burden of information curation that is offloaded to users on WhatsApp and similar platforms, thinking with the Governing Knowledge Commons framework and with governance strategies focused on changing norms around design offers some promising avenues. Here we outline solutions for changing norms around design, along with the benefits and shortcomings of each action area, who would be responsible for enacting the solutions, and who would be affected.
Meta shows that it "values free speech" by making moderation technically impossible, all the while cultivating economic power from WhatsApp's widespread use. This is an entrenched norm in the tech industry, and with Meta and WhatsApp in particular. One simple but likely superficial approach is for users and lawmakers across geopolitical borders to demand a change in platform norms, for example by requiring more transparency from these platforms about what content is actually shared and through which mechanisms, and by allowing users to easily opt in and out of sharing certain types of data with certain parties, if they so wish. Legislative policy, namely the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), has effectively ushered in opt-in design features across a number of websites and platforms in a few key geopolitical areas. Solutions for WhatsApp might follow similar protocols.
But this would not necessarily address the problem of the epistemic burden foisted upon those who are minoritized within private WhatsApp groups. More appropriately framing WhatsApp's onslaught of offensive and dangerous content as a platform problem would require concrete design- and norm-changing measures, such as redesigning the platform to reduce the spread of such content, which would likely decrease engagement and reduce the company's ability to leverage the economic power of its user base. WhatsApp has limited the forwarding of messages to keep information from spreading too quickly. While this is a positive step for the platform, the respondents in our study say they see just as much disinformation as before these share limits were enacted.
While it wouldn't necessarily limit the amount of disinformation that people see, requiring users themselves to label manipulated or satirical information might be useful to some degree in limiting the work that bystanders do to debunk certain types of problematic information, as we have seen with legislation and moderation around manipulated audiovisual content (Paris 2021). But this intervention would likely not decrease the amount of problematic content that is offensive and troublesome to certain disenfranchised bystanders, nor would it reduce the tendency of the most persistent offenders to see this as censorship, as already happens with social media posts labeled as false (Ognyanova et al. 2020). Moreover, without oversight and content moderation from the platform, this intervention would be impossible to enforce.
Taking a cue from this study's respondents, adding an anonymous reporting feature might be a useful intervention for WhatsApp to institute. But it would need accountability and oversight measures to keep it from falling into the reporting traps that are so prevalent on other platforms – namely that nothing happens unless the person reporting, or offending, has some level of notoriety or social capital. Examples include harassment reporting around caste in India, where higher-caste members at tech companies ignore the harassment claims brought by those of lower caste (Soundarajan et al. 2019), and cases in which women who are not public figures report manipulated nude and sexualized images of themselves spreading across platforms (Paris 2021). In these and other cases, it is precisely this resource of social capital that respondents lack, as do others on Twitter and other platforms who encounter difficulties in having their reports reviewed.
There are also strong normative expectations from within the user base to keep existing WhatsApp affordances as they are. Redesigning the platform and instituting anonymous reporting might detract from the perceived benefits of the platform for parties who need or like it precisely because people can share whatever they want with whomever they want, with no formalized oversight. Tech companies and their adherents often argue that changing the platform would decrease WhatsApp's pro-social possibilities; as Lim (2020) notes, there are pro-democracy and human rights groups that use the platform only because of these affordances.
But the deluge of antisocial content on WhatsApp that results in real-world consequences need not be framed as a problem necessarily generated and sustained by users' information and social practices. Instead, drawing on science and technology studies (STS) and Costanza-Chock's (2020) design justice branch of critical informatics, we must problematize this framing and envision new sociotechnical solutions that better suit user needs. This reimagination of sociotechnical systems must take the politics of knowledge into account and demand difficult discussions of design goals that center the goals of users, including them in the design process as experts on their informational needs, not just as data sources to exploit. This work demands that technologists grapple with the complex cultural and political contexts these technologies will be used in and develop tools that are useful, not exploitative. There are many possible avenues ahead, but all require rethinking our relationships with one another and the global community, how we communicate and cooperate within groups, and what roles technology should play in our lives.
Conclusions
As encrypted messaging apps remain one of the last uncensored spaces on the internet, it becomes vital to disentangle the dynamics of hidden virality, or when, how, and why dangerous content goes viral in these closed spaces while remaining unnoticed by outsiders. This chapter has provided an overview of the dynamics that made false and dangerous content widespread on WhatsApp in India between 2020 and 2021, while discussing users' takes on such issues. While we focus on WhatsApp in India, such methods of ethnographic research can be employed with users of other encrypted, private messaging platforms like Telegram or Signal in other countries across the globe. Certain issues around the political economy of the platform would differ in other country and app contexts, each with its own political-economic concerns and issues around social capital (Abubakar, Hafiz, and Dasuki 2018; Chauchard and Garimella 2022; Soares et al. 2021). Further studies in this vein would provide bases for rich comparisons and a better understanding of the interlinked phenomena we see at play in this study.
The wide-scale, long-term adoption of WhatsApp in India results from the deployment of not one but a "cascade" of sociotechnical and interdependent affordances (Figure 8.2). This cascading effect has clear positive consequences for Meta, the corporate entity behind this technology. WhatsApp's affordances were not designed in a vacuum or with neutral intentions: They result from aggregating data across applications on user phones to leverage contacts and services and increase engagement. As Cecere, Le Guel, and Lefrere (2020), Glick and Ruetschlin (2019), and Tang (2016) note, the promise of corporate benefits drove the inclusion and subsequent maintenance of E2EE as a key affordance of WhatsApp in the first place; the goal of offering aggregated services at a low cost is to generate networked data that is extremely valuable to WhatsApp and its owners, making the sociotechnical affordances a tool that furthers profit-based goals. However, the same cascading effect might not be equally beneficial to users.
Users feel they must stay on the platform even though they may not like everything about it, all the while interacting with others and with information, and engaging in other activities, generating data stored on WhatsApp’s servers that holds promise for revenue. WhatsApp’s revenue model centers on extracting data from WhatsApp and using it to grow and market other Meta products. For example, WhatsApp has access to the phone owner’s contact list. Contacts are used to suggest “new friends” on Facebook, grow Facebook’s user base, and increase its market value. E2EE offers corporate benefits as it bypasses both external and internal platform oversight, which results in diminished accountability for powerful stakeholders (platform owners, shareholders, and government entities) and shifts responsibility to users, as they are the only arbiters of what content is acceptable.
The possibility of sharing information in what is perceived to be a private environment, combined with the possibility of selecting specific audiences and retaining control over messages, encourages the sharing of what is perceived to be "risky but useful" content. Such content rarely comes from official sources or mainstream media; it is shared through a text message or an audio file and closely resembles "rumors," which have been defined as public-facing statements imbued with private hypotheses about the workings of the world (Rosnow 1991), and as products of sense-making that people generate to cope with uncertainty and concomitant anxiety (Rosnow 1988). Both definitions suggest that rumors offer a "collective problem-solving opportunity to individuals who participate" (Kwon, Cha, and Jung 2017). A key contribution of this study of how rumors spread on messaging apps is our proposal that WhatsApp users are often aware of the potential falsity of the information that they encounter on WhatsApp, but they share it anyway. This is what we call the "just-in-case" sharing practice: WhatsApp users place more importance on the potential benefits for friends and family should the information be true than on the repercussions they and others might face if the rumor is false or inaccurate (e.g., losing credibility). This practice is made possible by a combination of factors and dominant cultural norms, which include the urge to care for close ones by sharing potentially useful content, the lack of awareness of the potential risks involved in amplifying false or misleading content, and the widespread preference among our participants for not correcting family members and close friends out of respect and politeness.
Our work speaks to the need to address the many types of epistemic burden that manifest as users engage in information and social practices through WhatsApp and other similar encrypted apps. In these examples, we see how Meta, a powerful US-based technology company, has shaped technical affordances that create more work in the everyday practices of correcting mis- and disinformation for those who are already disadvantaged. Typically, tech companies, the popular press, and sometimes even academics blame the presence of disinformation and hate speech on WhatsApp on the knowledge and knowledge practices of the users. This practice fails to acknowledge the role played by western companies and, most importantly, how such blame (intentional or not) further disadvantages these already burdened groups in a way that compounds their oppression. Perceived privacy, when combined with users' inability to report inappropriate content anonymously on WhatsApp, actively encourages the spread of offensive content on the platform. Offensive content is particularly prominent in small, closed groups, which are tightly controlled by administrators. Group members need to personally contact administrators every time they are exposed to offensive content and ask them to remove it. The burden of flagging offensive content falls on the group members who might feel hurt by it but who, once again, must challenge dominant cultural norms to request moderation.