
8 - Hidden Virality and the Everyday Burden of Correcting WhatsApp Mis- and Disinformation

Published online by Cambridge University Press: 13 March 2025

Madelyn R. Sanfilippo, University of Illinois School of Information Sciences
Melissa G. Ocepek, University of Illinois School of Information Sciences

Summary

The spread of false and misleading information, hate speech, and harassment on WhatsApp has generated concern about elections, been implicated in ethnic violence, and been linked to other disastrous events across the globe. On WhatsApp, we see the activation of the phenomenon of hidden virality, which characterizes how unvetted, insular discourse on encrypted, private platforms takes on a character of truth and remains mostly unnoticed until causing real-world harm. In this book chapter, we discuss what factors contribute to the activation of hidden virality on WhatsApp while answering the following questions: 1) To what extent and how do WhatsApp’s sociotechnical affordances encourage the sharing of mis- and disinformation on the platform, and 2) How do WhatsApp’s users perceive and deal with mis- and disinformation daily? Our findings indicate that WhatsApp’s affordance of perceived privacy actively encourages the spread of false and offensive content on the platform, especially when combined with users’ inability to report inappropriate content anonymously. Groups in which such content is prominent are tightly controlled by administrators who typically hold dominant cultural positions (e.g., they are senior and male). Users who feel hurt by false and offensive content must personally ask administrators for its removal. This is not an easy job, as it requires users to challenge dominant cultural norms, causing them stress and anxiety. Users would rather have WhatsApp take on the burden of moderating problematic content. We close the chapter by situating our findings in relation to cultural and economic power dynamics, and we bring attention to the fact that if WhatsApp does not act to reduce and prevent the real-world harm of hidden virality, its affordances of widespread accessibility and encryption will keep promoting its market advantages while the burden of moderating content falls on minoritized users.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0), https://creativecommons.org/licenses/by-nc-nd/4.0/.

Introduction

Epistemic burden (Pierre et al. 2021), a concept drawn from social epistemology, Black feminist thought, and other branches of social and cultural theory, describes how dominant-culture actors discredit and blame the knowledge and knowledge practices of minoritized groups, further disadvantaging these already burdened groups in a way that compounds their oppression. Burden is a distinctive theme across many types of governance: generally, policy attends to burden only when it is too large or too visible to ignore, and otherwise perpetuates it through a number of mechanisms. Elsewhere, we discussed epistemic burden in the context of collecting evidence of police brutality in the US (Paris et al. 2022). Lawmakers’ expectation that more and more data is required to act on police brutality has the effect of indefinitely stalling interventions that could effectively fight it. Requiring “more data” can hardly be the solution to the problem, as statistics on police brutality are coproduced by systems of power and oppression to specific ends (Paris et al. 2022). Survivors of police brutality and their families organize to collect various forms of evidence aimed at challenging or complementing official government-produced data. However, they find themselves overwhelmed by such practices and, most importantly, by the lack of recognition given to them by law enforcement agencies and the legal system. Local, community-led data collection and analysis efforts are disregarded, discredited, or not taken seriously, mainly because they start from different epistemological assumptions and, most importantly, point to the necessity of considering radical solutions that make those in power uncomfortable (e.g., the abolition of carceral structures). In this sense, the victims of police brutality are shouldered with the epistemic burden of making the phenomenon itself visible to people who will never fix it (Paris et al. 2022).

Figure 8.1 Visual themes from hidden virality and the everyday burden of correcting WhatsApp mis- and disinformation.

In terms of misinformation governance, burden is also largely unaddressed. Work to make online spaces better, with less harassment, hate speech, and false and misleading information, takes resources – epistemic resources like time, education, and the ability to present arguments, but also social resources like higher social profiles, which are commonly predicated on wealth, privilege, and notoriety (Fricker 2007; Paris et al. 2022; Pierre et al. 2021). Often, those with the fewest resources are the most targeted and, paradoxically, the ones who disproportionately bear the burden of making online spaces better (Citron 2016; Collins-Dexter 2020; Kuo and Marwick 2021; Noble 2013). This reality is rarely taken into consideration in conversations around misinformation and the attendant proposals to mitigate it through media literacy, information literacy, and fact-checking. Such interventions – like requests for more top-down data to prove police brutality – can hardly solve problems of false and misleading information, given that power imbalances actively discourage users from sharing verified information that goes against local norms and from correcting those who hold power. Most importantly, by failing to frame issues of misinformation within the broader cultural and economic dynamics that lead to them (first and foremost, tech colonialism), literacy and fact-checking approaches leave the public with the false and unfair impression that users are at fault for causing, and at the same time responsible for countering, the problem of misinformation, when they are only one part of the story.

In this chapter, we apply the concept of epistemic burden to describe how Meta, a powerful US-based company, shapes the technical affordances that make everyday correction of mis- and disinformation on WhatsApp cumbersome for users in India. In Latin America, Africa, Europe, the Middle East, and Asia, private WhatsApp chats are an irreplaceable tool for the organization and maintenance of social life (Rossini et al. 2020; Wardle 2020). Multiple factors seem to have led to WhatsApp’s success worldwide. It is affordable and user-friendly, and it provides users with great flexibility in choosing a format for sharing content (text, audio, video, etc.), full control over who sees such content (there is no algorithmic content curation), and a perceived feeling of privacy (mostly owing to its encryption protocol). In addition, WhatsApp users tend to know each other personally, suggesting a prevalence of close contacts – a factor that further contributes to a perceived sense of privacy (Pasquetto et al. 2022). Taken together, these affordances have made WhatsApp the most popular messaging app and preferred means of everyday communication and information sharing in many countries, including India, where we conducted our research.

While information on WhatsApp can theoretically be shared freely without prejudice, empowering minoritized groups to share their experiences and amplify their voices (Lim 2020), many use WhatsApp in ways that negate, and sometimes directly inhibit, these freedoms. WhatsApp is a notorious vehicle for mis- and disinformation and for uncivil and dangerous speech. Its use has fostered pressing concerns around elections, ethnic violence, and other damaging consequences. False and misleading information spread through WhatsApp has been linked to tipping Brazilian, Indian, and Nigerian elections toward authoritarian candidates (Benghani 2019; Cheeseman et al. 2020; Garimella and Eckles 2020; Machado et al. 2019), the genocide and forced migration of Rohingya Muslims in Myanmar (International Crisis Group 2017; Siddiquee 2020), and deaths due to disinformation about a global kidnapping ring (Banaji et al. 2019). During the COVID-19 outbreak, hoaxes, anonymous rumors, and conspiracy theories spread widely on mobile instant messaging platforms (Naeem and Bhatti 2020). In Lombardy – the Italian region most affected early in the pandemic – traffic on WhatsApp and Facebook Messenger quickly doubled after its onset, and with it the viral sharing of false information and conspiracy theories (Facebook Inc. 2020). In India, where we conducted our research, mis- and disinformation on WhatsApp not only caused health concerns by projecting alternative medicines as potential cures for COVID-19, but also led to panic through unverified claims about internet shutoffs and shortages of essential commodities during the lockdown period (Khan 2020; Pal 2020).

WhatsApp is particularly concerning when it comes to mis- and disinformation and uncivil and dangerous speech because we see there the activation of hidden virality, a construct originally theorized by Paris and Donovan (2019) that refers to unvetted, insular discourse on encrypted, private platforms that takes on a character of truth and remains mostly unnoticed until causing real-world harm. In this chapter, we discuss how and why hidden virality is activated on WhatsApp. We investigate how WhatsApp’s users perceive the problem of mis- and disinformation on the platform and deal with it daily, and we also discuss how WhatsApp’s sociotechnical affordances (actual or perceived) can encourage users to share dangerous speech.

The chapter draws from digital ethnographic work that included semi-structured interviews and chat texting with forty self-reported WhatsApp users and group administrators who are residents of India. All who reported their location were either in or near (within fifty miles of) large cities. We conducted thirty interviews between January 2019 and March 2020. Fifteen formal interviews were conducted between January and March 2019, and fifteen more between January and March 2020. Chat conversations with ten more individuals were conducted between January and March 2020. After the official interviews ended, we maintained chat conversations with some of the users for about six months. Employing an anticolonial, feminist praxis, we actively worked to create transparent relationships with our participants over time, building reciprocal trust and respect. Foundational in establishing these relationships was the acknowledgment of our positionality as western researchers with no lived experience of Indian historical and sociocultural dynamics. Chats and video calls were conducted in English. Whenever participants shared content in other languages (Hindi, Bengali, Tamil, Malayalam), it was translated into English by research assistants from the region, who also helped to contextualize shared information culturally and socially.

We found that, for Indian WhatsApp users, the burden on the powerless of correcting everyday misinformation is exacerbated by high power distance (Fichman and Rathi 2021) between administrators and users in small, private, encrypted groups, especially when users are not part of the dominant hegemonic subculture but the administrator is. In these cases, individuals, particularly women, who are targeted by misinformation or harassment do not correct it and struggle in the way bystanders do in other settings (Fichman and Sanfilippo 2015; Herring et al. 2002; Maltby et al. 2016; Shachaf and Hara 2010). Promoting information accuracy is challenging in our research context with Indian WhatsApp users, but this phenomenon and the difficulties surrounding it are unique neither to encrypted messaging, as they occur even on open social media platforms (Citron 2016), nor to the Indian context (Collins-Dexter 2020).

WhatsApp’s Initial Deployment and Usage in India

WhatsApp’s 2010 introduction to India came just as 3G connectivity grew in the country (Kumar et al. 2012), causing a meteoric rise in use. Rather than having to buy a certain number of SMS messages from Indian telecommunications carriers, users could download the mobile app and pay a flat, very low fee for unlimited messaging that resembled SMS. In 2013, Facebook was underutilized by large segments of the global market (Prasad 2018). Trying to remedy this, that same year Facebook launched Free Basics, a mobile app allowing users to navigate a handful of platform-selected apps and services without using their data allowance. During its first year, Free Basics faced strong government backlash, while WhatsApp’s popularity surged. WhatsApp gained 40 million users from April to September 2014, and by the end of that period, 10 percent of WhatsApp users worldwide were Indian (Diwanji 2019).

WhatsApp initially ran on a subscription model of $1/year; sometimes the first year’s subscription was free, but users paid later. Facebook purchased WhatsApp for $19 billion in February 2014, and from 2016 WhatsApp was offered completely free of charge (WhatsApp 2016). That same year, the Telecom Regulatory Authority of India banned Free Basics in the country, stating that it, and similar zero-rating programs that allow the use of a narrow set of apps and services at no cost to data plans, violated net neutrality (Telecom Regulatory Authority of India 2016). WhatsApp does not offer the same infrastructure as Free Basics, but it is used for voice calling and texting and to organize everyday social life: users pay bills, book hotels, make restaurant reservations, send greeting cards, schedule doctor’s appointments, post or answer real estate ads, and share information informally, the last of which has become the topic of much research and debate.

Adding to WhatsApp’s appeal in India, the app uses the Signal Protocol, developed by Open Whisper Systems (OWS), an open-source nonprofit organization now part of the Signal Foundation. OWS received funding from the Open Technology Fund, started in 2012 by US-backed Radio Free Asia; it has ties to the US Agency for Global Media (USAGM), formerly the Broadcasting Board of Governors (BBG), which encourages support of the United States’ version of democracy abroad through its communication channels (BBG 2014, 2017; Roose 2018). Indeed, pro-democracy and human rights activists across the South Pacific rely on these encrypted technologies for mobilization (Lim 2020). WhatsApp’s Signal Protocol provides end-to-end encryption (E2EE), in which only the sender and receiver hold the keys used to encrypt and decrypt the messages they share. No one else – not even the service provider (WhatsApp) or a government – can decrypt message contents (WhatsApp 2020).
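To make the E2EE property concrete, the following is a minimal sketch in Python (using the widely available cryptography package) of the key-agreement-plus-encryption pattern that underlies protocols like Signal’s. It is a simplification, not the Signal Protocol itself, which adds prekeys and a double ratchet for forward secrecy; the sketch only illustrates why a relaying server, which sees ciphertext but never the derived key, cannot read message contents.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each endpoint generates a key pair; only public keys are ever exchanged.
sender_priv = X25519PrivateKey.generate()
receiver_priv = X25519PrivateKey.generate()

# Both endpoints derive the same shared secret (Diffie-Hellman agreement)
# from their own private key and the other party's public key.
shared = sender_priv.exchange(receiver_priv.public_key())
assert shared == receiver_priv.exchange(sender_priv.public_key())

# A key-derivation function turns the raw secret into a message key.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"illustrative-session").derive(shared)

# The sender encrypts; a relay server only ever forwards opaque ciphertext.
nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"hello", None)

# Only the receiver, holding the same derived key, can decrypt.
assert ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None) == b"hello"
```

The point of the sketch is architectural: because keys live only on the endpoints, the platform cannot inspect content, which is precisely what makes conventional content moderation technically unavailable on WhatsApp.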

A “Cascade of Affordances” Locked Users In

In addition to its early adoption in the region and the use of E2EE protocols, a series of other factors have contributed to the successful adoption of WhatsApp in India and worldwide. While enabling limited broadcast communication, WhatsApp provides users with great control and flexibility over information sharing and management. Even groups on WhatsApp constitute highly regulated spaces, as admins can remove or ban members at any time at their discretion (Valeriani and Vaccari 2018). Admins tend to set a group’s tone and, using feedback from other members, decide what is and is not allowed to be discussed within the group. Therefore, when used for political campaigning, WhatsApp groups support microsegmentation and microtargeting of audiences (Evangelista and Bruno 2019). Unlike social media platforms, WhatsApp is subject to neither algorithmic curation nor platform content moderation. If users want to share a particular message, they can explicitly select the audience for that message. WhatsApp also provides delivery and read receipts for each message, which act as a further indicator of whether the intended audience has received or read the message. In addition, users can quickly and easily share content in different modalities (text, audio, video, images). Images are often shared in the form of “forwarded messages,” which carry little or no information about the source of the message or where it originated. This is unlike social media platforms such as Facebook or Twitter, where sharing or retweeting also carries information about the original post and its source, even if these can be easily manipulated (Acker and Chaiet 2020). Except for political campaigners, WhatsApp users typically know most of their contacts personally, as users need a person’s phone number to add them as a contact, suggesting a prevalence of close and personal connections. One respondent shared:

WhatsApp is a good tool for communication because it is a free app that every friend of mine has in their hand at any given time. I receive messages only from people I am in touch with and not everyone who wants to contact me or send anonymous messages. Again, it is so easy to voice call, video call, and even voicemail or send and receive photos/pictures whenever … Compared to other apps, WhatsApp is easy, quick, reliable, and can be used by anyone at any time to communicate with people you know.

(Study participant #10, interview round II, February 2020)

Ease of use seems to be a key factor that drove participants to the platform in the first place. Participants also reported having started to use WhatsApp for its unrestricted availability across many demographics and its inexpensive service. “Everyone already being there” plays a role in keeping them hooked on it. As participants noted, at the time of our interviews all their close family members and friends were on WhatsApp. Participants employed it mainly to communicate with family, friends, and colleagues in private chats, on calls, or in small groups. Our conversations focused on the use of WhatsApp for one-on-one and small-group communication.

I’m in four or five groups total on WhatsApp. One is my family group, the immediate family, another one is for high school friends (we formed the group two years ago, 46 years after graduation), one is for my wife’s family, one for university friends (the 1974 graduating class).

(Study participant #1, interview round II, February 2020)

I use WhatsApp to communicate with my family members and friends. We have one local group for the family, mainly used for sharing information. I ask about their difficulties when I’m not at home. I travel quite a lot. … I’m in three groups, all about family or close friends … One is family, one for my community, relatives, other extended family, cousins.

(Study participant #2, interview round II, February 2020)

Participants were generally enthusiastic about WhatsApp and reported feeling gratitude: It enables them to talk with their loved ones at no cost, which is perceived to be a great advantage, especially by older generations.

I like WhatsApp very much. I like that I do not have to pay anything to talk with people, only my Internet bill. My wife can call my daughter in Europe every day; that would have been very expensive years ago. Life was terrible those days. … I have seen my grandson growing up on WhatsApp. It is a very useful tool. I’m glad it exists; I use it at least two or three hours per day.

(Study participant #1, interview round II, February 2020)

Our respondents in India use WhatsApp for numerous pro-social activities, including communicating with distant family and friends. For those users, WhatsApp’s accessibility, ability to interface with different people in different ways, and low cost make it the right tool for many tasks. Interview data also suggest that participants’ preferred means of communication on WhatsApp is voice messages. Participants seemed to perceive audio as the fastest and easiest way of sharing information on the platform, as they can record and listen to voice messages while doing other activities. Indeed, WhatsApp allows users to lock the recording button until the recording is over, freeing their hands for other activities. Participants explained that they record and receive multiple chains of voice messages per day, which can be auto-played in sequence, without interruption. They seem to perceive audio messages to be an effortless way of consuming information. It might be that voice-based communication gives a more personal experience than text- or image-based content that does not directly involve another individual.

Individually, each of these affordances has made WhatsApp attractive to users in India (as well as worldwide) but might not have been sufficient to ensure their long-term commitment to the platform. In combination, however, these affordances have made it costly for users to leave the platform for a new information-sharing space. In a sense, WhatsApp’s affordances work as a “cascade” that keeps users from exiting its usage flow and routine information practices (Figure 8.2).

Figure 8.2 Cascading WhatsApp affordances.

WhatsApp’s free-to-use services in India initially drew people to the platform. Its technical features – connecting apps and merging information with users’ phone contacts – enabled them to connect widely and often with their close friends and family. Once the app had been adopted by the majority of mobile users and their close contacts, it became the place “where everyone already is,” a factor that made it costly for users to leave the platform for another service. On top of this, users’ “perceived privacy,” resulting from their awareness of E2EE protocols on the platform, encouraged continued usage of WhatsApp, as it promotes feelings of privacy, closeness, and ease of expression. Like a powerful cascade of water incessantly falling over a person beneath a waterfall, WhatsApp’s affordances “trap” users and keep them from swimming away.

“Just-In-Case” Information Sharing on WhatsApp

While ensuring its long-term adoption, WhatsApp’s affordances also set the platform up as an environment in which users feel particularly comfortable sharing information, including content they might consider controversial (such as misinformation, disinformation, and uncivil and dangerous speech) and might be afraid to share in other spaces. It has been shown that, generally speaking, flexibility in audience selection and the possibility of retaining control over self-presentation are important motivators for sharing news online (Ihm and Kim 2018). Valeriani and Vaccari (2018) noted that “network selection and message control allowed by messaging apps facilitate the circulation of controversial information within closed circles”; as a result, users who politically censor themselves on social media are more likely to engage in political discussions on WhatsApp. Communication on messaging apps also increases in response to distrust in mainstream media and reluctance to express oneself in online forums for fear of repercussions (Kuru 2019).

Our investigation suggests that WhatsApp’s E2EE feature, and the resulting feeling of “perceived privacy,” also played a crucial role in encouraging the sharing of controversial content. Nearly all our participants expressed feeling freer to have “more honest” conversations on WhatsApp’s various communication structures than on other platforms, most notably Facebook. The secure communications promised by WhatsApp’s E2EE play a key role in this perception. All participants were aware of the E2EE protocol and mentioned it as the main factor for liking and using the platform. They reported being particularly appreciative of the idea that WhatsApp’s conversations cannot be accessed and monitored by authorities; phone calls cannot be recorded and outsiders cannot join chat groups. However, E2EE is not nearly as impenetrable as it is marketed to be. There are several hacks and third-party apps that can be used to access, download, and analyze WhatsApp data and metadata, even when conversations have been deleted, or to record phone calls. In addition to this, groups set as “public” can in fact be joined by outsiders, including researchers (when an invite link exists on the web). None of the individuals we talked to reported being aware of any difference between “closed groups” and “open groups.” All participants reported exclusively being members of what they perceived to be closed, private groups that can only be joined with the permission of the group administrator. Given these limited capabilities of E2EE to protect WhatsApp users’ privacy, we refer to the privacy offered by E2EE as “perceived privacy.”

The fact that WhatsApp allows for audience selection and message control, in addition to users’ perception that WhatsApp’s E2EE allows free and honest conversations, manifests in various uses. Respondents most commonly reported using the app “to share jokes and fun content” that are often dark in tone or contain adult content, and often related to current events and breaking news. When asked “How is content shared on WhatsApp different from content shared on Facebook?” they consistently reported that content on WhatsApp is funnier and more engaging than content shared on Facebook. The second most common answer was that WhatsApp content presents more useful or helpful information compared to Facebook content, typically related to current events and urgent situations.

While most participants claimed awareness that content circulating on WhatsApp might be inaccurate or misleading (“risky”), this did not stop them from sharing it. Our conversations with participants revealed that, when exposed to an unverified rumor, they seemed to value its potential utility more than its potential inaccuracy. In other words, when they receive a piece of content on WhatsApp and believe that it is potentially useful or interesting to someone they care about, they share it despite knowing that it could be inaccurate. Accuracy is not the primary concern, especially in times of crisis. Instead, users engage in what we call just-in-case sharing, in which they place more importance on the potential benefits for friends and family should the information be true than on the social repercussions they might face if the rumor is false or inaccurate (e.g., losing credibility). Signs of this behavior emerged during multiple interactions with our study participants. During an informal chat conversation between one of the authors and a user, the user stated: “I re-share what they sent me even if it might be not accurate, just in case it is true.”

Admin-led Moderation of Dangerous Speech in Small, Closed Groups

A significant amount of offensive, dangerous, and explicit content proliferates within WhatsApp groups. Adult content is particularly common in men-only chats. Nationalistic or religious content inciting hatred toward others is also shared quite often and, as our participants noted, it “goes both ways,” with Hindus sharing offensive content about Muslims and vice versa. Women and members of religious minorities actively reach out to group administrators to ask for the removal of accounts that share inappropriate or offensive content. One user reported being ostracized by group members for refusing to share nationalistic and religious content that he perceived as offensive toward the Muslim minority:

To join each group, you either know the administrator or you need a recommendation from someone. You have to conform to the norm of that community. There is that expectation if you try to criticize. Once, they emotionally blackmailed me. I was ostracized from that community. “You don’t love your country,” they told me. These groups of colleagues and friends are closely guarded communities.

Everyday communications on WhatsApp happen in small groups that are perceived as “closed,” meaning that they are managed by someone users personally know. Participants, regardless of age and sex, voiced concerns about unmoderated content spread in such groups. Closed WhatsApp groups’ information-sharing practices operate on what we refer to as a membership model: group administrators add members to the group, controlling who is in and who is out. They set the group’s tone and, sometimes using feedback from other members, decide what is and is not allowed to be posted. Group members concerned with the spread of dangerous speech observed that the issue with offensive and inappropriate content is not only the content itself but where it is shared. From this perspective, if controversial content is shared where it is allowed, WhatsApp users tolerate it.

The Burden of Correcting Everyday Mis- and Disinformation

Participants also noted that while they are willing to fact-check WhatsApp forwards for their own sake, they rarely engage in correcting other users by sharing evidence. Most users expressed the desire to correct other users on a daily basis; however, when asked to provide a concrete example of such behavior, only two respondents were able to do so. This finding might help with the interpretation of survey research based on self-reported correction behavior, which found that corrections are quite common on WhatsApp, suggesting that such pro-social behavior might be over-reported (Rossini et al. 2020). When asked to elaborate on the challenges of correcting others, participants indicated that while they think correcting others is important, doing so might be considered impolite or rude because of cultural factors, especially if the sharer is senior or “outranks” them in terms of social status. This clearly displays power distance at work: it reduces to bystanders both those targeted by misinformation or harassment and those wishing to act in solidarity with them (Fichman and Rathi 2021; Fichman and Sanfilippo 2015; Herring et al. 2002; Maltby et al. 2016). One participant clearly spelled out what seemed to be a feeling shared by most of the individuals we talked to: that the onus should not be on users to address information problems with the app but on WhatsApp itself.

I think it is WhatsApp’s responsibility to clean up these messages, not mine. Otherwise, their [WhatsApp’s] credibility will go down; people are getting tired of WhatsApp. There is no mechanism for fact-checking; it is not right.

Due to the platform’s E2EE, across-the-board limits on sharing are the only available method of moderating speech. This is a source of frustration for the users we spoke with, including a few young participants who would like to be able to report false, offensive, and dangerous content on WhatsApp.

We are a democratic country, but our central government … has brought fights between castes and religions. … Offensive speeches and whatnot; people are dying u know in Delhi! At JNU University, people share hate speech through WhatsApp statuses … India is getting messed up 😞 I fear what’s gonna happen … I like WhatsApp because it makes my life easier. But one thing I would like to change is if the report button was there and WhatsApp could have a check when someone reports a profile (due to offensive content).

Because the platform lacks fact-checking mechanisms and content moderation strategies, participants perceive that the burden of cleaning up the everyday sharing of misinformation on the platform is on them. However, correcting others is a practice that goes against their cultural upbringing and makes them very uncomfortable.

We have already noted that the burden of keeping information on WhatsApp safe, in groups as well as in one-to-one conversations, falls on users. Minorities, and individuals sympathetic toward minorities, such as women, students, and religious minorities, find themselves in charge of conducting this delicate work. But in order to have content removed or accounts blocked, these individuals have to convince the administrators of the gravity of the situation. Administrators deliberately decide what should or should not be moderated. WhatsApp groups, then, create situations in which dissent is silenced in favor of perceived group interests, because members are unable to report offensive, false, or misleading content anonymously and face the threat of being questioned, harassed, or ostracized by other members, both online and offline. In these circumstances, we suggest that Meta’s WhatsApp has shaped information practices within local, and now global, contexts without much knowledge of these contexts or consideration of the risks faced by users, such as the dangerous consequences of false and dangerous speech mentioned in the introduction (Benghani 2019; Cheeseman et al. 2020; Garimella and Eckles 2020; Machado et al. 2019).

Unpacking the sociotechnical affordances of WhatsApp in India that have led to hidden virality (“perceived privacy” above all) suggests how infrastructural design and deployment have exonerated the company and its owners of accountability for entering global markets and wreaking havoc in the name of profit. Meta entered markets as an infrastructural actor in countries such as Myanmar and India and allowed disinformation to circulate unchecked until it resulted in genocide against Rohingya Muslims in Myanmar (Mozur 2018) and lynchings in rural India (Liao 2018). Attempts to mitigate the negative effects of hidden virality in India have done little to curb its spread: our respondents reported seeing no change in the amount of disinformation that crossed their feeds after the platform limited sharing by capping forwards of a single message at five chats (users or groups). Meta’s interventions to counter misinformation include labeling viral forwards and chain messages, limiting forwards, and the design of on-platform tip lines and chatbots. Little evidence exists on whether any of these interventions work, or to what extent; due to WhatsApp’s encrypted nature, it is difficult for researchers to investigate the efficacy of any on-platform fact-checking efforts. Meanwhile, platforms and researchers alike have tried to shift the blame to users’ misuse and lack of education (Chakrabarti, Stengel, and Solanki 2018). However, users are only part of the story; this chapter lays bare that they are engaging with the platform exactly as it was designed (Table 8.1). While the sample for the qualitative study may have been biased toward better-educated individuals living in or near large cities, the fact that our respondents feel they cannot directly influence how the platform works reveals much about structural power at work.

Table 8.1 Definitions of key concepts from the chapter

Hidden virality: Refers to how unvetted, insular discourse on digital media can take on a character of truth and remain unnoticed until causing real-world harm.

Epistemic burden: Describes how dominant-culture actors discredit and blame the knowledge and knowledge practices of minoritized groups to further disadvantage these already burdened groups in a way that compounds their oppression.

Cascading affordances: Like a powerful cascade of water incessantly falling over a person beneath a waterfall, cascading affordances “trap” users and keep them from swimming away (i.e., moving to a new service).

Just-in-case sharing: Certain users place more importance on the potential benefits for friends and family should a piece of information be true than on the repercussions they and others might face if the rumor is false or inaccurate (e.g., losing credibility). As a result, users are aware of the potential falsity of the information that they encounter online, but they share it anyway.

Possible Solutions: Changing Norms around Design

Given all these difficulties in responding to misinformation on WhatsApp, how can hidden virality be captured and addressed? Having given some background and described the everyday burden of information curation that is offloaded to users on WhatsApp and similar platforms, we turn to the Governing Knowledge Commons framework and to governance strategies focused on changing norms around design, which offer some promising avenues. Here we outline solutions for changing norms around design, the benefits and shortcomings of each of these action areas, who would be responsible for enacting solutions, and who would be affected.

Meta shows that it “values free speech” by making moderation technically impossible, all the while cultivating economic power from WhatsApp’s widespread use. This is an entrenched norm in the tech industry generally and at Meta and WhatsApp in particular. One simple but likely superficial remedy is for users and lawmakers across geopolitical borders to demand a change in platform norms, for example by mandating more transparency from these platforms about what content is actually shared and through which mechanisms, and by allowing users to easily opt in and out of sharing certain types of data with certain parties, if they so wish. Legislative policy, namely the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) in a few key geopolitical areas, has effectively ushered in opt-in design features across a number of websites and platforms. Solutions for WhatsApp might follow similar protocols.

But this would not necessarily address the problem of the epistemic burden foisted upon those who are minoritized within private WhatsApp groups. Framing WhatsApp’s onslaught of offensive and dangerous content more appropriately, as a platform problem, would require concrete design- and norm-changing measures, such as redesigning the platform to reduce the spread of such content, even though doing so would likely decrease engagement and reduce Meta’s ability to leverage the economic power of its user base. WhatsApp has limited message forwarding to keep information from spreading too quickly. While this is a positive step for the platform, the respondents in our study say they see just as much disinformation as before these share limits were enacted.
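Because the platform cannot read message contents, interventions of this kind operate entirely on unencrypted metadata carried alongside the ciphertext. The sketch below, a hypothetical client-side check written in Python, illustrates the shape of such a forwarding limit; the constants mirror WhatsApp’s publicly reported caps (five chats per forward, with “frequently forwarded” messages restricted further), but the names and structure are our illustrative assumptions, not WhatsApp’s actual client code.

```python
from dataclasses import dataclass

FORWARD_LIMIT = 5         # assumed cap: chats reachable in one forward action
FREQUENTLY_FORWARDED = 5  # assumed hop count after which stricter rules apply

@dataclass
class Message:
    ciphertext: bytes       # opaque to everyone but the endpoints
    forward_count: int = 0  # plaintext metadata the client can read and update

def can_forward(msg: Message, recipient_chats: list[str]) -> bool:
    """Client-side policy check: with E2EE, limits like this can only
    consult metadata (hop counts), never the message content itself."""
    if msg.forward_count >= FREQUENTLY_FORWARDED:
        # "Frequently forwarded" messages go to one chat at a time.
        return len(recipient_chats) <= 1
    return len(recipient_chats) <= FORWARD_LIMIT

def forward(msg: Message, recipient_chats: list[str]) -> Message:
    # Forwarding increments the hop count; the body is passed along unread.
    if not can_forward(msg, recipient_chats):
        raise ValueError("forwarding limit exceeded")
    return Message(msg.ciphertext, msg.forward_count + 1)
```

The limitation our respondents point to follows directly from this design: a hop counter slows bulk redistribution, but it says nothing about whether the content being redistributed is false or dangerous.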

While it wouldn’t necessarily limit the amount of disinformation that people see, requiring users themselves to label manipulated or satirical information might be useful to some degree in limiting the work that bystanders do to debunk certain types of problematic information, as we have seen with legislation and moderation around manipulated audiovisual content (Paris 2021). But this intervention would likely not decrease the amount of problematic content that is offensive and troublesome to certain disenfranchised bystanders, nor would it reduce the tendency of the most persistent offenders to see such labels as censorship, as already happens with social media posts labeled as false (Ognyanova et al. 2020). Moreover, without oversight and content moderation from the platform, this intervention would be impossible to enforce.

Taking a cue from the respondents of this study, adding an anonymous reporting feature might be a useful intervention for WhatsApp to institute. But it would need accountability and oversight measures to keep it from falling into the reporting traps that are so prevalent on other platforms – namely, that nothing happens unless the person reporting, or offending, has some level of notoriety or social capital. Examples include harassment reporting around caste in India, where higher-caste members at tech companies ignore the harassment claims brought by those of lower caste (Soundarajan et al. 2019), or when women who are not public figures report manipulated nude and sexualized images of themselves being spread across platforms (Paris 2021). In these and other cases, it is precisely this resource of social capital that respondents lack, as do others on Twitter and other platforms who encounter difficulties in having their reports reviewed.
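Reporting within an E2EE system is not technically impossible. One cryptographic building block that could support it is message franking, deployed, for example, in Facebook Messenger’s encrypted “secret conversations”: the sender commits to the plaintext with a one-time key, the server stores the commitment without seeing content, and a recipient who reports a message reveals the plaintext and key so the platform can verify that the report is genuine. The Python sketch below is a simplified, hypothetical illustration of that verification step; production schemes bind the commitment into the ciphertext, and making such reports anonymous would require further protections this sketch does not attempt.

```python
import hashlib
import hmac
import os

def frank_message(plaintext: bytes) -> tuple[bytes, bytes]:
    """Sender side: commit to the plaintext with a one-time franking key.
    The commitment travels with the ciphertext and is stored by the server
    (bound to delivery metadata) without revealing any content."""
    franking_key = os.urandom(32)
    commitment = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return franking_key, commitment

def verify_report(plaintext: bytes, franking_key: bytes,
                  stored_commitment: bytes) -> bool:
    """Platform side: a reporter reveals the plaintext and franking key;
    the platform recomputes the commitment and checks it against the one
    stored at delivery, confirming the report without breaking E2EE."""
    recomputed = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    return hmac.compare_digest(recomputed, stored_commitment)

# Usage: the receiver of an abusive message submits (plaintext, key) as a report.
key, commitment = frank_message(b"abusive message")
assert verify_report(b"abusive message", key, commitment)
assert not verify_report(b"forged message", key, commitment)
```

A mechanism like this addresses only verifiability; the social problem our respondents describe, whose reports get acted upon, would still require the accountability and oversight measures discussed above.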

There are also strong normative expectations, coming from within the user base, around keeping existing WhatsApp affordances as they are. Redesigning the platform and instituting anonymous reporting might detract from the perceived benefits of the platform for parties who need or like it precisely because people can share whatever they want with whomever they want, with no formalized oversight. Tech companies and their adherents often argue that changing the platform would decrease WhatsApp’s pro-social possibilities; as Lim (2020) notes, there are pro-democracy and human rights groups that use the platform only because of these affordances.

But the deluge of antisocial content on WhatsApp and its real-world consequences need not be framed as problems necessarily generated and sustained by users’ information and social practices. Instead, drawing on science and technology studies (STS) and Costanza-Chock’s (2020) design justice branch of critical informatics, we must problematize this framing and envision new sociotechnical solutions that better suit user needs. This reimagination of sociotechnical systems must take the politics of knowledge into account and demand difficult discussions of design goals that center the goals of users, including them in the design process as experts on their informational needs, not just as data sources to exploit. This work demands that technologists grapple with the complex cultural and political contexts these technologies will be used in and develop tools that are useful, not exploitative. There are many possible avenues ahead, but all require rethinking our relationships with one another and the global community, how we communicate and cooperate within groups, and what roles technology should play in our lives.

Conclusions

As encrypted messaging apps remain one of the last uncensored spaces on the internet, it becomes vital to disentangle the dynamics of hidden virality: when, how, and why dangerous content goes viral in these closed spaces while remaining unnoticed by outsiders. This chapter has provided an overview of the dynamics that made false and dangerous content widespread on WhatsApp in India between 2020 and 2021, while discussing users’ takes on such issues. While we focus on WhatsApp in India, such methods of ethnographic research can be employed with users of other encrypted, private messaging platforms like Telegram or Signal in other countries across the globe. Certain issues around the political economy of the platform would differ in other countries and app contexts, each having its own political-economic concerns and issues around social capital (Abubakar and Dasuki 2018; Chauchard and Garimella 2022; Soares et al. 2021). Further studies in this vein would provide bases for rich comparisons and a better understanding of the interlinked phenomena we see at play in this study.

The wide-scale, long-term adoption of WhatsApp in India results from the deployment of not one but a “cascade” of sociotechnical, interdependent affordances (Figure 8.2). This cascading effect has clear positive consequences for Meta, the corporate entity behind the technology. WhatsApp’s affordances were not designed in a vacuum or with neutral intentions: they result from aggregating data across applications on users’ phones to leverage contacts and services and increase engagement. As Cecere, Le Guel, and Lefrere (2020), Glick and Ruetschlin (2019), and Tang (2016) note, the promise of corporate benefits drove the inclusion and subsequent maintenance of E2EE as a key affordance of WhatsApp in the first place; the goal of offering aggregated services at a low cost is to generate networked data that is extremely valuable to WhatsApp and its owners, making these sociotechnical affordances a tool that furthers profit-based goals. However, the same cascading effect might not be equally beneficial to users.

Users feel they must stay on the platform even though they may not like everything about it; all the while, interacting with others and with information and engaging in other activities generates data, stored on WhatsApp’s servers, that holds promise for revenue. WhatsApp’s revenue model centers on extracting data from WhatsApp and using it to grow and market other Meta products. For example, WhatsApp has access to the phone owner’s contact list; contacts are used to suggest “new friends” on Facebook, grow Facebook’s user base, and increase its market value. E2EE offers corporate benefits because it bypasses both external and internal platform oversight, which diminishes accountability for powerful stakeholders (platform owners, shareholders, and government entities) and shifts responsibility to users, who become the only arbiters of what content is acceptable.

The possibility of sharing information in what is perceived to be a private environment, combined with the possibility of selecting specific audiences and retaining control over messages, encourages the sharing of what is perceived to be “risky but useful” content. Such content rarely comes from official sources or mainstream media; it is shared through a text message or an audio file and closely resembles “rumors,” which have been defined as public-facing statements imbued with private hypotheses about the workings of the world (Rosnow 1991) and as products of sense-making that people generate to cope with uncertainty and concomitant anxiety (Rosnow 1988). Both definitions suggest that rumors offer a “collective problem-solving opportunity to individuals who participate” (Kwon, Cha, and Jung 2017). A key contribution of this study of how rumors spread on messaging apps is our proposal that WhatsApp users are often aware of the potential falsity of the information that they encounter on WhatsApp, but they share it anyway. This is what we call the “just-in-case” sharing practice: WhatsApp users place more importance on the potential benefits for friends and family should the information be true than on the repercussions they and others might face if the rumor is false or inaccurate (e.g., losing credibility). This practice is made possible by a combination of factors and dominant cultural norms, which include the urge to care for close ones by sharing potentially useful content, the lack of awareness of the potential risks involved in amplifying false or misleading content, and the widespread preference among our participants for not correcting family members and close friends out of respect and politeness.

Our work speaks to the need to address the many types of epistemic burden that manifest themselves as users engage in information and social practices on WhatsApp and other similar encrypted apps. In these examples, we see how Meta, a powerful US-based technology company, has shaped technical affordances that create more work in the everyday practices of correcting mis- and disinformation for those who are already disadvantaged. Typically, tech companies, the popular press, and sometimes even academics blame the presence of disinformation and hate speech on WhatsApp on the knowledge and knowledge practices of the users. This practice fails to acknowledge the role played by western companies and, most importantly, how such blame (intentional or not) further disadvantages these already burdened groups in a way that compounds their oppression. Perceived privacy, when combined with users’ inability to report inappropriate content anonymously on WhatsApp, actively encourages the spread of offensive content on the platform. Offensive content is particularly prominent in small, closed groups, which are tightly controlled by administrators. Group members need to personally contact administrators every time they are exposed to offensive content and ask them to remove it. The burden of flagging offensive content thus falls on the group members who might feel hurt by it, who once again must challenge dominant cultural norms to request moderation.

Footnotes

Britt Paris is a critical informatics scholar studying how groups build, use, and understand information systems according to their values, and how these systems influence evidentiary standards and political action. Paris is an assistant professor in the Department of Library and Information Science at Rutgers University. Her current work focuses on critical perspectives on techno-solutionism for political problems and on practices and discourse around alternative internet infrastructure. She did her postdoctoral research at the Data and Society Research Institute, focusing on audiovisual disinformation, and remains affiliated there.

Irene Pasquetto is a scholar in the field of information and communication science. She is an assistant professor at the University of Michigan School of Information where she teaches Ethics of Information Technologies and Digital Curation. Her most recent research work focuses on issues of science mis- and disinformation, open science practices, and public understanding and use/misuse of science products and infrastructures. As a research affiliate at the Harvard Kennedy School’s Shorenstein Center on Media, Politics, and Public Policy, she cofounded the Harvard Kennedy School Misinformation Review.

References

Abubakar, Naima Hafiz, and Dasuki, Salihu Ibrahim. 2018. “Empowerment in Their Hands: Use of WhatsApp by Women in Nigeria.” Gender, Technology and Development 22 (2): 164183. https://doi.org/10.1080/09718524.2018.1509490.CrossRefGoogle Scholar
Acker, Amelia, and Chaiet, Mitch. 2020. “The Weaponization of Web Archives: Data Craft and COVID-19 Publics.” Harvard Kennedy School Misinformation Review 1 (3): 111. https://doi.org/10.37016/mr-2020-41.Google Scholar
Banaji, Shakuntala, Bhat, Ramnath, Agarwal, Anushi, Passanha, Nihal, and Pravin, Mukti Sadhana. 2019. WhatsApp Vigilantes: An Exploration of Citizen Reception and Circulation of WhatsApp Misinformation Linked to Mob Violence in India. London: London School of Economics. http://eprints.lse.ac.uk/104316/1/Banaji_whatsapp_vigilantes_exploration_of_citizen_reception_published.pdf.Google Scholar
Benghani, Priyanjana. 2019. “India Had Its First ‘WhatsApp Election.’ We Have a Million Messages from It.” Columbia Journalism Review. October 16. www.cjr.org/tow_center/india-whatsapp-analysis-election-security.php.Google Scholar
Broadcasting Board of Governors (BBG). 2014. BBG Board Meeting, Part 4. Broadcasting Board of Governors headquarters in Washington, DC. April 11. http://archive.org/details/BBGBoardMeeting04112014.Google Scholar
Broadcasting Board of Governors (BBG). 2017. “2016 Annual Report.” https://2016.bbg.gov/.Google Scholar
Cecere, Grazia, Le Guel, Fabrice, and Lefrere, Vincent. “Economics of Free Mobile Applications: Personal Data and Third Parties.” Preprint, Social Science Research Network, last revised: March 15 2020. https://doi.org/10.2139/ssrn.3136661.CrossRefGoogle Scholar
Chakrabarti, Santanu, Stengel, Lucile, and Solanki, Sapna. 2018. “Fake News and the Ordinary Citizen in India.” Beyond ‘Fake News’. London: British Broadcasting Corporation. http://downloads.bbc.co.uk/mediacentre/duty-identity-credibility.pdf.Google Scholar
Chauchard, Simon, and Garimella, Kiran. 2022. “What Circulates on Partisan WhatsApp in India? Insights from an Unusual Dataset.” Journal of Quantitative Description: Digital Media 2 (March): 176. https://doi.org/10.51685/jqd.2022.006.Google Scholar
Cheeseman, Nic, Fisher, Jonathan, Hassan, Idayat, and Hitchen, Jamie. 2020. “Social Media Disruption: Nigeria’s WhatsApp Politics.” Journal of Democracy 31 (3): 145159. https://doi.org/10.1353/jod.2020.0037.CrossRefGoogle Scholar
Citron, Danielle Keats. 2016. Hate Crimes in Cyberspace. Reprint ed. Place of publication not identified: Harvard University Press. www.hup.harvard.edu/books/9780674659902.Google Scholar
Collins-Dexter, Brandi. 2020. “Canaries in the Coal Mine: COVID-19 Misinformation and Black Communities.” Technology and Social Change Research Project. The Shorenstein Center on Media, Politics and Public Policy. https://doi.org/10.37016/TASC-2020-01.CrossRefGoogle Scholar
Costanza-Chock, Sasha. 2020. Design Justice: Community-Led Practices to Build the Worlds We Need. Cambridge, MA: MIT.CrossRefGoogle Scholar
Diwanji, Sanika. 2019. "India – Monthly Active Users on WhatsApp 2017." Statista. Accessed 2019. www.statista.com/statistics/280914/monthly-active-whatsapp-users-in-india/.
Evangelista, Rafael, and Bruno, Fernanda. 2019. "WhatsApp and Political Instability in Brazil: Targeted Messages and Political Radicalisation." Internet Policy Review 8 (4): 1–23. https://doi.org/10.14763/2019.4.1434.
Facebook, Inc. 2020. "Facebook Press Call Transcript." Final Transcript, March 18, 2020. https://about.fb.com/wp-content/uploads/2020/03/March-18-2020-Press-Call-Transcript.pdf.
Fichman, Pnina, and Rathi, Maanvi. 2021. "Cross-Cultural Analysis of Trolling Behaviors." Proceedings of the Association for Information Science and Technology 58 (1): 716–717. https://doi.org/10.1002/pra2.539.
Fichman, Pnina, and Sanfilippo, Madelyn Rose. 2015. "The Bad Boys and Girls of Cyberspace: How Gender and Context Impact Perception of and Reaction to Trolling." Social Science Computer Review 33 (2): 163–180. https://doi.org/10.1177/0894439314533169.
Fricker, Miranda. 2007. Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press. https://academic.oup.com/book/32817.
Garimella, Kiran, and Eckles, Dean. 2020. "Images and Misinformation in Political Groups: Evidence from WhatsApp in India." Harvard Kennedy School Misinformation Review 1 (5): 1–12. https://doi.org/10.37016/mr-2020-030.
Glick, Mark, and Ruetschlin, Catherine. 2019. "Big Tech Acquisitions and the Potential Competition Doctrine: The Case of Facebook." Working Paper Series. Institute for New Economic Thinking, October. www.ineteconomics.org/research/research-papers/big-tech-acquisitions-and-the-potential-competition-doctrine-the-case-of-facebook.
Herring, Susan, Job-Sluder, Kirk, Scheckler, Rebecca, and Barab, Sasha. 2002. "Searching for Safety Online: Managing 'Trolling' in a Feminist Forum." The Information Society 18 (5): 371–384. https://doi.org/10.1080/01972240290108186.
Ihm, Jennifer, and Kim, Eun-mee. 2018. "The Hidden Side of News Diffusion: Understanding Online News Sharing as an Interpersonal Behavior." New Media and Society 20 (11): 4346–4365. https://doi.org/10.1177/1461444818772847.
International Crisis Group. 2017. "Myanmar's Rohingya Crisis Enters a Dangerous New Phase." Report No. 292. Brussels: International Crisis Group. www.crisisgroup.org/asia/south-east-asia/myanmar/292-myanmars-rohingya-crisis-enters-dangerous-new-phase.
Pal, Joyojeet. 2020. "Temporal Patterns in COVID-19 Misinformation in India." Joyojeet Pal (blog), April 16. http://joyojeet.people.si.umich.edu/temporal-patterns-in-covid-19-misinformation-in-india/.
Khan, Nabeela. 2020. "Trends in Covid-19 Misinformation in India." Health Analytics Asia (blog), June 12. www.ha-asia.com/trends-in-covid-19-misinformation-in-india/.
Kumar, Harish, Verma, Pushpneel, Ojha, Rudrapratap, and Venkat Babu, G. 2012. "Study Analysis of Capacity Enhance and Coverage Increase with Repeaters in 3rd Generation for Rural/Semi Urban Mobile Communication System in India." In 2012 Second International Conference on Digital Information and Communication Technology and It's Applications (DICTAP), 368–372. https://doi.org/10.1109/DICTAP.2012.6215377.
Kuo, Rachel, and Marwick, Alice. 2021. "Critical Disinformation Studies: History, Power, and Politics." Harvard Kennedy School Misinformation Review 2 (4): 1–12. https://doi.org/10.37016/mr-2020-76.
Kuru, Ozan. 2019. "Understanding Informational Processing in WhatsApp Groups: A Comparative Study of User Perceptions and Practices in Turkey, Singapore, and the USA." Conference presentation at the American Political Science Association 2019, Washington, DC, August 28. www.asc.upenn.edu/news-events/news/annenberg-presentations-apsa-2019.
Kwon, Sejeong, Cha, Meeyoung, and Jung, Kyomin. 2017. "Rumor Detection over Varying Time Windows." PLoS ONE 12 (1): e0168344. https://doi.org/10.1371/journal.pone.0168344.
Liao, Shannon. 2018. "WhatsApp Tests Limiting Message Forwarding after Violent Lynchings in India." The Verge, July 20. www.theverge.com/2018/7/20/17595478/whatsapp-message-forwarding-end-violent-lynching-india.
Lim, Gabrielle. 2020. "Securitize/Counter-Securitize." Data and Society Research Institute, March 25. https://datasociety.net/library/securitize-counter-securitize/.
Machado, Caio, Kira, Beatriz, Narayanan, Vidya, Kollanyi, Bence, and Howard, Philip. 2019. "A Study of Misinformation in WhatsApp Groups with a Focus on the Brazilian Presidential Elections." In WWW '19: Companion Proceedings of The 2019 World Wide Web Conference, 1013–1019. New York: Association for Computing Machinery. https://doi.org/10.1145/3308560.3316738.
Maltby, John, Day, Liz, Frosch, Caren A., et al. 2016. "Implicit Theories of Online Trolling: Evidence That Attention-Seeking Conceptions Are Associated with Increased Psychological Resilience." British Journal of Psychology 107 (3): 448–466. https://doi.org/10.1111/bjop.12154.
Mozur, Paul. 2018. "A Genocide Incited on Facebook, With Posts From Myanmar's Military." The New York Times (sec. Technology), October 15. www.nytimes.com/2018/10/15/technology/myanmar-facebook-genocide.html.
Naeem, Salman Bin, and Bhatti, Rubina. 2020. "The Covid-19 'Infodemic': A New Front for Information Professionals." Health Information and Libraries Journal 37 (3): 233–239. https://doi.org/10.1111/hir.12311.
Noble, Safiya. 2013. "Google Search: Hyper-Visibility as a Means of Rendering Black Women and Girls Invisible." InVisible Culture: An Electronic Journal for Visual Culture, no. 19 (October 29). http://ivc.lib.rochester.edu/google-search-hyper-visibility-as-a-means-of-rendering-black-women-and-girls-invisible/.
Ognyanova, Katherine, Lazer, David, Robertson, Ronald E., and Wilson, Christo. 2020. "Misinformation in Action: Fake News Exposure Is Linked to Lower Trust in Media, Higher Trust in Government When Your Side Is in Power." Harvard Kennedy School Misinformation Review 1 (4): 1–19. https://doi.org/10.37016/mr-2020-024.
Paris, Britt. 2021. "Configuring Fakes: Digitized Bodies, the Politics of Evidence, and Agency." Social Media + Society (online) 7 (4). https://doi.org/10.1177/20563051211062919.
Paris, Britt, and Donovan, Joan. 2019. "Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence." Data and Society, September 18. https://datasociety.net/output/deepfakes-and-cheap-fakes/.
Paris, Britt, Pierre, Jennifer, Currie, Morgan, and Pasquetto, Irene. 2022. "Data Burdens: Epistemologies of Evidence in Police Reform and Abolition Movements." In Data Justice and the Right to the City, ed. Currie, Morgan, Knox, Jeremy, and McGregor, Callum, 301–325. Edinburgh: Edinburgh University Press.
Pasquetto, Irene V., Jahani, Eaman, Atreja, Shubham, and Baum, Matthew. 2022. "Social Debunking of Misinformation on WhatsApp: The Case for Strong and In-Group Ties." Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1), Article 117: 1–35. https://doi.org/10.1145/3512964.
Pierre, Jennifer, Crooks, Roderic, Currie, Morgan E., Paris, Britt S., and Pasquetto, Irene V. 2021. "Getting Ourselves Together: Data-Centered Participatory Design Research and Epistemic Burden." In CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, Article 406: 1–11. https://doi.org/10.1145/3411764.3445103.
Prasad, Revati. 2018. "Ascendant India, Digital India: How Net Neutrality Advocates Defeated Facebook's Free Basics." Media, Culture and Society 40 (3): 415–431. https://doi.org/10.1177/0163443717736117.
Roose, Kevin. 2018. "U.S.-Funded Broadcaster Directed Ads to Americans." The New York Times (sec. Technology), July 19. www.nytimes.com/2018/07/19/technology/facebook-ads-propaganda.html.
Rosnow, Ralph L. 1988. "Rumor as Communication: A Contextualist Approach." Journal of Communication 38 (1): 12–28. https://doi.org/10.1111/j.1460-2466.1988.tb02033.x.
Rosnow, Ralph L. 1991. "Inside Rumor: A Personal Journey." American Psychologist 46 (5): 484–496. https://doi.org/10.1037/0003-066X.46.5.484.
Rossini, Patrícia, Stromer-Galley, Jennifer, Baptista, Erica Anita, and Veiga de Oliveira, Vanessa. 2020. "Dysfunctional Information Sharing on WhatsApp and Facebook: The Role of Political Talk, Cross-Cutting Exposure and Social Corrections." New Media and Society 23 (8): 2430–2451. https://doi.org/10.1177/1461444820928059.
Shachaf, Pnina, and Hara, Noriko. 2010. "Beyond Vandalism: Wikipedia Trolls." Journal of Information Science 36 (3): 357–370. https://doi.org/10.1177/0165551510365390.
Siddiquee, Md Ali. 2020. "The Portrayal of the Rohingya Genocide and Refugee Crisis in the Age of Post-Truth Politics." Asian Journal of Comparative Politics 5 (2): 89–103. https://doi.org/10.1177/2057891119864454.
Soares, Felipe Bonow, Recuero, Raquel, Volcan, Taiane, Fagundes, Giane, and Sodré, Giéle. 2021. "Research Note: Bolsonaro's Firehose: How Covid-19 Disinformation on WhatsApp Was Used to Fight a Government Political Crisis in Brazil." Harvard Kennedy School Misinformation Review 2 (1): 1–12. https://doi.org/10.37016/mr-2020-54.
Soundarajan, Thenmozhi, Kumar, Abishek, Nair, Priya, and Greely, Josh. 2019. "Facebook India Report." New York: Equality Labs. www.equalitylabs.org/facebookindiareport.
Tang, Ailie K. Y. 2016. "Mobile App Monetization: App Business Models in the Digital Era." International Journal of Innovation, Management and Technology 7 (5): 224–227. https://doi.org/10.18178/ijimt.2016.7.5.677.
Telecom Regulatory Authority of India. 2016. "Prohibition of Discriminatory Tariffs for Data Services Regulations." February 8, 2016. www.trai.gov.in/sites/default/files/Regulation_Data_Service.pdf.
Valeriani, Augusto, and Vaccari, Cristian. 2018. "Political Talk on Mobile Instant Messaging Services: A Comparative Analysis of Germany, Italy, and the UK." Information, Communication and Society 21 (11): 1715–1731. https://doi.org/10.1080/1369118X.2017.1350730.
Wardle, Claire. 2020. "Monitoring and Reporting Inside Closed Groups and Messaging Apps." In Verification Handbook for Disinformation and Media Manipulation, ed. Craig Silverman, 94–97. DataJournalism.com. https://datajournalism.com/read/handbook/verification-3/investigating-platforms/7-monitoring-and-reporting-inside-closed-groups-and-messaging-apps.
WhatsApp. 2016. "Making WhatsApp Free and More Useful." WhatsApp.Com (blog), September. https://blog.whatsapp.com/making-whats-app-free-and-more-useful.
WhatsApp. 2020. "WhatsApp Privacy." WhatsApp.Com (blog). www.whatsapp.com/privacy.
Figure 8.1 Visual themes from hidden virality and the everyday burden of correcting WhatsApp mis- and disinformation.

Figure 8.2 Cascading WhatsApp affordances.

Table 8.1 Definitions of key concepts from the chapter.
