The attentive public widely believes a false proposition, namely, that the race Implicit Association Test (“IAT”) measures unconscious bias within individuals that causes discriminatory behavior. We document how prominent social psychologists created this misconception and the field helped perpetuate it for years, while skeptics were portrayed as a small group of non-experts with questionable motives. When a group highly values a goal and leaders of the group reward commitment to that goal while marginalizing dissent, the group will often go too far before it realizes that it has gone too far. To avoid the sort of groupthink that produced the mismatch between what science now knows about the race IAT and what the public believes, social psychologists need to self-consciously embrace skepticism when evaluating claims consistent with their beliefs and values, and governing bodies need to put in place mechanisms that ensure that official pronouncements on policy issues, such as white papers and amicus briefs, are the product of rigorous and balanced reviews of the scientific evidence and its limitations.
Despite the attention paid to the issue of misinformation in recent years, few studies have examined citizens' support for measures designed to address it. Using data collected during the 2022 Quebec election and block-recursive models, this article shows that support for interventions against misinformation is high overall, but that individuals with a right-wing ideology, those who support the Parti conservateur du Québec, and those who distrust the media and scientists are more likely to oppose such measures. Those who are not concerned about the issue, who prioritize protecting freedom of expression, or who endorse false information are also less favourable to them. The results suggest that depoliticizing the issue of misinformation and working to strengthen trust in institutions could increase the perceived legitimacy and effectiveness of our response to misinformation.
Armed conflict presents a multitude of risks to civilians, prisoners of war and others caught in the middle of hostilities. Harmful information spreading on social media compounds such risks in a variety of tangible ways, from potentially influencing acts that cause physical harm to undermining a person's financial stability, contributing to psychological distress, spurring social ostracization and eroding societal trust in evidentiary standards, among many others. Despite this span of risks, no typology exists that maps the full range of such harms. This article attempts to fill this gap, proposing a typology of harms related to the spread of harmful information on social media platforms experienced by persons affected by armed conflict. Developed using real-world examples, it divides potential harm into five categories: harms to life and physical well-being, harms to economic or financial well-being, harms to psychological well-being, harms to social inclusion or cultural well-being, and society-wide harms. After detailing each component of the typology, the article concludes by laying out several implications, including the need to view harmful information as a protection risk, the importance of a conflict-specific approach to harmful information, the relevance of several provisions under international law, and the possible long-term consequences for societies from harmful information.
The information used for this typology is based entirely on open-source reporting covering acts that occurred during armed conflict and that were seemingly related to identified harmful information on social media platforms or messaging applications. The authors did not verify any reported incidents or information beyond what was included in cited sources. Throughout the article, sources have been redacted from citations where there is a risk of reprinting harmful information or further propagating it, and where redaction was necessary to avoid the impression that the authors were attributing acts to particular groups or actors.
This review article explores the role of land-grant Extension amidst an escalating epistemic crisis, where misinformation and the contestation of knowledge severely impact public trust and policymaking. We delve into the historical mission of land-grant institutions to democratize education and extend knowledge through Cooperative Extension Services, highlighting their unique position to address contemporary challenges of information disorder and declining public confidence in higher education. Land-grant universities can reaffirm their relevance and leadership in disseminating reliable information by reasserting their foundational principles of unbiased, objective scholarship and deep engagement with diverse stakeholders. This reaffirmation comes at a critical time when societal trust in science and academia is waning, necessitating a recommitment to community engagement and producing knowledge for the public good. The article underscores the necessity for these institutions to adapt to the changing information landscape by fostering stakeholder-engaged scholarship and enhancing accessibility, thus reinforcing their vital role in upholding the integrity of public discourse and policy.
This chapter introduces the reader to the topic studied in the book, factual misinformation and its appeal in war. It poses the main research question of who believes in wartime misinformation and how people know what is happening in war. It then outlines the book’s central argument about the role of proximity and exposure to the fighting in constraining public misperceptions in conflict, and the methods and types of evidence used to test it. After clarifying some key concepts used in the book, it finally closes with a sketch of the manuscript’s main implications and an outline of its structure and contents.
This chapter concludes the book and considers its major theoretical and practical implications. It begins by exploring how the book pushes us to think about fake news and factual misperceptions as an important “layer” of war – a layer that has been largely neglected despite the burgeoning attention to these issues in other domains. This final chapter then examines what the book’s findings tell us about such topics as the psychology and behavior of civilian populations, the duration of armed conflicts, the feasibility of prevailing counterinsurgency models, and the depths and limits of misperceptions more broadly in social and political life. It also engages with the practical implications of the book for policymakers, journalists, activists, and ordinary politically engaged citizens in greater depth, exploring how the problems outlined in the research might also be their own solutions. Ultimately, this chapter shows how the book has something to offer to anyone who is interested in the dynamics of truth and falsehood in violent conflicts (and beyond) – and perhaps the beginnings of a framework for those who would like to cultivate more truth.
This chapter examines issues of factual misinformation and misperception in the case of the US drone campaign in Pakistan. It first shows that, while the drone campaign is empirically quite precise and targeted, it is largely seen as indiscriminate throughout Pakistani society. In other words, there is a pervasive factual misperception about the nature of the drone strikes in Pakistan. Second, the chapter shows that this misperception is consequential. Notably, it shows that Pakistani perceptions of the inflated civilian casualties associated with the strikes are among the strongest drivers of opposition to them in the country. It also provides evidence suggesting that this anti-drone backlash fuels broader political alienation and violence in Pakistan. Finally, the chapter shows that these misbeliefs about drones (and the reactions they inspire) are not shared by local civilians living within the tribal areas where the incidents occur. In sum, the chapter demonstrates that factual misperceptions about US drone strikes in Northwest Pakistan are generally widespread and consequential in the country, but not in the areas that actually experience the violence.
Text classification methods have been widely investigated as a way to detect content of low credibility: fake news, social media bots, propaganda, etc. Quite accurate models (likely based on deep neural networks) help in moderating public electronic platforms and often cause content creators to face rejection of their submissions or removal of already published texts. Having the incentive to evade further detection, content creators try to come up with a slightly modified version of the text (known as an attack with an adversarial example) that exploits the weaknesses of classifiers and results in a different output. Here we systematically test the robustness of common text classifiers against available attacking techniques and discover that, indeed, meaning-preserving changes in input text can mislead the models. The approaches we test focus on finding vulnerable spans in text and replacing individual characters or words, taking into account the similarity between the original and replacement content. We also introduce BODEGA: a benchmark for testing both victim models and attack methods on four misinformation detection tasks in an evaluation framework designed to simulate real use cases of content moderation. The attacked tasks include (1) fact checking and detection of (2) hyperpartisan news, (3) propaganda, and (4) rumours. Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions, e.g. attacks on GEMMA being up to 27% more successful than those on BERT. Finally, we manually analyse a subset of adversarial examples and check what kinds of modifications are used in successful attacks.
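As a rough illustration of the attack family evaluated here (not the BODEGA implementation itself), the sketch below shows a greedy, meaning-preserving word-substitution attack: words are ranked by how much their removal lowers the victim classifier's confidence, then swapped for near-synonyms until the predicted label flips. The `predict` callable and the `SYNONYMS` lookup are hypothetical placeholders standing in for whichever victim model and similarity resource one plugs in.

```python
# Illustrative sketch only -- not the BODEGA code. `predict` is any victim
# classifier returning P(text is credible); SYNONYMS is a placeholder
# dictionary of near-synonyms standing in for a similarity-based resource.
from typing import Callable, Dict, List

SYNONYMS: Dict[str, List[str]] = {}  # e.g. {"fraud": ["scam", "swindle"]}


def importance(words: List[str], predict: Callable[[str], float]) -> List[float]:
    """Score each word by the confidence drop caused by deleting it."""
    base = predict(" ".join(words))
    return [base - predict(" ".join(words[:i] + words[i + 1:]))
            for i in range(len(words))]


def greedy_attack(text: str, predict: Callable[[str], float],
                  threshold: float = 0.5) -> str:
    """Substitute the most influential words with near-synonyms until the
    victim's score falls below the decision threshold (label flip)."""
    words = text.split()
    scores = importance(words, predict)
    best_prob = predict(text)
    for i in sorted(range(len(words)), key=lambda j: -scores[j]):
        for candidate in SYNONYMS.get(words[i].lower(), []):
            trial = words[:i] + [candidate] + words[i + 1:]
            prob = predict(" ".join(trial))
            if prob < best_prob:           # keep the most damaging substitution
                words, best_prob = trial, prob
        if best_prob < threshold:          # decision flipped: adversarial example
            break
    return " ".join(words)
```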
We introduce a dynamic dataset of all communications by state election officials (EOs) on social media during the 2022 election cycle and develop metrics to assess the effectiveness of trust-building strategies on voter confidence. We employ quantitative manual content analysis of 10,000 organic posts from 118 state EOs’ accounts on Facebook, Instagram, and Twitter between September 10 and November 30, 2022, and code for the presence of variables that measure EOs’ efforts to combat misinformation and build trusted networks of communications. The measures we present here address two questions: (1) How much coordination was there among states in terms of incorporating the #TrustedInfo2022 campaign, promoted by the National Association of Secretaries of State, in their social media communications, and (2) How much of states’ social media communications explicitly signaled that EOs are trusted sources of information? We demonstrate the applicability of our data to research that evaluates the impact of trust-building campaigns on voter confidence in elections, research grounded in theories of deliberative democracy and democratic listening.
To what extent can the harms of misinformation be mitigated by relying on nudges? Prior research has demonstrated that non-intrusive ‘accuracy nudges’ can reduce the sharing of misinformation. We investigate an alternative approach. Rather than subtly reminding people about accuracy, our intervention, indebted to research on the bystander effect, explicitly appeals to individuals' capacity to help solve the misinformation challenge. Our results are mixed. On the one hand, our intervention reduces the willingness to share and believe in misinformation fact-checked as false. On the other hand, it also reduces participants' willingness to share information that has been fact-checked as true and innocuous, as well as non-fact-checked information. Experiment 1 offers proof of concept; Experiment 2 tests our intervention with a more realistic mix of true and false social media posts; Experiment 3 tests our interventions alongside an accuracy nudge. The effectiveness of our intervention at reducing willingness to share misinformation remains consistent across experiments; meta-analysis reveals that our treatment reduced willingness to share false content across experiments by 20% of a scale point on a six-point scale. We do not observe the accuracy nudge reducing willingness to share false content. Taken together, these results highlight the advantages and disadvantages of accuracy nudges and our more confrontational approach.
The field of misinformation studies has experienced a boom of scholarship in recent years. Buoyed by the emergence of information operations surrounding the 2016 election and the rise of so-called “fake news,” researchers hailing from fields ranging from philosophy to computer science have taken up the challenge of detecting, analyzing, and theorizing false and misleading information online. In an attempt to understand the spread of misinformation online, researchers have adapted concepts from different disciplines. Concepts from epidemiology, for example, have opened doors to thinking about spread, contagion, and resistance. The life sciences offer concepts and theories to further extend what we know about how misinformation adapts; by viewing information as an organism within a complex ecosystem, we can better understand why some narratives succeed and others fail. Collaborations between misinformation researchers and life scientists to develop responsible adaptations of fitness models can bolster misinformation research.
Factual misinformation is spread in conflict zones around the world, often with dire consequences. But when is this misinformation actually believed, and when is it not? Seeing is Disbelieving examines the appeal and limits of dangerous misinformation in war, and is the go-to text for understanding false beliefs and their impact in modern armed conflict. Daniel Silverman extends the burgeoning study of factual misinformation, conspiracy theories, and fake news in social and political life into a crucial new domain, while providing a powerful new argument about the limits of misinformation in high-stakes situations. Rich evidence from the US drone campaign in Pakistan, the counterinsurgency against ISIL in Iraq, and the Syrian civil war provide the backdrop for practical lessons in promoting peace, fighting wars, managing conflict, and countering misinformation more effectively.
In a “mixed bag” 2023-2024 session, the U.S. Supreme Court issued a series of decisions both favorable and antithetical to public health and safety. Taking on tough constitutional issues implicating gun control, misinformation, and homelessness, the Court also avoided substantive reviews in favor of procedural dismissals in key cases involving reproductive rights and government censorship.
This Element takes on two related questions: How do the media cover the issue of misinformation, and how does exposure to this coverage affect public perceptions, including trust? A content analysis shows that most media coverage explicitly blames social media for the problem, and two experiments find that while exposure to news coverage of misinformation makes people less trusting of news on social media, it increases trust in print news. This counterintuitive effect occurs because exposure to news about misinformation increases the perceived value of traditional journalistic norms. Finally, exposure to misinformation coverage has no measurable effect on political trust or internal efficacy, and political interest is a strong predictor of interest in news coverage of misinformation across partisan lines. These results suggest that many Americans see legacy media as a bulwark against changes that threaten to distort the information environment.
Does nationalism increase beliefs in conspiracy theories that frame minorities as subversives? From China to Russia to India, analysts and public commentators increasingly assume that nationalism fuels belief in false or unverified information. Yet existing scholarly work has neither theoretically nor empirically examined this link. Using a survey experiment conducted among 2,373 individuals and 6 focus groups with 6–8 participants each, for a total of 50 individuals, we study the impact of nationalist sentiment on belief in conspiracy theories related to ethnic minority groups in Pakistan. We find that nationalist primes – even those intended to emphasize the integration of diverse groups into one superordinate national identity – increase belief in statements about domestic minorities collaborating with hostile foreign powers. Subgroup analysis and focus groups suggest that nationalism potentially increases the likelihood that one views rights-seeking minorities as undermining the pursuit of national status.
During health crises, misinformation may spread rapidly on social media, leading to hesitancy towards health authorities. The COVID-19 pandemic prompted significant research on how communication from health authorities can effectively facilitate compliance with health-related behavioral advice such as distancing and vaccination. Far fewer studies have assessed whether and how public health communication can help citizens avoid the harmful consequences of exposure to COVID-19 misinformation, including passing it on to others. In two experiments in Denmark during the pandemic, the effectiveness of a 3-minute and a 15-second intervention from the Danish Health Authorities on social media was assessed, along with an accuracy nudge. The findings showed that the 3-minute intervention, which built competences through concrete and actionable advice, decreased participants’ sharing of COVID-19-related misinformation and boosted their sense of self-efficacy. These findings suggest that authorities can effectively invest in building citizens’ competences in order to mitigate the spread of misinformation on social media.
This chapter reviews the evidence behind the anti-misinformation interventions that have been designed and tested since misinformation research exploded in popularity around 2016. It focuses on four types of intervention: boosting skills or competences (media/digital literacy, critical thinking, and prebunking); nudging people by making changes to social media platforms’ choice architecture; debunking misinformation through fact-checking; and (automated) content labelling. These interventions have one of three goals: to improve relevant skills such as spotting manipulation techniques, source criticism, or lateral reading (in the case of boosting interventions and some content labels); to change people’s behavior, most commonly improving the quality of their sharing decisions (for nudges and most content labels); or to reduce misperceptions and misbeliefs (in the case of debunking). While many such interventions have been shown to work well in lab studies, there continues to be an evidence gap with respect to their effectiveness over time, and how well they work in real-life settings (such as on social media).
Most people who regularly use the Internet will be familiar with words like “misinformation,” “fake news,” “disinformation,” and maybe even “malinformation.” It can appear as though these terms are used interchangeably, and they often are. However, they don’t always refer to the same types of content, and just because a news story or social media post is false doesn’t always mean it’s problematic. To add to the confusion, not all misinformation researchers agree on the definition of the problem, or employ a unified terminology. This chapter discusses the terminology around misinformation, guided by illustrative examples of problematic news content. It also looks at what misinformation isn’t: what makes a piece of information “real” or “true”? Finally, we explore how researchers have defined misinformation and how these definitions can be categorized, before presenting the working definition that is used throughout this book.
This chapter discusses how governments and supranational institutions have tried to tackle misinformation through laws and regulations. Some countries have adopted new legislation making the spread or creation of misinformation illegal; this has often been met with criticism from human rights organizations, for instance on the grounds that governments cannot act as neutral arbiters of truth. The UK and EU have adopted expansive regulatory frameworks that cover not just misinformation but the online information space in its entirety. The United States is generally wary of any new legislation that imposes limits on speech, and doesn’t currently have legislative initiatives as broad in scope as those in the UK and EU. Instead, some entities in the US have invested in communications campaigns about mis- and disinformation; these are aimed at individuals (rather than companies or misinformation producers), and their effectiveness is evaluated in a very different way.
Chapter 3 expands on the diabolical aspects of the contemporary political soundscape and develops initial deliberative responses to its key problematic aspects. These aspects include an overload of expression that overwhelms the reflective capacities of listeners; a lack of argumentative complexity in political life; misinformation and lies; low journalistic standards in “soft news”; cultural cognition, which means that an individual’s commitment to a group determines what gets believed and denied; algorithms that condition what people get to hear (which turn out to fall short of creating filter bubbles in which they hear only from the like-minded); incivility; and extremist media. The responses feature reenergizing the public sphere through means such as the cultivation of spaces for reflection both online and offline, online platform regulation and design, restricting online anonymity, critical journalism, media literacy education, designed forums, social movement practices, and everyday conversations in diverse personal networks. Formal institutions (such as legislatures) and political leaders also matter.