The digital revolution has transformed the dissemination of messages and the construction of public debate. This article examines the disintermediation and fragmentation of the public sphere by digital platforms. Disinformation campaigns, which aim to assume the power of determining a truth alternative to reality, highlight the need to supplement the traditional view of freedom of expression as a negative freedom with an institutional perspective. The article argues that freedom of expression should be seen as an institution of freedom, an organizational space leading to a normative theory of public discourse. This theory legitimizes democratic systems and requires proactive regulation to enforce its values.
Viewing freedom of expression as an institution changes the role of public power: it is no longer limited to abstention but instead carries a positive obligation to regulate the spaces where communicative interactions occur. The article discusses how this regulatory need led to the European adoption of the Digital Services Act (DSA) to correct digital platforms through procedural constraints. Despite some criticisms, the DSA establishes a foundation for a transnational European public discourse aligned with the Charter of Fundamental Rights and member states’ constitutional traditions.
Despite the attention devoted to the issue of misinformation in recent years, few studies have examined citizens’ support for measures aimed at addressing it. Using data collected during the 2022 Quebec election and recursive block models, this article shows that support for interventions against misinformation is generally high, but that individuals with a right-wing ideology, those who support the Parti conservateur du Québec, and those who distrust the media and scientists are more likely to oppose them. Those who are not concerned about the issue, who prioritize protecting freedom of expression, or who believe false information are also less favourable to such measures. The results suggest that depoliticizing the issue of misinformation and working to strengthen trust in institutions could increase the perceived legitimacy and effectiveness of our response to misinformation.
Armed conflict presents a multitude of risks to civilians, prisoners of war and others caught in the middle of hostilities. Harmful information spreading on social media compounds such risks in a variety of tangible ways, from potentially influencing acts that cause physical harm to undermining a person's financial stability, contributing to psychological distress, spurring social ostracization and eroding societal trust in evidentiary standards, among many others. Despite this span of risks, no typology exists that maps the full range of such harms. This article attempts to fill this gap, proposing a typology of harms related to the spread of harmful information on social media platforms experienced by persons affected by armed conflict. Developed using real-world examples, it divides potential harm into five categories: harms to life and physical well-being, harms to economic or financial well-being, harms to psychological well-being, harms to social inclusion or cultural well-being, and society-wide harms. After detailing each component of the typology, the article concludes by laying out several implications, including the need to view harmful information as a protection risk, the importance of a conflict-specific approach to harmful information, the relevance of several provisions under international law, and the possible long-term consequences for societies from harmful information.
The information used for this typology is based entirely on open-source reporting covering acts that occurred during armed conflict and that were seemingly related to identified harmful information on social media platforms or messaging applications. The authors did not verify any reported incidents or information beyond what was included in cited sources. Throughout the article, sources have been redacted from citations where there is a risk of reprinting harmful information or further propagating it, and where redaction was necessary to avoid the impression that the authors were attributing acts to particular groups or actors.
Persistently rising atmospheric greenhouse gas concentrations challenge dominant Liberal hopes that science and multilateralism might deliver rational, global climate outcomes. Emerging Realist climate approaches that take geopolitics and national interests more seriously have yet to explore Morgenthau’s concern that ‘scientism’ – exaggerated faith in scientific rationality to solve political problems – would lead to disastrous underestimations of power and irrationality. Recently, Realists have mooted ‘solar geoengineering’ designs as a ‘lesser evil’ option to deliberately cool the Earth independently of emissions reductions. However, assessments of solar geoengineering prospects barely factor in Realist concerns, focusing instead on idealised scientific modelling of bio-physical effects and Liberal governance scenarios. To explore how geoengineering techno-science would be ‘translated’ into security assessments, geopolitical logics were elicited through interviews and group discussions with (mainly Arctic-oriented) national security professionals. Security experts reframe solar geoengineering in three significant ways: (a) from a climate ‘global public good’ to a source of geopolitical leverage and disruption; (b) from a risk-reduction tool to a potential source of distrust and escalation; and (c) from a knowledge-deficit problem solvable by more research, to a potential disinformation vector. This expands Realist scholarship on climate change and identifies serious risks to ongoing scientific and commercial pursuit of such technologies.
This chapter focuses attention on covert or unattributable propaganda conducted in India by the Foreign Office’s Information Research Department (IRD). Between the outbreak of the Sino-Indian border war in 1962 and the Indian general election of 1967, IRD operations in the subcontinent peaked. At the time, the Indian government welcomed British support in an information war waged against Communist China. However, cooperation between London and New Delhi quickly waned. Britain’s propaganda initiative in India lacked strategic coherence and ran up against local resistance to anti-Soviet material. The British Government found itself running two separate propaganda campaigns in the subcontinent: one openly focused on Communist China, and a second, secret programme targeting the Soviet Union. Whitehall found it difficult to implement an integrated and effective anti-communist propaganda offensive in India. The chapter also recovers the importance of nonaligned nations in the story of Cold War covert propaganda and reveals that India was never a passive player in the propaganda Cold War.
The CIA remained a fixture at the heart of Indian civil debate throughout the 1980s. To the very end of the Cold War, the political fortunes of Indira Gandhi and her son and successor, Rajiv Gandhi, were intertwined with a series of espionage scandals in which, almost inevitably, the CIA figured prominently. This chapter examines the Reagan administration’s reliance on the CIA as a Cold War foreign policy tool and its difficulties in securing Indian support to counter what officials in Washington perceived to be an alarming and unacceptable expansion of Soviet disinformation activity in the subcontinent. It explores the assassinations of Indira and Rajiv Gandhi and how these two tragic events came to be connected by South Asians with the Agency and its earlier involvement in subversion and political assassination in the Global South. As the Cold War approached its end, and Hindu nationalism, rampant corruption, and political violence gripped India, the chapter considers why national powerbrokers in the subcontinent were once again unable to resist urging citizens to ‘look the other way’ and attribute the country’s troubles to a ubiquitous foreign hand.
This chapter traces the development of the Donbas media landscape after the emergence of the ‘People’s Republics’ of Donetsk and Luhansk (DNR and LNR) in 2014. It focuses on the DNR/LNR authorities’ two-phase effort to first break down and then rebuild local media. The destruction phase involved tearing down the existing media structure and pressuring journalists into either leaving Donbas or cooperating with the new authorities. The reconstruction phase involved setting up new media channels or repurposing existing ones, as well as implementing new legislation to impose censorship and promote certain desired narratives. The ministries of information of the two ‘Republics’ promoted local media production and set limits on what was allowed to be published by implementing accreditation procedures and keeping track of journalists working in the region. This formalisation occurred through a system of laws, decrees, edicts, and other regulations.
Most people who regularly use the Internet will be familiar with words like “misinformation,” “fake news,” “disinformation,” and maybe even “malinformation.” It can appear as though these terms are used interchangeably, and they often are. However, they don’t always refer to the same types of content, and just because a news story or social media post is false doesn’t always mean it’s problematic. To add to the confusion, not all misinformation researchers agree on the definition of the problem, or employ a unified terminology. This chapter discusses the terminology around misinformation, guided by illustrative examples of problematic news content. It also looks at what misinformation isn’t: what makes a piece of information “real” or “true”? Finally, we explore how researchers have defined misinformation and how these definitions can be categorized, before presenting the working definition that is used throughout this book.
King Charles III is Dracula's distant cousin. Governments are hiding information about UFOs. COVID-19 came from outer space. These sound like absurd statements, but some are true, and others are misinformation. But what exactly is misinformation? Who believes and spreads things that aren't true, and why? What solutions do we have available, and how well do they work? This book answers all these questions and more. Tackling the science of misinformation from its evolutionary origins to its role in the internet era, this book translates rigorous research on misleading information into a comprehensive and jargon-free explanation. Whether you are a student, researcher, policymaker, or changemaker, you will discover an easy-to-read analysis on human belief in today's world and expert advice on how to prevent deception.
Put simply (although nothing about it is simple), public diplomacy is diplomacy carried out in public, as opposed to most of diplomacy, which is done in private. It is a set of activities that inform, engage and influence international public opinion to support policy objectives or create goodwill for the home country. It is important to understand what public diplomacy is not. It is not an advertising campaign to get foreigners to like your country: even if they dislike it, they can still support, or at least accept, a particular policy or action. It is not a propaganda effort to mislead or lie to audiences for tactical or other advantage. It is a sustained endeavor that advances your country’s policies and reflects a solid understanding of the host country’s language, culture, history and traditions. Both public diplomacy and propaganda are means to project power.
Ideally, we want to resist mis/disinformation but not evidence. If this is so, we need accounts of misinformation and disinformation to match the epistemic normative picture developed so far. This chapter develops a full account of the nature of disinformation. The view, if correct, carries high-stakes upshots, both theoretically and practically. First, it challenges several widespread theoretical assumptions about disinformation – such as that it is a species of information, a species of misinformation, essentially false or misleading, or essentially intended/aimed/having the function of generating false beliefs in/misleading hearers. Second, it shows that the challenges faced by disinformation tracking in practice go well beyond mere fact checking. I begin with an interdisciplinary scoping of the literature in information science, communication studies, computer science, and philosophy of information to identify several claims constituting disinformation orthodoxy. I then present counterexamples to these claims and motivate my alternative account. Finally, I put forth and develop my account: disinformation as ignorance-generating content.
Despite broad adoption of digital media literacy interventions that provide online users with more information when consuming news, relatively little is known about how this additional information affects the discernment of news veracity in real time. A comprehensive understanding of how information impacts discernment of news veracity has been hindered by challenges of external and ecological validity. Using a series of pre-registered experiments, we measure this effect in real time. Access to the full article, rather than the headline/lede alone, and access to source information both improve an individual's ability to correctly discern the veracity of news. We also find that encouraging individuals to search online increases belief in both false/misleading and true news. Taken together, we provide a generalizable method for measuring the effect of information on news discernment, as well as crucial evidence for practitioners developing strategies for improving the public's digital media literacy.
With recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video based on training data. Commercial use of generative AI is expanding rapidly, and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely making errors and spreading misinformation widely. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.
This chapter discusses the interaction between QAnon and the media. As the group turned from fringe to mainstream, it swamped social media. Eventually, social media companies had to take measures to stop the spread of disinformation associated with QAnon. Groups such as QAnon might use media to manipulate news frames and set agendas, which propels conspiracy-laden topics into mainstream media sources and creates a platform for spreading disinformation. This chapter also examines how social media use relates to belief in conspiracy theories, and the potential consequences of exposure to conspiracy theories on social media platforms. This relationship between conspiracies, media, and disinformation is explored and applied to QAnon. A brief discussion of how QAnon is similar to or different from other groups in its use of social media is offered, along with some research questions for future study of the QAnon movement.
Researchers have investigated how disinformation and fake news spread through social networks. Understanding how disinformation flows on social networks can help identify interventions to reduce the impact of such falsehoods and prevent the negative consequences that can result from following conspiracy theories. This chapter will provide an overview of how researchers can use the tool NodeXL to rapidly analyse social media data related to QAnon by drawing upon social network analysis. NodeXL can be used to identify the shape of the network, key opinion leaders, and content related to discussions around QAnon. NodeXL was recently utilised to study disinformation networks surrounding COVID-19, such as the 5G and COVID-19 conspiracy and the ‘Film Your Hospital’ conspiracy. The chapter will also examine how the QAnon Twitter network compares to other Twitter networks. The chapter will then provide an insight into future potential research avenues that could be pursued by scholars working in this area.
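NodeXL itself is a point-and-click Excel add-in, but the quantities the abstract highlights (network shape, key opinion leaders) can be illustrated programmatically. The sketch below is a minimal illustration of that kind of analysis, not the chapter's actual method; it uses Python's networkx library on a hypothetical retweet edge list, and all account names and data are invented placeholders.

```python
# Minimal sketch of the social network analysis that tools like NodeXL
# automate, here using networkx. The edge list is a hypothetical
# placeholder, not data from the chapter.
import networkx as nx

# Each (source, target) pair represents one retweet: source retweeted target.
edges = [
    ("user_a", "user_b"),
    ("user_c", "user_b"),
    ("user_d", "user_b"),
    ("user_c", "user_a"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Network "shape": density and component counts hint at whether the
# conversation is one broadcast hub or many fragmented clusters.
print("density:", nx.density(G))
print("weakly connected components:", nx.number_weakly_connected_components(G))

# Key opinion leaders: accounts most often retweeted (highest in-degree).
ranked = sorted(G.in_degree(), key=lambda pair: pair[1], reverse=True)
print("top accounts by in-degree:", ranked[:3])
```

In NodeXL, comparable figures appear in its graph-metrics output (density, connected components, vertex in-degree rankings); the programmatic version simply makes the underlying computation explicit.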
In the wake of the COVID-19 pandemic, the United States is actively reshaping parts of its national security enterprise. This article explores the underlying politics, with a specific interest in the context of biosecurity, biodefense, and bioterrorism strategy, programs, and response, as the United States responds to the most significant outbreak of an emerging infectious disease in over a century. The article examines how these trends implicitly or tacitly fail to recognize the political will and political decision-making, connected to warfare and conflict, that drive biological weapons programs. Securitization of public health has been a focus of the literature over the past half century. This recent trend may represent something of an inverse: an attempt to treat national security interests as public health problems. One hypothesis is that the most significant underrecognized problem associated with COVID-19 is disinformation and the weakening of confidence in institutions, including governments, and how adversaries may exploit that blind spot.
The term 'fake news' became a buzzword during Donald Trump's presidency, yet it means very different things to different people. This pioneering book provides a comprehensive examination of what Americans mean when they talk about fake news in contemporary politics, mass media, and societal discourse, and explores the factors that shape these meanings, such as the power of language, political parties, ideology, media, and socialization. By analysing a range of case studies across war, political corruption, climate change, conspiracy theories, electoral politics, and the Covid-19 pandemic, it demonstrates how fake news is a fundamentally contested phenomenon whose meaning varies depending on the person using the term and the political context. It provides readers with tools to identify, talk about, and resist fake news, and emphasizes a need for education reform with an eye toward promoting critical thinking and information literacy.
The introduction provides an overarching discussion of the main contributions of the book. It reviews the book’s main argument, grounded in social construction theory: that the meaning of fake news varies dramatically depending on the sociopolitical context within which one engages, as related to partisanship, media consumption, socialization, and other factors. The book consists of two parts, the first exploring what fake news means to different groups of Americans, as related to Donald Trump’s rhetoric, media coverage and discourses on fake news, and public opinion of fake news. The second part of the book includes case studies of how fake news is discussed and understood in various contexts, related to US foreign policy, climate change, and conspiracy theories.
Chapter 1 reviews the meaning of fake news, post-truth, propaganda, disinformation, and misinformation. It engages with theoretical questions related to social construction theory and propaganda. The chapter situates the book within a larger sociohistorical framework recognizing the history of American war propaganda pertaining to US official rhetoric, the news media, and public opinion, and how propaganda has been used to manipulate the public into supporting US foreign conflicts. It also examines the conditions under which people question war propaganda. The chapter reviews scholarly works covering post-truth, fake news, disinformation, and misinformation. It also discusses the rise of “new media”, particularly social media, and their impact on growing public misinformation in American politics. A review of competing works discusses the potential of social media to empower and disempower the public. Social media are used to connect people to politics and each other and to help organize social movements. But they have also fueled a political culture of paranoia, conspiracies, and anti-intellectualism, perpetuated by rising disinformation embraced by both political parties, but primarily on the American right.