
How to Stay Popular: Threat, Framing, and Conspiracy Theory Longevity

Published online by Cambridge University Press:  25 March 2024


Abstract

Why do some conspiracy theories (CTs) remain popular and continue to spread on social media while others quickly fade away? Situating conspiracy theories within the literature on social movements, we propose and test a new theory of how enduring CTs maintain and regain popularity online. We test our theory using an original, hand-coded dataset of 5,794 tweets surrounding a divisive and regularly commemorated set of CTs in Poland. We find that CTs that cue in-group and out-group threats garner more retweets and likes than CT tweets lacking this rhetoric. Surprisingly, given the extant literature on party leaders’ ability to shape political attitudes and behaviors, we find that ruling party tweets endorsing CTs gain less engagement than CT tweets from non-officials. Finally, when a CT’s main threat frames are referenced in current events, CTs regain popularity on social media. Given the centrality of CTs to populist rule, these results offer a new explanation for CT popularity—one focused on the conditions under which salient threat frames strongly resonate.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association

Belief in and exposure to conspiracy theories correlate with various behaviors, including individuals’ vaccination decisions and their willingness to undermine electoral institutions (Fong et al. Reference Fong, Roozenbeek, Goldwert, Rathje and van der Linden2021). Conspiracy theories—henceforth, CTs—explain events by claiming that a small group of powerful people secretly operate to achieve nefarious objectives (Miller, Saunders, and Farhart Reference Miller, Saunders and Farhart2016; Uscinski and Parent Reference Uscinski and Parent2014). CT propagation adapts with new technologies (Bangerter, Wagner-Egger, and Delouvée Reference Bangerter, Wagner-Egger, Delouvée, Butter and Knight2020). We may worry about CT spread on social media, where ranking algorithms or self-selection place some individuals into homogeneous echo chambers (Del Vicario et al. Reference Del Vicario, Vivaldo, Bessi, Zollo, Scala, Caldarelli and Quattrociocchi2016). Even where they exist for only a small share of people, these echo chambers may still consequentially impact the spread of information (Guess Reference Guess2021) and reinforce existing interpersonal divisions (Tucker et al. Reference Tucker, Guess, Barberá, Vaccari, Siegel, Sanovich, Stukal and Nyhan2018). Ultimately, for those predisposed to conspiratorial thinking, social media use can be associated with CT beliefs (Enders et al. Reference Enders, Uscinski, Seelig, Klofstad, Wuchty, Funchion, Murthi, Premaratne and Stoler2021). Even being exposed to CTs unwittingly can increase belief in them (Einstein and Glick Reference Einstein and Glick2015), which matters because social media users who spread one CT often interact with multiple CTs (Krasodomski-Jones Reference Krasodomski-Jones2019).

Despite our understanding of the digital environment in which CTs circulate, this literature frequently examines belief in a specific CT at one moment in time. While beliefs in CTs remain stable (Romer and Hall Jamieson Reference Romer and Jamieson2020; Uscinski et al. Reference Uscinski, Enders, Klofstad, Seelig, Drochon, Premaratne and Murthi2022) or may even decrease over time (Mancosu and Vassallo Reference Mancosu and Vassallo2022), an individual’s engagement with a particular CT likely ebbs and flows. Indeed, some online CTs, like those targeting George Soros, have centuries-old origins rooted in anti-Semitism (Tamkin Reference Tamkin2020). By contrast, others quickly lose popularity—such as a CT that Finland does not exist (Ellis Reference Ellis2018). Knowing when and how CTs earn and maintain online engagement is crucial because even short-lived CTs can be consequential. Despite the pernicious nature of CTs online, we lack a deep understanding of why some CTs garner more online engagement than others. What features of CTs enable some to become popular while others fade?

We propose and test a new theory of online CT popularity. Situating CTs within the social movements literature, we expect that the credible threat invoked by a CT will explain its staying power. We theorize three factors through which CT entrepreneurs—the people who “sell” CTs by creating and spreading them (Harambam Reference Harambam, Butter and Knight2020)—engage others in their CT: 1) invoking a well-defined out-group threat, 2) elite endorsement, and 3) the role of “focusing events,” or “sudden, unexpected, and visible events” that “push event-relevant issues to the top of the public agenda” (Reny and Newman Reference Reny and Newman2021, p. 1499). These factors may affect how and when specific CTs gain online engagement.

We test this theory by leveraging three CTs emerging from the 2010 Smoleńsk plane crash that still circulate over a decade later. The Polish president and 95 other political, religious, and military officials died in this crash, which generated many CTs. Thirty-four percent of Poles in 2020 agreed that the crash was likely an assassination (CBOS 2020). Some associated CTs falsely claim that the prominent Polish party PO, the PO politician Donald Tusk, or PO in collusion with Russia orchestrated the crash. These CTs map onto pre-existing partisan polarization between the PO and PiS parties (Cinar and Nalepa Reference Cinar and Nalepa2022), often casting PO not just as a political out-group but as a domestic fifth column (TVRepublika 2023; Żukiewicz and Zimny Reference Żukiewicz and Zimny2015). Despite official reports from the Polish government and Russian Interstate Aviation Committee concluding that the crash was due to pilot and air traffic control error amidst poor weather conditions (Żukiewicz and Zimny Reference Żukiewicz and Zimny2015), members of the ruling Polish party Prawo i Sprawiedliwość (PiS) have endorsed crash CTs (Bilewicz et al. Reference Bilewicz, MartaWitkowska, Gkinopoulos and Klein2019). Some Polish media outlets also circulated these CTs. Monthly masses, marches to a memorial in Warsaw, smaller events across Poland, and online conversations regularly memorialize the crash. We leverage the monthly commemorations’ regularity to analyze the factors shaping CT popularity and resurgence online.

To test our theory, we scraped 5,974 tweets during the three days surrounding the Smoleńsk monthly commemorations over one year. We hand-coded each tweet to identify which CT—if any—the user endorsed, the symbols attached to their message, and whether the user is a PiS politician.
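The sampling frame described above—tweets from the three days surrounding each monthly commemoration—can be sketched in standard-library Python. This is a hypothetical illustration only: the function names, the assumption that commemorations fall on the 10th of each month (the crash anniversary day), and the window logic are ours, not the authors’ actual scraping pipeline.

```python
from datetime import date, timedelta

def commemoration_windows(start=date(2021, 6, 1), months=12, day=10, pad=1):
    """Build three-day windows (the day before, of, and after) around each
    monthly commemoration, assumed here to fall on the 10th of the month."""
    windows = []
    y, m = start.year, start.month
    for _ in range(months):
        center = date(y, m, day)
        windows.append((center - timedelta(days=pad), center + timedelta(days=pad)))
        m += 1
        if m > 12:
            m, y = 1, y + 1
    return windows

def in_window(posted, windows):
    """Keep a tweet's posting date only if it falls inside some window."""
    return any(lo <= posted <= hi for lo, hi in windows)
```

For example, with the defaults above, a tweet posted on 9 June 2021 falls inside the June window, while one posted on 20 June 2021 does not.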

We find that CTs survive by tapping into pre-existing in-group and out-group threat frames. CTs cuing underlying domestic political divisions or foreign threats are retweeted and liked at higher rates than tweets about the crash without this framing. These effects increase during focusing events. Surprisingly, tweets invoking CTs from ruling PiS officials are retweeted less in normal times. However, after a focusing event, PiS tweets about CTs garner as much engagement as before, or more.

Our paper improves the understanding of when and how CTs garner online engagement. Since political misinformation spreads faster than true information online (Vosoughi, Roy, and Aral Reference Vosoughi, Roy and Aral2018), social media is a powerful venue to study CT proliferation. Online interaction with CTs may even translate into offline action, as it does for other types of political behaviors (Larson et al. Reference Larson, Nagler, Ronen and Tucker2019). Further, we answer a call to address the micro-foundations of the relationship between CTs and populist rule (Bergmann and Butter Reference Bergmann, Butter, Butter and Knight2020; Hawkins et al. Reference Hawkins, Carlin, Littvay and Kaltwasser2018; Pirro and Taggart Reference Pirro and Taggart2022). We achieve this through a novel typology of online CT endurance, which clarifies the context, actors, and timing underpinning how social media users interact with CTs over time.

The paper is organized in five parts. First, we define CTs and discuss how they spread online. Second, we theorize about how tweet content, user characteristics, and contemporary events shape CTs’ popularity. Third, we describe the Smoleńsk crash. Fourth, we detail our data collection strategy. Finally, we present our findings.

Online Conspiracy Theory Popularity

A conspiracy represents a “true causal chain of events” (Douglas et al. Reference Douglas, Uscinski, Sutton, Cichocka, Nefes, Ang and Deravi2019, 4), occurring when a small group of actors secretly work to gain political or economic power, conceal secrets, or harm another group. By contrast, conspiracy theories (CTs) claim that a secretive group of conspirators caused some event that harms an in-group while benefiting those conspirators (Cichocka et al. Reference Cichocka, Marchlewska, de Zavala and Olechowski2016; Miller, Saunders, and Farhart Reference Miller, Saunders and Farhart2016). CTs are “accusatory perceptions” (Uscinski and Parent Reference Uscinski and Parent2014, 33), which explain events by accusing some group of covertly pursuing a nefarious end but lack evidence to conclusively prove their claims (Douglas et al. Reference Douglas, Uscinski, Sutton, Cichocka, Nefes, Ang and Deravi2019). While many CTs are false, others are proven true and become conspiracies (Sunstein and Vermeule Reference Sunstein and Vermeule2009). Watergate is a prototypical example of a CT becoming a conspiracy, after reporting and Congressional hearings revealed the truth (Atkinson, DeWitt, and Uscinski Reference Atkinson, DeWitt, Uscinski, Lucas, Galdieri and Sisco2017).

CTs spread online and offline. CT entrepreneurs can propagate CTs via mass media, as Henry Ford did when publishing The Protocols of the Elders of Zion in his newspaper (Bangerter, Wagner-Egger, and Delouvée Reference Bangerter, Wagner-Egger, Delouvée, Butter and Knight2020). CTs also circulate online, where citizens have “a platform to (inter)actively deconstruct official versions of the ‘truth’, to consume alternative accounts and to produce their own theories” (Aupers Reference Aupers2012, p. 27). Despite the presence of misinformation (Anspach and Carlson Reference Anspach and Carlson2020), many people access news on social media (Gottfried and Shearer Reference Gottfried and Shearer2017). While the internet does not necessarily make more people believe in CTs (Uscinski, DeWitt, and Atkinson Reference Uscinski, DeWitt, Atkinson, Dyrendal, Robertson and Asprem2018), social media usage may correspond with CT beliefs (Enders et al. Reference Enders, Uscinski, Seelig, Klofstad, Wuchty, Funchion, Murthi, Premaratne and Stoler2021). Unlike Facebook or Instagram, Twitter has a social norm of public conversation (Steinert-Threlkeld Reference Steinert-Threlkeld2018), while allowing relative anonymity. Twitter also democratizes access, with political elites engaging alongside everyday users, making Twitter well suited for CT propagation (Tucker et al. Reference Tucker, Guess, Barberá, Vaccari, Siegel, Sanovich, Stukal and Nyhan2018).

Common explanations of CT popularity focus on how CT entrepreneurs (DeWitt, Atkinson, and Wegner Reference DeWitt, Atkinson, Wegner and Uscinski2018) and online echo chambers circulate CTs (Kauk, Kreysa, and Schweinberger Reference Kauk, Kreysa and Schweinberger2021). Little attention, however, is paid to the variation in these CTs’ content or their level of engagement over time.Footnote 1 Beyond the environment in which they circulate, we know little about what types of CTs thrive online.

To address this question, we draw on the social movements literature, which analyzes what factors drive social movement engagement. Both CTs and social movements involve anti-elite, anti-establishment, or anti-status quo elements (Grossman and Mayer Reference Grossman and Mayer2022; Pirro and Taggart Reference Pirro and Taggart2022). Scholars of social movements attribute movement success to a trifecta of political opportunities, grievance framing, and resource mobilization (McAdam Reference McAdam, McAdam, McCarthy and Zald1996). Whereas grievance framing and political opportunities—such as elite allies or timing—are relevant to online CT spread, resource mobilization is less critical. Spreading CTs on social media involves one click—a much lower cost than the resources and time commitments requisite for offline action (Olson Reference Olson1971). With lower participation costs, online CTs can spread quickly. We now detail our theory of online CT popularity. Drawing on the logic of political opportunities and grievance framing, we theorize that online CT engagement is driven by threat frames, elite endorsement, and reminders of a CT’s frames during “focusing events.”

Threats, Grievances, and Conspiracy Theory Framing

Conspiracy theory framing mirrors the meaning-making process in social movements. Social movements must situate their goals within the existing social order (Zald Reference Zald, McAdam, McCarthy and Zald1996). As movement framing is a competitive process (Benford and Snow Reference Benford and Snow2000), social movements often justify their cause by framing the movement as under threat (Tarrow Reference Tarrow2011). Social movements thus define themselves by identifying an in-group, out-group, and “the locations of the borders between” these groups (Tarrow Reference Tarrow2011, 143).

Framing similarly affects whether and how a CT emerges, as “shifting threats” anticipate “which outsiders will be scapegoated, when, and why” (Uscinski and Parent Reference Uscinski and Parent2014, 135). Like social movement frames, CT frames are deployed within competitive environments (Chong and Druckman Reference Chong and Druckman2007), emphasizing particular aspects of a perceived reality over others (Entman Reference Entman1993). A CT’s frame helps people cope with collectively traumatic events, loss, weakness, or disunity (Kay et al. Reference Kay, Whitson, Gaucher and Galinsky2009; Uscinski and Parent Reference Uscinski and Parent2014) by attributing causality for those events. Therefore, CTs can arm believers with a sense of control (van Prooijen Reference van Prooijen2020), particularly when they categorize political actors as belonging to groups, like “the powerful” and “powerless” (Sapountzis and Condor Reference Sapountzis and Condor2013). In this manner, both CTs and social movements define a grievance, diagnose who is to blame, and try to mobilize people around this grievance.

We thus expect that CT frames targeting well-specified, “powerful” others will gain greater online engagement. Indeed, out-group derogation can powerfully polarize people (Wojcieszak et al. Reference Wojcieszak, Sobkowicz, Yu and Bulat2022b) and impact their willingness to name perceived conspirators (Jolley et al. Reference Jolley, Douglas, Marchlewska, Cichocka and Sutton2022; Kim Reference Kim2022). We theorize that when a CT’s framing maps onto credible pre-existing political threats, including domestic polarization or foreign tensions, the CT should attract greater online engagement.

Domestic Political Threat Frames

Conspiracy theories can appeal to collective identities and threats through partisan cues. We expect that CTs grafting their central claims to partisan divides will remain salient because partisanship centrally impacts how people process political information, issues, and events (Brader and Marcus Reference Brader and Marcus2013; Goren et al. Reference Goren, Federico and Kittilson2009).

Many CTs implicate a political party as the conspirator. When the party blamed is one’s out-party, partisans are more likely to endorse the CT than when co-partisans comprise this group (Pasek et al. Reference Pasek, Stark, Krosnick and Tompson2015; Uscinski, Klofstad, and Atkinson Reference Uscinski, Klofstad and Atkinson2016). Similarly, individuals share misinformation on partisan lines (Garrett and Bond Reference Garrett and Bond2021) and out of a dislike of their political opponents (Osmundsen et al. Reference Osmundsen, Bor, Vahlstrup, Bechmann and Petersen2021). Since group identities are salient on social media (Wojcieszak et al. Reference Wojcieszak, Casas, Yu, Nagler and Tucker2022a), partisanship might consequentially shape online CT spread. As people conform to their in-group by sharing information that the group cares about (Brady, Crockett, and Van Bavel Reference Brady, Crockett and Van Bavel2020), social media may facilitate the spread of CTs painting the in-group positively or the out-group negatively.

We expect that CTs appealing to pre-existing partisan divisions by blaming party officials for conspiring will earn more engagement online than those lacking this rhetoric.Footnote 2

H1a: Tweets claiming party officials are complicit in a conspiracy theory are more likely to be liked and retweeted than tweets without this claim.

Foreign Threat Frames

Credible foreign threats represent another effective frame for online conspiracy theory entrepreneurs. Since feelings of powerlessness and a lack of personal control give rise to CTs (Pantazi, Papaioannou, and van Prooijen Reference Pantazi, Papaioannou and van Prooijen2022; Stojanov and Halberstadt Reference Stojanov and Halberstadt2020), CT entrepreneurs may leverage salient international tensions to generate a credible foreign threat. For example, historical trauma can breed a loss of agency at the group and individual level (Fritsche Reference Fritsche2022). Thus, blaming a former colonizer or wartime enemy in a CT may improve that CT’s resonance, even when those beliefs do not reflect the current “socio-political reality” (Bilewicz Reference Bilewicz2022). These beliefs remain important, however, as people who think their nation was uniquely victimized are more likely to believe in CTs (Bilewicz et al. Reference Bilewicz, MartaWitkowska, Gkinopoulos and Klein2019).

Our expectations are also rooted in the rally-around-the-flag literature, which finds public opinion shifts following foreign threats (Baum Reference Baum2002; Colaresi Reference Colaresi2007). Just as a rallying effect may occur under heightened external threat, CTs appealing to foreign tensions may gain popularity (Uscinski and Parent Reference Uscinski and Parent2014, 133-36). When CT entrepreneurs claim that members of a country conspire with foreign enemies, they suggest that internal fifth columns enact domestic harm. These themes exist in many CTs, including the CT that the Bush administration facilitated the 9/11 attacks (Lindsay and Shortridge Reference Lindsay and Shortridge2021). We expect that tweets grounding CTs in credible foreign threats will get greater online engagement.

H1b: Tweets claiming that foreign actors who present a credible threat are complicit in the conspiracy theory will be liked and retweeted at higher rates than tweets that do not invoke this threat frame.

Elite Endorsements

We theorize that when partisan elites support conspiracy theories on social media, their posts will receive more engagement than non-elite social media users who spread similar CTs. Our expectation aligns with the opinion leadership literature. Partisan identities impact how people evaluate political information, issues, and events (Goren et al. Reference Goren, Federico and Kittilson2009). When co-partisan leaders support an issue, partisans often affirm their party’s position (Oliver and Wood Reference Oliver and Wood2014), punish dissent (Filindra and Harbridge-Yong Reference Filindra and Harbridge-Yong2022), and “follow-the-leader” to adopt these views (DeWitt, Atkinson, and Wegner Reference DeWitt, Atkinson, Wegner and Uscinski2018). On Twitter, people following political elites strongly prefer ideological congruity (Wojcieszak et al. Reference Wojcieszak, Casas, Yu, Nagler and Tucker2022a). Further, individuals’ pre-existing ideologies and partisan identities influence CT beliefs (Hartman and Newmark Reference Hartman and Newmark2012; Smallpage et al. Reference Smallpage, Enders, Joseph and Uscinski2017).

Politicians leverage individuals’ susceptibility to CTs to “flaunt their access to intelligence” (Radnitz Reference Radnitz2021, 176) and de-legitimize political opponents (Muirhead and Rosenblum Reference Muirhead and Rosenblum2019, 88). CTs rooted in partisan divides can even harden into a “conspiracy cleavage” capable of structuring political competition (Marinov and Popova Reference Marinov and Popova2022). Ultimately, how strongly politicians endorse CTs influences the likelihood that co-partisans will accept CTs (Enders and Smallpage Reference Enders and Smallpage2019). We anticipate that these patterns will extend to social media, where party leaders’ engagement with a CT should garner greater engagement from co-partisans.

H2: Tweets from party leaders invoking conspiracy theories will gain greater online engagement than CT tweets not shared by party officials.

Focusing Events

Finally, we turn to timing. We theorize that focusing events will shape a conspiracy theory’s popularity over time. Focusing events are “sudden, unexpected, and visible events” (Reny and Newman Reference Reny and Newman2021) that cause identifiable and concentrated harm to a subset of the population, making blame easily identifiable (Birkland Reference Birkland1998). In the aftermath, event-relevant issues can return “to the top of the public agenda” (Reny and Newman Reference Reny and Newman2021), potentially triggering public opinion change. For example, the police killing of George Floyd was a focusing event for public opinion on policing and anti-Black discrimination in the United States (Reny and Newman Reference Reny and Newman2021). Other common focusing events include conflicts or environmental disasters (Alexandrova Reference Alexandrova2015).

We propose that focusing events can also affect the conditions under which CTs resurface. For example, under our theory, the resurgence of interest in September 2022 in the CT that the Crown killed Princess Diana corresponds to the focusing event of Queen Elizabeth II’s death, which returned CTs about the princess’ death to public conversation (Google Trends 2022).

Focusing events connected to a CT’s threat frames may regenerate CT popularity. This process parallels changes in a social movement’s opportunity structure, wherein changes in opportunities shift the balance of power between a group and a regime (Tilly Reference Tilly1978). Events that may change a social movement’s opportunity structure include “defeat in war, elite divisions, state fiscal problems” (Goldstone and Tilly Reference Goldstone and Tilly2001, 183), among others (McAdam Reference McAdam, McAdam, McCarthy and Zald1996). Similar focusing events may generate an opening for CT entrepreneurs. Indeed, times of elevated foreign threat in the United States, including the Cold War, saw an uptick in CTs with a “foreign villain” (Uscinski and Parent Reference Uscinski and Parent2014, 143).

We anticipate that when the central “villain” of a CT looms in current events, those who feel threatened by this villain will re-engage with the CT. That CT—and those threats posed by the villain—are established ex ante, embedded within the original frames of the CT. When a CT villain’s behavior suggests they present a credible threat, related CTs may gain engagement.

H3: Tweets invoking a conspiracy theory will be liked and retweeted at higher rates after a focusing event that appeals to the CT’s threat framing.

Case Justification

To test our theory, we analyze a set of conspiracy theories emerging after the 2010 Smoleńsk plane crash. The durability and variability of the constituent Smoleńsk CTs, as well as the relative importance of CT endorsements to populist politicians, make this case distinct. Poland is a paradigmatic case of ethnopopulist rule (Vachudova Reference Vachudova2020). The salience of the Smoleńsk CTs within Poland allows us to probe how CTs gain or stay popular when populists use CTs for political ends. We detail the nuances of the CTs in the following section. Here, we discuss the logic underpinning our case selection.

Many CTs persist, like those questioning U.S. President Barack Obama’s birth certificate or blaming the Bush administration for the 9/11 attacks. These two CTs are widely known, with 24% and 19% of Americans believing them, respectively (Oliver and Wood Reference Oliver and Wood2014). Our case maintains higher domestic support, with 34% of Poles believing the crash was an assassination (CBOS 2020). Most CT scholarship analyzes CTs in the United States (see, for example, Oliver and Wood Reference Oliver and Wood2014; Sunstein and Vermeule Reference Sunstein and Vermeule2009; Uscinski, Klofstad, and Atkinson Reference Uscinski, Klofstad and Atkinson2016). We join a growing literature probing CTs beyond the United States (Radnitz Reference Radnitz2021), where party competition may exist on an axis of CT support (Enyedi and Whitefield Reference Enyedi, Whitefield, Rohrschneider and Thomassen2020; Pirro and Taggart Reference Pirro and Taggart2022) or CT beliefs correlate with authoritarian preferences (Marinov and Popova Reference Marinov and Popova2022).

Unlike some CTs, monthly meetings ritualize the Smoleńsk crash. These offline meetings correspond with online discussions of the crash. We leverage the routine nature of this discourse to observe how people popularize CTs online. We start our analysis in June 2021, about a decade after the CTs’ emergence, to assess the factors shaping a CT’s enduring popularity. Variation in the alleged conspirators allows us to hold constant the context of the overarching event while exploiting variation in specific CTs’ popularity.

Case Description

In April 2010, 96 high-ranking Polish officials flew to the seventieth commemoration of the Katyń massacre. This commemoration was hailed as an important moment for Polish–Russian reconciliation. In 1940, the Russian People’s Commissariat of Internal Affairs (NKVD) murdered 21,768 Polish intelligentsia and military officers in the forests near Katyń (Fredheim Reference Fredheim2014). The historical handling of the Katyń massacre by Soviet and Russian officials made the 2010 commemoration event particularly salient. During communist times, Soviet political leaders and scholars claimed that the Nazis committed the massacre. Only in 1990 did Soviet leader Mikhail Gorbachev confirm the NKVD’s culpability and open related archives. Despite a period of increased transparency, in 2004, Vladimir Putin closed the archives and prevented Poles from accessing information about Katyń. This stance upset Poles seeking historical justice. When Putin invited Polish officials to the seventieth commemoration of the massacre in 2010, it signaled a potential opening in Russo-Polish relations (Drzewiecka and Hasian Reference Drzewiecka and Hasian2018; Soroka Reference Soroka2022).

En route to these commemorations, the plane carrying 96 Polish officials crashed. All passengers died, including Polish president Lech Kaczyński, his wife, and senior military, government, and religious officials. Many, but not all, of the politicians who died in the crash belonged to the PiS party. The crash occurred near the Russian city of Smoleńsk. Transcripts from the cockpit’s black box—a standard voice recorder used to facilitate investigations after aviation accidents—show that the pilots warned against landing in the poor weather, but were pressured to land by senior officials onboard (Reuters 2015).Footnote 3 Subsequent official investigation reports revealed that thick fog, pilot error, and poor visibility on the plane’s descent caused the crash (Khalitova et al. Reference Khalitova, Myslik, Turska-Kawa, Tarasevich and Kiousis2020; Żukiewicz and Zimny Reference Żukiewicz and Zimny2015). Despite these findings, CTs about the crash’s cause emerged by late April 2010 (Niżyńska Reference Niżyńska2010). Popular newspapers like Rzeczpospolita and Gazeta Polska published these CTs (Żukiewicz and Zimny Reference Żukiewicz and Zimny2015) and competing accounts of the crash proliferated (Myslik et al. Reference Myslik, Khalitova, Zhang, Tarasevich, Kiousis, Mohr, Kim, Kawa, Carroll and Golan2021).

The central CT claims that explosions caused the crash. While PiS officials did not endorse this narrative immediately, it soon dominated the party’s rhetoric (Niżyńska 2010). Over time, PiS subsumed these CTs into its broader political strategy of reinterpreting history to increase its popular support (Bernhard and Kubik Reference Bernhard, Kubik, Bernhard and Kubik2014). These CTs often blame the crash on either 1) the Platforma Obywatelska party (PO); 2) its leader Donald Tusk; or 3) PO in collusion with Russian officials. In 2012, PiS chairman Jarosław Kaczyński—the twin brother of the deceased president Lech Kaczyński—declared in parliament to then-Prime Minister and PO party leader Donald Tusk, “in a political sense, you bear 100% responsibility for the catastrophe” (Davies Reference Davies2016). In this speech, Jarosław Kaczyński accused Donald Tusk of conspiring with Russia to conceal alleged explosions.

After PiS returned to power in 2015, PiS MP and former Minister of Defense Antoni Macierewicz launched new investigations into the crash. In April 2022, PiS released the Macierewicz report, which blamed Russia (Kublik and Wójcik Reference Kublik and Wójcik2022). An investigative report led by the Polish television station TVN found that Macierewicz’s report misrepresented findings commissioned from the U.S. National Institute for Aviation Research and manipulated the plane’s black box recordings to support the CTs that PiS endorses (Ptak Reference Ptak2022). In turn, Macierewicz claims TVN’s investigation is false (Ptak Reference Ptak2022). Poland’s Supreme Audit Office could not identify the purpose of nine contracts worth 602,600 złoty (about $140,000) associated with the Macierewicz investigations (Dobrosz-Oracz Reference Dobrosz-Oracz2023). Contestation over the crash’s cause thus continues today.

Mapping the Theory to the Case

We now map our hypotheses onto features of the Smoleńsk crash. Table 1 provides a summary.

Table 1 Observable implications

Our first hypothesis concerns domestic partisan threats. Pre-existing partisan divisions may provide fertile ground for frames painting the other party as an out-group. The PiS and PO parties polarize Polish politics (Cinar and Nalepa Reference Cinar and Nalepa2022). Moreover, the crash mapped onto pre-existing cleavages. Most politicians who died in the crash belonged to PiS, and some PiS officials blamed PO or its leader Donald Tusk for the crash. Unsurprisingly, public opinion about the crash is polarized on partisan lines. A 2023 poll showed that 77% of PiS voters believe in crash CTs, while 86% of voters in PO’s current electoral coalition reject these CTs (PAP 2023). We expect that tweets blaming PO or its officials for the Smoleńsk crash will garner more likes and retweets than tweets about the crash without this claim. We operationalize this by coding for whether the tweet contains a CT blaming PO or its leader Tusk for causing the crash.

Russia partitioned Poland for 123 years and the Soviet Union waged war against Poland in the twentieth century, making Russia appear as an “existential threat to Poland’s politics of history” for some (Soroka Reference Soroka2022, 349). Poland’s historical victimhood (Bilewicz et al. Reference Bilewicz, MartaWitkowska, Gkinopoulos and Klein2019) may make the threat of Russian collusion with Polish politicians invoked by the Smoleńsk CTs seem more plausible. Therefore, CTs claiming collusion between Russia and an alleged internal fifth column, PO, may present a credible foreign threat and increase these CTs’ appeal. We hypothesize that tweets invoking collusion CTs will earn more likes and retweets than tweets discussing the crash that do not invoke this CT.Footnote 4 To assess credible foreign threats, we code whether tweets claim Russian forces worked with PO to cause the crash.

PiS leaders sometimes endorse Smoleńsk CTs (Bilewicz et al. Reference Bilewicz, MartaWitkowska, Gkinopoulos and Klein2019), and these CTs have a strong partisan character (Blackington Reference Blackington2021; Stanley and Cześnik Reference Stanley, Cześnik, Mudde and Kaltwasser2019). When PiS officials promote CT tweets, we expect these tweets to gain greater engagement than non-PiS tweets with CTs.Footnote 5

Finally, we expect that focusing events that evoke the CTs’ central threats will increase engagement. In February 2022, Russia launched a full-scale invasion of Ukraine. Since the Smoleńsk crash occurred in Russian territory en route to commemorate Soviet aggression in Katyń, we expect that the invasion may serve as a focusing event, triggering memories of similar, past Russian aggression against Poles (Snyder Reference Snyder2022). Since Russia has recently committed war crimes and atrocities against Ukrainians (Hinnant and Keaten Reference Hinnant and Keaten2023), some Poles may be more likely to believe that the Smoleńsk crash is another instance of recent Russian atrocities—this time allegedly committed alongside Polish politicians. Thus, we expect CT tweets to garner more engagement after Russia’s full-scale invasion than before.

Data Collection and Coding

To assess the spread and nature of the Smoleńsk plane crash discourse, we scraped tweets from August 2021 to July 2022. Just as campaign speeches and press releases are used to analyze elite rhetoric, tweets offer insight into elite and quotidian political communication. Social media data allow us to observe longitudinal trends and identify events with immense granularity (Barberá and Steinert-Threlkeld Reference Barberá, Steinert-Threlkeld, Curini and Franzese2020) while avoiding framing biases artificially invoked via other methods of studying public discourse, like surveys (Klašnja et al. Reference Klašnja, Barberá, Beauchamp, Nagler, Tucker, Atkeson and Alvarez2017).

We used Twitter’s free Academic API to scrape tweets using a list of hashtags related to the Smoleńsk crash (refer to table A4 in the online appendix). The Smoleńsk commemoration events occur on the tenth of every month, so we scraped tweets from 12:00 a.m. on the ninth to 11:59 p.m. on the eleventh. The bulk of online Smoleńsk discourse occurs on the days surrounding these offline monthly anniversaries. Online appendix A details our data collection procedures. In limiting our sample to tweets discussing the Smoleńsk crash on Twitter, our sample is representative neither of the full Polish tweet population nor of the full Polish Twitter discourse on those days. It is representative of tweets engaging with Smoleńsk during the monthly commemorations.
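The window construction can be sketched in a few lines. This is an illustrative reconstruction, not the scraping code itself; all names are hypothetical, and the actual procedures appear in online appendix A.

```python
from datetime import datetime

def commemoration_windows(start_year=2021, start_month=8, n_months=12):
    """Build (start, end) bounds for each monthly collection window:
    12:00 a.m. on the ninth through 11:59 p.m. on the eleventh."""
    windows = []
    year, month = start_year, start_month
    for _ in range(n_months):
        windows.append((datetime(year, month, 9, 0, 0),
                        datetime(year, month, 11, 23, 59)))
        month += 1
        if month > 12:          # roll over into the next calendar year
            month, year = 1, year + 1
    return windows

# twelve three-day windows, August 2021 through July 2022
windows = commemoration_windows()
```

Each window would then be passed as the start/end bounds of one API query per commemoration period.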

We independently hand-coded tweets for references to domestic political actors, religious symbols and events, foreign actors, invocation of historical memory, and anti-CT discourse. Automated object detection algorithms often struggle with non-English languages, poor resolution, filtering distortions (common to photoshopped images), veiled allusions, and with general accuracy (Zou et al. Reference Zou, Shi, Guo and Ye2019)—all of which characterize our tweet sample. Thus, we hand-coded our entire tweet corpus. Our intercoder reliability check revealed that our codes matched with 96.5% accuracy and a Cohen’s Kappa of 0.76. We discussed and resolved any tweets on which our codes disagreed. We describe coding rules in online appendix A and offer representative tweets in tables 2 and 3.
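As a minimal, self-contained sketch of the agreement statistics we report, the following computes Cohen’s Kappa, which corrects raw percent agreement for chance agreement. The labels below are toy data, not our actual codes.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' label lists."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # expected agreement if the two coders had labeled independently
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[k] * freq_b[k]
                   for k in set(coder_a) | set(coder_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# toy binary codes (1 = tweet invokes a CT); coders disagree on one tweet
kappa = cohens_kappa([1, 1, 0, 0, 1, 0, 1, 0], [1, 1, 0, 0, 1, 0, 0, 0])
```

Because Kappa subtracts chance agreement, it can be far below the raw match rate (as in our 96.5% agreement but 0.76 Kappa) when one code value dominates the corpus.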

Table 2 Coding examples—Tweets featuring conspiracy theories

Table 3 Coding examples—Tweets without conspiracy theories

Analysis

Our sample included 5,794 unique tweets across 2,258 accounts collected from August 2021 to July 2022 (Blackington and Cayton Reference Blackington2024). Figure 1 presents a descriptive summary of the content invoked in these tweets. In online appendix B, figures A6 and A7 and table A5 present descriptive statistics about how users engage with tweets invoking CTs and typical engagement rates per tweet. On average, a single account contributed 2.65 tweets to our dataset, though this ranged between 1 and 131 tweets per account. The median contribution is 1 tweet.

Figure 1 Descriptive summary of tweet frequencies

Notes: The Y-axis shows the proportion of total monthly tweets, sorted by content type. The X-axis is the month of tweet collection. A dotted line marks the start of Russia’s full-fledged invasion of Ukraine. The top panel shows conspiracy theory content; the bottom panel shows non-CT content. ‘Total Conspiracy’ includes all tweets referencing a CT (all PO, Tusk, and collusion CT tweets). All tweets in table 2 are included in this category. Some tweets fall into multiple categories. For example, a tweet invoking Russia-PO collaboration would be coded for Total, PO, and PO-Russia collaboration. We similarly illustrate non-CT content in the bottom panel. All tweets in table 3 are in one of these groups.

Tweets referencing a CT ranged from 5.25% to 57.23% of each month’s sample. A marked increase in the percent of CT tweets occurred after Russia’s full-fledged invasion of Ukraine. We also visualize the most-used themes within these tweets in online appendix C, figure A9. The most frequently used words in Smoleńsk Twitter discourse reflect CTs, often invoking foreign or domestic political threats. Some illustrative words in the word cloud include “assassination attempt,” “Putin,” “Tusk,” “was killed,” and “truth.” These phrases, and others in figure A9, suggest that CT discourse about the Smoleńsk plane crash widely occurs on Twitter during the monthly commemoration events. We now turn to the empirical tests of our hypotheses.

Tweet Framing Results

We first consider our threat framing hypotheses (H1a and H1b). Our outcome variables, retweet and like counts, exhibit high variance in engagement. Though Poisson models are standard for counts, we use quasi-Poisson models to account for this over-dispersion, a common problem with aggregate count data (Hobbs et al. Reference Hobbs, Lajevardi, Li and Lucas2023; Osmundsen et al. Reference Osmundsen, Bor, Vahlstrup, Bechmann and Petersen2021; Vogler Reference Vogler2019). The standard Poisson model holds:

(1) $$ \Pr\left(Y={y}_i\mid {\mu}_i\right)=\frac{e^{-{\mu}_i}{\mu}_i^{y_i}}{y_i!},\qquad {y}_i=0,1,2,\dots $$

For each observation i, the systematic component μi follows the equation:

(2) $$ \log\left({\mu}_i\right)={\beta}_0+{\beta}_1 CT_{i,t}+{\mathbf{x}}_{i,t}^{\prime}\boldsymbol{\beta} $$

where $ {\beta}_0 $ is the intercept, $ {\beta}_1 $ is the coefficient on the CT indicator, and $ {\mathbf{x}}_{i,t}^{\prime}\boldsymbol{\beta} $ captures the covariates. Y, our outcome of interest, is the number of retweets or likes a tweet garners.

While a standard Poisson assumes that the mean equals the variance (μ = σ²), under the quasi-Poisson specification, σ² = ψμ, where ψ is a dispersion parameter. Because ψ influences the variance but not the mean, the parameter accounts for the fact that the mean and variance are not equivalent in over-dispersed counts. Thus, the quasi-Poisson addresses over-dispersion by adjusting the standard errors (Vogler Reference Vogler2019).Footnote 6
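The adjustment is straightforward to illustrate. In a minimal sketch, ψ is estimated as the Pearson chi-square divided by the residual degrees of freedom, and quasi-Poisson standard errors equal Poisson standard errors times √ψ. The counts and intercept-only fit below are toy values, not our data.

```python
def quasi_poisson_dispersion(y, mu, n_params):
    """Estimate psi = Pearson chi-square / residual degrees of freedom.
    Quasi-Poisson SEs are the Poisson SEs multiplied by sqrt(psi)."""
    pearson_chi2 = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
    return pearson_chi2 / (len(y) - n_params)

y = [0, 0, 1, 2, 40, 0, 3, 55, 1, 0]    # toy over-dispersed counts
mu = [sum(y) / len(y)] * len(y)         # intercept-only fit: mu-hat = ybar
psi = quasi_poisson_dispersion(y, mu, n_params=1)
se_inflation = psi ** 0.5               # multiply each Poisson SE by this
```

With ψ well above 1, naive Poisson standard errors would substantially overstate precision; the quasi-Poisson correction widens them accordingly.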

The term $ {\mathbf{x}}_{i,t}^{\prime}\boldsymbol{\beta} $ includes several controls. First, we control for month fixed effects. These account for any unobserved time-invariant confounders occurring in a given monthly scraping period (Imai and Kim Reference Imai and Kim2019), allowing us to hold constant seasonal effects unique to each month’s three-day collection period. We also control for each user’s logged number of followers and friends (those whom one follows), which correlate with the number of retweets and likes an individual gets (Kwak et al. Reference Kwak, Cheanghyune and Moon2010; Suh Reference Suh2021). Since verified accounts are notable accounts—representing governments, companies, and other influential individuals—and likely operate differently on social media, we also control for an account’s verified status (Shahi, Dirkson, and Majchrzak Reference Shahi, Dirkson and Majchrzak2021).

We run the same set of models outlined earlier for each hypothesis, alternating the key independent variable (“CT”) for the specific Smoleńsk CT. Tables A6 and A7 in online appendix D provide full regression tables.

Domestic Partisan Framing Results

Does tying a conspiracy theory to domestic political threats impact the rate at which tweets are retweeted and liked? We hypothesized that tweets blaming either PO or its leader Donald Tusk for causing the Smoleńsk crash would be retweeted and liked at higher rates than those tweets about Smoleńsk that do not mention partisan conspirators (H1a). We use the quasi-Poisson regression models described in Equations 1 and 2, regressing retweets and likes on whether a PO-tinted or Tusk-tinted CT is invoked.

Figure 2 plots the difference in the predicted number of times a tweet would be retweeted or liked if it contains a CT about PO compared to those tweets that do not include this CT, holding all else constant. We use the observed case approach and calculate 95% confidence intervals using Monte Carlo simulation. We find that tweets endorsing the PO CT are retweeted at higher rates, but liked at lower rates, than tweets about Smoleńsk that do not endorse this CT. We conduct the same analysis for the tweets invoking Tusk CTs. Tweets promoting the Tusk CT are also retweeted, but not liked, at higher rates.
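The simulation behind these quantities can be sketched as follows. This is an illustrative reconstruction with toy coefficients: for each draw, we toggle the CT dummy on and off for every observed tweet, average the predicted counts under the log link, and summarize the difference. For simplicity the sketch draws coefficients as independent normals; the full observed-case approach samples from the model’s estimated variance-covariance matrix.

```python
import math
import random

def predicted_difference(beta_hat, se, X, draws=1000, seed=1):
    """Observed-case first difference for a log-link count model.
    beta_hat = [b_ct, b_intercept, b_covariate]; rows of X = [1, covariate].
    Coefficient draws are independent normals (a simplifying assumption)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(draws):
        b = [rng.gauss(m, s) for m, s in zip(beta_hat, se)]
        pred = lambda ct: sum(
            math.exp(b[0] * ct + sum(bj * xj for bj, xj in zip(b[1:], row)))
            for row in X) / len(X)
        diffs.append(pred(1) - pred(0))   # predicted counts: CT=1 vs CT=0
    diffs.sort()
    return (sum(diffs) / draws,
            (diffs[int(0.025 * draws)], diffs[int(0.975 * draws) - 1]))

# toy estimates: positive CT coefficient, intercept, logged-followers slope
mean_diff, (lo, hi) = predicted_difference(
    beta_hat=[0.5, 1.0, 0.2], se=[0.05, 0.10, 0.02],
    X=[[1, 2.0], [1, 3.5], [1, 5.1]])
```

The 2.5th and 97.5th percentiles of the simulated differences form the 95 percent interval plotted in the figure.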

Figure 2 Difference in predicted number of likes or retweets based on Tweet content

Notes: The Y-axis is conspiracy theory frame, while the X-axis shows the difference in predicted outcome. The black lines and dots reflect likes; the grey lines and triangles are retweets. We calculate 95 percent confidence intervals using Monte Carlo simulation and the observed case approach. In each model, the reference group is all tweets that do not mention this version of the CT. For example, for the “Tusk” CT, the reference group includes all tweets that do not reference the Tusk CT, both those with no CT reference at all and those referencing another CT, but that do not make reference to Tusk.

These findings offer mixed support for H1a. While domestic threat CTs can gain more engagement than crash tweets lacking these CTs, this varies by engagement type. Through retweets, users bring more attention to the mentioned CT. The CT thus becomes more “viral” (Pancer and Poole Reference Pancer and Poole2016).

However, users do not necessarily reward domestic threat CT tweets with likes more than tweets lacking this rhetoric. Indeed, in the case of PO, these tweets are less popular than tweets without this CT. Thus, some of the engagement with domestic threat CT tweets may stem from people who are unwilling to publicly endorse (i.e., “like”) the tweet.

Foreign Threat Framing Results

Beyond domestic threats, we theorize that tweets stating Russian and PO officials collaborated to orchestrate the crash would be retweeted and liked more than tweets about Smoleńsk lacking these conspiracy theories (H1b). Tables A6 and A7 in online appendix D present our quasi-Poisson regressions, which follow Equations 1 and 2.

We plot the difference in the predicted number of retweets or likes for tweets that contain this CT and those that do not in figure 2. Tweets claiming that Russian officials collaborated with PO officials to cause the Smoleńsk plane crash garner more retweets and likes than tweets about Smoleńsk that do not. This supports H1b.

Together, we interpret the support for H1b and mixed support for H1a to suggest that CTs survive through their threat frames. The type of threat invoked by a CT may make that CT more viral by earning more retweets, though not always more popular by earning more likes. The difference between the foreign and domestic threat findings offers tentative evidence that this effect is heterogeneous across the type of threat invoked. This variation in tweet traction underscores the essential role of framing in CT proliferation. CTs tapping into salient in-group and out-group threats garner more online engagement than tweets without these frames.

Elite Endorsement Results

Are our results driven by tweets from prominent officials? We hypothesized that when PiS officials endorse a conspiracy theory, their tweets would be liked and retweeted more than CT tweets from non-officials (H2). To test this, we run the first set of quasi-Poisson models (Equations 1 and 2), interacting the key independent variable of CT type with a dummy variable indicating whether PiS officials wrote the post.Footnote 7 The reference group is tweets not authored by PiS elites. Tables A8 and A9 in online appendix D present these results. Figure 3 shows the predicted engagement by whether a PiS official tweeted the message and whether a CT was mentioned.

Figure 3 Difference in predicted number of likes or retweets based on tweet content and PiS officials

Notes: Each panel represents a different conspiracy theory. The Y-axis represents retweets or likes. The X-axis shows the difference in predicted outcome. The dotted (solid) lines and triangles (dots) reflect PiS officials (non-officials). We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.

PiS officials who invoke PO CTs on social media earn fewer retweets and likes than ordinary users who propagate these CTs—though this difference is only statistically significant for retweets. They garner about the same number of retweets and likes as non-PiS users for Tusk and collusion CTs.Footnote 8

We thus do not find support for H2. When PiS party officials endorse CTs, their tweets earn less engagement than, or about the same engagement as, CT tweets from non-officials. While we find it surprising that tweets from party leaders invoking CTs garner less online engagement than CT tweets not authored by party officials, we note that these findings align with other instances of politicians trying—and failing—to mobilize people around CTs. For example, Hillary Clinton blamed a vast right-wing conspiracy for her husband’s troubles in 1998 (Zaller Reference Zaller1998). Similarly, then-President Barack Obama began his 2012 re-election campaign with a commercial about secretive oil billionaires being out to get him (Uscinski and Parent Reference Uscinski and Parent2014, 135). Both CTs lacked mobilizing power. Given these examples of officeholders failing to leverage CTs, our results reiterate the difficulty of convincing people that a power-holder is a victim. CTs thrive because of their anti-elite nature, making CT invocation a tactic used by “losers” (Uscinski and Parent Reference Uscinski and Parent2014). The very skepticism that encourages individuals to believe CTs can generate hesitancy toward the motives of political elites who endorse them (Radnitz Reference Radnitz2021).

Though incumbent politicians may be punished for spreading CTs online, they do mobilize supporters around CTs (Muirhead and Rosenblum Reference Muirhead and Rosenblum2019). CTs are central to ethnopopulist rhetoric in cases ranging from Hungary (Plenta Reference Plenta2020) to India (Vaishnav Reference Vaishnav2019). This raises the question: under what conditions might powerful politicians successfully garner online engagement when they use CTs? We now examine how focusing events may shape politicians’ ability to spread CTs.

Focusing Event Results

To test our findings’ durability, we consider whether and how conspiracy theories gain traction on social media through “focusing events.” We hypothesized that a CT would revive after focusing events invoking its central threat frames. Some Smoleńsk CTs claim that Russia and PO coordinated to cause the crash. We leverage the interruption of our data collection due to the full-scale Russian invasion of Ukraine on February 24, 2022, as a focusing event. Per H3, we expect that after Russia’s full-scale invasion of Ukraine, tweets invoking Smoleńsk CTs would be retweeted and liked more than before the war, when compared with those tweets discussing the crash without invoking CTs.

We run six nonlinear difference-in-differences models comparing whether our findings change after the war. We use the same quasi-Poisson specification of Equations 1 and 2, specifying the systematic component μi:

(3) $$ \log\left({\mu}_i\right)={\beta}_0+{\beta}_1 CT_{i,t}+{\beta}_2{War}_{i,t}+{\beta}_3 CT_{i,t}\ast {War}_{i,t}+{\mathbf{x}}_{i,t}^{\prime}\boldsymbol{\beta} $$

“War” is a dummy variable capturing whether a tweet was written before (0) or after (1) the start of Russia’s full-scale invasion of Ukraine. We include the same controls as earlier models. Tables A12 and A13 in online appendix D contain full regression tables. Figure 4 shows the predicted difference in engagement. After the war began, a significant and substantive increase occurs in retweets and likes for PO CTs. While Tusk CT tweets garner more likes after the war, they garner about the same number of retweets. In three of our four domestic threat models, the effect direction flips after the war, illustrating the increased popularity and virality of domestic threat CTs. By contrast, tweets invoking Russia-PO collusion CTs do not show any significant temporal variation in predicted engagement; they continue to garner the same, higher level of predicted engagement.
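A short sketch makes the interaction term concrete: on the log scale, the difference-in-differences of expected engagement reduces algebraically to the interaction coefficient. The coefficient values below are toy numbers, and covariates are omitted.

```python
def log_mu(b0, b1, b2, b3, ct, war):
    """Systematic component of the DiD specification, covariates omitted."""
    return b0 + b1 * ct + b2 * war + b3 * ct * war

def did_log_scale(b0, b1, b2, b3):
    """(post - pre) change in log engagement for CT tweets, minus the
    same change for non-CT tweets: algebraically equal to b3."""
    return ((log_mu(b0, b1, b2, b3, 1, 1) - log_mu(b0, b1, b2, b3, 1, 0))
            - (log_mu(b0, b1, b2, b3, 0, 1) - log_mu(b0, b1, b2, b3, 0, 0)))
```

Because the link is logarithmic, the interaction coefficient is a difference in log expected counts, so the quantities plotted in the figures are simulated predicted differences rather than raw coefficients.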

Figure 4 Difference in the predicted number of likes or retweets based before and after Russia’s full-scale invasion of Ukraine

Notes: Each panel represents a different conspiracy theory. The Y-axis is CT measure (retweets or likes). The X-axis shows the difference in predicted outcome. The black (grey) lines reflect those tweets from after (before) February 24, 2022. We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.

To examine whether CT tweets from PiS officials earn more engagement after the war, we run the same models, interacting dummy variables for the specific CT, wartime period, and whether a PiS official authored the tweet.Footnote 9 Figure 5 shows the predicted difference in likes and retweets between tweets with these various frames. Across all three CTs, PiS officials earn substantially more likes after the start of the war, when compared to before. PiS officials also earn more retweets after the start of the war, though this predicted post-war engagement is only statistically significant for collusion and Tusk CTs. Thus, before the full-fledged invasion, PiS tweets invoking CTs gained less engagement when compared to non-conspiratorial PiS tweets. After the start of the war, however, relative engagement with PiS officials’ CT rhetoric somewhat increases.

Figure 5 Difference in the predicted number of likes or retweets based on content invoked, presence of PiS Officials, and before or after Russia’s full-scale invasion of Ukraine

Notes: Each panel represents a different conspiracy theory. The Y-axis is CT measure (retweets or likes). The X-axis shows the difference in predicted outcome. The black (grey) lines reflect those tweets from after (before) February 24, 2022. The triangles (dots) reflect those tweets (not) including PiS officials. We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.

Comparable but smaller shifts occur for non-PiS officials. For PO and Tusk CTs, users earn marginally more engagement after February 2022. However, ordinary people receive fewer likes and retweets for invoking collusion CTs after the war relative to before. Our findings offer tentative evidence that for some frames, elite and public CT discourse align more after focusing events. Surprisingly, we note that those CTs with the largest uptick in social media engagement following a focusing event lack explicit ties to the event. Domestic threat CTs gain popularity after the war, while CTs of foreign threat, arguably more directly related to the war, maintain greater engagement.

This finding offers mixed support for H3. Focusing events have heterogeneous effects across subgroups. While elite engagement with CTs is rewarded more than CTs shared by the general public after a focusing event than before, the strength of this effect is not uniform. Further work should tease out whether focusing events have a uniform effect on CT resurgence, or whether the specific CT moderates these effects.

Robustness Checks

We run three robustness checks: 1) checking the assumptions underlying our difference-in-differences models, 2) verifying bots do not drive our analysis, and 3) assessing model dependence.

First, identification for our focusing events findings rests on the assumption that the treatment (CT) and control (non-CT) tweets follow parallel trends before the war. We demonstrate these trends graphically in online appendix F, figures A10 through A12. We also randomly select 100 placebo “start dates” of the war from among all pre-treatment (pre-war) tweets, re-run our difference-in-differences models with each placebo date, and find no effect. These placebo tests lend empirical credibility to the parallel trends assumption (Cunningham Reference Cunningham2021; Huntington-Klein Reference Huntington-Klein2021). The results can be found in tables A22 through A27 in online appendix F. This bolsters our confidence that the war increases Smoleńsk CT engagement.
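The placebo reassignment step can be sketched as follows; dates are encoded as toy integer day offsets, function names are illustrative, and the model re-fitting step is left out.

```python
import random

def placebo_war_dummies(pre_war_dates, n_placebos=100, seed=7):
    """Draw a fake 'start of war' from the pre-treatment period and
    recode each pre-war tweet's War dummy against it. Each placebo run
    is then re-fit with the difference-in-differences model; a null
    placebo interaction supports the parallel trends assumption."""
    rng = random.Random(seed)
    runs = []
    for _ in range(n_placebos):
        cutoff = rng.choice(pre_war_dates)          # fake invasion date
        runs.append([int(d >= cutoff) for d in pre_war_dates])
    return runs

# toy pre-war period: 180 days of tweets, encoded as day offsets
runs = placebo_war_dummies(list(range(180)), n_placebos=100)
```

If the true effect is driven by the invasion rather than a pre-existing trend, the interaction estimates across these placebo runs should cluster around zero.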

Further, difference-in-differences models assume that the treatment only affects those within the treatment group. The war served as a reminder that Russia will use deadly means to target other countries politically. This is a central tenet of the Smoleńsk CTs. Accordingly, our identifying assumption expects tweets invoking Smoleńsk CTs to be impacted by the start of the war. Recall that our control group is all tweets that engage with the crash, but not with a CT. Over 4,000 such tweets exist in our dataset, ranging from announcements of road closures for crash memorials to masses honoring those who died in the crash. While the war impacts a wide array of attitudes, we do not expect the start of the war to affect the number of likes and retweets of tweets that do not engage with a CT.

Second, our study focuses on the activity of social media users, not bots. If bots drove our results, those results would not correspond to real users’ behavior. While both bots and foreign government-sponsored accounts like “Russia Today” can promote disinformation on social media (Elshehawy et al. Reference Elshehawy, Gavras, Marinov, Nanni and Schoen2022), we did not find any definitive evidence of foreign government-sponsored accounts in our dataset when we hand-coded each individual tweet for content and user type.

We also leverage cutting-edge methods that detect bots (Boichak et al. Reference Boichak, Hemsley, Jackson, Tromble and Tanupabrungsun2021; Haunschild et al. Reference Haunschild, Bornmann, Potnis and Tahamtan2021; Pozzana and Ferrara Reference Pozzana and Ferrara2020). We identified potential bots using the Tweetbotornot supervised classifier (Kearney Reference Kearney2018). We then excluded all potential bot accounts and re-ran all analyses. Our main findings remain consistent. Tables A14 to A21 in online appendix E discuss these procedures and results.
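The exclusion step amounts to filtering on the classifier’s per-account scores. The sketch below is purely hypothetical: we ran the actual classification with the Tweetbotornot package in R, so the mapping, field names, and threshold here are illustrative stand-ins.

```python
def drop_probable_bots(tweets, bot_probability, threshold=0.5):
    """Keep only tweets from accounts whose predicted bot probability
    falls below the threshold. `bot_probability` stands in for the
    classifier's per-account scores (hypothetical values)."""
    return [t for t in tweets
            if bot_probability.get(t["account"], 0.0) < threshold]

tweets = [{"account": "a", "text": "..."}, {"account": "b", "text": "..."}]
kept = drop_probable_bots(tweets, bot_probability={"a": 0.93, "b": 0.12})
```

Re-running every model on the filtered corpus then checks whether suspected automated accounts drive the estimates.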

Finally, we re-evaluate all analyses with Poisson models. Our results remain robust. We report these results in online appendix H, tables A28 through A35.

Of course, we cannot attribute causality to all our findings. Still, in theorizing and offering micro-foundational evidence regarding online CT resurgence, this work opens the door to future work on the political implications of CT popularity. Likewise, we acknowledge that our corpus reflects those tweets engaging with Smoleńsk on Twitter. Though we contend that varying tints of CT discourse and non-CT discourse on the same topic are explicit counterfactuals, our results cannot speak to how CTs persist compared to all online discourse.

Conclusion

Political disinformation may not wane naturally. Instead, conspiracy theories promoting political disinformation can spread rampantly over time, powerfully shaping online political narratives. The political relevance of CTs partly stems from their durability and wide social penetration.

Our findings provide insight into the factors underpinning online popularity and engagement with CTs. We show that CT frames that tap into salient group divides, such as perceived partisan or foreign enemies, remain important on social media. Drawing on the social movements literature and leveraging temporal variation in our data, we show how opinion leaders and the public can activate CT popularity by leveraging threat frames relevant to current events. Notably, we may not expect elite endorsement to move the needle on CT proliferation outside of a focusing event.

We find that the credibility of foreign and domestic threat frames strengthens online CT popularity. This credibility could be reinforced where “true political conspiracies” exist alongside CTs lacking an empirical basis (Marinov and Popova Reference Marinov and Popova2022). Poland is a democracy—albeit one that experienced backsliding. How would our findings vary in weakly democratic or autocratic contexts, where one party controls public discourse (Radnitz Reference Radnitz2021, 187) through censorship (Roberts Reference Roberts2018) or positive propaganda (Guriev and Treisman Reference Guriev and Treisman2019)? Incentives to conceal extrajudicial violence (Fariss, Kenwick, and Reuning Reference Fariss, Kenwick and Reuning2020) may further blur the distinction between verifiable conspiracies and conspiracy theories, impacting how unproven ideas propagate. Our findings raise new questions about how different regimes’ control over political information shapes the credibility of true conspiracies, CT frames, and a CT’s online lifespan.

We also connect the literature on CTs with that on historical memory and politics. Historical memories of Russian aggression and contemporary Russian war crimes against Ukrainians (Hinnant and Keaten Reference Hinnant and Keaten2023) help propagate several CTs across Central and Eastern Europe, which lends credibility to those CTs claiming PO-Russia collusion caused the Smoleńsk crash. We show that political elites who post CTs blaming PO for betraying Poland and conspiring with Russia gained greater Twitter engagement after Russia’s February 2022 full-scale invasion of Ukraine. This effect holds among ordinary users. Our findings have implications for research on how CT entrepreneurs can leverage historical narratives and focusing events to spread disinformation.

Finally, the CTs we analyze have shaped Polish politics for twelve years, yielding regular online and offline engagement. By tracing when a popular CT’s profile rises, we start to identify the conditions under which online CT engagement proliferates. This article’s effort to identify the timing of CT popularity is a first step toward pinpointing when and how online disinformation translates into offline political engagement. Our findings raise several additional avenues for future research. Do newer CTs spread via the same mechanisms as older ones? Do discussions of CTs differ online and offline? Our research lays the foundation for these and other consequential questions on conspiracy theory proliferation and popularity.

Acknowledgements

The authors wish to thank Roman Hlatky, Will Hobbs, Liesbet Hooghe, Gary Marks, Monika Nalepa, Tom Pepinsky, Tsveta Petrova, Bryn Rosenfeld, Graeme Robertson, Ken Roberts, Milada Vachudova, the UNC-Chapel Hill Authoritarian Politics Lab, and participants at the 2023 UNC-Chapel Hill Eastern Europe and European Union workshop, as well as the Perspectives on Politics editors and four anonymous reviewers for excellent feedback on earlier drafts. A previous version was presented for audiences at the 2022 annual meetings of APSA, ASEEES, and CES. Both authors contributed equally and have no financial or non-financial conflicts of interest or funding support to report.

Supplementary Material

To view supplementary material for this article, please visit http://doi.org/10.1017/S1537592723003006.

Footnotes

A list of permanent links to Supplemental Materials provided by the authors precedes the References section.

*

Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/BXYZL9

1 See exceptions in the communications field, focusing on COVID-19 and natural disaster-related disinformation: Kant et al. Reference Kant, Wiebelt, Weisser, Kis-Katos and Luber2022; King and Wang Reference King and Wang2021; Nanath and Joy Reference Nanath and Joy2021; Yu et al. Reference Yu, Gil-López, Shen and Wojcieszak2022.

2 Twitter engagement occurs with likes and retweets. Likes represent a tweet’s “popularity” (Zhang et al. Reference Zhang, Wang, Molu Shi and Wang2021). Retweets, however, can express agreement, disagreement, or an agnostic desire to share information. We cannot be sure. Rather than speculate on users’ intent, we consider retweets a proxy for the online visibility or “virality” of a CT (Pancer and Poole Reference Pancer and Poole2016). Retweets directly spread CTs to new audiences. Popularity may also breed virality, as Twitter shows in-timeline tweets that friends liked. Though likes may be a more explicit “social endorsement” (Tucker et al. Reference Tucker, Guess, Barberá, Vaccari, Siegel, Sanovich, Stukal and Nyhan2018), likes and retweets both reflect tweet engagement and spread.

3 Online appendix G provides details about the official investigative reports.

4 Not all CTs invoking foreign threat will spread widely. Only when CTs can portray foreign actors as credibly threatening harm to the in-group (Uscinski and Parent Reference Uscinski and Parent2014) do we expect CTs to spread more. For example, if the plane crash occurred in New Zealand rather than Russia, CTs blaming PO-New Zealand collusion may not emerge or gain purchase because New Zealand has not historically behaved aggressively towards Poland. Only if international tensions sparked between New Zealand and Poland, and a credible foreign threat emerged, would our theory anticipate such a New Zealand incident to generate popular collusion CTs in Poland. By contrast, following Russian partitions and war with the Soviets, the threat of Russian collusion with internal Polish actors invoked by the Smoleńsk CTs may appear more credible.

5 We do not anticipate PiS tweets to generate engagement from non-PiS partisans. We expect PiS politicians to garner more engagement than non-PiS elite tweets, given party elites’ ability to encourage co-partisans to believe in CTs (Miller, Saunders, and Farhart Reference Miller, Saunders and Farhart2016; Pasek et al. Reference Pasek, Stark, Krosnick and Tompson2015). However, our Twitter data do not allow us to observe users’ partisanship.

6 All results are robust to using the Poisson specification. Refer to online appendix H.

7 Thus, the equation for μi becomes:

$$ \begin{align*} \log \left({\mu}_i\right) ={}& {\beta}_0 + {\beta}_1 CT_{i,t} + {\beta}_2 PiSOfficial_{i,t}\\ &+ {\beta}_3 CT_{i,t} \ast PiSOfficial_{i,t} + \mathbf{x}_{i,t}^{\prime}\boldsymbol{\beta} \end{align*} $$

PiS elites may naturally have more engagement due to their stature, so we control for a user’s logged follower and friend count, which compensates for PiS leaders’ larger baseline networks. Online appendix B, figure 8, shows distributions of logged follower counts for PiS elites and non-PiS elites.
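As a sketch, the linear predictor of this specification can be evaluated directly. The function and all coefficient values below are hypothetical illustrations of the model’s mechanics, not the fitted estimates, and the single logged follower count stands in for the full control vector:

```python
import math

def expected_engagement(ct, pis_official, log_followers, b):
    """mu_i = exp(linear predictor) under the negative binomial model.

    ct and pis_official are 0/1 indicators; log_followers stands in for
    the control vector x'_{i,t}. All coefficients in `b` are hypothetical.
    """
    eta = (b["b0"]
           + b["b1"] * ct
           + b["b2"] * pis_official
           + b["b3"] * ct * pis_official      # the interaction term
           + b["b_followers"] * log_followers)
    return math.exp(eta)

# Hypothetical coefficients chosen only to illustrate the calculation:
b = {"b0": 0.5, "b1": 0.8, "b2": 0.3, "b3": -0.4, "b_followers": 0.6}

mu_nonofficial = expected_engagement(ct=1, pis_official=0, log_followers=7.0, b=b)
mu_official = expected_engagement(ct=1, pis_official=1, log_followers=7.0, b=b)
```

With a negative interaction coefficient, a CT tweet from a PiS official yields a smaller expected engagement count than the same tweet from a non-official, mirroring the pattern reported in the text; the signs here are placeholders, not estimates.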

8 The confidence intervals are substantially larger for CTs blaming Tusk for the explosion than for those blaming PO. Fewer CTs mention Tusk, making these estimates less precise.

9 Thus, the equation for μi becomes:

$$ \begin{align*} \log \left({\mu}_i\right) ={}& {\beta}_0 + {\beta}_1 CT_{i,t} + {\beta}_2 War_{i,t} + {\beta}_3 PiSOfficial_{i,t}\\ &+ {\beta}_4 CT_{i,t} \ast War_{i,t} + {\beta}_5 CT_{i,t} \ast PiSOfficial_{i,t}\\ &+ {\beta}_6 War_{i,t} \ast PiSOfficial_{i,t}\\ &+ {\beta}_7 CT_{i,t} \ast War_{i,t} \ast PiSOfficial_{i,t} + \mathbf{x}_{i,t}^{\prime}\boldsymbol{\beta} \end{align*} $$
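The observed-case Monte Carlo procedure used for the confidence intervals in the figures can be sketched as follows. The coefficients, covariance matrix, and design matrix below are hypothetical placeholders for the fitted model, with a single CT indicator and one control standing in for the full specification:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "fitted" quantities: beta_hat = (intercept, CT, log followers)
beta_hat = np.array([0.5, 0.8, 0.6])
vcov = np.diag([0.01, 0.04, 0.001])   # stand-in coefficient covariance

# Observed design matrix: columns are [1, CT_{i,t}, logged follower count]
X = np.array([[1.0, 0.0, 6.0],
              [1.0, 1.0, 7.5],
              [1.0, 0.0, 5.0],
              [1.0, 1.0, 8.0]])

# 1. Simulate coefficient vectors from their estimated sampling distribution.
draws = rng.multivariate_normal(beta_hat, vcov, size=5000)

# 2. Toggle the CT indicator on and off, holding every other covariate
#    at its observed value (the observed-case approach).
X_on, X_off = X.copy(), X.copy()
X_on[:, 1], X_off[:, 1] = 1.0, 0.0

# 3. For each draw, average the predicted counts over the observed cases,
#    then take the difference in predicted engagement.
diff = np.exp(draws @ X_on.T).mean(axis=1) - np.exp(draws @ X_off.T).mean(axis=1)

point = diff.mean()
ci_low, ci_high = np.percentile(diff, [2.5, 97.5])
```

The same logic extends to the triple-interaction specification: the CT, War, and PiSOfficial columns are toggled jointly to obtain each panel’s predicted difference.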

References

Alexandrova, Petya. 2015. “Upsetting the Agenda: The Clout of External Focusing Events in the European Council.” Journal of Public Policy 35(3): 505–30.
Anspach, Nicholas M., and Carlson, Taylor N. 2020. “What to Believe? Social Media Commentary and Belief in Misinformation.” Political Behavior 42(3): 697–718.
Atkinson, Matthew, DeWitt, Darin, and Uscinski, Joseph E. 2017. “Conspiracy Theories in the 2016 Election.” In Conventional Wisdom, Parties, and Broken Barriers in the 2016 Election, ed. Lucas, Jennifer C., Galdieri, Christopher J., and Sisco, Tauna Starbuck, 163. Washington, DC: Rowman & Littlefield.
Aupers, Stef. 2012. “‘Trust No One’: Modernization, Paranoia and Conspiracy Culture.” European Journal of Communication 27(1): 22–34.
Bangerter, Adrian, Wagner-Egger, Pascal, and Delouvée, Sylvain. 2020. “How Conspiracy Theories Spread.” In Routledge Handbook of Conspiracy Theories, ed. Butter, Michael and Knight, Peter. Abingdon: Routledge.
Barberá, Pablo, and Steinert-Threlkeld, Zachary C. 2020. “How to Use Social Media Data for Political Science Research.” In The SAGE Handbook of Research Methods in Political Science and International Relations, ed. Curini, Luigi and Franzese, Robert, 404–23. Thousand Oaks, CA: SAGE Publications.
Baum, Matthew. 2002. “The Constituent Foundations of the Rally-Round-the-Flag Phenomenon.” International Studies Quarterly 46(2): 263–98.
Benford, Robert D., and Snow, David A. 2000. “Framing Processes and Social Movements: An Overview and Assessment.” Annual Review of Sociology 26: 611–39.
Bergmann, Eirikur, and Butter, Michael. 2020. “Conspiracy Theory and Populism.” In Routledge Handbook of Conspiracy Theories, ed. Butter, Michael and Knight, Peter. Abingdon: Routledge.
Bernhard, Michael, and Kubik, Jan. 2014. “Roundtable Discord.” In Twenty Years after Communism: The Politics of Memory and Commemoration, ed. Bernhard, Michael and Kubik, Jan, 60. New York: Oxford University Press.
Bilewicz, Michal. 2022. “Conspiracy Beliefs as an Adaptation to Historical Trauma.” Current Opinion in Psychology: 101359. https://doi.org/10.1016/j.copsyc.2022.101359
Bilewicz, Michal, Witkowska, Marta, Pantazi, Myrto, Gkinopoulos, Theofilos, and Klein, Olivier. 2019. “Traumatic Rift: How Conspiracy Beliefs Undermine Cohesion After Societal Trauma?” Europe’s Journal of Psychology 15(1): 82.
Birkland, Thomas A. 1998. “Focusing Events, Mobilization, and Agenda Setting.” Journal of Public Policy 18(1): 53–74.
Blackington, Courtney. 2021. “Partisanship and Plane Crashes: Can Partisanship Drive Conspiratorial Beliefs?” East European Politics 38(2): 254–80.
Blackington, Courtney, and Cayton, Frances. 2024. “Replication Data for: How to Stay Popular: Threat, Framing, and Conspiracy Theory Longevity.” Harvard Dataverse. https://doi.org/10.7910/DVN/BXYZL9
Boichak, Olga, Hemsley, Jeff, Jackson, Sam, Tromble, Rebekah, and Tanupabrungsun, Sikana. 2021. “Not the Bots You Are Looking For: Patterns and Effects of Orchestrated Interventions in the US and German Elections.” International Journal of Communication 15: 814–39.
Brader, Ted, and Marcus, George E. 2013. Emotion and Political Psychology. New York: Oxford University Press.
Brady, William J., Crockett, Molly J., and Van Bavel, Jay J. 2020. “The MAD Model of Moral Contagion: The Role of Motivation, Attention, and Design in the Spread of Moralized Content Online.” Perspectives on Psychological Science 15(4): 978–1010.
Centrum Badania Opinii Społecznej (CBOS). 2020. 10 rocznica katastrofy pod Smoleńskiem (tech. rep. no. 48). Warsaw: Centrum Badania Opinii Społecznej.
Chong, Dennis, and Druckman, James. 2007. “A Theory of Framing and Opinion Formation in Competitive Elite Environments.” Journal of Communication 57(1): 99–118.
Cichocka, Aleksandra, Marchlewska, Marta, Golec de Zavala, Agnieszka, and Olechowski, Mateusz. 2016. “‘They Will Not Control Us’: Ingroup Positivity and Belief in Intergroup Conspiracies.” British Journal of Psychology 107(3): 556–76.
Cinar, Ipek, and Nalepa, Monika. 2022. “Mass or Elite Polarization as the Driver of Authoritarian Backsliding? Evidence from 14 Polish Surveys (2005–2021).” Journal of Political Institutions and Political Economy 3(3–4): 433–48.
Colaresi, Michael. 2007. “The Benefit of the Doubt: Testing an Informational Theory of the Rally Effect.” International Organization 61(1): 99–143.
Cunningham, Scott. 2021. Causal Inference: The Mixtape. New Haven, CT: Yale University Press.
Davies, Christian. 2016. “The Conspiracy Theorists Who Have Taken over Poland.” The Guardian, 16.
Del Vicario, Michela, Vivaldo, Gianna, Bessi, Alessandro, Zollo, Fabiana, Scala, Antonio, Caldarelli, Guido, and Quattrociocchi, Walter. 2016. “Echo Chambers: Emotional Contagion and Group Polarization on Facebook.” Scientific Reports 6: 37825. https://doi.org/10.1038/srep37825
DeWitt, Darin, Atkinson, Matthew, and Wegner, Drew. 2018. “How Conspiracy Theories Spread.” In Conspiracy Theories and the People Who Believe Them, ed. Uscinski, Joseph E., 319–36. New York: Oxford University Press.
Dobrosz-Oracz, Justyna. 2023. “Na co poszło 600 tysięcy? Macierewicz o raporcie NIK.” Gazeta Wyborcza (Warsaw).
Douglas, Karen M., Uscinski, Joseph E., Sutton, Robbie M., Cichocka, Aleksandra, Nefes, Turkay, Ang, Chee Siang, and Deravi, Farzin. 2019. “Understanding Conspiracy Theories.” Political Psychology 40(S1): 3–35.
Drzewiecka, Jolanta A., and Hasian, Marouf. 2018. “Discourses of the Wound and Desire for the Other: Remembrances of the Katyń Massacre and the Smoleńsk Crash.” Review of Communication 18(3): 231–48.
Einstein, Katherine Levine, and Glick, David M. 2015. “Do I Think BLS Data Are BS? The Consequences of Conspiracy Theories.” Political Behavior 37(3): 679–701.
Ellis, Emma Grey. 2018. “The WIRED Guide to Online Conspiracy Theories.” Wired (https://www.wired.com/story/wired-guide-to-conspiracy-theories/).
Elshehawy, Ashrakat, Gavras, Konstantin, Marinov, Nikolay, Nanni, Federico, and Schoen, Harald. 2022. “Illiberal Communication and Election Intervention during the Refugee Crisis in Germany.” Perspectives on Politics 20(3): 860–78.
Enders, Adam M., and Smallpage, Steven M. 2019. “Informational Cues, Partisan-Motivated Reasoning, and the Manipulation of Conspiracy Beliefs.” Political Communication 36(1): 83–102.
Enders, Adam M., Uscinski, Joseph E., Seelig, Michelle, Klofstad, Casey, Wuchty, Stefan, Funchion, John, Murthi, Manohar, Premaratne, Kamal, and Stoler, Justin. 2021. “The Relationship between Social Media Use and Beliefs in Conspiracy Theories and Misinformation.” Political Behavior 45: 781–804.
Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43(4): 51–58.
Enyedi, Zsolt, and Whitefield, Stephen. 2020. “Populists in Power: Populism and Representation in Illiberal Democracies.” In The Oxford Handbook of Political Representation in Liberal Democracies, ed. Rohrschneider, R. and Thomassen, J. New York: Oxford University Press.
Fariss, Christopher J., Kenwick, Michael R., and Reuning, Kevin. 2020. “Estimating One-Sided-Killings from a Robust Measurement Model of Human Rights.” Journal of Peace Research 57(6): 801–14.
Filindra, Alexandra, and Harbridge-Yong, Laurel. 2022. “How Do Partisans Navigate Intra-Group Conflict? A Theory of Leadership-Driven Motivated Reasoning.” Political Behavior. https://doi.org/10.2139/ssrn.3570984
Fong, Amos, Roozenbeek, Jon, Goldwert, Danielle, Rathje, Steven, and van der Linden, Sander. 2021. “The Language of Conspiracy.” Group Processes & Intergroup Relations 24(4): 606–23.
Fredheim, Rolf. 2014. “The Memory of Katyn in Polish Political Discourse: A Quantitative Study.” Europe-Asia Studies 66(7): 1165–87.
Fritsche, Immo. 2022. “Agency through the We: Group-Based Control Theory.” Current Directions in Psychological Science 31(2): 194–201.
Garrett, R. Kelly, and Bond, Robert M. 2021. “Conservatives’ Susceptibility to Political Misperceptions.” Science Advances 7(23): eabf1234. https://doi.org/10.1126/sciadv.abf1234
Goldstone, Jack A., and Tilly, Charles. 2001. “Threat (and Opportunity): Popular Action and State Response in the Dynamics of Contentious Action.” In Silence and Voice in the Study of Contentious Politics. New York: Cambridge University Press.
Google Trends. 2022. “Princess Diana conspiracy theory.” Retrieved October 2, 2022 (https://trends.google.com/trends/explore?date=today%205-y&q=princess%20diana%20conspiracy%20theory).
Goren, Paul, Federico, Christopher M., and Kittilson, Miki Caul. 2009. “Source Cues, Partisan Identities, and Political Value Expression.” American Journal of Political Science 53(4): 805–20.
Gottfried, Jeffrey, and Shearer, Elisa. 2017. “Americans’ Online News Use Is Closing In on TV News Use.” Pew Research Center, September 7 (https://www.pewresearch.org/journalism/2017/09/07/news-use-across-social-media-platforms-2017/).
Grossman, Emiliano, and Mayer, Nonna. 2022. “A New Form of Anti-Government Resentment? Making Sense of Mass Support for the Yellow-Vest Movement in France.” Journal of Elections, Public Opinion and Parties 33(4): 746–68.
Guess, Andrew M. 2021. “(Almost) Everything in Moderation: New Evidence on Americans’ Online Media Diets.” American Journal of Political Science 65(4): 1007–22.
Guriev, Sergei, and Treisman, Daniel. 2019. “Informational Autocrats.” Journal of Economic Perspectives 33(4): 100–27.
Harambam, Jaron. 2020. “Conspiracy Theory Entrepreneurs, Movements and Individuals.” In Routledge Handbook of Conspiracy Theories, ed. Butter, Michael and Knight, Peter, 278–91. Abingdon: Routledge.
Hartman, Todd K., and Newmark, Adam J. 2012. “Motivated Reasoning, Political Sophistication, and Associations between President Obama and Islam.” PS: Political Science & Politics 45(3): 449–55.
Haunschild, Robin, Bornmann, Lutz, Potnis, Devendra, and Tahamtan, Iman. 2021. “Investigating Dissemination of Scientific Information on Twitter: A Study of Topic Networks in Opioid Publications.” Quantitative Science Studies 2(4): 1486–510.
Hawkins, Kirk A., Carlin, Ryan E., Littvay, Levente, and Kaltwasser, Cristóbal Rovira. 2018. The Ideational Approach to Populism: Concept, Theory, and Analysis. Abingdon: Routledge.
Hinnant, Lori, and Keaten, Jamey. 2023. “UN-Backed Inquiry Accuses Russia of War Crimes in Ukraine.” Associated Press, April 11.
Hobbs, William, Lajevardi, Nazita, Li, Xinyi, and Lucas, Caleb. 2023. “From Anti-Muslim to Anti-Jewish: Target Substitution on Fringe Social Media Platforms and the Persistence of Online and Offline Hate.” Political Behavior. https://doi.org/10.1007/s11109-023-09892-9
Huntington-Klein, Nick. 2021. The Effect: An Introduction to Research Design and Causality. Boca Raton, FL: Chapman and Hall/CRC Press.
Imai, Kosuke, and Kim, In Song. 2019. “When Should We Use Unit Fixed Effects Regression Models for Causal Inference with Longitudinal Data?” American Journal of Political Science 63(2): 467–90.
Jolley, Daniel, Douglas, Karen M., Marchlewska, Marta, Cichocka, Aleksandra, and Sutton, Robbie M. 2022. “Examining the Links between Conspiracy Beliefs and the EU ‘Brexit’ Referendum Vote in the UK: Evidence from a Two-Wave Survey.” Journal of Applied Social Psychology 52(1): 30–36.
Kant, Gillian, Wiebelt, Levin, Weisser, Christoph, Kis-Katos, Krisztina, Luber, Mattias, and Säfken, Benjamin. 2022. “An Iterative Topic Model Filtering Framework for Short and Noisy User-Generated Data: Analyzing Conspiracy Theories on Twitter.” International Journal of Data Science and Analytics. https://doi.org/10.1007/s41060-022-00321-4
Kauk, Julian, Kreysa, Helene, and Schweinberger, Stefan R. 2021. “Understanding and Countering the Spread of Conspiracy Theories in Social Networks: Evidence from Epidemiological Models of Twitter Data.” PLoS One 16(8): e0256179.
Kay, Aaron C., Whitson, Jennifer A., Gaucher, Danielle, and Galinsky, Adam D. 2009. “Compensatory Control: Achieving Order through the Mind, Our Institutions, and the Heavens.” Current Directions in Psychological Science 18(5): 264–68.
Kearney, Michael W. 2018. “Tweetbotornot: Detecting Twitter Bots” (https://github.com/mkearney/tweetbotornot2).
Khalitova, Ludmila, Myslik, Barbara, Turska-Kawa, Agnieszka, Tarasevich, Sofia, and Kiousis, Spiro. 2020. “He Who Pays the Piper, Calls the Tune? Examining Russia’s and Poland’s Public Diplomacy Efforts to Shape the International Coverage of the Smolensk Crash.” Public Relations Review 46(2): 101858.
Kim, Yongkwang. 2022. “How Conspiracy Theories Can Stimulate Political Engagement.” Journal of Elections, Public Opinion and Parties 32(1): 1–21.
King, Kelvin K., and Wang, Bin. 2021. “Diffusion of Real versus Misinformation during a Crisis Event: A Big Data-Driven Approach.” International Journal of Information Management 71. https://doi.org/10.1016/j.ijinfomgt.2021.102390
Klašnja, Marko, Barberá, Pablo, Beauchamp, Nick, Nagler, Jonathan, and Tucker, Joshua. 2017. “Measuring Public Opinion with Social Media Data.” In The Oxford Handbook of Polling and Survey Methods, ed. Atkeson, Lonna Rae and Alvarez, R. Michael, 555–82. New York: Oxford University Press.
Krasodomski-Jones, Alex. 2019. “Suspicious Minds: Conspiracy Theories in the Age of Populism.” Policy Brief, February 11. Brussels: Wilfried Martens Centre for European Studies.
Kublik, Agnieszka, and Wójcik, Rafal. 2022. “Smoleński raport Macierewicza ze zdjęciem okaleczonego ciała Lecha Kaczyńskiego.” Gazeta Wyborcza (Warsaw).
Kwak, Haewoon, Lee, Changhyun, Park, Hosung, and Moon, Sue. 2010. “What Is Twitter, a Social Network or a News Media?” In WWW ’10: Proceedings of the 19th International Conference on World Wide Web, 591–600. Raleigh, NC, April 26–30.
Larson, Jennifer M., Nagler, Jonathan, Ronen, Jonathan, and Tucker, Joshua A. 2019. “Social Networks and Protest Participation: Evidence from 130 Million Twitter Users.” American Journal of Political Science 63(3): 690–705.
Lindsay, James M., and Shortridge, Anna. 2021. “Seven Resources Debunking 9/11 Conspiracy Theories.” Council on Foreign Relations, September 11 (https://www.cfr.org/blog/seven-resources-debunking-911-conspiracy-theories).
Mancosu, Moreno, and Vassallo, Salvatore. 2022. “The Life Cycle of Conspiracy Theories: Evidence from a Long-Term Panel Survey on Conspiracy Beliefs in Italy.” Italian Political Science Review 52(1): 1–17. https://doi.org/10.1017/ipo.2021.57
Marinov, Nikolay, and Popova, Maria. 2022. “Will the Real Conspiracy Please Stand Up: Sources of Post-Communist Democratic Failure.” Perspectives on Politics 20(1): 222–36.
McAdam, Doug. 1996. “Conceptual Origins, Current Problems, Future Direction.” In Comparative Perspectives on Social Movements: Political Opportunities, Mobilizing Structures, and Cultural Framings, ed. McAdam, D., McCarthy, J. D., and Zald, M. N., 23–40. New York: Cambridge University Press.
Miller, Joanne M., Saunders, Kyle L., and Farhart, Christina E. 2016. “Conspiracy Endorsement as Motivated Reasoning: The Moderating Roles of Political Knowledge and Trust.” American Journal of Political Science 60(4): 824–44.
Muirhead, Russell, and Rosenblum, Nancy L. 2019. A Lot of People Are Saying: The New Conspiracism and the Assault on Democracy. Princeton, NJ: Princeton University Press.
Myslik, Barbara, Khalitova, Ludmila, Zhang, Tianduo, Tarasevich, Sophia, Kiousis, Spiro, Mohr, Tiffany, Kim, Ji Young, Turska-Kawa, Agnieszka, Carroll, Craig, and Golan, Guy. 2021. “Two Tales of One Crash: Intergovernmental Media Relations and Agenda Building during the Smolensk Airplane Crash.” International Communication Gazette 83(2): 169–92.
Nanath, Krishnadas, and Joy, Geethu. 2021. “Leveraging Twitter Data to Analyze the Virality of COVID-19 Tweets: A Text Mining Approach.” Behaviour & Information Technology 42(2): 196–214.
Niżyńska, Joanna. 2010. “The Politics of Mourning and the Crisis of Poland’s Symbolic Language after April 10.” East European Politics and Societies 24(4): 467–79.
Oliver, J. Eric, and Wood, Thomas J. 2014. “Conspiracy Theories and the Paranoid Style(s) of Mass Opinion.” American Journal of Political Science 58(4): 952–66.
Olson, Mancur. 1971. The Logic of Collective Action: Public Goods and the Theory of Groups, with a New Preface and Appendix. Cambridge, MA: Harvard University Press.
Osmundsen, Mathias, Bor, Alexander, Vahlstrup, Peter Bjerregaard, Bechmann, Anja, and Petersen, Michael Bang. 2021. “Partisan Polarization Is the Primary Psychological Motivation behind Political Fake News Sharing on Twitter.” American Political Science Review 115(3): 999–1015.
Pancer, Ethan, and Poole, Maxwell. 2016. “The Popularity and Virality of Political Social Media: Hashtags, Mentions, and Links Predict Likes and Retweets of 2016 U.S. Presidential Nominees’ Tweets.” Social Influence 11(4): 259–70.
Pantazi, Myrto, Papaioannou, Kostas, and van Prooijen, Jan-Willem. 2022. “Power to the People: The Hidden Link between Support for Direct Democracy and Belief in Conspiracy Theories.” Political Psychology 43(3): 529–48.
Polska Agencja Prasowa (PAP). 2023. “Sondaż. Ilu Polaków wierzy, że w Smoleńsku był zamach?” Polska Agencja Prasowa, April 21.
Pasek, Josh, Stark, Tobias H., Krosnick, Jon A., and Tompson, Trevor. 2015. “What Motivates a Conspiracy Theory? Birther Beliefs, Partisanship, Liberal-Conservative Ideology, and Anti-Black Attitudes.” Electoral Studies 40: 482–89.
Pirro, Andrea L., and Taggart, Paul. 2022. “Populists in Power and Conspiracy Theories.” Party Politics 29(3). https://doi.org/10.1177/13540688221077071
Plenta, Peter. 2020. “Conspiracy Theories as a Political Instrument: Utilization of Anti-Soros Narratives in Central Europe.” Contemporary Politics 26(5): 512–30.
Pozzana, Iacopo, and Ferrara, Emilio. 2020. “Measuring Bot and Human Behavioral Dynamics.” Frontiers in Physics 8. https://doi.org/10.3389/fphy.2020.00125
Ptak, Alicja. 2022. “Government Committee Investigating Smolensk Crash Ignored Findings of US Lab, Report Finds.” Notes from Poland, September 13 (https://notesfrompoland.com/category/news/).
Radnitz, Scott. 2021. Revealing Schemes: The Politics of Conspiracy in Russia and the Post-Soviet Region. New York: Oxford University Press.
Reny, Tyler T., and Newman, Benjamin J. 2021. “The Opinion-Mobilizing Effect of Social Protest against Police Violence: Evidence from the 2020 George Floyd Protests.” American Political Science Review 115(4): 1499–507.
Reuters. 2015. “Polish Radio Says Crew Distracted Before 2010 Smolensk Plane Crash.” April 7 (https://www.voanews.com/a/polish-radio-says-crew-distracted-before-2010-smolensk-plane-crash/2709652.html).
Roberts, Margaret. 2018. Censored: Distraction and Diversion Inside China’s Great Firewall. Princeton, NJ: Princeton University Press.
Romer, Daniel, and Jamieson, Kathleen Hall. 2020. “Conspiracy Theories as Barriers to Controlling the Spread of COVID-19 in the US.” Social Science & Medicine: 113356. https://doi.org/10.1016/j.socscimed.2020.113356
Sapountzis, Antonis, and Condor, Susan. 2013. “Conspiracy Accounts as Intergroup Theories: Challenging Dominant Understandings of Social Power and Political Legitimacy.” Political Psychology 34(5): 731–52.
Shahi, Gautam Kishore, Dirkson, Anne, and Majchrzak, Tim A. 2021. “An Exploratory Study of COVID-19 Misinformation on Twitter.” Online Social Networks and Media 22. https://doi.org/10.1016/j.osnem.2020.100104
Smallpage, Steven M., Enders, Adam M., and Uscinski, Joseph E. 2017. “The Partisan Contours of Conspiracy Theory Beliefs.” Research & Politics 4(4). https://doi.org/10.1177/2053168017746554
Snyder, Timothy. 2022. “The War in Ukraine Is a Colonial War.” The New Yorker, April 28.
Soroka, George. 2022. “Recalling Katyń: Poland, Russia, and the Interstate Politics of History.” East European Politics and Societies: and Cultures 36(1): 328–55.
Stanley, Ben, and Cześnik, Mikolaj. 2019. “Populism in Poland.” In Populism around the World, ed. Mudde, Cas and Kaltwasser, Cristóbal Rovira, 67–87. New York: Oxford University Press.
Steinert-Threlkeld, Zachary C. 2018. Twitter as Data. New York: Cambridge University Press.
Stojanov, Ana, and Halberstadt, Jamin. 2020. “Does Lack of Control Lead to Conspiracy Beliefs? A Meta-Analysis.” European Journal of Social Psychology 50(5): 955–68.
Suh, Chan. 2021. “In the Smoke of the People’s Opium: The Influence of Religious Beliefs and Activities on Protest Participation.” International Sociology 36(3): 378–97.
Sunstein, Cass R., and Vermeule, Adrian. 2009. “Conspiracy Theories: Causes and Cures.” Journal of Political Philosophy 17(2): 202–27.
Tamkin, Emily. 2020. The Influence of Soros: Politics, Power, and the Struggle for Open Society. New York: HarperCollins.
Tarrow, Sidney G. 2011. Power in Movement: Social Movements and Contentious Politics. New York: Cambridge University Press.
Tilly, Charles. 1978. From Mobilization to Revolution. New York: Random House.
Tucker, Joshua, Guess, Andrew, Barberá, Pablo, Vaccari, Cristian, Siegel, Alexandra, Sanovich, Sergey, Stukal, Denis, and Nyhan, Brendan. 2018. “Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature.” SSRN Electronic Journal (https://ssrn.com/abstract=3144139).
TVRepublika. 2023. “Sakiewicz: To, co robi Tusk, to działania V kolumny, dążącej do paraliżu państwa! [WIDEO].”
Uscinski, Joseph E., Enders, Adam M., Klofstad, Casey A., Seelig, Michelle I., Drochon, Hugo, Premaratne, Kamal, and Murthi, Manohar. 2022. “Have Beliefs in Conspiracy Theories Increased over Time?” PLoS One 17(7): e0270429.
Uscinski, Joseph E., DeWitt, Darin, and Atkinson, Matthew D. 2018. “A Web of Conspiracy? Internet and Conspiracy Theory.” In Handbook of Conspiracy Theory and Contemporary Religion, ed. Dyrendal, Asbjørn, Robertson, David G., and Asprem, Egil, 106–30. Leiden: Brill.
Uscinski, Joseph E., Klofstad, Casey A., and Atkinson, Matthew D. 2016. “What Drives Conspiratorial Beliefs? The Role of Informational Cues and Predispositions.” Political Research Quarterly 69(1): 57–71.
Uscinski, Joseph E., and Parent, Joseph M. 2014. American Conspiracy Theories. New York: Oxford University Press.
Vachudova, Milada A. 2020. “Ethnopopulism and Democratic Backsliding in Central Europe.” East European Politics 36(3): 318–40.
Vaishnav, Milan. 2019. “Religious Nationalism and India’s Future—The BJP in Power: Indian Democracy and Religious Nationalism.” April 4. Carnegie Endowment for International Peace (https://carnegieendowment.org/2019/04/04/religious-nationalism-and-india-s-future-pub-78703).
van Prooijen, Jan-Willem. 2020. “An Existential Threat Model of Conspiracy Theories.” European Psychologist 25(1): 16–25.
Vogler, Jan P. 2019. “Imperial Rule, the Imposition of Bureaucratic Institutions, and Their Long-Term Legacies.” World Politics 71(4): 806–63.
Vosoughi, Soroush, Roy, Deb, and Aral, Sinan. 2018. “The Spread of True and False News Online.” Science 359(6380): 1146–51.
Wojcieszak, Magdalena, Casas, Andreu, Yu, Xudong, Nagler, Jonathan, and Tucker, Joshua A. 2022a. “Most Users Do Not Follow Political Elites on Twitter; Those Who Do Show Overwhelming Preferences for Ideological Congruity.” Science Advances 8(39): eabn9418.
Wojcieszak, Magdalena, Sobkowicz, Pawel, Yu, Xudong, and Bulat, Beril. 2022b. “What Information Drives Political Polarization? Comparing the Effects of In-Group Praise, Out-Group Derogation, and Evidence-Based Communications on Polarization.” International Journal of Press/Politics 27(2): 325–52.
Yu, Xudong, Gil-López, Teresa, Shen, Cuihua, and Wojcieszak, Magdalena. 2022. “Engagement with Social Media Posts in Experimental and Naturalistic Settings: How Do Message Incongruence and Incivility Influence Commenting?” International Journal of Communication 16: 5086–109.
Zald, Mayer N. 1996. “Culture, Ideology, and Strategic Framing.” In Comparative Perspectives on Social Movements: Political Opportunities, Mobilizing Structures, and Cultural Framings, ed. McAdam, D., McCarthy, J. D., and Zald, M. N., 261–74. New York: Cambridge University Press.
Zaller, John R. 1998. “Monica Lewinsky’s Contribution to Political Science.” PS: Political Science & Politics 31(2): 182–89.
Zhang, Jueman, Wang, Yi, Shi, Molu, and Wang, Xiuli. 2021. “Factors Driving the Popularity and Virality of COVID-19 Vaccine Discourse on Twitter: Text Mining and Data Visualization Study.” JMIR Public Health and Surveillance 7(12): e32814.
Zou, Zhengxia, Shi, Zhenwei, Guo, Yuhong, and Ye, Jieping. 2019. “Object Detection in 20 Years: A Survey.” arXiv preprint. https://doi.org/10.48550/arXiv.1905.05055
Żukiewicz, Przemyslaw, and Zimny, Rafal. 2015. “The Smolensk Tragedy and Its Importance for Political Communication in Poland after 10th April, 2010.” Środkowoeuropejskie Studia Polityczne (1): 63–82.

Table 1 Observable implications


Table 2 Coding examples—Tweets featuring conspiracy theories


Table 3 Coding examples—Tweets without conspiracy theories


Figure 1 Descriptive summary of tweet frequencies
Notes: The Y-axis shows the proportion of total monthly tweets, sorted by content type. The X-axis is the month of tweet collection. A dotted line marks the start of Russia’s full-scale invasion of Ukraine. The top panel shows conspiracy theory content; the bottom panel shows non-CT content. “Total Conspiracy” includes all tweets referencing a CT (all PO, Tusk, and collusion CT tweets). All tweets in table 2 are included in this category. Some tweets fall into multiple categories; for example, a tweet invoking Russia–PO collaboration would be coded for Total, PO, and PO–Russia collaboration. We similarly illustrate non-CT content in the bottom panel. All tweets in table 3 are in one of these groups.


Figure 2 Difference in predicted number of likes or retweets based on tweet content
Notes: The Y-axis is the conspiracy theory frame, while the X-axis shows the difference in predicted outcome. The black lines and dots reflect likes; the grey lines and triangles reflect retweets. We calculate 95 percent confidence intervals using Monte Carlo simulation and the observed case approach. In each model, the reference group is all tweets that do not mention this version of the CT. For example, for the “Tusk” CT, the reference group includes all tweets that do not reference the Tusk CT: both those with no CT reference at all and those referencing another CT without mentioning Tusk.


Figure 3 Difference in predicted number of likes or retweets based on tweet content and PiS officials
Notes: Each panel represents a different conspiracy theory. The Y-axis represents retweets or likes. The X-axis shows the difference in predicted outcome. The dotted (solid) lines and triangles (dots) reflect PiS officials (non-officials). We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.


Figure 4 Difference in the predicted number of likes or retweets before and after Russia’s full-scale invasion of Ukraine
Notes: Each panel represents a different conspiracy theory. The Y-axis is the CT measure (retweets or likes). The X-axis shows the difference in predicted outcome. The black (grey) lines reflect tweets from after (before) February 24, 2022. We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.


Figure 5 Difference in the predicted number of likes or retweets based on content invoked, presence of PiS officials, and before or after Russia’s full-scale invasion of Ukraine
Notes: Each panel represents a different conspiracy theory. The Y-axis is the CT measure (retweets or likes). The X-axis shows the difference in predicted outcome. The black (grey) lines reflect tweets from after (before) February 24, 2022. The triangles (dots) reflect tweets that do (do not) include PiS officials. We use the observed case approach and calculate 95 percent confidence intervals using Monte Carlo simulation.

Supplementary material

Blackington and Cayton supplementary material (File, 2.3 MB)
Blackington and Cayton Dataset (Link)