
Conceptual Replication of Four Key Findings about Factual Corrections and Misinformation during the 2020 US Election: Evidence from Panel-Survey Experiments

Published online by Cambridge University Press:  01 February 2023

Alexander Coppock, Department of Political Science, Yale University, USA
Kimberly Gross, School of Media and Public Affairs and Department of Political Science, George Washington University, USA
Ethan Porter*, School of Media and Public Affairs and Department of Political Science, George Washington University, USA
Emily Thorson, Department of Political Science, Syracuse University, USA
Thomas J. Wood, Department of Political Science, The Ohio State University, USA

*Corresponding author. Email: evporter@gwu.edu

Abstract

In the final two months of the 2020 US election, we conducted eight panel experiments to evaluate the immediate and medium-term effects of misinformation and factual corrections. Our results corroborate four sets of existing findings: fact-checks reliably improve factual accuracy, while misinformation degrades it; effects of fact-checks on belief accuracy endure, though they fade with time; effects on attitudes are minuscule; and there are important partisan asymmetries. We also offer one new empirical finding suggesting that effect heterogeneities by personality type and cognitive style may reflect attention paid to treatments. Our study confirms that the fundamental push and pull of misinformation and factual corrections on political beliefs holds even in electoral settings as saturated with mistruths as the 2020 US election.

Type: Letter

Copyright © The Author(s), 2023. Published by Cambridge University Press

The 2020 US election featured an incumbent president with a well-documented penchant for spreading falsehoods (Kessler 2020) and culminated in an insurrection that policymakers have attributed to misinformation about the “big lie” of election fraud (Dillard 2020; US House Committee on Energy and Commerce Staff 2021). While absolute levels of exposure to online misinformation are lower than media reports have suggested (Guess, Nagler, and Tucker 2019), exposure to unreliable sources continues to rise (Fischer 2020). Belief in falsehoods may shape political attitudes and behavior in ways that damage democratic accountability (Einstein and Hochschild 2015; Nyhan 2020), with some even suggesting that misinformation directly influences evaluations of political figures and thereby election outcomes (Gunther, Beck, and Nisbet 2019; Jamieson 2018).

Both traditional and social media have turned to fact-checks to respond to this onslaught of misinformation (Chan et al. 2017; Graves 2016; Walter et al. 2019). For example, in advance of the 2020 election, Facebook announced that it would partner with fact-checking organizations to review dubious claims and promote fact-checks to its users (Culliford 2019). These and similar efforts are premised on the assumption that at a crucial moment of democratic reckoning—high-intensity political campaigns—factual corrections offer an effective countermeasure against misinformation.

Prior academic work on the effects of fact-checks has established, to varying degrees of certainty and generality, four important findings. First, fact-checks reliably improve factual belief accuracy, while misinformation degrades it (Chan et al. 2017; Guess et al. 2020; Porter and Wood 2019; Wood and Porter 2018). Second, although these effects fade with time, they remain detectable after immediate exposure (Berinsky 2015; Porter and Wood 2021; Swire et al. 2017). Third, the downstream effects of fact-checks on attitudes toward the fact-checked politician or group are minuscule at best (Nyhan et al. 2019; but see Loomba et al. 2021; Wintersieck 2017). Fourth, Democrats and Republicans both respond to fact-checks by becoming more accurate, though with important partisan asymmetries: Democrats appear to respond more strongly to fact-checks of partisan-congenial information than do Republicans (Edelson et al. 2021; Guay et al. 2020; Jennings and Stroud 2021; Shin and Thorson 2017). The goal of the present study is to systematically reconfirm these findings while extending them to one of the most politically important settings for fact-checks: the 2020 US presidential election. While prior research has investigated the effects of fact-checks and misinformation on factual beliefs during other election seasons, no work of which we are aware investigates these particular questions at comparable scale and timeliness. Most work on fact-checks has evaluated single corrections, sometimes outside the context of elections (Nyhan et al. 2019; Wintersieck 2017). Since party leaders' messages (including false statements) are magnified during campaigns—as is the influence of these messages on voters' beliefs and attitudes (Lenz 2012)—we might be concerned that real-world fact-checks deployed during campaigns would have smaller effects. In particular, a persistent concern is that the partisan polarization and media fragmentation that characterized the 2020 election cycle might render fact-checks ineffective.

To study fact-checking during the 2020 election season, we conducted eight preregistered panel experiments (total N = 17,681). For each panel study, we selected fact-check treatments from the set of recent articles produced by the nonpartisan organization PolitiFact, along with the corresponding misinformation treatments in their original forms (including Facebook posts, videos, and “fake news” articles). In total, we evaluated the effects of twenty-one widely disseminated pieces of misinformation and corresponding fact-checks during the final two months of the 2020 election. We evaluated the effects of these treatments in almost real time: on average, thirteen days separated the first appearance of the misinformation online from the inclusion of that misinformation in our experiments.

We confirm that each of the four prior findings generalizes to the 2020 election. Exposure to misinformation increased false beliefs by an average of 4.3 points on a 100-point belief certainty scale. Exposure to fact-checks more than corrected this effect, decreasing false beliefs by 10.5 points. We show that 66 per cent of the initial effect of fact-checks on factual beliefs is still detectable one week after initial exposure, with 50 per cent detectable after more than two weeks. Both misinformation and fact-checks have very small effects on attitudes toward politicians and political organizations: on 100-point feeling thermometers, the effects of fact-checks and misinformation on subsequent attitudes are smaller than one-quarter of a point. With regard to treatment-effect heterogeneity by partisanship, we find that Democrats became more accurate than Republicans following exposure to fact-checks that conflicted with their partisanship. Lastly, our design enables the systematic study of treatment-effect heterogeneity along multiple dimensions of personality and cognitive style. Here, we find limited heterogeneity in the effects of misinformation but substantial heterogeneity in the effects of fact-checks. In an exploratory analysis, we offer evidence that these heterogeneities may be due to differences in the amount of time different kinds of people spent reading the treatments.

Prior Research

To contextualize our contribution, we briefly review the prior experimental evidence on each of the four empirical findings we replicated. First, recent evidence shows that fact-checks increase belief accuracy, while misinformation degrades it. One meta-analysis (Chan et al. 2017) gathered twenty studies with fifty-two independent treatments to show that misinformation decreases accuracy and fact-checks increase it. Porter and Wood (2019) inspect sixty-three issues overall (including the issue experimentally evaluated by Haglin [2017]), finding that corrections significantly improved accuracy for more than 96 per cent of issues and observing no corrections “backfiring,” or reducing belief accuracy on their own. The scholarly consensus that corrections improve belief accuracy is summarized in Lewandowsky et al. (2020). On the misinformation side, Guess et al. (2020) randomize exposure to two “fake news” stories, finding that it decreases belief accuracy about the story in question.

Second, prior research shows that the effects of fact-checks endure, at least to some extent, after exposure. Porter and Wood (2021) recontacted participants in three countries two weeks after initial exposure to fact-checks. For eleven of seventeen issues, belief accuracy increases were still detectable at this later date. A total of 40 per cent of the first-wave effect remained detectable in the second wave. Similarly, Porter, Velez, and Wood (2022) recontacted participants in ten countries two weeks after a randomized exposure to a COVID-19-related correction, and conclude that 39 per cent of the correction's effect remained detectable. However, there have been divergent findings on this matter; in a similar multi-wave study, Carey et al. (2022) find that the effects of corrections on COVID-19 belief accuracy do not persist in either the US or Great Britain (albeit across a smaller set of false claims than tested here and in the other studies).

Third, much of the available evidence shows that, at most, corrections and misinformation have very small downstream effects on attitudes. Nyhan et al. (2019) found that exposure to a fact-check of a false Donald Trump claim increased belief accuracy, including among Trump supporters, but had no discernible impact on either supporters' or opponents' attitudes toward Trump. Similarly, in their ten-country study of COVID-19 corrections and misinformation, Porter, Velez, and Wood (2022) do not find evidence that either corrections or misinformation affected attitudes toward vaccines. However, Wintersieck (2017) finds that evaluations of candidates' debate performance improve when a fact-check shows they made accurate statements during the debate. Guess et al. (2020) find that randomized exposure to misinformation affects self-reported vote intention, but has no impact on feelings toward the media or political parties.

Fourth, studies of corrective interventions have often (though not always) found evidence of partisan asymmetries. Observational evidence from Twitter users during the 2012 election shows that Republicans were more likely to tweet fact-checks of Democratic President Obama than Democrats were to tweet fact-checks of Republican candidate Romney (Shin and Thorson 2017). In an experiment conducted on a mock version of the Facebook news feed, Jennings and Stroud (2021) find that only Democrats respond to fact-checking labels by becoming more accurate (though for a similar study that finds fact-checks making both Democrats and Republicans more accurate, see Porter and Wood 2022). Finally, exposure to an accuracy nudge (an intervention related to, but distinct from, fact-checks) has small to negligible effects on accuracy among conservatives, with its effectiveness concentrated among Democrats and independents (Rathje et al. 2022).

Design

We administered six two-wave panels and two three-wave panels between September and November 2020. Four panels were fielded on Mechanical Turk and four were fielded on Lucid.

The first wave of every panel was structured identically. Before the experimental portion, we measured pretreatment covariates. After answering demographic questions, subjects completed batteries measuring political knowledge, cognitive reflection (the CRT-2), need for cognition, and personality (a ten-question inventory). Because Lucid provides demographic information for its respondents directly, the Mechanical Turk samples were asked more extensive demographic questions, with response categories matching the Lucid categories exactly (for the full instrument, see the Online Appendix). Subjects then participated in three independently randomized experiments, each relating to a separate topic of misinformation. For each topic, respondents were assigned to one of three conditions: pure control, misinformation only, or misinformation followed by a fact-check. In the Online Appendix, we demonstrate that these randomizations generated experimental groups balanced on pretreatment covariates.
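To make the assignment scheme concrete, here is a minimal sketch in Python of independently randomizing each subject across the three topic-level experiments; the function name and seed are illustrative, not taken from our replication code.

```python
import numpy as np

rng = np.random.default_rng(seed=2020)  # arbitrary seed for reproducibility
CONDITIONS = ["control", "misinformation", "misinformation + fact-check"]

def assign_conditions(n_subjects: int, n_topics: int = 3) -> np.ndarray:
    """Draw a condition independently for each subject-topic pair, so a
    subject's condition on one topic carries no information about their
    condition on the other topics."""
    return rng.choice(CONDITIONS, size=(n_subjects, n_topics))

assignments = assign_conditions(n_subjects=2000)  # one row per subject
```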

Each week throughout the study, PolitiFact shared internal data with us about the popularity of their fact-checks (measured via web traffic). These data informed the selection of the fact-checks used in the experiments. The topics were chosen based on the following criteria. First, each received relatively high traffic on the PolitiFact website (see Figure 1). Second, each panel included one false claim that we anticipated would be congenial to Republicans, one false claim expected to be congenial to Democrats, and a third chosen to tap into unfolding events, regardless of expectations about differential partisan response.

Figure 1. Cumulative page views of each fact-check used in treatments, as well as respondents' agreement in control, by the partisan congeniality of the false claim.

Notes: The right column reports the count of total visits to each false claim's associated fact-check webpage. The left column reports agreement with each false claim, according to respondents' partisanship. Points indicate mean agreement, and the ranges indicate 95 per cent confidence intervals.

Two-thirds of our tested fact-checks were produced by PolitiFact as part of their partnership with Facebook. Through that partnership, Facebook presented PolitiFact and other fact-checking organizations with a set of widely circulating misinformation, divided into three tiers based on Facebook's internal data. The most popular pieces were in the first tier, while the least popular were in the third. The fourteen Facebook partnership fact-checks in our experiment were all from the first tier.

Participants saw the misinformation and fact-checks in as close to their original form as possible, including transcripts, Facebook posts, tweets, videos, and “fake news” articles. The fact-checks adhered to the format and text used by PolitiFact. They included the PolitiFact logo, a headline, and a graphic illustrating the verdict (for example, “Pants on Fire”). All text and images in the fact-checks were taken from PolitiFact. The complete set of stimuli is in the Online Appendix.

Figure 1 shows the cumulative page views of each of the tested fact-checks, by congeniality. On average, fact-checks of Republican-congenial misinformation (shown in the top panel) received 138,971 views over the eight weeks following the original post. By contrast, the average fact-check of Democratic-congenial misinformation received 45,123 views. This pattern complements descriptive work (Edelson et al. 2021) which shows that during the 2020 election, Republican-congenial misinformation on social media received far more engagement than Democratic-congenial misinformation. Just as Republican-congenial misinformation is more popular than Democratic-congenial misinformation, so are articles debunking that misinformation.

After treatment, all respondents, including those in control conditions, answered questions about their belief in each piece of misinformation, measured via two questions. The first asked how accurate they thought the misinformation was, and the second asked how confident they were in their answer. The use of multiple outcome measures helps mitigate concerns about measurement error (Swire, DeGutis, and Lazer 2020). Our use of a confidence measure allows us to account for effects on beliefs while being mindful of the way in which limited belief certainty may impact misperceptions (Graham 2020) and even lead researchers to conflate uninformed participants with misinformed participants (Pasek, Sood, and Krosnick 2015). Finally, to assess effects on attitudes, participants were presented with feeling thermometers for the groups and people prominently featured in the misinformation and fact-checks.
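One natural way to fold the two items into a single 0-100 belief certainty scale, of the kind analyzed below, is sketched here. This coding is an illustrative assumption on our part; the exact instrument wording and coding appear in the Online Appendix.

```python
def belief_certainty(judged_accurate: bool, confidence: float) -> float:
    """Hypothetical fold of the two outcome items into one 0-100 scale.

    judged_accurate : whether the respondent judged the claim accurate
    confidence      : 0 (not at all confident) to 1 (completely confident)
    Returns 0 for complete certainty the claim is false, 100 for complete
    certainty it is true, and 50 for a maximally unsure respondent.
    """
    offset = 50.0 * confidence
    return 50.0 + offset if judged_accurate else 50.0 - offset
```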

At the close of the first wave, we debriefed subjects in the misinformation conditions to inform them that the misinformation was false, showing them the corresponding fact-checks. Since we could not be certain that all participants would return for subsequent surveys, we believed it unethical to expose them to uncorrected misinformation. We therefore do not measure the over-time effect of misinformation alone, but rather the effect of misinformation plus fact-check, relative to control. Evaluating the over-time effect of fact-checked misinformation allows us to address the pressing real-world question of whether the effects of fact-checks in the presence of misinformation endure beyond immediate exposure.

To measure over-time effects, we recontacted participants at least once, with a minimum of seven days separating waves. Six of our eight panels consisted of two waves; the remaining two featured a third wave. Post-treatment waves measured the same set of outcomes as the first wave. In the Online Appendix, we demonstrate that our treatments do not appear to change whether subjects respond to outcome questions, either immediately post-treatment or in subsequent waves, allaying concerns about differential attrition.
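A minimal version of such a differential-attrition check tests whether assigned condition predicts response status. The sketch below uses illustrative column names; the tests reported in the Online Appendix may differ.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def attrition_check(condition: pd.Series, responded: pd.Series) -> float:
    """Chi-square test of independence between assigned condition and
    whether the subject answered the outcome (immediately post-treatment
    or in a later wave). A large p-value is consistent with no
    differential attrition across conditions."""
    table = pd.crosstab(condition, responded)
    _, p_value, _, _ = chi2_contingency(table)
    return p_value
```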

Results

We begin our discussion of results with the averages of the belief confidence measure for Democrats and Republicans in the control condition. Figure 1 shows a partisan gap across issues: Republicans are more likely than Democrats to believe all of the Republican-congenial false statements, but Democrats are more likely than Republicans to believe only two of the Democratic-congenial false statements (see footnote 1). These results should be interpreted with caution, as they are based on observational data gleaned from convenience samples.

Since our primary interest is in treatment effects, we next turn to the average effects of exposure to misinformation and fact-checks. Figure 2 displays the outcome distributions by topic and condition. Belief certainty is plotted on the vertical axis, ranging from 0 (completely certain the false statement is inaccurate) to 100 (completely certain the statement is accurate). The first column in each graph shows the control condition, the second the group that saw only the misinformation, and the third the group that saw both the misinformation and the fact-check. Overall, exposure to the misinformation significantly decreased accuracy in twelve out of the twenty-four opportunities. Compared to the misinformation condition, twenty of the fact-checks had statistically significant negative average effects on belief certainty and none “backfired,” or increased false beliefs. Using random-effects meta-analysis, we estimate the average misinformation effect (relative to control) to be 4.30 points (standard error = 1.07) and the average fact-check effect (relative to misinformation) to be −10.5 points (standard error = 1.1).
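As an illustration of the pooling step, here is a minimal sketch using the DerSimonian-Laird estimator, one standard random-effects choice; the exact estimator used in our analysis is documented in the replication archive.

```python
import numpy as np

def random_effects_meta(effects, std_errors):
    """DerSimonian-Laird random-effects pooling of per-issue estimates.

    effects    : per-issue treatment effect estimates
    std_errors : their standard errors
    Returns (pooled effect, standard error of the pooled effect).
    """
    effects = np.asarray(effects, dtype=float)
    var = np.asarray(std_errors, dtype=float) ** 2
    w = 1.0 / var                                   # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)         # fixed-effect estimate
    q = np.sum(w * (effects - fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-issue variance
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))
```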

Figure 2. Outcome distributions by topic and condition.

Notes: Each cloud depicts the distribution of belief certainty within each experimental condition. Each label shows the conditional mean. Statistically significant differences between control and misinformation, and misinformation and fact-checks, are labelled with the associated significance level. *** p < 0.001; ** p < 0.01; * p < 0.05.

While fact-checks reduced false beliefs overall, we observed partisan asymmetries. The first column of Figure 3 isolates the effects of misinformation on beliefs. Exposure to misinformation increased belief in the false claim by about the same amount across congeniality categories, among both Republicans and Democrats. The second column of Figure 3 shows the effect of fact-checks on belief confidence. The effects are negative for both Republicans and Democrats. For false claims congenial to Republicans, the effects are approximately equal in magnitude across party lines. However, Democrats update substantially further than Republicans when corrected about congenial false claims. This asymmetry is the opposite of what a motivated reasoning theory would predict: under such a theory, partisans resist treatments hostile to their partisan identity, yet here Democrats move even further than Republicans when the correction is counter-attitudinal for them.

Figure 3. Effects by party.

Notes: Points are the predicted effects by partisanship. Ranges indicate 95 per cent confidence intervals. Points are increased in size, and labelled, when an effect is significantly different to 0 (at p < 0.05). Partisan differences in correction or misinformation effects are depicted with asterisks. *** p < 0.001; ** p < 0.01; * p < 0.05.

We thus observe the following partisan asymmetry: once shown counter-attitudinal corrections, Democrats are made more accurate than Republicans. This difference may be attributable to differences in the partisan information environments. An inspection of PolitiFact traffic data (see Figure 1) and available social media data on the tested misinformation indicates that Republican-congenial misinformation may have been circulating more widely than Democratic-congenial misinformation (corroborating Edelson et al.'s [2021] findings). This pattern could explain why Republicans responded differently than Democrats: Republicans entered our studies more likely to have seen the tested misinformation, with this prior exposure making false beliefs more difficult to dislodge. Differences in responses to counter-attitudinal fact-checking may also be explained by source cues: once exposed to a counter-attitudinal correction, Democrats may have become more accurate than Republicans because they viewed PolitiFact more favorably than did Republicans (Walker and Gottfried 2019).

In Figure 4, we turn to the medium-term effects of exposure to misinformation followed by fact-checks (versus control). The top row of panels presents meta-analytic estimates for all issues, conditional on subjects responding in the first and second waves of the study. The effects of fact-checks observed immediately after treatment dissipate somewhat from Wave 1 to Wave 2, persisting, on average, at 66.4 per cent of the initial magnitude. This pattern of decay is broadly consistent across issues that are congenial to either partisan group and among both Democratic and Republican respondents (for additional estimates, see Figure 9 in the Online Appendix). The bottom row of Figure 4 shows the meta-analytic averages for the six issues in studies with three-wave panels and conditions the analysis on responding in the first and third waves of the panel. On average, the effect in Wave 3 is 50 per cent of the original magnitude, though the smaller number of issues and subjects increases our uncertainty.
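These persistence figures express the later-wave estimate as a share of the immediate estimate for the same contrast, as in this small illustrative sketch (the numbers in the docstring are hypothetical, not our estimates).

```python
def persistence_share(effect_later: float, effect_immediate: float) -> float:
    """Later-wave effect as a percentage of the immediate effect.

    For example, an immediate effect of -10 points and a later-wave effect
    of -6.6 points on the same belief scale yields 66 per cent persistence.
    """
    return 100.0 * effect_later / effect_immediate
```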

Figure 4. Duration of effects.

Notes: Points indicate the mean effect, and line ranges report the 95 per cent confidence intervals.

Next, we consider how the immediate effects of fact-checks and misinformation may differ by the following sources of potential heterogeneity: levels of political interest, political knowledge, need for cognition, performance on the Cognitive Reflection Test, and Big Five personality measures (see footnote 2). Figure 5 shows the estimated effects for subjects in the top versus bottom tercile of each index (see footnote 3). None of these covariates moderate the effects of misinformation (left panels), but several of them moderate the effects of fact-checks (right panels). This same pattern of heterogeneity persists after one week, though at diminished magnitudes (see the Online Appendix).
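A minimal sketch of this tercile comparison follows, with illustrative column names; our preregistered regression-based alternative appears in the Online Appendix (see footnote 3).

```python
import pandas as pd

def effects_by_tercile(df: pd.DataFrame, moderator: str,
                       outcome: str = "belief_certainty",
                       treated: str = "saw_factcheck") -> dict:
    """Difference-in-means fact-check effect within the bottom and top
    terciles of a moderator (e.g., need for cognition). `treated` is a
    0/1 indicator; `outcome` is the 0-100 belief certainty scale.
    Column names are hypothetical."""
    tercile = pd.qcut(df[moderator], q=3, labels=["low", "mid", "high"])
    effects = {}
    for level in ("low", "high"):
        group = df[tercile == level]
        effects[level] = (group.loc[group[treated] == 1, outcome].mean()
                          - group.loc[group[treated] == 0, outcome].mean())
    return effects
```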

Figure 5. Heterogeneous treatment effect estimates.

Notes: Each small point is an underlying effect estimate for some potential source of effect heterogeneity (each source is described on the y-axis). Effects for those with low values are depicted with dark points; effects for those with high values are depicted with light points. Larger points, labels, and line ranges are meta-analytic summaries of these distributions. Asterisks indicate when these meta-analytic estimates differ significantly. *** p < 0.001; ** p < 0.01; * p < 0.05.

We consider two potential explanations for why these traits, in particular, cognitive reflection and need for cognition, are associated with the effectiveness of fact-checks. One possibility is that people high in these traits may have better reasoning skills and therefore be more capable of understanding, evaluating, and applying the evidence presented in fact-checks. A second possibility is that people high in these traits may simply be more attentive to fact-checks.

To begin to distinguish between these two mechanisms, we conduct an exploratory analysis of the relationship between traits and time spent on the misinformation page and the fact-checking page. The results (available in the Online Appendix) show that the within-trait ranking of conditional effects shown in Figure 5 is matched exactly by a within-trait ranking of average time spent reading the fact-check. This pattern leads us to believe that attention paid to fact-checks, not fundamental differences in cognitive ability, explains these heterogeneous effects. Of course, this observation only pushes the question back a step: why do people who score differently on these scales spend different amounts of time reading fact-checks?
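One way to formalize the rank-matching observation is a rank correlation between the conditional effects and the average reading times across trait groups, as in the sketch below; the input numbers are hypothetical placeholders.

```python
from scipy.stats import spearmanr

# Hypothetical placeholders: one conditional fact-check effect and one
# average time on the fact-check page (seconds) per trait group,
# listed in the same group order.
conditional_effects = [-12.1, -10.3, -8.0, -6.2]   # points on the 0-100 scale
seconds_on_factcheck = [41.0, 35.5, 29.8, 24.0]

rho, p_value = spearmanr(conditional_effects, seconds_on_factcheck)
# rho near -1: groups that spent longer reading show larger (more
# negative) fact-check effects, i.e., the two rankings match exactly.
```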

Lastly, we find that both fact-checks and misinformation had extremely limited influence on related attitudes. In Figure 6, we display the effects of misinformation and fact-checks on attitudes toward the political figures and groups mentioned in the misinformation, overall and conditional on party. As preregistered, we isolate attitudes toward the target of the misinformation and corresponding fact-check. For example, for misinformation alleging that Amy Coney Barrett once made homophobic remarks, we evaluate attitudes toward Barrett in the misinformation and fact-check conditions. Meta-analysis shows that the overall effects of misinformation and fact-checks on attitudes were each smaller than half a point on the 100-point feeling thermometer. While the effects were in the expected direction, with fact-checks making respondents more positive and misinformation making them more negative, the effects were very small. In the Online Appendix, we show that when we relax assumptions about the target of the misinformation and fact-checks, the same pattern holds.

Figure 6. Estimated effects on attitudes, both overall and by partisanship.

Notes: Each point shows the effect of being misinformed (in the left panel) or being corrected (in the right panel). Shaded points are those effects where the misinformation or correction effects are significant (p < 0.05). Diamonds and point ranges report the meta-analytic average. These averages are also labelled inside these diamonds. *** p < 0.001; ** p < 0.01; * p < 0.05.

Discussion

Eight multi-wave experiments extend prior findings about factual corrections and misinformation to the 2020 US election. Consistent with a broad set of prior findings (Lewandowsky et al. 2020; Wintersieck 2017), we observe fact-checks improving belief accuracy. Over time, the effects attenuate, but they remain detectable: the effects of fact-checks persisted at 66 per cent of the original magnitude after one week and at 50 per cent after more than two weeks, echoing previous research (Porter and Wood 2021). As other research has also found (Nyhan et al. 2019), even as corrections durably improve accuracy, they have, at best, minor effects on downstream political attitudes. Finally, in line with evidence attesting to partisan asymmetries in these matters (Guay et al. 2020; Shin and Thorson 2017), we observe Democrats becoming more accurate than Republicans after exposure to corrections that were uncongenial to their partisanship.

Aside from partisanship, do different kinds of people respond differently to misinformation and fact-checks? Here, we find the answer is “no” with respect to misinformation and “yes” with respect to fact-checks. The Big Five personality traits, need for cognition, and political interest all moderate the effect of fact-checks. How can we reconcile homogeneity of misinformation effects with heterogeneity of correction effects? We think the answer lies in how easy it is to process our two treatment types. The misinformation treatments are short, simple, and require limited attention. The fact-checks, by contrast, offer details, evidence, and logical reasoning. It is therefore easier (and less time-consuming) for people to process the misinformation treatments than the fact-checks. Consequently, correction effects are consistently larger among subjects who spend more time with the fact-check treatments. Others (Pennycook et al. 2021) have shown that attention shapes people's response to misinformation. The same appears to be true for fact-checks. Media organizations and fact-checkers should aim to produce factual corrections that reduce misperceptions while being easy to process.

Several features of our findings suggest avenues for future research. Although misinformation reduces accuracy, the average effect of fact-checks is more than twice the magnitude of the average effect of misinformation. Moreover, the effect of misinformation on related attitudes is tiny. Taken together, these results call into question portrayals of misinformation as a substantial barrier to democratic accountability. It is certainly possible that misinformation's harms only emerge cumulatively, after repeated exposure to many false claims, far beyond what we test here. Future research should investigate whether larger quantities of misinformation and corrections have more pronounced effects on subsequent attitudes.

The number of items we test stands out as a limitation of the present study, as does its reliance on real-world misinformation: because the stimuli are real, they are not exactly identical to one another. This limitation should be kept in mind when examining our data on average time spent with stimuli. While we determined that trading off treatment equivalence for realism was worthwhile, we look forward to future research that takes an alternative approach.

Our results show that during the 2020 US election, misinformation degraded accuracy, while corrections improved it by roughly twice the amount, often durably so. The effects of misinformation and corrections on attitudes were negligible. Important heterogeneities emerged in our analysis, with implications for scholars, social media companies, and policymakers.

Supplementary Material

Online appendices are available at: https://doi.org/10.1017/S0007123422000631

Data Availability Statement

Replication data for this article can be found in Harvard Dataverse at: https://doi.org/10.7910/DVN/WQXZUP

Acknowledgments

We are grateful to the team at PolitiFact, especially Angie Holan and Josie Hollingsworth, for partnering with us and sharing data. We received no compensation for the PolitiFact partnership. We thank Andrew Guess, Matt Graham, Gregory Huber, Brendan Nyhan, Thomas Nelson, Gordon Pennycook, Yamil Velez, and the Political Psychology workshop participants at Ohio State University for helpful comments. All mistakes are our own.

Financial Support

This research is supported by the John S. and James L. Knight Foundation through a grant to the Institute for Data, Democracy & Politics at The George Washington University.

Competing Interests

None.

Footnotes

1 All reported tests are two-sided.

2 The relationship between cognitive reflection and susceptibility to misinformation has been studied extensively (see, for example, Pennycook and Rand 2019), as has political knowledge (Kuklinski et al. 2000) and political interest (Schaffner and Luks 2018). Need for cognition is also being studied in this area (Leding and Antonio 2019). Previous research has shown that personality type is associated with political knowledge and interest (Gerber et al. 2011b) and engagement with political information (Gerber et al. 2011a)—all of which are relevant to the study of misinformation and correction.

3 This analysis procedure differs from our preregistered approach. The regression-based approach we preregistered is shown in the Online Appendix.

References

Berinsky, A (2015) Rumors and health care reform: experiments in political misinformation. British Journal of Political Science 47, 241–262.
Carey, JM et al. (2022) The ephemeral effects of fact-checks on COVID-19 misperceptions in the United States, Great Britain and Canada. Nature Human Behaviour 6(2), 236–243. Available from https://doi.org/10.1038/s41562-021-01278-3
Chan, M-pS et al. (2017) Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science 28(11), 1531–1546.
Coppock, A, Gross, K, Porter, E, Thorson, E and Wood, TJ (2023) Replication Data for: Conceptual Replication of Four Key Findings about Factual Corrections and Misinformation During the 2020 U.S. Election: Evidence from Panel Survey Experiments. Harvard Dataverse, V2, UNF:6:ll8kQCpQfOdAIKguhZyKNA== [fileUNF]. Available from https://doi.org/10.7910/DVN/WQXZUP
Culliford, E (2019) Facebook Announces Steps to Clamp Down on Misinformation Ahead of 2020 Election. Reuters. Available from https://www.reuters.com/article/us-usa-election-facebook/facebook-announces-steps-to-clamp-down-on-misinformation-ahead-of-2020-election-idUSKBN1X022S
Dillard, J (2020) Ocasio-Cortez Reveals Past Sexual Assault, Faults GOP over Riot. Bloomberg. Available from https://www.bloomberg.com/news/articles/2021-02-02/ocasio-cortez-accuses-gop-critics-of-diminishing-riot-trauma
Edelson, L et al. (2021) Far-Right News Sources on Facebook More Engaging. Available from https://medium.com/cybersecurity-for-democracy/far-right-news-sources-on-facebook-more-engaging-e04a01efae90
Einstein, KL and Hochschild, J (2015) Do Facts Matter? Information and Misinformation in American Politics. Norman, OK: University of Oklahoma Press.
Fischer, S (2020) “Unreliable” News Sources Got More Traction in 2020. Available from https://www.axios.com/unreliable-news-sources-social-media-engagement-297bf046-c1b0-4e69-9875-05443b1dca73.html
Gerber, AS et al. (2011a) The Big Five personality traits in the political arena. Annual Review of Political Science 14(1), 265–287. Available from https://doi.org/10.1146/annurev-polisci-051010-111659
Gerber, AS et al. (2011b) Personality traits and the consumption of political information. American Politics Research 39(1), 32–84. Available from https://doi.org/10.1177/1532673X10381466
Graham, MH (2020) Self-awareness of political knowledge. Political Behavior 42(1), 305–326. Available from https://doi.org/10.1007/s11109-018-9499-8
Graves, L (2016) Deciding What's True: The Rise of Political Fact-Checking in American Journalism. New York, NY: Columbia University Press.
Guay, B et al. (2020) Examining Partisan Asymmetries in Fake News Sharing and the Efficacy of Accuracy Prompt Interventions. OSF Preprints. Available from https://psyarxiv.com/y762k
Guess, A, Nagler, J and Tucker, J (2019) Less than you think: prevalence and predictors of fake news dissemination on Facebook. Science Advances 5(1). Available from https://advances.sciencemag.org/content/5/1/eaau4586
Guess, AM et al. (2020) “Fake News” May Have Limited Effects beyond Increasing Beliefs in False Claims. Available from https://doi.org/10.37016/mr-2020-004
Gunther, R, Beck, PA and Nisbet, EC (2019) “Fake news” and the defection of 2012 Obama voters in the 2016 presidential election. Electoral Studies 61, 102030. Available from http://www.sciencedirect.com/science/article/pii/S0261379418303019
Haglin, K (2017) The limitations of the backfire effect. Research & Politics 4(3). Available from https://doi.org/10.1177/2053168017716547
Jamieson, KH (2018) Cyberwar: How Russian Hackers and Trolls Helped Elect a President: What We Don't, Can't, and Do Know. New York, NY: Oxford University Press.
Jennings, J and Stroud, NJ (2021) Asymmetric adjustment: partisanship and correcting misinformation on Facebook. New Media & Society. Available from https://doi.org/10.1177/14614448211021720
Kessler, G (2020) Donald Trump and His Assault on Truth: The President's Falsehoods, Misleading Claims and Flat-Out Lies. New York, NY: Scribner.
Kuklinski, JH et al. (2000) Misinformation and the currency of democratic citizenship. Journal of Politics 62(3), 790–816.
Leding, JK and Antonio, L (2019) Need for cognition and discrepancy detection in the misinformation effect. Journal of Cognitive Psychology 31(4), 409–415. Available from https://doi.org/10.1080/20445911.2019.1626400
Lenz, GS (2012) Follow the Leader? How Voters Respond to Politicians' Policies and Performance. Chicago, IL: University of Chicago Press.
Lewandowsky, S et al. (2020) The Debunking Handbook 2020. Available from https://sks.to/db2020
Loomba, S et al. (2021) Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nature Human Behaviour 5, 337–348. Available from https://doi.org/10.1038/s41562-021-01056-1
Nyhan, B (2020) Facts and myths about misperceptions. Journal of Economic Perspectives 34(3), 220–236. Available from https://www.aeaweb.org/articles?id=10.1257/jep.34.3.220
Nyhan, B et al. (2019) Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior 42, 939–960.
Pasek, J, Sood, G and Krosnick, JA (2015) Misinformed about the Affordable Care Act? Leveraging certainty to assess the prevalence of misperceptions. Journal of Communication 65(4), 660–673. Available from https://onlinelibrary.wiley.com/doi/abs/10.1111/jcom.12165
Pennycook, G and Rand, DG (2019) Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition 188, 39–50.
Pennycook, G et al. (2021) Shifting attention to accuracy can reduce misinformation online. Nature 592(7855), 590–595. Available from https://doi.org/10.1038/s41586-021-03344-2
Porter, E and Wood, TJ (2019) False Alarm: The Truth about Political Mistruths in the Trump Era. New York, NY: Cambridge University Press.
Porter, E and Wood, TJ (2021) The global effectiveness of fact-checking: evidence from simultaneous experiments in Argentina, Nigeria, South Africa, and the United Kingdom. Proceedings of the National Academy of Sciences 118(37), e2104235118. Available from https://www.pnas.org/doi/abs/10.1073/pnas.2104235118
Porter, E and Wood, TJ (2022) Misinformation on the Facebook news feed: experimental evidence. Journal of Politics 84, 1812–1817. Available from https://osf.io/r3mvw/
Porter, E, Velez, Y and Wood, TJ (2022) Correcting COVID-19 Vaccine Misinformation in Ten Countries. Available from https://osf.io/4stbm/
Rathje, S (2022) Letter to the Editors of Psychological Science: Meta-analysis Reveals that Accuracy Nudges Have Little to No Effect for U.S. Conservatives: Regarding Pennycook et al. Available from https://psyarxiv.com/945na/
Schaffner, BF and Luks, S (2018) Misinformation or expressive responding? What an inauguration crowd can tell us about the source of political misinformation in surveys. Public Opinion Quarterly 82(2), 135–147. Available from https://doi.org/10.1093/poq/nfx042
Shin, J and Thorson, K (2017) Partisan selective sharing: the biased diffusion of fact-checking messages on social media. Journal of Communication 67(2), 233–255. Available from https://onlinelibrary.wiley.com/doi/abs/10.1111/jcom.12284
Swire, B, DeGutis, J and Lazer, D (2020) Searching for the Backfire Effect: Measurement and Design Considerations. PsyArXiv. Available from https://psyarxiv.com/ba2kc/
Swire, B et al. (2017) Processing political misinformation: comprehending the Trump phenomenon. Royal Society Open Science 4(3), 160802. Available from https://royalsocietypublishing.org/doi/abs/10.1098/rsos.160802
US House Committee on Energy and Commerce Staff (2021) Hearing on “Fanning the Flames: Disinformation and Extremism in the Media.” Available from https://docs.house.gov/meetings/IF/IF16/20210224/111229/HHRG-117-IF16-20210224-SD002.pdf
Walker, M and Gottfried, J (2019) Republicans Far More Likely Than Democrats to Say Fact-Checkers Tend to Favor One Side. Pew Research Center. Available from https://www.pewresearch.org/fact-tank/2019/06/27/republicans-far-more-likely-than-democrats-to-say-fact-checkers-tend-to-favor-one-side/
Walter, N et al. (2019) Fact-checking: a meta-analysis of what works and for whom. Political Communication 37, 350–375.
Wintersieck, AL (2017) Debating the truth: the impact of fact-checking during electoral debates. American Politics Research 45(2), 304–331. Available from https://doi.org/10.1177/1532673X16686555
Wood, TJ and Porter, E (2018) The elusive backfire effect: mass attitudes' steadfast factual adherence. Political Behavior 41, 135–163.
