
The Effects of Source Cues and Issue Frames During COVID-19

Published online by Cambridge University Press:  29 January 2021

Chandler Case
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Christopher Eddy
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Rahul Hemrajani
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Christopher Howell
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Daniel Lyons
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Yu-Hsien Sung
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
Elizabeth C. Connors*
Affiliation:
Department of Political Science, University of South Carolina, Columbia, SC 29208, USA
*Corresponding author. Email: Connors4@mailbox.sc.edu

Abstract

The health and economic outcomes of the COVID-19 pandemic will in part be determined by how effectively experts can communicate information to the public and the degree to which people follow expert recommendations. Using a survey experiment conducted in May 2020 with almost 5,000 respondents, this paper examines the effects of source cues and message frames on perceptions of information credibility in the context of COVID-19. Each health recommendation was attributed to an expert or nonexpert source, was fact- or experience-based, and suggested a potential gain or loss, to test whether source cues or issue frames affected responses to the pandemic. We find no evidence that either source cues or message framing influence people’s responses – instead, respondents’ ideological predispositions, media consumption, and age explain much of the variation in survey responses, suggesting that public health messaging may face challenges from growing ideological cleavages in American politics.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Introduction

The health and economic outcomes of the COVID-19 pandemic are largely dependent on individual behavior – behavior that can potentially be shaped by the information environment. This pandemic thus highlights the importance of communication: how individuals interpret recommendations from experts matters, as the effectiveness of this communication can determine how many people are affected by this pandemic and to what extent. Unfortunately, research on the growing distrust of experts and the rise of anti-intellectualism in America (Merkley 2020) suggests that communication from experts will not be given the credence it perhaps deserves – or, worse, will be downplayed because people do not trust experts.

To attempt to understand individuals’ reception of COVID-19 recommendations, we use a survey experiment and test: (1) whether people’s likelihood of following public health guidelines changes depending on whether those guidelines originate from an expert or nonexpert source and (2) whether this is moderated by issue frame (experience vs. fact and loss vs. gain). Our findings suggest that neither source cues nor issue frames influence the adoption of information about COVID-19. This is in line with previous literature suggesting that highly salient issues are most likely to be subject to motivated reasoning (Jerit and Barabas 2012; Slothuus and de Vreese 2010). Indeed, we find that individuals’ responses are largely driven by ideology, media consumption, and age. Our findings suggest that public health messaging likely faces additional challenges from growing ideological cleavages in American politics.

Theory

Different lines of research lead to four hypotheses for the influence of information from expert sources in the context of COVID-19. First, research shows the public relies on source cues to process information (Botero et al. 2015; Hartman and Weber 2009), assessing source credibility (Darmofal 2005) and often overlooking information credibility. That is, people are more persuaded by arguments given by experts – those possessing formal qualifications or experience on the issue (Thon and Jucks 2017). This reliance on expert cues could be exacerbated during a health pandemic, when people may suspend their many biases – instead prioritizing accuracy goals in line with their own safety. Indeed, a recent poll found that Americans highly trust medical workers (Archer and Ron-Levey 2020). This research would predict that people observe public health recommendations from those qualified to give them during the pandemic: experts.

Conversely, though, motivated reasoning also shapes people’s assessment of source credibility (Druckman and McGrath 2019). Directional motivated reasoning responds to intentional sentiment (Bullock et al. 2015) or subconscious affect (Taber and Lodge 2006), as cognitive processes are selected to reaffirm an existing worldview. Previous literature focuses largely on politically motivated reasoning (Jerit and Barabas 2012; Flynn et al. 2017; Taber and Lodge 2006). This research would thus predict that the downplaying of COVID-19 by the leader of the Republican Party (President Trump) would create a partisan divide – making nonpolitical source cues (and frames) meaningless for the transmission of information about COVID-19, as individuals would instead rely on political cues.

Third, today’s growing anti-intellectual sentiment – which creates a perceived conflict between experts and “common sense” (Merkley 2020) – may lead people to downplay health recommendations. We have indeed seen concerted efforts to delegitimize health officials and call into question their competence and trustworthiness during COVID-19, suggesting the pandemic is a context where anti-expert sentiment could guide both attitudes and behavior (Bursztyn et al. 2020). For example, people exposed to conservative media have been more likely to mistrust the Centers for Disease Control and Prevention (CDC), while trusting unfounded cures to combat COVID-19 (Jamieson and Albarracin 2020). This research would predict that information from experts could backfire.

Fourth, expert recommendations may also prime partisan identity. In recent years, Republicans have become skeptical of the scientific community (Hamilton, Hartter, and Saito 2015), as Republican elites associate institutions of higher education with partisan out-groups and, in doing so, question their moral integrity (Motta 2018). Thus, this research would also predict that information from experts would backfire, but that this effect would be moderated by partisanship.

This paper uses a survey experiment to test these four source cue hypotheses – examining whether, in the context of COVID-19: (1) people find experts persuasive; (2) people do not find experts persuasive; (3) people react negatively to experts; and/or (4) expert animus is concentrated among Republicans. Beyond source cues, we also test whether message framing matters in the context of COVID-19 – that is, whether fact- or experience-based messages as well as gain- or loss-based messages either have a main effect or moderate the main effect. This additional variation is motivated by research showing that frames matter (Mondak 1993) – particularly that (1) narrative or testimonial information can be more persuasive than factual or evidence-based information (Braverman 2008) and (2) gain-framed health messages are considered more credible than loss-framed ones (Rothman et al. 2006).

Survey experiment

In our survey design, each respondent received four vignettes in random order: (1) within-home social distancing, (2) prisoner release programs, (3) restarting college football, and (4) mail-in ballots. On each topic, we designed three sets of manipulations: expert versus nonexpert, fact versus experience, and gain versus loss.

For the first set of manipulations (expert vs. nonexpert), we embedded source cues by manipulating the person who delivered the statements: in the expert treatments, messages were provided by a physician, a former prison doctor, a public health expert, and an election official, respectively; whereas in the nonexpert treatments, messages were delivered by a recovering COVID-19 patient, a retired prison guard, an avid sports fan, and a voter, respectively.

For the second set of manipulations (fact vs. experience), we varied how people talked about the subject at hand: in the fact treatments, the individual stated “evidence shows that the most common means of passing on the virus is…,” whereas in the experience treatments, the individual stated “among the people I know that at one point had COVID-19…”

For the third set of manipulations (gain vs. loss), we varied whether complying with the rules would result in a gain or loss. For example, on the topic of social distancing, the messages were “from this [self-quarantine], they would benefit the health and safety of those close to them…” (gain) versus “failing to do so [self-quarantine] risks spreading the virus among those close to them” (loss). See full text in Appendix A.

Our expectation was that, because of declining trust in experts, people would be more likely to be persuaded by nonexpert than expert sources.¹ We also expected this effect to be moderated by issue frame, where experience- and gain-framed messages from nonexpert sources would be most persuasive.

Participants were recruited from the Lucid survey platform on May 20, 2020. On this day, 4,722 participants were recruited, although 36 were eliminated for various reasons (see Appendix E), leaving 4,686 participants (participant demographics by group are in Appendix B). Participants were randomly assigned to one of nine experimental conditions (eight treatments and a control group). After answering pretreatment questions, participants read four vignettes from the same condition. Again, the conditions varied whether the source was an expert or nonexpert, whether the information was fact- or experience-based, and whether it was framed as a gain or loss (see Table 1), and the four vignettes discussed within-home social distancing, prisoner release programs, restarting college football, and mail-in ballots.

Table 1 Assignment of Treatment Groups
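The nine-cell design above – a 2 × 2 × 2 factorial of source, evidence type, and valence, plus a control – can be sketched as follows. The condition labels are illustrative stand-ins, not the authors’ exact coding.

```python
import random

# The three manipulated factors described above; labels are illustrative.
FACTORS = {
    "source": ("expert", "nonexpert"),
    "evidence": ("fact", "experience"),
    "valence": ("gain", "loss"),
}

def make_conditions():
    """Enumerate the eight treatment cells plus the control group."""
    cells = [("control",)]
    for source in FACTORS["source"]:
        for evidence in FACTORS["evidence"]:
            for valence in FACTORS["valence"]:
                cells.append((source, evidence, valence))
    return cells

def assign(participant_ids, seed=42):
    """Randomly assign each participant to one of the nine conditions.

    Every vignette a participant sees comes from the same condition,
    mirroring the between-subjects design described in the text.
    """
    rng = random.Random(seed)
    conditions = make_conditions()
    return {pid: rng.choice(conditions) for pid in participant_ids}
```

Because each respondent reads all four vignettes within a single assigned condition, the design is between-subjects across cells but within-subjects across topics.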

The dependent variable was the incorporation of the information received, measured after each vignette by asking participants their views on the issue. Participants gave their responses on a seven-category Likert scale from strongly disagree to strongly agree (“How much do you agree with the following statement?”) in response to four statements: (1) “People should isolate in a separate room and stay away from family and other residents within the household if a family member suspects they were exposed to coronavirus or develops symptoms” (within-home social distancing); (2) “All states should expand prisoner release programs to release nonviolent offenders from prisons and other detention centers during the COVID-19 crisis” (prisoner release programs); (3) “The NCAA should decide to postpone this season’s kickoff for college football due to the COVID-19 crisis” (restarting college football); and (4) “All states and local governments should continue to have in-person rather than mail-in ballots for elections” (mail-in ballots).

To ensure attention to the vignettes, participants had to spend at least 15 seconds on each vignette before they were allowed to proceed and were also given a manipulation check (see Appendix A). After this, participants were asked about their levels of trust in various institutions and their personal experience with COVID-19. They were then debriefed and paid.

We note that in vignettes 2 and 3 (prisoner release programs and restarting college football), an error was made and “Dr.” remained in the name of both expert and nonexpert sources (although the descriptions demonstrated expertise and nonexpertise). We acknowledge that people may have considered the nonexpert as an expert because of this, thus biasing the results for these two vignettes (but leaving the other two vignettes valid). We thus remove these two vignettes from the analysis of expert versus nonexpert source cues.

Results

As shown in Figures 1 and 2, the differences in mean responses between treatment groups are minimal. We find no significant effect of any of the eight treatments as compared to the control group for any of the four vignettes (see Appendix C: t-tests). Even when accounting for potential moderators (including partisanship), we find that neither source cue nor frame influenced people’s responses to questions about COVID-19. These null findings remain robust to nonparametric bootstrapping, lowering the manipulation check standards, accounting for low attention, and pooling treatments to increase statistical power (see Appendix C for robustness checks).

Figure 1 COVID-19 vignette response distributions by treatment condition.

Notes: Subjects were asked how much they agree with a policy statement corresponding to each vignette. Response options ranged from strongly disagree to strongly agree. Points represent the mean response and intervals represent the 95% confidence intervals.

Figure 2 Difference in mean responses between test conditions.

Notes: This figure provides the difference in mean policy agreement between the two conditions indicated in the y-axis label. Each condition is associated with the four treatment groups labeled in Table 1. 95% confidence intervals are derived from nonparametric bootstrapping. Vignettes on releasing detainees and NCAA football are excluded from the expert versus nonexpert analysis due to mistakenly adding a “Dr.” prefix for the nonexpert in the text of these vignettes.
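The per-vignette comparisons of each treatment group against the control amount, in outline, to two-sample t-tests on the 7-point responses. A minimal sketch with simulated Likert data (not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 7-point Likert responses: stand-ins for one vignette's
# control group and one treatment group (1 through 7 inclusive).
control = rng.integers(1, 8, size=500)
treatment = rng.integers(1, 8, size=500)

# Welch's t-test, which does not assume equal variances across groups.
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With no true treatment effect, such tests return nonsignificant p-values, matching the pattern in Figures 1 and 2.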

Note that: (1) our multiple comparisons increase the chance of a Type I error – detecting an effect where there is none – although we find zero effects; and (2) our experiment has the power (0.9) to detect a small effect size (Cohen’s d = 0.2) in each group, so the possibility of a Type II error – failing to detect a treatment effect where there is one – is quite low. An equivalence test using the “two one-sided t-tests” (TOST) procedure yields significant results at the 95% confidence level for all inter-group and inter-treatment comparisons (Δ_L = −0.25, Δ_U = 0.25), suggesting that the true difference in means lies within these narrow equivalence bounds (Lakens, Scheel, and Isager 2018). Thus, we cannot reject the null hypothesis that source cues and issue frames do not influence people’s responses to questions about COVID-19.
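The TOST logic can be sketched as two one-sided Welch t-tests against the equivalence bounds Δ = ±0.25; this is a generic illustration, not the authors’ code.

```python
import numpy as np
from scipy import stats

def tost_ind(x1, x2, low, upp):
    """Two one-sided t-tests (TOST) for equivalence of independent means.

    A small returned p-value supports the claim that the true mean
    difference lies inside the interval [low, upp].
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    diff = x1.mean() - x2.mean()
    # Welch standard error and degrees of freedom
    v1, v2 = x1.var(ddof=1) / n1, x2.var(ddof=1) / n2
    se = np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    p_lower = stats.t.sf((diff - low) / se, df)   # H0: diff <= low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H0: diff >= upp
    return diff, max(p_lower, p_upper)

# Simulated groups with no true difference: TOST should support equivalence.
rng = np.random.default_rng(3)
g1 = rng.integers(1, 8, size=5000)
g2 = rng.integers(1, 8, size=5000)
diff, p = tost_ind(g1, g2, -0.25, 0.25)
```

Rejecting both one-sided nulls (overall p below α) is what licenses the claim that any true difference is smaller than the ±0.25 bound.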

Bayes factor analysis – comparing expert to nonexpert source cues, fact to experience framing, gain to loss framing, and all treatments to the control group, each against the null of no difference – also generally supports the null hypotheses. At the default prior, a Cauchy distribution with scale parameter γ set to 1, BF01 contrasting all treatments with the control is 22.5 for the self-quarantine vignette and between 7.7 and 12.1 for the remaining three vignettes. Bayes factors quantify how much the data shift the relative odds of the null versus the alternative hypothesis, given the prior (see Appendix C for factors graphed over a range of priors).

Support for the null through Bayes factors is even stronger for contrasts between treatment conditions. Across the four vignettes, BF01 for the expert versus nonexpert contrast ranges from 25.1 to 39.0. For the fact versus experience contrast, BF01 ranges from 21.7 to 37.3. Lastly, for the gain versus loss contrast, BF01 ranges from 18.0 to 39.1. Conventional language for describing the strength of evidence indicates a “positive” to “substantial” level of support for the null when comparing treatments to control and a “strong” to “very strong” level of support for the null when comparing relevant treatment conditions to each other (Jarosz and Wiley 2014). See Appendix C for regression model comparisons between the Figure 3 model and the nested models removing each independent variable.
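As a rough illustration of what BF01 measures, a standard BIC-based approximation to the Bayes factor compares a null model with one shared mean against an alternative with separate group means. This is a crude stand-in for the Cauchy-prior computation reported above, not the authors’ method, and will not reproduce their numbers.

```python
import numpy as np

def bic(y, fitted, k):
    """BIC for a Gaussian model with k estimated mean parameters."""
    n = len(y)
    rss = np.sum((y - fitted) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def bf01_bic(y_control, y_treat):
    """Approximate BF01 (evidence for the null of equal means) via
    exp((BIC_alt - BIC_null) / 2); a rough BIC-based stand-in for the
    Cauchy-prior Bayes factors reported in the text."""
    y = np.concatenate([y_control, y_treat])
    null_fit = np.full(len(y), y.mean())                  # one shared mean
    alt_fit = np.concatenate([
        np.full(len(y_control), np.mean(y_control)),      # separate means
        np.full(len(y_treat), np.mean(y_treat)),
    ])
    return np.exp((bic(y, alt_fit, 2) - bic(y, null_fit, 1)) / 2)

# With simulated equal-mean groups, BF01 > 1: evidence favors the null.
rng = np.random.default_rng(4)
bf = bf01_bic(rng.integers(1, 8, 2000), rng.integers(1, 8, 2000))
```

BF01 > 1 means the data are more probable under the null than under the alternative; values in the 10–40 range, like those reported, constitute substantial-to-strong support for the null.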

Figure 3 Responses to COVID-19 vignettes by observed variables.

Notes: This figure displays OLS regression standardized coefficient estimates with respect to the dependent variable (agreement with the respective policy statement). Confidence intervals are derived from nonparametric bootstrapping with 95% confidence. Appendix C, Table 6 displays the corresponding regression table and variable descriptions.

Lastly, while none of the manipulated variables meaningfully influenced participant responses, some of the observed variables – ideology, media consumption, and age – explain much of the variation in participants’ responses (see Figure 3).
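The Figure 3 estimates – standardized OLS coefficients with bootstrapped 95% intervals – follow a recipe like the one below. The predictors here are simulated stand-ins for ideology, media consumption, and age, not the survey data.

```python
import numpy as np

def std_ols(X, y):
    """OLS on z-scored variables, so coefficients are standardized betas."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])
    return np.linalg.lstsq(design, yz, rcond=None)[0][1:]  # drop intercept

rng = np.random.default_rng(5)
n = 1000
X = rng.normal(size=(n, 3))                # stand-ins for three predictors
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n)  # third: no effect

# Nonparametric bootstrap: resample rows with replacement, refit, and take
# the 2.5th and 97.5th percentiles as the 95% interval.
draws = np.array([std_ols(X[idx], y[idx])
                  for idx in (rng.integers(0, n, n) for _ in range(500))])
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
betas = std_ols(X, y)
```

An interval that excludes zero (here, the first two predictors) marks a variable that explains variation in responses, as ideology, media consumption, and age do in Figure 3.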

Discussion

We thus find no evidence that source cues or issue frames influence people’s responses to questions about COVID-19. However, we believe these null results do not negate the ample research on source cues and issue frames. In what follows, we address some design decisions that could have driven the null results, as well as what we believe to be the more likely culprits: issue salience and political polarization.

First, it is possible that a more extreme difference between the expert and nonexpert source cues, experience versus fact, and/or gain versus loss would have led to treatment effects. The robust nulls that we find, however, suggest that even if a stronger treatment had some effect, the effect would be quite minimal.

Second, it is also possible that social desirability weakened potential treatment effects – the dire circumstances of a pandemic may have led to convergence on socially desirable norms, thus limiting variation in our dependent variable and undermining our ability to detect other patterns (Daoust et al. 2020). We did anticipate this, however, and thus collected preliminary data with a separate sample asking questions about COVID-19. We then reworded questions with low variance to limit social desirability effects (e.g., our questions ask about others’ behavior rather than one’s own).

Further, the high variation in our ultimate survey responses suggests social desirability did not drive the null findings. Except for vignette 1, the distribution of responses for all other vignettes has negative excess kurtosis (< −1) and low skewness (between −0.5 and 0.5), suggesting that the distributions are symmetric and flatter (platykurtic) than the normal distribution. Vignette 1 has a high peak (with most participants either strongly agreeing or agreeing with self-quarantine) but still has a relatively high standard deviation (μ = 2.32, σ = 1.57). It is still possible, of course, that social desirability muted potential effects, but, again, our robust null findings make us doubtful that social desirability is the whole story here.
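The distributional checks described above (skewness and Fisher excess kurtosis) can be run with scipy. The responses here are simulated flat 7-point data which, like the reported distributions, come out symmetric and platykurtic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Simulated flat 7-point Likert responses, a stand-in for one vignette.
responses = rng.integers(1, 8, size=4000)

skewness = stats.skew(responses)
excess_kurtosis = stats.kurtosis(responses)  # Fisher: normal distribution = 0

# A flat response distribution is symmetric (skewness near 0) and
# platykurtic (excess kurtosis near -1.25, i.e. below -1).
print(f"skew = {skewness:.2f}, excess kurtosis = {excess_kurtosis:.2f}")
```

Negative excess kurtosis under Fisher’s definition signals a distribution flatter than the normal, which is why wide, flat response distributions indicate substantial disagreement rather than convergence on a single socially desirable answer.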

Instead, the data – which show the strong influence of ideology and media consumption on views toward COVID-19 – suggest to us that the issue salience of, and political dynamics surrounding, the pandemic drove our null findings. Indeed, research suggests that salient issues are more ridden with directional goals (Jerit and Barabas 2012; Slothuus and de Vreese 2010), and although we attempted to choose less salient issues within the crisis, COVID-19 itself – a global pandemic – was quite salient in May 2020.

Further, elites and media outlets were quick to adopt positions regarding COVID-19, creating a political dimension to the crisis that may have encouraged the public to fall on political lines. Trump’s quick downplaying of the dangers of COVID-19 – whether out of a motivation to avert panic or something less noble – undoubtedly held extraordinary weight for his supporters, who (like the rest of the public) had little information about the novel virus. Fox News indeed reinforced this idea that fears of the virus were overblown, something rarely seen in the liberal news media.

The COVID-19 pandemic has, by nature, created fear and uncertainty, so it is unsurprising if it has encouraged people to listen to the sources they trust – for some, those sources may be health experts, but for others they may be a preferred media outlet and/or political leaders. This choice of which sources to trust has likely been influential in how the pandemic has played out and how political it has become: indeed, we see that viewership of two politically opposed media outlets (Fox News and NBC News) largely guided views toward COVID-19. This fits with research suggesting that when elites disagree with expert opinion, public opinion on the disputed issue will be divided along political lines (Darmofal 2005).

Thus, it is possible that had our study been run in a less polarized environment (either a different country or earlier in the crisis), source cues and issue frames could have mattered. The US remains a relative anomaly in the developed world, with political leaders politicizing health care recommendations and actively spreading misinformation. Deploying our study in environments with less affective polarization and misinformation, such as Northwestern European countries (Reiljan 2020), or those with declining levels of affective polarization, like Germany or Norway (Boxwell et al. 2020), could have uncovered treatment effects.

Similarly, perhaps running our study earlier in the crisis would have led to different findings. Keep in mind, however, that our study was run merely two months after the first US COVID-19 case announcement, suggesting attitudes crystallized quite rapidly.² The fact that the American public formed polarized attitudes so quickly in this health crisis highlights the strength of ideological cleavages in the US, as well as the frightening possibility that future advice from public health experts may be ineffective – even during a crisis that threatens lives, the economy, and our way of life. It suggests that not even a pandemic can take politics off the table.

An inability to communicate useful information to the public is always concerning – but in this case it has particularly grave consequences, leading to higher rates of COVID-19 infection, hospitalization, and mortality. Our findings thus leave us doubtful that health communication will be effective at overcoming prior media and elite cues, at least in the US. The politicization of this health crisis may be impossible to undo – at least in the current information environment.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2021.3

Footnotes

This research was funded by an internal grant from the University of South Carolina (Proposal #135700-20-54109). Our hypotheses are pre-registered within the original grant proposal (see Appendix D). The data and code required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi: 10.7910/DVN/0Q1F1U. There were no conflicts of interest for the authors in conducting this research.

1 This prediction is detailed in the original grant proposal and the preregistered report can be found in Appendix D.

2 Crystallized attitudes toward COVID-19 would help to explain our null findings, as it is possible that participants were already pretreated with cues and frames in the real world, diminishing the effect of cues and frames within the experiment (see Druckman and Leeper 2012).

References

Archer, Kristjan, and Ron-Levey, Ilana. 2020. “Trust in Government Lacking on COVID-19’s Frontlines.” Gallup, March 20.
Botero, Sandra, Castro Cornejo, Rodrigo, Gamboa, Laura, Pavao, Nara, and Nickerson, David W. 2015. Says Who? An Experiment on Allegations of Corruption and Credibility of Sources. Political Research Quarterly 68(3): 493–504.
Boxwell, Levi, Gentzkow, Matthew, and Shapiro, Jesse M. 2020. Cross-Country Trends in Affective Polarization. National Bureau of Economic Research (No. w26669).
Braverman, Julia. 2008. Testimonials Versus Informational Persuasive Messages: The Moderating Effect of Delivery Mode and Personal Involvement. Communication Research 35(5): 666–694.
Bullock, John G., Gerber, Alan, Hill, Seth, and Huber, Gregory. 2015. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10(4): 519–578.
Bursztyn, Leonardo, Rao, Aakaash, Roth, Christopher, and Yanagizawa-Drott, David. 2020. Misinformation During a Pandemic. National Bureau of Economic Research (No. 27417).
Case, Chandler, Eddy, Christopher, Hemrajani, Rahul, Howell, Christopher, Lyons, Daniel, Sung, Yu-Hsien, and Connors, Elizabeth C. 2020. “Replication Data for: The Effects of Source Cues and Issue Frames During COVID-19,” https://doi.org/10.7910/DVN/0Q1F1U, Harvard Dataverse.
Daoust, Jean-François, Nadeau, Richard, Dassonneville, Ruth, Lachapelle, Erick, Bélanger, Éric, Savoie, Justin, and van der Linden, Clifton. 2020. How to Survey Citizens’ Compliance with COVID-19 Public Health Measures: Evidence from Three Survey Experiments. Journal of Experimental Political Science 1–8.
Darmofal, David. 2005. Elite Cues and Citizen Disagreement with Expert Opinion. Political Research Quarterly 58(3): 381–395.
Druckman, James N., and Leeper, Thomas J. 2012. Learning More from Political Communication Experiments: Pretreatment and Its Effects. American Journal of Political Science 56(4): 875–896.
Druckman, James N., and McGrath, Mary C. 2019. The Evidence for Motivated Reasoning in Climate Change Preference Formation. Nature Climate Change 9(2): 111–119.
Flynn, D. J., Nyhan, Brendan, and Reifler, Jason. 2017. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics. Political Psychology 38: 127–150.
Hamilton, Lawrence C., Hartter, Joel, and Saito, Kei. 2015. Trust in Scientists on Climate Change and Vaccines. Sage Open 5(3): 1–13.
Hartman, Todd K., and Weber, Christopher R. 2009. Who Said What? The Effects of Source Cues in Issue Frames. Political Behavior 31(4): 537.
Hlavac, Marek. 2018. stargazer: Well-Formatted Regression and Summary Statistics Tables. R package version 5.2.2. https://CRAN.R-project.org/package=stargazer.
Jamieson, Kathleen Hall, and Albarracin, Dolores. 2020. The Relation between Media Consumption and Misinformation at the Outset of the SARS-CoV-2 Pandemic in the US. The Harvard Kennedy School (HKS) Misinformation Review.
Jarosz, Andrew F., and Wiley, Jennifer. 2014. What Are the Odds? A Practical Guide to Computing and Reporting Bayes Factors. The Journal of Problem Solving 7: 2–9.
Jerit, Jennifer, and Barabas, Jason. 2012. Partisan Perceptual Bias and the Information Environment. The Journal of Politics 74(3): 672–684.
Lakens, Daniël, Scheel, Anne M., and Isager, Peder M. 2018. Equivalence Testing for Psychological Research: A Tutorial. Advances in Methods and Practices in Psychological Science 1: 259–269.
Merkley, Eric. 2020. Anti-Intellectualism, Populism, and Motivated Resistance to Expert Consensus. Public Opinion Quarterly 84(1): 24–48.
Mondak, Jeffery J. 1993. Public Opinion and Heuristic Processing of Source Cues. Political Behavior 15(2): 167–192.
Motta, Matthew. 2018. The Dynamics and Political Implications of Anti-Intellectualism in the United States. American Politics Research 46(3): 465–498.
Reiljan, Andres. 2020. “Fear and Loathing Across Party Lines” (Also) in Europe: Affective Polarisation in European Party Systems. European Journal of Political Research 59(2): 376–396.
Rothman, Alexander J., Bartels, Roger D., Wlaschin, Jhon, and Salovey, Peter. 2006. The Strategic Use of Gain- and Loss-Framed Messages to Promote Healthy Behavior: How Theory Can Inform Practice. Journal of Communication 56: S202–S220.
Slothuus, Rune, and de Vreese, Claes H. 2010. Political Parties, Motivated Reasoning, and Issue Framing Effects. The Journal of Politics 72(3): 630–645.
Taber, Charles S., and Lodge, Milton. 2006. Motivated Skepticism in the Evaluation of Political Beliefs. American Journal of Political Science 50(3): 755–769.
Thon, Franziska M., and Jucks, Regina. 2017. Believing in Expertise: How Authors’ Credentials and Language Use Influence the Credibility of Online Health Information. Health Communication 32(7): 828–836.