
The accuracy of gist: Rethinking public awareness of attitude change

Published online by Cambridge University Press:  03 December 2025

Irina Vartanova
Affiliation:
Institute for Futures Studies , Sweden
Kimmo Eriksson*
Affiliation:
Mälardalen University , Sweden
Pontus Strimling
Affiliation:
Institute for Futures Studies , Sweden
*
Corresponding author: Kimmo Eriksson; Email: jdm.kimmo.eriksson@gmail.com

Abstract

Does the public accurately perceive how views change in society? Prevailing narratives suggest not, but we argue this conclusion stems from searching for the wrong kind of accuracy—demanding pollster-like precision instead of acknowledging the public’s robust perception of the ‘gist’ of change. Re-analyzing three large studies (total N = 2,236), we show that collective perceptions of change are remarkably consistent across different measurement methods (r > 0.90) and, critically, are highly aligned with actual historical data (r > 0.70). This collective wisdom is underpinned by a robust, individual-level ability to perceive the direction and relative force of these shifts. Moreover, there is a clear pattern to the minority of attitudes for which perceptions of change were inaccurate. We conclude that the public possesses a robust gist-based judgment that accurately tracks how various political attitudes have changed.

Information

Type
Empirical Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Society for Judgment and Decision Making and European Association for Decision Making

1. Introduction

A prevailing narrative across psychology, political science, and communication research suggests that the public is largely unaware of its own social world, beset by cognitive and social biases that distort perception. This perspective is grounded in theories like the ‘spiral of silence’ (Noelle-Neumann, 1974) and extensive research on false consensus and pluralistic ignorance (Luzsa and Mayr, 2021; Sargent and Newman, 2021). Recent work continues to reinforce this view, showing how factors like social conflict, algorithmically curated news feeds, and differences in political awareness systematically distort perceptions across numerous domains (Cooper et al., 2025; Dixon et al., 2024; Luzsa and Mayr, 2021; Sparkman et al., 2022).

This pessimistic view has profound implications across disciplines. In political science, it raises questions about the quality of democratic deliberation and the public’s capacity for informed self-governance. In communication research, it suggests that media consumption may fundamentally disconnect people from social reality. In psychology, it reinforces models of human cognition as fundamentally biased and error-prone when processing social information. In line with this view, it was recently claimed that people have ‘little idea’ how political attitudes have changed over time¹ (Mastroianni and Dana, 2022).

Here, we challenge the view that people cannot accurately judge what social change has transpired. It is true that people do not have pollster-like precision—but this standard is difficult to meet due to basic properties of numeric cognition (Guay et al., 2025). However, foundational work on real-world quantitative estimation argues that people’s judgments draw on two distinct types of information: an often-poor understanding of ‘metric’ properties (absolute values) and a surprisingly robust ‘mapping knowledge’ of relative differences and ranks (Brown and Siegler, 1993). In line with this alternative view, we argue that people accurately perceive the ‘gist’—the direction and relative force—of social change. This reframes the fundamental question: rather than asking whether a person can accurately recall a specific point estimate, we should ask whether they correctly perceive the gist of the change. For instance, a person might inaccurately guess that 50% of people approved of spanking in 1986 (a point estimate), while correctly perceiving the gist that this approval has declined substantially over the past several decades.

This theoretical reframing draws on multiple converging lines of research on judgment under uncertainty. Fuzzy-trace theory posits that human cognition excels at detecting meaningful, gist-based patterns rather than storing precise details (Reyna, 2012), a distinction validated across domains from medical decision-making to legal reasoning (Reyna et al., 2015, 2022). Constructivist theories of memory emphasize that recall is schematic and meaning-based rather than verbatim (Alba and Hasher, 1983). From a Bayesian perspective, individuals need not retain precise polling data to estimate attitude shifts; instead, they integrate noisy cues from media, discourse, and personal experience to probabilistically infer change (Griffiths et al., 2008).

To investigate the hypothesis of accurate gist perception, we re-examine data from three large studies that measured people’s estimates of attitude change using different methods. Two studies asked participants for precise percentage estimates for different years (Mastroianni and Dana, 2022; Vartanova et al., 2021), while a third asked for qualitative, gist-based estimates (Strimling et al., 2019). All studies covered a large number of attitudes for which polling data provide estimates of actual change, offering a unique opportunity to evaluate the accuracy of social perception across measurement approaches.

We propose that even when asked for precise percentages, people engage in a gist-to-precision heuristic. They likely anchor on their perception of the present-day attitude and then adjust this figure based on their gist-based perception of historical change. This cognitive model implies that estimates derived from precise percentage methods should strongly correlate with estimates from gist-of-change methods, as they both tap into the same underlying representation. It also implies that if people are adept at perceiving the gist of change, their estimates should be quite accurate when compared with actual polling data.
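To make the implications of this cognitive model concrete, the following minimal simulation sketches the gist-to-precision heuristic on entirely synthetic data. The noise levels and parameter values are our own illustrative assumptions, not estimates from any of the three studies; the point is simply that anchoring on a present-day figure and adjusting it by a noisy gist of change yields percentage-based change estimates that are nearly perfectly correlated with the underlying gist and strongly correlated with actual change, as both implications above require.

```python
# Minimal sketch of the gist-to-precision heuristic (synthetic data only;
# all parameter values are arbitrary assumptions for illustration).
import numpy as np

rng = np.random.default_rng(0)

n_items = 60
true_now = rng.uniform(20, 80, n_items)      # current support (%)
true_change = rng.normal(0, 8, n_items)      # actual change, pp per decade
decades = 4                                  # horizon, e.g., "40 years ago"

# Gist: direction and rough magnitude of change, perceived with noise
perceived_gist = true_change + rng.normal(0, 3, n_items)

# Gist-to-precision: anchor on a (noisy) estimate of present-day support,
# then back out the earlier percentage from the perceived gist.
anchor_now = np.clip(true_now + rng.normal(0, 10, n_items), 0, 100)
estimate_then = np.clip(anchor_now - perceived_gist * decades, 0, 100)

# "Precise" change estimate implied by the two percentages, per decade
derived_change = (anchor_now - estimate_then) / decades

print(np.corrcoef(derived_change, perceived_gist)[0, 1])  # near 1: same representation
print(np.corrcoef(derived_change, true_change)[0, 1])     # high: gist tracks reality
```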

2. Materials and methods

2.1. Study design and data sources

We conducted a secondary analysis of 3 previously published studies that measured public perceptions of attitude change across large sets of social issues. These studies employed different methodological approaches to measuring perceived change, allowing us to test whether gist-based accuracy emerges across measurement methods. Our analyses include all attitude items (51, 98, and 74, respectively) for which both perceived and actual change data were available in the original studies; no items were selectively excluded from our analysis.

Mastroianni and Dana (2022): Data were obtained from the authors’ publicly accessible OSF repository (https://osf.io/wud9a/). We pooled data from their Study 1 and Study 2, collected in February and November 2022, respectively. The final sample consisted of 1,445 participants recruited via Prolific (mean age = 43.3 years, SD = 15.1; 52.0% female, 0.3% other gender). Participants estimated attitude change across 51 different social issues (see Supplementary Table S1).

Vartanova et al. (2021): These data were collected specifically for, but not published in, the original paper. The sample consisted of 568 participants recruited via Amazon Mechanical Turk in November–December 2018 (mean age = 39.2, SD = 12.0; 59.2% female). Participants estimated attitude change across 98 social issues (see Supplementary Table S2).

Strimling et al. (2019): Data were obtained from the authors’ public GitHub repository (https://github.com/irinavrt/moralopinion). The sample consisted of 223 participants recruited via Amazon Mechanical Turk in December 2016–January 2017 (mean age = 36.0, SD = 11.2; 58.7% female). Participants estimated attitude change across 74 social issues (see Supplementary Table S2).

2.2. Measures of perceived attitude change

Precise percentage estimates (Mastroianni and Dana, 2022; Vartanova et al., 2021): In the Mastroianni and Dana (2022) study, participants were presented with actual polling questions and asked to estimate the percentage of Americans who gave particular responses in specific years decades apart. For example: ‘Every few years, a research organization has asked a nationally representative group of American adults the following question: “Do you strongly agree, agree, disagree, or strongly disagree that it is sometimes necessary to discipline a child with a good, hard spanking?” What percent of people do you think responded strongly agree OR agree in each of the following years?’ Participants then provided estimates for 1986 and 2018. See Supplementary Table S1 for aggregated estimates.

The Vartanova et al. (2021) study used a similar approach, asking participants: ‘What percentage of people do you think would respond yes to the issue?’ and ‘What percentage of people do you think would respond yes to the issue 40 years ago?’ See Supplementary Table S2 for aggregated estimates.

Gist-based estimates (Strimling et al., 2019): Participants were asked: ‘How do you think public opinion on this issue has changed during the last 40 years?’ Response options were: ‘People have become much more likely to answer no (support has gone down by 15 percentage points or more)’ [coded −2], ‘People have become slightly more likely to answer no (support has gone down by less than 15 percentage points)’ [coded −1], ‘Public opinion has not changed’ [coded 0], ‘People have become slightly more likely to answer yes (support has gone up by less than 15 percentage points)’ [coded 1], and ‘People have become much more likely to answer yes (support has gone up by 15 percentage points or more)’ [coded 2]. See Supplementary Table S2 for aggregated estimates.
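As an illustration of this coding scheme, the snippet below maps shortened versions of the verbal response options onto the −2 to +2 codes and averages across respondents to obtain a collective gist estimate per item. The table layout, item labels, and column names are hypothetical placeholders, not the original variables.

```python
import pandas as pd

# Hypothetical mapping of (abbreviated) response options to the -2..+2 codes
gist_codes = {
    "much more likely to answer no": -2,
    "slightly more likely to answer no": -1,
    "public opinion has not changed": 0,
    "slightly more likely to answer yes": 1,
    "much more likely to answer yes": 2,
}

# Toy long-format responses (participant x item); values are illustrative
responses = pd.DataFrame({
    "participant": [1, 1, 2, 2],
    "item": ["spanking", "same_sex_marriage", "spanking", "same_sex_marriage"],
    "response": [
        "much more likely to answer no",
        "much more likely to answer yes",
        "slightly more likely to answer no",
        "much more likely to answer yes",
    ],
})

responses["gist"] = responses["response"].map(gist_codes)
# Collective gist estimate per item = mean of the coded responses
print(responses.groupby("item")["gist"].mean())
```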

2.3. Actual attitude change

We obtained estimates of actual attitude change from established polling organizations. For Mastroianni and Dana (2022), actual change data came from the General Social Survey (GSS), American National Election Studies, Pew Research Center, and Gallup, as compiled by the original authors. For Vartanova et al. (2021) and Strimling et al. (2019), we calculated change rates using GSS data. For questions with more than two response options, we collapsed categories (e.g., ‘Strongly agree’ and ‘Agree’ into ‘Support’) to create binary measures. We estimated change rates as percentage points per decade by fitting linear regressions of population-level attitudes on time (see Supplementary Tables S1 and S2).
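The sketch below illustrates this computation on toy data. It is not the original analysis code, and the data layout (a binary ‘supports’ indicator per respondent and survey year, after collapsing categories) is an assumption for illustration.

```python
import numpy as np
import pandas as pd

def change_per_decade(df: pd.DataFrame) -> float:
    """Slope of a linear regression of yearly percent support on survey year,
    rescaled to percentage points per decade."""
    yearly = (
        df.groupby("year")["supports"]   # binary 0/1 after collapsing categories
          .mean()
          .mul(100)                      # population-level percent support per year
          .reset_index()
    )
    slope_per_year = np.polyfit(yearly["year"], yearly["supports"], 1)[0]
    return slope_per_year * 10

# Toy data: support drifting upward by roughly 1 point per year, 1986-2018
rng = np.random.default_rng(1)
years = np.arange(1986, 2019)
toy = pd.DataFrame({
    "year": np.repeat(years, 50),
    "supports": (rng.random(len(years) * 50)
                 < np.repeat(np.linspace(0.30, 0.63, len(years)), 50)).astype(int),
})
print(round(change_per_decade(toy), 1))  # close to 10 pp per decade
```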

We acknowledge that these polling data, while the best available ground truth, are themselves estimates subject to sampling and non-response errors. Measurement error in the criterion variable is expected to attenuate our observed correlations, meaning the true accuracy of people’s judgments may be even higher than reported here.

2.4. Statistical analysis

2.4.1. Primary analysis

We calculated perceived change for Mastroianni and Dana (2022) and Vartanova et al. (2021) as the simple difference between percentage estimates for the second and first time points, divided by the number of decades between them. For Strimling et al. (2019), we used the raw gist-based ratings on the −2 to +2 scale. We then computed Pearson correlations between perceived and actual change within each study, and between perceived change estimates across studies.
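A compact sketch of this primary analysis is shown below, using toy numbers and placeholder column names rather than the published data or code: perceived change per decade is computed per participant and item, aggregated to a collective estimate per item, and then correlated with hypothetical actual change rates.

```python
import pandas as pd
from scipy.stats import pearsonr

# One row per participant x item; columns and values are illustrative
est = pd.DataFrame({
    "item":     ["spanking", "spanking", "same_sex_marriage",
                 "same_sex_marriage", "stricter_gun_laws", "stricter_gun_laws"],
    "est_then": [70, 60, 10, 20, 55, 60],   # estimated % at the earlier time point
    "est_now":  [50, 45, 60, 70, 60, 58],   # estimated % at the later time point
    "decades":  [3.2] * 6,                  # e.g., 1986 to 2018
})
# Hypothetical actual change rates (pp per decade) derived from polling data
actual = pd.Series({"spanking": -5.0, "same_sex_marriage": 12.0,
                    "stricter_gun_laws": 1.0}, name="actual_change")

# Perceived change per decade, aggregated to a collective estimate per item
est["perceived_change"] = (est["est_now"] - est["est_then"]) / est["decades"]
collective = est.groupby("item")["perceived_change"].mean()

aligned = pd.concat([collective, actual], axis=1)
r, _ = pearsonr(aligned["perceived_change"], aligned["actual_change"])
print(round(r, 2))
```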

2.4.2. Individual-level analysis

To determine whether collective accuracy emerged from genuine individual understanding rather than statistical artifacts of aggregation, we conducted two additional analyses. First, we calculated directional accuracy: among participants who estimated nonzero change for a given attitude, we determined the percentage who correctly identified the direction of actual change. Second, we conducted individual slope analyses by regressing each participant’s perceived change estimates against actual change values across all attitude items, then examined the distribution of individual slope coefficients.
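The following sketch implements both individual-level analyses on toy data. The long-format layout, column names, and noise levels are assumptions for illustration rather than the original code; the at-least-5-items filter described in the next subsection is included as a parameter.

```python
import numpy as np
import pandas as pd

def directional_accuracy(df: pd.DataFrame) -> float:
    """Among nonzero estimates, share whose sign matches the actual change."""
    nonzero = df[df["perceived_change"] != 0]
    return (np.sign(nonzero["perceived_change"])
            == np.sign(nonzero["actual_change"])).mean()

def individual_slopes(df: pd.DataFrame, min_items: int = 5) -> pd.Series:
    """Per-participant OLS slope of perceived change on actual change,
    keeping only participants who estimated at least `min_items` items."""
    def slope(g: pd.DataFrame) -> float:
        if len(g) < min_items:
            return np.nan
        return np.polyfit(g["actual_change"], g["perceived_change"], 1)[0]
    return df.groupby("participant").apply(slope).dropna()

# Toy data: two participants rating six items with noisy but sensitive perceptions
rng = np.random.default_rng(2)
actual = np.tile([-5.0, 12.0, 1.0, -2.0, 6.0, 0.5], 2)
toy = pd.DataFrame({
    "participant": np.repeat([1, 2], 6),
    "actual_change": actual,
    "perceived_change": actual * 0.8 + rng.normal(0, 2, 12),
})

print(round(directional_accuracy(toy), 2))   # direction mostly correct
print((individual_slopes(toy) > 0).mean())   # share of positive slopes
```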

2.4.3. Sample size and exclusions

For individual slope analyses, we included only participants who provided estimates for at least 5 attitude items to ensure stable regression estimates, as slopes calculated on very few data points can be unreliable. This resulted in analysis samples of 1,445 (Mastroianni and Dana), 568 (Vartanova et al.), and 223 (Strimling et al.) participants for directional accuracy, and 1,445 (Mastroianni and Dana), 259 (Vartanova et al.), and 163 (Strimling et al.) participants for slope analyses.

2.5. Use of generative AI

We used a large language model (Gemini 2.5 Pro) to obtain suggestions about style and language.

3. Results

Our analysis of the 3 studies (Mastroianni and Dana, 2022; Strimling et al., 2019; Vartanova et al., 2021) confirms both theoretical implications of the gist-to-precision hypothesis. First, we found that collective estimates of attitude change were remarkably consistent across all 3 studies, with correlation coefficients ranging from 0.92 to 0.94. As shown in Figure 1, people’s estimates from precise percentage tasks are almost perfectly correlated with their qualitative estimates of the gist of change.

Figure 1 Perceptions of attitude change are consistent across studies using different methods. Precise percentage estimates of change from different studies (Mastroianni and Dana, 2022; Vartanova et al., 2021) are almost perfectly correlated with each other as well as with estimates of the gist of change (Strimling et al., 2019), where gist estimates are on a scale from −2 (large negative change) to 2 (large positive change). Labels provide examples of what the items are about.

While consistency suggests a common cognitive basis, the critical question is external validity. To address this, we compared perceived change from each study to actual change calculated from historical polling data. Despite differences in methods and attitudes covered, the correlation between perceived and actual change was consistently very strong: for Mastroianni and Dana (2022), r = 0.71; for Vartanova et al. (2021), r = 0.71; and for Strimling et al. (2019), r = 0.72. As Figure 2 illustrates, there are very few instances where the public perceived a strong trend in the opposite direction of the actual change. In short, people’s collective perceptions of attitude change are mostly accurate.

Figure 2 Perceptions of attitude change are mostly consistent with actual attitude change. Actual change in percentage points per decade plotted against perceived change, with dashed reference lines at zero highlighting that very few attitudes are perceived as having changed considerably in the opposite direction to the actual change. Results are very similar for 3 different studies: (A) Mastroianni and Dana (2022), using precise percentage estimates, (B) Vartanova et al. (2021), using precise percentage estimates, and (C) Strimling et al. (2019), using estimates of the gist of change. Labels provide examples of what the items are about.

Having established the high accuracy of collective perceptions, a critical question remains: does this collective wisdom emerge from genuine individual understanding? We conducted two new analyses to investigate this. First, we performed a directional accuracy analysis examining whether participants who estimated a nonzero change in the popularity of a given attitude correctly identified the direction of change (i.e., whether the attitude had become more widely accepted or less so). Averaging across all attitudes and all 3 datasets, a large majority (70%) of participants made a directionally correct judgment. See Figure 3 for results for each item.

Figure 3 Directional accuracy for each item. Each dot indicates for a given attitude its actual change rate (x-axis) and the proportion of participants who guessed the direction of change correctly among those who made a directional guess. Note that the directional accuracy is typically very high for attitudes where there has been sizable change.

Second, we conducted an individual slope analysis to determine how sensitive each person’s perceptions were to the actual magnitude of change. For each participant, we regressed their perceived change estimates against the actual change values across all attitude items. A positive slope means that the participant tended to recognize differences in change between items. Almost all individuals had positive slopes, whether we used data from Mastroianni and Dana (99%), Vartanova et al. (92%), or Strimling et al. (94%); see Figure 4 for the full distribution of slopes. These results indicate that most individuals’ perceptions are not random but are directionally and proportionally sensitive to reality. Thus, collective accuracy is built on a solid foundation of individual understanding.

Figure 4 Individual estimated change is predicted by actual change. Histograms of slopes when individual change estimates are regressed on actual change. In panels A and B, perfect sensitivity to actual differences in change between items would yield an individual slope of 1. Only individuals who estimated at least 5 items are included in panels B (n = 259) and C (n = 163). The mean slope is indicated in gray.

4. Discussion

The findings from our reanalysis of 3 datasets on people’s estimates of public attitude change robustly challenge the pessimistic narrative that pervades multiple disciplines regarding public awareness of social change. In line with prior research on the striking accuracy of collective estimates (Larrick et al., 2012; Simoiu et al., 2019), our primary analysis revealed a very strong correlation between actual change and collectively estimated change. Moreover, our individual-level analysis demonstrated that this correlation is not an artifact of aggregation but an emergent property of a competent populace. Most individuals correctly perceive the direction of societal change and are reasonably sensitive to its magnitude.

Our results suggest that public perception is more nuanced than a simple optimist/pessimist bias. If people were merely applying a general positive or negative filter to all social issues, they might correctly identify overall directional trends, but they would not be able to accurately distinguish which attitudes changed rapidly versus slowly. However, our results demonstrate that people not only perceive the correct directions of change but also accurately rank the relative magnitudes—correctly identifying which social attitudes have undergone dramatic shifts versus modest changes versus stability.

These findings align with fuzzy-trace theory, which posits that cognition excels at detecting meaningful, gist-based patterns rather than precise details (Reyna, 2012). Just as people tend to remember the gist of an event rather than its literal details (Alba and Hasher, 1983), they appear to track societal change in a way that captures essential directional and magnitude information.

4.1. Implications for judgment, decision-making, and methodology

Our findings have implications for models of bounded rationality in complex domains. Political theorists, for instance, have long debated citizen competence. Our results suggest that models based on a lack of precise factual knowledge may be flawed, as they miss the robust, gist-based understanding of social trends that people clearly possess. Our findings also inform debates about media effects and information processing. Rather than viewing the public as passive recipients of potentially distorting media messages, our results suggest people actively integrate information from multiple sources to construct accurate mental models of social change. This has implications for understanding how democratic societies process complex social information and adapt to changing circumstances.

Finally, our investigation highlights a critical metrological issue that spans empirical social science. While precision may be a relevant standard for a pollster, a more ecologically valid standard for lay social cognition is the accurate perception of relative change and directionality. By this standard, the public is remarkably well-attuned.

4.2. Limitations and future directions

A primary limitation of the current study is that while it demonstrates that people’s gist-based judgments are accurate, it cannot identify the mechanisms or information sources they use to form these judgments. This is a key avenue for future research. For example, are people tracking the volume of media coverage, the valence of personal conversations, or visible policy changes? Distinguishing between these sources is a critical next step to illuminate the cognitive mechanisms underlying this collective accuracy.

Second, while our re-analysis of 3 large studies provides robust evidence, all 3 samples were drawn from the United States. This necessarily limits the generalizability of our findings. A crucial next step is to investigate whether this capacity is contingent on specific societal factors, such as media freedom, political polarization, or cultural context. Cross-national research is needed to illuminate the boundary conditions of this collective wisdom.

Beyond these limitations, our findings on the exceptions to the general accuracy point to new lines of inquiry. While collective perception is largely accurate, a closer inspection of the outliers in Figure 2 reveals a patterned source of error: people tend to underestimate the magnitude of the very fastest changes. For example, while perceptions of change in attitudes toward gay marriage were in the correct positive direction, the actual change was even larger than the public collectively perceived (i.e., this item falls below the diagonal line in Figure 2A). But there were also a few issues, such as gun laws and climate change, where collective perception of attitude change diverged more qualitatively from reality. Given the epidemic of mass shootings and the reality of climate change, it seems logical to infer that support for gun laws and climate worry should have increased. This suggests a testable hypothesis: when direct social cues about opinion change are ambiguous, people may default to a form of normative reasoning, inferring the opinion change that should have occurred based on salient societal events. Future studies could experimentally test this proposed mechanism.

Finally, our findings on the public’s ability to perceive historical change should be distinguished from recent, important work highlighting the profound difficulty of accurately forecasting future societal trends (Hutcherson et al., 2023; The Forecasting Collaborative, 2023). The cognitive task of integrating accumulated historical signals to perceive what has already happened appears to be fundamentally different from the task of predicting what is yet to come. Our results suggest the public is surprisingly adept at the former.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/jdm.2025.10023.

Data availability statement

The data analyzed in this study and the analysis code are provided at OSF: https://osf.io/kdzyb/.

Acknowledgements

We are grateful to Adam Mastroianni for sharing the data on actual opinion change for the study by Mastroianni and Dana (2022).

Funding statement

This study was supported by a grant from the Knut and Alice Wallenberg Foundation (Grant No. 2022.0191).

Competing interest

The authors declare no competing interests.

Footnotes

1 Mastroianni and Dana (2022) came to this conclusion by asking participants to provide precise percentage estimates for 2 different time points (e.g., in 1986 and 2018). They then based their ‘little idea’ claim on an analysis of the size of the errors in change estimates; for example, participants’ estimated change was statistically different from the actual change for 49 of the 51 studied attitudes. Our article’s central argument is that this method tests for ‘pollster-like precision’, whereas a gist-based analysis of the correlation between perceived and actual change, or of individual directional accuracy, reveals a robust underlying understanding.

References

Alba, J. W., & Hasher, L. (1983). Is memory schematic? Psychological Bulletin, 93(2), 203–231. https://doi.org/10.1037/0033-2909.93.2.203
Brown, N. R., & Siegler, R. S. (1993). Metrics and mappings: A framework for understanding real-world quantitative estimation. Psychological Review, 100(3), 511–534. https://doi.org/10.1037/0033-295X.100.3.511
Cooper, C. H., Fahey, K., & Jones, R. (2025). Biased perceptions of public opinion don’t define echo chambers but reveal systematic differences in political awareness. PLOS ONE, 20, e0324507. https://doi.org/10.1371/journal.pone.0324507
Dixon, G. N., Lerner, B., & Bashian, S. (2024). Challenges to correcting pluralistic ignorance: False consensus effects, competing information environments, and anticipated social conflict. Human Communication Research, 50, 419–429. https://doi.org/10.1093/hcr/hqae001
Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. (2008). Bayesian models of cognition. In R. Sun (Ed.), The Cambridge handbook of computational psychology (pp. 59–100). Cambridge University Press.
Guay, B., Marghetis, T., Wong, C., & Landy, D. (2025). Quirks of cognition explain why we dramatically overestimate the size of minority groups. Proceedings of the National Academy of Sciences, 122(14), e2413064122. https://doi.org/10.1073/pnas.2413064122
Hutcherson, C. A., Sharpinskyi, K., Varnum, M. E. W., Rotella, A., Wormley, A. S., Tay, L., & Grossmann, I. (2023). On the accuracy, media representation, and public perception of psychological scientists’ judgments of societal change. American Psychologist, 78(8), 968–981. https://doi.org/10.1037/amp0001151
Larrick, R. P., Mannes, A. E., & Soll, J. B. (2012). The social psychology of the wisdom of crowds. In Social judgment and decision making (pp. 227–242). Psychology Press.
Luzsa, R., & Mayr, S. (2021). False consensus in the echo chamber: Exposure to favorably biased social media news feeds leads to increased perception of public support for own opinions. Cyberpsychology, 15(1), Article 3. https://doi.org/10.5817/CP2021-1-3
Mastroianni, A. M., & Dana, J. (2022). Widespread misperceptions of long-term attitude change. Proceedings of the National Academy of Sciences of the United States of America, 119(11), e2107260119. https://doi.org/10.1073/pnas.2107260119
Noelle-Neumann, E. (1974). The spiral of silence: A theory of public opinion. Journal of Communication, 24(2), 43–51. https://doi.org/10.1111/j.1460-2466.1974.tb00367.x
Reyna, V. F. (2012). A new intuitionism: Meaning, memory, and development in fuzzy-trace theory. Judgment and Decision Making, 7(3), 332–359. https://doi.org/10.1017/S1930297500002291
Reyna, V. F., Edelson, S., Hayes, B., & Garavito, D. (2022). Supporting health and medical decision making: Findings and insights from fuzzy-trace theory. Medical Decision Making, 42(6), 741–754. https://doi.org/10.1177/0272989X221105473
Reyna, V. F., Hans, V. P., Corbin, J. C., Yeh, R., Lin, K., & Royer, C. (2015). The gist of juries: Testing a model of damage award decision making. Psychology, Public Policy, and Law, 21(3), 280–294. https://doi.org/10.1037/law0000048
Sargent, R. H., & Newman, L. S. (2021). Pluralistic ignorance research in psychology: A scoping review of topic and method variation and directions for future research. Review of General Psychology, 25(2), 163–184. https://doi.org/10.1177/1089268021995168
Simoiu, C., Sumanth, C., Mysore, A., & Goel, S. (2019). Studying the “wisdom of crowds” at scale. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 7, pp. 171–179). https://doi.org/10.1609/hcomp.v7i1.5271
Sparkman, G., Geiger, N., & Weber, E. U. (2022). Americans experience a false social reality by underestimating popular climate policy support by nearly half. Nature Communications, 13, 4779. https://doi.org/10.1038/s41467-022-32412-y
Strimling, P., Vartanova, I., Jansson, F., & Eriksson, K. (2019). The connection between moral positions and moral arguments drives opinion change. Nature Human Behaviour, 3(9), 922–930. https://doi.org/10.1038/s41562-019-0647-x
The Forecasting Collaborative. (2023). Insights into the accuracy of social scientists’ forecasts of societal change. Nature Human Behaviour, 7, 484–501. https://doi.org/10.1038/s41562-022-01517-1
Vartanova, I., Eriksson, K., Hazin, I., & Strimling, P. (2021). Different populations agree on which moral arguments underlie which opinions. Frontiers in Psychology, 12, 648405. https://doi.org/10.3389/fpsyg.2021.648405