Survey research is a method commonly used to understand what members of a population think, feel, and do. This chapter uses the total survey error perspective and the fitness for use perspective to explore how biasing and variable errors occur in surveys. Coverage error and sample frames, nonprobability samples and web panels, sampling error, nonresponse rates and nonresponse bias, and sources of measurement error are discussed. Different pretesting methods and modes of data collection commonly used in surveys are described. The chapter concludes that survey research is a tool that social psychologists may use to improve the generalizability of studies, to evaluate how different populations react to different experimental conditions, and to understand patterns in outcomes that may vary over time, place, or people.
As survey experiments have become increasingly common in political science, some scholars have questioned whether inferences about the real world can be drawn from experiments involving hypothetical, text-based scenarios. In response to this criticism, some researchers recommend using realistic, context-heavy vignettes, while others argue that abstract vignettes do not generate substantially different results. We contribute to this debate by evaluating whether incorporating contextually realistic graphics into survey experiment vignettes affects experimental outcomes. We field three original experiments that vary whether respondents are shown a realistic graphic or a plain-text description of an international crisis. In our experiments, varying whether respondents see realistic graphics or plain-text descriptions generally yields little difference in outcomes. Our findings have implications for survey methodology and experiments in political science: researchers may not need to invest the time to develop contextually realistic graphics when designing experiments.
This paper surveys what we have learned about financial literacy and its relation to financial behavior from data collected in the Dutch Central Bank (DNB) Household Survey, a project run in collaboration with academics. A pioneering survey fielded in 2005 included an extensive set of financial literacy questions, as well as questions that can serve as instruments for financial literacy in regression analyses to assess the causal effect of financial literacy on behavior. We describe how this survey spurred a series of research papers demonstrating the crucial role of financial literacy in stock market participation, retirement planning, and wealth accumulation. These papers inspired various follow-up studies and experiments based on new data collections in the DNB Household Survey. Researchers worldwide have used these data for innovative studies, and other surveys have included similar questions. This case study exemplifies the essential role of data in empirical research, showing how innovative data collections can inspire new research initiatives and significantly contribute to our understanding of household financial decision-making.
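Since the identification strategy described above rests on instrumenting financial literacy, a minimal two-stage least squares (2SLS) sketch may help fix ideas. Everything below is illustrative: the variable names and the instrument (exposure to economics education) are assumptions made for the example, not the DNB survey's actual fields.

```python
# A minimal 2SLS sketch on simulated data. Variable names and the
# instrument are hypothetical placeholders, not DNB survey fields.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000

# The instrument shifts literacy but affects the outcome only
# through literacy (the exclusion restriction).
econ_educ = rng.binomial(1, 0.5, n)              # e.g., economics education in school
ability = rng.normal(size=n)                     # unobserved confounder
fin_literacy = 0.8 * econ_educ + 0.5 * ability + rng.normal(size=n)
stock_market = 0.4 * fin_literacy + 0.6 * ability + rng.normal(size=n)

# Naive OLS is biased upward by the confounder.
ols = sm.OLS(stock_market, sm.add_constant(fin_literacy)).fit()

# Stage 1: regress the endogenous regressor on the instrument.
stage1 = sm.OLS(fin_literacy, sm.add_constant(econ_educ)).fit()
# Stage 2: regress the outcome on the stage-1 fitted values.
# (Second-stage standard errors need correction when 2SLS is done
# by hand; dedicated IV routines handle this automatically.)
stage2 = sm.OLS(stock_market, sm.add_constant(stage1.fittedvalues)).fit()

print(f"OLS estimate:  {ols.params[1]:.2f}")
print(f"2SLS estimate: {stage2.params[1]:.2f}  (true effect 0.4)")
```

In this simulation the naive OLS coefficient is pulled well above the true effect of 0.4 by the unobserved confounder, while the 2SLS estimate recovers it, which is the logic behind using instruments to assess the causal effect of literacy on behavior.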
Scholars often use monetary incentives to boost participation rates in online surveys. This technique follows existing literature from Western countries, which suggests egoistic incentives effectively boost survey participation. Positing that incentives’ effectiveness varies by country context, we tested this proposition through an experiment in Australia, India, and the USA. We compared three types of monetary lotteries to narrative and altruistic appeals. We find that egoistic rewards are most effective in the USA and, to some extent, in Australia. In India, respondents are just as responsive to altruistic incentives as to egoistic incentives. Results from an adapted dictator game corroborate these patterns. Our results caution scholars against exporting survey participation incentives to areas where they have not been tested.
Scholars and policymakers warn that with rising affective polarization, politicians will find support from the public and permission from military professionals to use military force to selectively crack down on political opponents. We test these claims by conducting parallel survey experiments among the US public and mid-career military officers. We ask about two hypothetical scenarios of domestic partisan unrest, randomly assigning the partisan identity of protesters. Surprisingly, we find widespread public support for deploying the military and no significant partisanship effects. Military officers, by contrast, are highly resistant to deploying the military, with nearly 75 percent opposed in every scenario. In short, there is little evidence that public polarization threatens to escalate domestic disputes, and strong evidence of military opposition.
What are the consequences of including a “don't know” (DK) response option in attitudinal survey questions? Existing research, based on traditional survey modes, argues that it reduces the effective sample size without improving the quality of responses. We contend that it can have important effects not only on estimates of aggregate public opinion, but also on estimates of opinion differences between subgroups of the population with different levels of political information. Through a pre-registered online survey experiment conducted in the United States, we find that the DK response option has consequences for opinion estimates in the present day, when most organizations rely on online panels, but mainly for respondents with low levels of political information and on low-salience issues. These findings imply that the exclusion of a DK option can matter, with implications for assessments of preference differences and our understanding of their impacts on politics and policy.
Among the greatest challenges facing scholars of public opinion are the potential biases associated with survey item nonresponse and preference falsification. This difficulty has led researchers to use nonresponse rates to gauge the degree of preference falsification across regimes. This article examines whether survey nonresponse rates can serve as a proxy for preference falsification. We conduct a simulation analysis of the expression of preferences under varying degrees of repression to test the viability of using nonresponse rates to regime assessment questions as such a proxy. The simulation demonstrates that nonresponse rates to regime assessment questions, and indices based on them, are not viable proxies for preference falsification. An empirical examination of survey data supports the results of the simulation analysis.
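To make the simulation logic concrete, here is a stripped-down sketch of one possible data-generating process. It assumes, purely for illustration, that repression pushes dissatisfied respondents mostly into falsification and only marginally into nonresponse; the paper's actual simulation is richer, and all parameter values here are invented.

```python
# Stylized sketch: under repression, a dissatisfied respondent either
# answers honestly, falsifies (reports support), or declines to answer.
# Parameter values are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(1)

def simulate(repression, n=10_000, p_dissatisfied=0.5):
    dissatisfied = rng.random(n) < p_dissatisfied
    # Assume falsification absorbs most of the repression-driven shift
    # away from honesty, and nonresponse absorbs only a sliver.
    u = rng.random(n)
    falsify = dissatisfied & (u < 0.8 * repression)
    nonresponse = dissatisfied & (u >= 0.8 * repression) & (u < 0.9 * repression)
    honest = dissatisfied & ~falsify & ~nonresponse
    return falsify.mean(), nonresponse.mean(), honest.mean()

for repression in (0.1, 0.5, 0.9):
    f, nr, honest = simulate(repression)
    print(f"repression={repression:.1f}: falsification={f:.2f}, "
          f"nonresponse={nr:.2f}, honest opposition={honest:.2f}")
```

Under this assumed process, falsification rises steeply with repression while the nonresponse rate barely moves, so the observable nonresponse rate badly understates the unobservable falsification it is supposed to proxy.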
When surveyed, clear majorities express concern about inequality and view the government as responsible for addressing it. Scholars often interpret this view as popular support for redistribution. We question this interpretation, contending that many people have little grasp of what reducing inequality actually entails, and that this disconnect masks important variation in preferences over concrete policies. Using original survey and experimental US data, we provide systematic evidence in line with these conjectures. Furthermore, when asked about more concrete redistributive measures, support for government action changes significantly and aligns more closely with people's self-interest. These findings have implications for how egalitarian policies can be effectively communicated to the public, as well as methodological implications for the study of preferences on redistribution.
Most public opinion research in China uses direct questions to measure support for the Chinese Communist Party (CCP) and government policies. These direct question surveys routinely find that over 90 per cent of Chinese citizens support the government. From this, scholars conclude that the CCP enjoys genuine legitimacy. In this paper, we present results from two survey experiments in contemporary China that make clear that citizens conceal their opposition to the CCP for fear of repression. When respondents are asked directly, we find, like other scholars, approval ratings for the CCP that exceed 90 per cent. When respondents are asked in the form of list experiments, which confer a greater sense of anonymity, CCP support hovers between 50 per cent and 70 per cent. This represents an upper bound, however, since list experiments may not fully mitigate incentives for preference falsification. The list experiments also suggest that fear of government repression discourages some 40 per cent of Chinese citizens from participating in anti-regime protests. Most broadly, this paper suggests that scholars should stop using direct question surveys to measure political opinions in China.
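The list-experiment logic relied on above can be illustrated with the standard difference-in-means estimator: control respondents report how many of J innocuous items apply to them, treatment respondents see the same list plus the sensitive item, and the difference in mean counts estimates the sensitive item's prevalence without anyone revealing an individual answer. The sketch below uses simulated data with illustrative numbers, not the paper's.

```python
# Minimal list-experiment (item count) estimator on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 1_000
true_support = 0.6                      # latent share holding the sensitive attitude

baseline = rng.binomial(3, 0.5, 2 * n)  # counts over J=3 innocuous items
sensitive = rng.binomial(1, true_support, 2 * n)

# Randomly assign half the sample to see the longer (J+1 item) list.
treat = np.zeros(2 * n, dtype=bool)
treat[rng.choice(2 * n, n, replace=False)] = True

counts = baseline + np.where(treat, sensitive, 0)

# Difference in mean counts estimates prevalence of the sensitive item.
estimate = counts[treat].mean() - counts[~treat].mean()
se = np.sqrt(counts[treat].var(ddof=1) / n + counts[~treat].var(ddof=1) / n)
print(f"estimated prevalence: {estimate:.2f} ± {1.96 * se:.2f}")
```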
Observations of conservation practices at granular temporal and spatial scales are essential for identifying changes in production systems that improve soil health and water quality, and for informing long-term agricultural research and adaptive policy development. In this study, we demonstrate an innovative use of farmer practice survey data, showing what can be uniquely learned from a detailed survey that targets specific farm groups with a regional focus over multiple consecutive years. Using three years of survey data (n = 3914 respondents), we describe prevailing crop rotation, tillage, and cover crop practices in four Midwestern US states. Consistent with national metrics, the results confirm the dominant practices across the landscape: corn-soybean rotation, little use of continuous no-till, and limited use of cover crops. Our detailed regional survey further reveals state-level differences in no-till and cover crop adoption rates that are not captured in federal datasets. For example, 66% of sampled acreage in the Midwest is in corn-soybean rotation, with Illinois having the highest rate (72%) and Michigan the lowest (41%). In 2018, 20% of corn acreage and 38% of soybean acreage were in no-till, and 13% of corn acres and 9% of soybean acres were planted with a cover crop. Cover crop adoption rates fluctuate from year to year. The results demonstrate the value of a multi-year, state-scale farmer survey in complementing federal statistics and in monitoring state and yearly differences in practice adoption. Agricultural policy and industry depend heavily on accurate and timely information that reflects these spatial and temporal dynamics. We recommend building an agricultural information exchange, and a workforce, that integrates diverse data sources with complementary strengths, providing both baseline data on prevailing practices and a greater understanding of agricultural management.
Scholars, pundits, and politicians use opinion surveys to study citizen beliefs about political facts, such as the current unemployment rate, and more conspiratorial beliefs, such as whether Barack Obama was born abroad. Many studies, however, ignore acquiescence-response bias: the tendency for survey respondents to endorse any assertion made in a survey question, regardless of content. Fielding new surveys with questions drawn from recent scholarship, we show that acquiescence bias inflates the estimated incidence of conspiratorial beliefs and political misperceptions in the United States and China by up to 50%. Acquiescence bias is disproportionately prevalent among more ideological respondents, inflating correlations between political ideology, such as conservatism, and endorsement of conspiracies or misperception of facts. We propose and demonstrate two methods to correct for acquiescence bias.
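One standard family of corrections, not necessarily the two the authors propose, randomizes the keying of a claim and combines responses to oppositely worded versions. The sketch below assumes a deliberately simple response model in which acquiescers agree with any statement and everyone else answers truthfully; under that assumption, both the acquiescence rate and a de-inflated belief estimate can be backed out.

```python
# Illustrative correction under an assumed response model: acquiescers
# agree with any statement; all other respondents answer truthfully.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
believes = rng.random(n) < 0.25      # true belief in the misperception
acquiesce = rng.random(n) < 0.20     # agrees regardless of content

forward = rng.random(n) < 0.5        # half see the original wording,
                                     # half see the reversed wording
agree_fwd = acquiesce | (~acquiesce & believes)    # "X was born abroad"
agree_rev = acquiesce | (~acquiesce & ~believes)   # "X was born here"

naive = agree_fwd[forward].mean()                # inflated by acquiescence
disagree_rev = 1 - agree_rev[~forward].mean()    # believers who don't acquiesce

acq_hat = naive - disagree_rev                   # estimated acquiescence rate
corrected = disagree_rev / (1 - acq_hat)         # de-inflated belief estimate
print(f"naive: {naive:.2f}  corrected: {corrected:.2f}  (true 0.25)")
```

Here the naive "agree" rate of roughly 0.40 overstates the true 0.25 belief rate by the sort of margin the abstract reports, and the keying-balanced estimator recovers the truth under the stated assumptions.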
Rising costs and challenges of in-person interviewing have prompted major surveys to consider moving online and conducting live web-based video interviews. In this paper, we evaluate video mode effects using a two-wave experimental design in which respondents were randomized to either an interviewer-administered video or interviewer-administered in-person survey wave after completing a self-administered online survey wave. This design permits testing of both within- and between-subject differences across survey modes. Our findings suggest that video interviewing is more comparable to in-person interviewing than to online interviewing across multiple measures of satisficing, social desirability, and respondent satisfaction.
A rich literature documents the effects of survey interviewer race on respondents’ answers to questions about political issues and factual knowledge. In this paper, we advance the study of interviewer effects in two ways. First, we examine the impact of race on interviewers’ subjective evaluations of respondents’ political knowledge. Second, we replace measures of respondent/interviewer racial self-identification with interviewer perceptions of respondent skin tone. We find that white interviewers rate black respondents’ knowledge lower than black interviewers do, even after controlling for objective knowledge measures. Moreover, we identify a negative relationship between relative skin tone and interviewers’ assessments of knowledge. Subsequent analyses show a linear relationship between subjective knowledge assessments and the difference between respondent and interviewer skin tone. We conclude with a discussion of the impact of colorism on survey administration and the measurement of political attitudes and democratic capabilities.
Our exploration of the 4D Framework uses an eclectic set of methodological techniques; Chapter 3 is an overview of the methodological core of our inquiry. We explain the key operationalizations of the 4D Framework and provide context and details for the studies that appear in multiple chapters throughout the book. We specifically describe how we measure contextual features of a discussion, such as disagreement, as well as the scales we use to measure individual dispositions, such as social anxiety. We explain the utility of psychophysiological data for our purposes and describe the research design details for the studies that use psychophysiological data. We provide details on our survey samples from which several analyses throughout the book are derived.
Is terrorism effective as a tool of political influence? In particular, do terrorists succeed in affecting their targets’ attitudes, and how long does the effect last? Existing research, unfortunately, is either limited to small samples or fails to address two main difficulties: endogeneity and the inability to assess the duration of the effect. Here, we first exploit the fact that whether an attack succeeds or fails is exogenous to the selection process, using it as an identification mechanism. Second, we take advantage of the random allocation of survey respondents to interview times to estimate how long the impact of terrorist events on attitudes lasts. Using survey data from 30 European democracies between 2002 and 2017, we find first that terrorism affects people's reported life satisfaction and happiness, a proxy for the cost of terrorism in terms of utility. However, we also find that terrorist attacks do not affect respondents’ attitudes toward their government, institutions, or immigrants. This suggests that terrorism is ineffective at translating discontent into political pressure. Importantly, we also find that all effects disappear in less than two weeks.
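The timing-based identification can be sketched simply: with interview dates as good as randomly assigned relative to attack dates, respondents interviewed just before an attack serve as the counterfactual for those interviewed just after, and binning post-attack interviews by elapsed time traces the decay of the effect. The data-generating process below, including the roughly two-week decay, is an assumption built in for illustration, not the paper's estimates.

```python
# Stylized sketch: pre-attack interviews are the comparison group;
# weekly bins after the attack trace how fast the effect fades.
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
day = rng.integers(-30, 31, n)           # interview day relative to the attack
post = day >= 0

# Build in a negative life-satisfaction shock that decays over ~2 weeks.
shock = -0.5 * np.exp(-np.clip(day, 0, None) / 5) * post
life_sat = 7.0 + shock + rng.normal(0, 1.5, n)

baseline = life_sat[day < 0].mean()
for week in range(4):
    in_week = (day >= 7 * week) & (day < 7 * (week + 1))
    print(f"week {week + 1} after attack: effect = "
          f"{life_sat[in_week].mean() - baseline:+.2f}")
```

In this simulation the first-week gap is large and the later weekly bins are statistically indistinguishable from the pre-attack baseline, mirroring the kind of rapid decay the abstract reports.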
As the US faced its lowest levels of reported trust in government, the COVID-19 crisis revealed the essential service that various federal agencies provide as sources of information. This Element explores variations in trust across levels of government and government agencies based on a nationally representative survey conducted in March 2020. First, it examines trust in agencies including the Department of Health and Human Services, state health departments, and local health care providers, including variation across key characteristics such as party identification, age, and race. Second, the Element explores the evolution of trust in health-related organizations throughout 2020 as the pandemic continued. The Element concludes with a discussion of the implications for agency-specific assessments of trust and their importance as we address historically low levels of trust in government. This title is also available as Open Access on Cambridge Core.
How can we elicit honest responses in surveys? Conjoint analysis has become a popular tool to address social desirability bias (SDB), or systematic survey misreporting on sensitive topics. However, there has been no direct evidence showing its suitability for this purpose. We propose a novel experimental design to identify conjoint analysis’s ability to mitigate SDB. Specifically, we compare a standard, fully randomized conjoint design against a partially randomized design where only the sensitive attribute is varied between the two profiles in each task. We also include a control condition to remove confounding due to the increased attention to the varying attribute under the partially randomized design. We implement this empirical strategy in two studies on attitudes about environmental conservation and preferences about congressional candidates. In both studies, our estimates indicate that the fully randomized conjoint design could reduce SDB for the average marginal component effect (AMCE) of the sensitive attribute by about two-thirds of the AMCE itself. Although encouraging, we caution that our results are exploratory and exhibit some sensitivity to alternative model specifications, suggesting the need for additional confirmatory evidence based on the proposed design.
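For readers less familiar with the quantity being compared above, the AMCE in a fully randomized conjoint can be estimated by regressing the profile-choice indicator on the randomized attribute levels. The sketch below simulates two binary attributes with made-up effect sizes; it illustrates the estimand itself, not the paper's partially randomized design or control condition.

```python
# Minimal AMCE sketch on simulated conjoint profiles. Attribute names
# and effect sizes are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 5_000  # profiles

# Independently randomized binary attributes.
conservation = rng.binomial(1, 0.5, n)   # the "sensitive" attribute
experience = rng.binomial(1, 0.5, n)     # a filler attribute

# Choice probability shifts linearly with each attribute
# (true AMCEs of +0.10 and +0.06).
p_choose = 0.5 + 0.10 * (conservation - 0.5) + 0.06 * (experience - 0.5)
chosen = rng.binomial(1, p_choose)

# With full randomization, OLS on attribute dummies recovers the AMCEs.
# (In real conjoint data, cluster standard errors by respondent.)
X = sm.add_constant(np.column_stack([conservation, experience]))
fit = sm.OLS(chosen, X).fit(cov_type="HC2")
print(f"AMCE, conservation: {fit.params[1]:+.3f}  (true +0.100)")
print(f"AMCE, experience:   {fit.params[2]:+.3f}  (true +0.060)")
```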
Models for converting expert-coded data to estimates of latent concepts assume different data-generating processes (DGPs). In this paper, we simulate ecologically valid data according to different assumptions, and examine the degree to which common methods for aggregating expert-coded data (1) recover true values and (2) construct appropriate coverage intervals. We find that the mean and both hierarchical Aldrich–McKelvey (A–M) scaling and hierarchical item-response theory (IRT) models perform similarly when expert error is low; the hierarchical latent variable models (A–M and IRT) outperform the mean when expert error is high. Hierarchical A–M and IRT models generally perform similarly, although IRT models are often more likely to include true values within their coverage intervals. The median and non-hierarchical latent variable models perform poorly under most assumed DGPs.
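The simulation's core comparison can be miniaturized: generate true latent values, let each expert code them with an idiosyncratic offset plus noise (one simple assumed DGP among the several the paper considers), and score each aggregation rule on recovery. The hierarchical A–M and IRT models themselves are beyond a short sketch; only the mean and median are compared here.

```python
# Stripped-down aggregation comparison under one assumed DGP:
# expert code = truth + expert-specific offset + noise.
import numpy as np

rng = np.random.default_rng(6)
n_items, n_experts = 200, 5

def rmse(estimate, truth):
    return np.sqrt(np.mean((estimate - truth) ** 2))

truth = rng.normal(size=n_items)

for error_sd in (0.2, 1.0):
    offsets = rng.normal(0, 0.5, n_experts)            # expert-specific bias
    noise = rng.normal(0, error_sd, (n_items, n_experts))
    codes = truth[:, None] + offsets[None, :] + noise
    print(f"error sd={error_sd}: "
          f"mean RMSE={rmse(codes.mean(axis=1), truth):.2f}, "
          f"median RMSE={rmse(np.median(codes, axis=1), truth):.2f}")
```

Swapping in other assumed DGPs (for example, experts who also stretch or compress the scale) and adding latent variable models to the loop is how a fuller version of this comparison would proceed.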
We examine citizens' evaluations of majoritarian and proportional electoral outcomes through an innovative experimental design. We ask respondents to react to six possible electoral outcomes during the 2019 Canadian federal election campaign. There are two treatments: the performance of the party and the proportionality of electoral outcomes. There are three performance conditions: the preferred party's vote share corresponds to vote intentions as reported in the polls at the time of the survey (the reference), or it gets 6 percentage points more (fewer) votes. There are two electoral outcome conditions: disproportional and proportional. We find that proportional outcomes are slightly preferred and that these preferences are partly conditional on partisan considerations. In the end, however, people focus on the ultimate outcome, that is, who is likely to form the government. People are happy when their party has a plurality of seats and is therefore likely to form the government, and relatively unhappy otherwise. We end with a discussion of the merits and limits of our research design.
Ensuring the integrity of democratic elections, both in the United States and abroad, is an important problem. In this Element, we present a data-driven approach for evaluating the performance of the administration of a democratic election before, during, and after Election Day. We show that this data-driven method can help improve confidence in the integrity of American elections.