Survey research is a method commonly used to understand what members of a population think, feel, and do. This chapter uses the total survey error perspective and the fitness for use perspective to explore how biasing and variable errors occur in surveys. Coverage error and sample frames, nonprobability samples and web panels, sampling error, nonresponse rates and nonresponse bias, and sources of measurement error are discussed. Different pretesting methods and modes of data collection commonly used in surveys are described. The chapter concludes that survey research is a tool that social psychologists may use to improve the generalizability of studies, to evaluate how different populations react to different experimental conditions, and to understand patterns in outcomes that may vary over time, place, or people.
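For reference, the distinction the total survey error perspective draws between biasing and variable errors can be summarized by the standard mean squared error decomposition (our addition; the chapter's abstract does not spell it out). For an estimator \(\hat\theta\) of a population quantity \(\theta\),

\mathrm{MSE}(\hat\theta) \;=\; \mathbb{E}\!\left[(\hat\theta - \theta)^2\right] \;=\; \underbrace{\left(\mathbb{E}[\hat\theta] - \theta\right)^2}_{\text{squared bias (systematic error)}} \;+\; \underbrace{\mathrm{Var}(\hat\theta)}_{\text{variance (variable error)}}

In the total survey error framework, each source the chapter discusses (coverage, sampling, nonresponse, measurement) can contribute to either term.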
This is a brief conclusion arguing that the direction forward is clear, even if the path is not. The time for assuming away problems is past. We should begin with a paradigm that reflects all the ways that polling can go wrong and then identify, model, and measure all the sources of bias, not just the ones that are easy to fix. Much work remains to be done, though, as these new models and data sources will require considerable theoretical, empirical, and practical evaluation and development. The payoff will be that survey researchers will be able to remain true to their aspiration of using information about a small number of people to understand the realities of many people, even as it gets harder to hear from anyone, let alone from the random samples that our previous theory relied on.
This chapter illustrates how to use randomized response treatments to assess possible nonresponse bias. It focuses on a 2019 survey and shows how nonignorable nonresponse may have deflated Trump support in the Midwest and among Democrats even as it inflated Trump support among Republicans. We also show that Democrats who responded to the poll were much more liberal on race than Democrats who did not respond, a pattern that was particularly strong among White Democrats and absent among non-White Democrats. Section 12.1 describes a survey design with a randomized response instrument. Section 12.2 discusses nonignorable nonresponse bias for turnout questions. Section 12.3 examines presidential support, revealing regional and partisan differences in nonignorable nonresponse. Section 12.4 examines race, focusing on partisan and racial differences in nonignorable nonresponse. Section 12.5 assesses nonignorable nonresponse on climate, taxes, and tariffs.
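To make the mechanism concrete, here is a minimal, hypothetical simulation (not the chapter's randomized response design or its data) of nonignorable nonresponse: when willingness to respond depends on the answer itself, the respondent mean is biased no matter how large the sample.

# Illustrative sketch only: all quantities below are invented for exposition.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical binary outcome: 1 = supports the candidate, 0 = does not.
support = rng.binomial(1, 0.5, size=n)

# Nonignorable mechanism: supporters are less likely to answer the survey,
# so response depends on the outcome being measured.
p_respond = np.where(support == 1, 0.3, 0.6)
responded = rng.binomial(1, p_respond).astype(bool)

print(f"true support:       {support.mean():.3f}")          # about 0.500
print(f"respondent support: {support[responded].mean():.3f}")  # about 0.333, deflated

A randomized response instrument of the kind the chapter describes supplies the design leverage needed to detect and estimate this sort of bias, which respondent data alone cannot identify without additional assumptions.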
We elaborate a general workflow for weighting-based survey inference, decomposing it into two main tasks. The first is the estimation of population targets from one or more sources of auxiliary information. The second is the construction of weights that calibrate the survey sample to those population targets. We emphasize that both tasks are predicated on models of the measurement, sampling, and nonresponse processes whose assumptions cannot be fully tested. After describing this workflow in abstract terms, we show in detail how it can be applied to the analysis of historical and contemporary opinion polls. We also discuss extensions of the basic workflow, particularly inference for causal quantities and multilevel regression and poststratification.
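As one concrete instance of the second task, the sketch below implements raking (iterative proportional fitting), a common calibration method; the sample, variable names, and margin targets are all hypothetical, and a production analysis would typically rely on a dedicated survey package rather than hand-rolled code.

import numpy as np

def rake(sample, margins, n_iter=50, tol=1e-10):
    """Calibrate unit weights so weighted margins match population targets.

    sample:  dict of variable name -> integer-coded array, one entry per unit
    margins: dict of variable name -> list of target proportions, indexed by code
    """
    n = len(next(iter(sample.values())))
    w = np.ones(n) / n
    for _ in range(n_iter):
        max_adj = 0.0
        for var, target in margins.items():
            codes = sample[var]
            for level, share in enumerate(target):
                mask = codes == level
                current = w[mask].sum()
                if current > 0:
                    factor = share / current  # rescale this level to its target
                    w[mask] *= factor
                    max_adj = max(max_adj, abs(factor - 1.0))
        if max_adj < tol:  # stop once no margin needs adjusting
            break
    return w / w.sum()

# Hypothetical example: the sample over-represents level 0 on both margins.
rng = np.random.default_rng(1)
sample = {
    "educ": rng.choice(2, size=1000, p=[0.7, 0.3]),
    "age":  rng.choice(2, size=1000, p=[0.6, 0.4]),
}
margins = {"educ": [0.5, 0.5], "age": [0.5, 0.5]}  # population targets
w = rake(sample, margins)
print(round((w * (sample["educ"] == 1)).sum(), 3))  # ~0.5 after calibration
print(round((w * (sample["age"] == 1)).sum(), 3))   # ~0.5 after calibration

Raking requires only the marginal targets; when joint targets are available, poststratification on the full cross-classification (as in multilevel regression and poststratification) is an alternative.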
Ex ante analyses of agricultural practices often examine stated preference data, yet response behavior is frequently disregarded as a potential source of bias. We use survey data to estimate producers’ willingness to rent public land for rotational grazing in Wisconsin and combine it with information on nonrespondents to control for nonresponse and avidity effects. Previous experience with managed grazing and rental decisions influenced who responded as well as their rental intentions. These effects do not produce discernible bias here but still encourage attention to this possibility in other ex ante contexts. We also relate land rental determinants and willingness-to-pay estimates to grazing initiatives.