This chapter explores the design, development, and format of the Likert-type scales and response categories used in an online questionnaire for quantitative data collection in a recent empirical case study of the attitudes, challenges, and perceptions of first-year undergraduate students at an English Medium Instruction (EMI) university in Hong Kong. Questionnaires are among the most widely used methods for research in the social sciences and can be an important and valuable source of data, which can be converted into measures of the numerous variables being examined. A variety of rating scale formats and designs, with differing numbers and sequences of response categories, are used in survey research. While researchers are typically confronted with a surplus of design and format choices, there is often little research, and few guidelines or standards, directing them toward particular styles and formats. Based on the survey design and development for this recent EMI-related study, and drawing on the literature, this chapter reviews how such choices and decisions were made, how the Likert-type scales were designed, and how these decisions may have influenced the overall success of the data collection and analysis. The case studies in Chapters 7, 8, and 11 of this book also adopt Likert-type scales in their questionnaire design and can be read alongside this chapter to supplement it.
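To make concrete how such response categories feed into analysis, here is a minimal sketch of coding and reverse-scoring Likert-type responses; the 5-point agreement labels and item names are illustrative assumptions, not the study's instrument:

```python
import pandas as pd

# Map the 5-point agreement categories to integer codes (illustrative labels)
CODES = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

responses = pd.DataFrame({
    "emi_helps_learning": ["Agree", "Strongly agree", "Neutral"],
    "emi_causes_anxiety": ["Disagree", "Strongly disagree", "Agree"],  # negatively worded
})

coded = responses.replace(CODES).astype(int)
coded["emi_causes_anxiety"] = 6 - coded["emi_causes_anxiety"]   # reverse-score
coded["attitude"] = coded[["emi_helps_learning",
                           "emi_causes_anxiety"]].mean(axis=1)  # simple scale score
print(coded)
```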
Chapter 2 covers the basics of research design. It is written so that students without any research design experience or coursework can learn common research designs and so conduct the statistical analyses in the text. Hypothesis development with variable construction (dependent and independent variables) is covered and applied to experimental and non-experimental designs. Survey methods, including question construction and the implementation of surveys, are presented.
This chapter covers the concepts of error and bias and their application in practice using a total error framework. This includes a discussion of how to manage both sampling and non-sampling error, and covers ways to assess and address coverage bias, nonresponse bias, measurement error, and estimation error.
Biodiversity monitoring programmes should be designed with sufficient statistical power to detect population change. Here we evaluated the statistical power of monitoring to detect declines in the occupancy of forest birds on Christmas Island, Australia. We fitted zero-inflated binomial models to 3 years of repeat detection data (2011, 2013 and 2015) to estimate single-visit detection probabilities for four species of concern: the Christmas Island imperial pigeon Ducula whartoni, Christmas Island white-eye Zosterops natalis, Christmas Island thrush Turdus poliocephalus erythropleurus and Christmas Island emerald dove Chalcophaps indica natalis. We combined detection probabilities with maps of occupancy to simulate data collected over the next 10 years for alternative monitoring designs and for different declines in occupancy (10–50%). Specifically, we explored how the number of sites (60, 128, 300, 500), the interval between surveys (1–5 years), the number of repeat visits (2–4 visits) and the location of sites influenced power. Power was high (> 80%) for the imperial pigeon, white-eye and thrush for most scenarios, except for when only 60 sites were surveyed or a 10% decline in occupancy was simulated over 10 years. For the emerald dove, which is the rarest of the four species and has a patchy distribution, power was low in almost all scenarios tested. Prioritizing monitoring towards core habitat for this species only slightly improved power to detect declines. Our study demonstrates how data collected during the early stages of monitoring can be analysed in simulation tools to fine-tune future survey design decisions.
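To make the simulation logic concrete, here is a minimal sketch of this kind of power analysis, using a naive any-visit detection rule and a two-proportion test in place of the zero-inflated binomial occupancy models fitted in the study; all parameter values (occupancy, detection probability, decline) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_power(n_sites=128, n_visits=3, psi0=0.6, p_detect=0.4,
                   decline=0.3, n_sims=1000, z_crit=1.645):
    """Share of simulations in which a decline in occupancy is detected."""
    detected = 0
    for _ in range(n_sims):
        # True occupancy state at baseline and after the simulated decline
        z0 = rng.random(n_sites) < psi0
        z1 = rng.random(n_sites) < psi0 * (1 - decline)
        # A site is recorded as occupied if the species is seen on any visit
        d0 = z0 & (rng.random((n_visits, n_sites)) < p_detect).any(axis=0)
        d1 = z1 & (rng.random((n_visits, n_sites)) < p_detect).any(axis=0)
        # One-sided two-proportion z-test for a drop in observed occupancy
        pooled = (d0.sum() + d1.sum()) / (2 * n_sites)
        se = np.sqrt(2 * pooled * (1 - pooled) / n_sites)
        if se > 0 and (d0.mean() - d1.mean()) / se > z_crit:
            detected += 1
    return detected / n_sims

# Power for the default scenario; rerun with n_sites=60 or decline=0.1
# to see power fall, mirroring the pattern reported above.
print(simulate_power())
```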
A government's decision to communicate in a native tongue rather than a commonly used and understood but non-native language can prompt perception through an ethnically tinted lens. While native-language communication is commonplace and typically benign, we argue that conveying a threat posed by an outgroup in a native tongue can trigger dehumanizing attitudes. We conducted a pre-registered survey experiment focusing on attitudes toward Muslim and Chinese people in India to test our expectations. In our two-stage design, we randomly assigned respondents to a survey language (Hindi or English) and, after that, to threat-provoking or control conditions. While Muslims and China are associated with recent violence against India, the government has routinely portrayed only the former as threatening. Likely due to this divergence, Hindi language assignment alone triggers Muslim dehumanization. Indians' more innocuous views of Chinese are responsive to exogenously induced threat, particularly when conveyed in Hindi.
Chapter 6 takes stock of several insights that follow from the previous five chapters. One set of insights concerns expectations of a left-wing turn. Such expectations overlook the filtering role of fairness beliefs and fail to account for the 'redistribution to' facet of redistributive preferences. Once these blind spots are accounted for, there are few reasons to expect a systematic relationship between an increase in income inequality and demand for redistribution. Another set of insights speaks to mass attitudinal change: the argument presented in the previous chapters points to factors that have received limited attention in political economy, including fiscal stress, survey design, and long-term partisan dynamics. One factor, immigration-induced ethnic diversity, is conspicuous by its absence. Part of the disconnect between inequality and support for redistribution could be due to hostility to immigrants. This chapter concludes by proposing several amendments to this line of reasoning, which, jointly, explain why, in this book, immigration-induced diversity ultimately takes a back seat.
I-O psychologists often face the need to reduce the length of a data collection effort due to logistical constraints or data quality concerns. Standard practice in the field has been either to drop some measures from the planned data collection or to use short forms of instruments rather than full measures. Dropping measures is unappealing given the loss of potential information, and short forms often do not exist and have to be developed, which can be a time-consuming and expensive process. We advocate for an alternative approach to reduce the length of a survey or a test, namely to implement a planned missingness (PM) design in which each participant completes a random subset of items. We begin with a short introduction of PM designs, then summarize recent empirical findings that directly compare PM and short form approaches and suggest that they perform equivalently across a large number of conditions. We surveyed a sample of researchers and practitioners to investigate why PM has not been commonly used in I-O work and found that the underusage stems primarily from a lack of knowledge and understanding. Therefore, we provide a simple walkthrough of the implementation of PM designs and analysis of data with PM, as well as point to various resources and statistical software that are equipped for its use. Last, we prescribe a set of four conditions that would characterize a good opportunity to implement a PM design.
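As a concrete illustration, here is a minimal sketch of a PM assignment in which each respondent sees a random two-thirds of the items; the item counts and sampling fraction are illustrative assumptions, not a recommendation from the article:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_respondents, n_items, items_per_person = 500, 30, 20

# Simulate complete responses (5-point scale), then mask the unassigned items
complete = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)
data = complete.copy()
for i in range(n_respondents):
    dropped = rng.choice(n_items, size=n_items - items_per_person, replace=False)
    data[i, dropped] = np.nan  # items this respondent never sees

df = pd.DataFrame(data, columns=[f"item_{j + 1}" for j in range(n_items)])
# Because each respondent's missing items are chosen at random, the data are
# missing completely at random by design and can be analysed with
# full-information maximum likelihood or multiple imputation without bias.
print(df.isna().mean().round(2).head())  # each item missing ~1/3 of the time
```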
The Endangered giant pangolin Smutsia gigantea is rare and elusive across its Central African range. Because of its solitary and nocturnal nature, the species is difficult to study and so its ecology is little known. Pangolins are considered the most trafficked mammals in the world. Therefore, confirming presence accurately and monitoring trends in distribution and abundance are essential to inform and prioritize conservation efforts. Camera traps are popular tools for surveying rare and cryptic species. However, non-targeted camera-trap surveys yield low camera-trapping rates for pangolins. Here we use camera-trap data from surveys conducted within three protected areas in Uganda to test whether targeted placement of cameras improves giant pangolin detection probability in occupancy models. The results indicate that giant pangolin detection probability is highest when camera traps are targeted on burrows. The median number of days from camera deployment to first giant pangolin detection event was 12, with the majority of events captured within 32 days from deployment. The median interval between giant pangolin events at a camera-trap site was 33 days. We demonstrate that camera-trap surveys can be designed to improve the detection of giant pangolins and we outline a set of recommendations to maximize the effectiveness of efforts to survey and monitor the species.
In this note, we provide direct evidence of cheating in online assessments of political knowledge. We combine survey responses with web tracking data of a German and a US online panel to assess whether people turn to external sources for answers. We observe item-level prevalence rates of cheating that range from 0 to 12 percent depending on question type and difficulty, and find that 23 percent of respondents engage in cheating at least once across waves. In the US panel, which employed a commitment pledge, we observe cheating behavior among less than 1 percent of respondents. We find robust respondent- and item-level characteristics associated with cheating. However, item-level instances of cheating are rare events; as such, they are difficult to predict and correct for without tracking data. Even so, our analyses comparing naive and cheating-corrected measures of political knowledge provide evidence that cheating does not substantially distort inferences.
Question effects are important when designing and interpreting surveys. Question responses are influenced by preceding questions through ordering effects. Identity Theory is employed to explain why some ordering effects exist. A conceptual model predicts respondents will display identity inertia, where the identity cued in one question will be expressed in subsequent questions regardless of whether those questions cue that identity. Less identity inertia is found than habitual inertia, whereby respondents tend to give answers similar to those they gave to previous questions. The magnitude of both inertias is small, suggesting they are only minor obstacles to survey design.
Empirical studies have the potential to both inform and transform cyber peace research. Empirical research can shed light on opaque phenomena, summarize and synthesize diverse stakeholder perspectives, and allow causal inferences about the impact of policymaking efforts. However, researchers embarking on empirical projects in the area of cyber peace generally, and cybersecurity specifically, face significant challenges – particularly related to data collection. In this chapter, we identify some of the key impediments to empirical cyber research and suggest how researchers and other interested stakeholders can overcome these barriers.
To reduce strategic misreporting on sensitive topics, survey researchers increasingly use list experiments rather than direct questions. However, the complexity of list experiments may increase nonstrategic misreporting. We provide the first empirical assessment of this trade-off between strategic and nonstrategic misreporting. We field list experiments on election turnout in two different countries, collecting measures of respondents’ true turnout. We detail and apply a partition validation method which uses true scores to distinguish true and false positives and negatives for list experiments, thus allowing detection of nonstrategic reporting errors. For both list experiments, partition validation reveals nonstrategic misreporting that is: undetected by standard diagnostics or validation; greater than assumed in extant simulation studies; and severe enough that direct turnout questions subject to strategic misreporting exhibit lower overall reporting error. We discuss how our results can inform the choice between list experiment and direct question for other topics and survey contexts.
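The logic of partition validation can be sketched as follows, using simulated data in place of the validated turnout measures; the prevalence, endorsement, and misreporting rates are invented for illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 4000
true_turnout = rng.random(n) < 0.6            # validated true scores
treat = rng.random(n) < 0.5                   # J + 1 list vs J-item control list
control_items = rng.binomial(3, 0.5, n)       # count of J = 3 control items endorsed

# Treatment respondents add the sensitive item (turnout) only if they truly
# voted and report it; here 90% of voters report, 5% of non-voters misreport.
reports = np.where(true_turnout, rng.random(n) < 0.9, rng.random(n) < 0.05)
y = control_items + (treat & reports)

df = pd.DataFrame({"y": y, "treat": treat, "voted": true_turnout})

def dim_estimate(d):
    """Difference-in-means list-experiment estimate of turnout prevalence."""
    return d.loc[d.treat, "y"].mean() - d.loc[~d.treat, "y"].mean()

# Within the true-voter partition the estimate should approach 1 (false
# negatives pull it down); among true non-voters it should approach 0
# (false positives push it up).
print("overall:", round(dim_estimate(df), 3))
print("true voters:", round(dim_estimate(df[df.voted]), 3))
print("true non-voters:", round(dim_estimate(df[~df.voted]), 3))
```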
Freeman et al. (2020a, Psychological Medicine, 21, 1–13) argue that there is widespread support for coronavirus conspiracy theories in England. We hypothesise that their estimates of prevalence are inflated due to a flawed research design. When asking respondents to their survey to agree or disagree with pro-conspiracy statements, they used a biased set of response options: four agree options and only one disagree option (and no ‘don't know’ option). We also hypothesise that due to these flawed measures, the Freeman et al. approach under-estimates the strength of the correlation between conspiracy beliefs and compliance. Finally, we hypothesise that, due to reliance on bivariate correlations, Freeman et al. over-estimate the causal connection between conspiracy beliefs and compliance.
Methods
In a pre-registered study, we conduct an experiment embedded in a survey of a representative sample of 2057 adults in England (fieldwork: 16−19 July 2020).
Results
Measured using our advocated ‘best practice’ approach (balanced response options, with a don't know option), prevalence of support for coronavirus conspiracies is only around five-eighths (62.3%) of that indicated by the Freeman et al. approach. We report mixed results on our correlation and causation hypotheses.
Conclusions
To avoid over-estimating prevalence of support for coronavirus conspiracies, we advocate using a balanced rather than imbalanced set of response options, and including a don't know option.
A key challenge facing many large, in-person public opinion surveys is ensuring that enumerators follow fieldwork protocols. Implementing “quality control” processes can improve data quality and help ensure the representativeness of the final sample. Yet while public opinion researchers have demonstrated the utility of quality control procedures such as audio capture and geo-tracking, there is little research assessing the relative merits of such tools. In this paper, we present new evidence on this question using data from the 2016/17 wave of the AmericasBarometer study. Results from a large classification task demonstrate that a small set of automated and human-coded variables, available across popular survey platforms, can recover the final sample of interviews that results when a full suite of quality control procedures is implemented. Taken as a whole, our results indicate that implementing and automating just a few of the many quality control procedures available can streamline survey researchers’ quality control processes while substantially improving the quality of their data.
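As a sketch of the classification exercise described above, assuming simulated quality-control flags rather than the actual AmericasBarometer variables:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 5000
flags = pd.DataFrame({
    "duration_too_short": rng.random(n) < 0.10,   # automated timing flag
    "gps_outside_cluster": rng.random(n) < 0.05,  # geo-tracking flag
    "audio_mismatch": rng.random(n) < 0.08,       # human-coded audio check
})
# Interviews with more flags are more likely to be dropped in the full review
p_drop = 0.02 + 0.5 * flags.sum(axis=1) / 3
kept = rng.random(n) > p_drop

# Can a few cheap flags recover the outcome of the full quality-control suite?
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, flags, kept, cv=5).round(3))  # out-of-sample accuracy
```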
We show that for any n divisible by 3, almost all order-n Steiner triple systems admit a decomposition of almost all their triples into disjoint perfect matchings (that is, almost all Steiner triple systems are almost resolvable).
Recently, the idea of a universal basic income has received unprecedented attention from policymakers, the media and the wider public. This has inspired a plethora of surveys that seek to measure the extent of public support for the policy, many of which suggest basic income is surprisingly popular. However, in a review of past surveys, with a focus on the UK and Finland, we find that overall levels of support for basic income can vary considerably. We highlight the importance of survey design and, by employing new survey data in each country, compare the levels and determinants of support for varied models of basic income. Our results point to the importance of the multi-dimensionality of basic income and the fragility of public support for the idea. The findings suggest that the ability of political actors to mobilise the public in favour of basic income will eventually depend on the precise model they wish to implement.
List experiments are a widely used survey technique for estimating the prevalence of socially sensitive attitudes or behaviors. Their design, however, makes them vulnerable to bias: because treatment group respondents see a greater number of items (J + 1) than control group respondents (J), the treatment group mean may be mechanically inflated due simply to the greater number of items. The few previous studies that directly examine this do not arrive at definitive conclusions. We find clear evidence of inflation in an original dataset, though only among respondents with low educational attainment. Furthermore, we use available data from previous studies and find similar heterogeneous patterns. The evidence of heterogeneous effects has implications for the interpretation of previous research using list experiments, especially in developing world contexts. We recommend a simple solution: using a necessarily false placebo statement for the control group equalizes list lengths, thereby protecting against mechanical inflation without imposing costs or altering interpretations.
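A minimal sketch of the recommended placebo design, with simulated data and invented endorsement rates: the control list carries a necessarily false placebo item, so both groups see J + 1 statements while the usual difference-in-means estimator is unchanged:

```python
import numpy as np

rng = np.random.default_rng(2)
n, J = 2000, 4
treat = rng.random(n) < 0.5

baseline = rng.binomial(J, 0.5, n)           # J nonsensitive items
sensitive = (rng.random(n) < 0.25) & treat   # sensitive item (treatment group only)
# Control respondents see a necessarily false placebo statement instead, so
# both lists have J + 1 items; the placebo adds nothing to the count.
y = baseline + sensitive

est = y[treat].mean() - y[~treat].mean()
se = np.sqrt(y[treat].var(ddof=1) / treat.sum()
             + y[~treat].var(ddof=1) / (~treat).sum())
print(f"estimated prevalence: {est:.3f} (SE {se:.3f})")
```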
Declining telephone response rates have forced several transformations in survey methodology, including cell phone supplements, nonprobability sampling, and increased reliance on model-based inferences. At the same time, advances in statistical methods and vast amounts of new data sources suggest that new methods can combat some of these problems. We focus on one type of data source—voter registration databases—and show how they can improve inferences from political surveys. These databases allow survey methodologists to leverage political variables, such as party registration and past voting behavior, at a large scale and free of overreporting bias or endogeneity between survey responses. We develop a general process to take advantage of this data, which is illustrated through an example where we use multilevel regression and poststratification to produce vote choice estimates for the 2012 presidential election, projecting those estimates to 195 million registered voters in a postelection context. Our inferences are stable and reasonable down to demographic subgroups within small geographies and even down to the county or congressional district level. They can be used to supplement exit polls, which have become increasingly problematic and are not available in all geographies. We discuss problems, limitations, and open areas of research.
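Here is a minimal sketch of the poststratification step, assuming invented age and party-registration cells and a plain logistic regression standing in for the multilevel model; the voter-file counts are simulated, not real:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Simulated survey: age group, party registration, and reported vote choice
n = 3000
survey = pd.DataFrame({
    "age": rng.integers(0, 4, n),       # 4 age groups
    "party": rng.integers(0, 3, n),     # Dem / Rep / other registration
})
logit = -0.5 + 0.3 * survey.age + 1.2 * (survey.party == 0) - 1.2 * (survey.party == 1)
survey["vote_dem"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = pd.get_dummies(survey[["age", "party"]].astype(str))
model = LogisticRegression().fit(X, survey.vote_dem)

# Poststratification frame: one row per age x party cell, with the cell's
# population count taken (here, simulated) from the voter file
cells = pd.DataFrame([(a, p) for a in range(4) for p in range(3)],
                     columns=["age", "party"])
cells["N"] = rng.integers(10_000, 200_000, len(cells))
Xc = pd.get_dummies(cells[["age", "party"]].astype(str)) \
       .reindex(columns=X.columns, fill_value=0)
cells["pred"] = model.predict_proba(Xc)[:, 1]

# Population estimate: cell predictions weighted by known cell sizes
estimate = np.average(cells.pred, weights=cells.N)
print(f"poststratified Dem vote share: {estimate:.3f}")
```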
Conjoint analysis is a common tool for studying political preferences. The method disentangles patterns in respondents’ favorability toward complex, multidimensional objects, such as candidates or policies. Most conjoints rely upon a fully randomized design to generate average marginal component effects (AMCEs). They measure the degree to which a given value of a conjoint profile feature increases, or decreases, respondents’ support for the overall profile relative to a baseline, averaging across all respondents and other features. While the AMCE has a clear causal interpretation (about the effect of features), most published conjoint analyses also use AMCEs to describe levels of favorability. This often means comparing AMCEs among respondent subgroups. We show that using conditional AMCEs to describe the degree of subgroup agreement can be misleading as regression interactions are sensitive to the reference category used in the analysis. This leads to inferences about subgroup differences in preferences that have arbitrary sign, size, and significance. We demonstrate the problem using examples drawn from published articles and provide suggestions for improved reporting and interpretation using marginal means and an omnibus F-test. Given the accelerating use of these designs in political science, we offer advice for best practice in analysis and presentation of results.
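To illustrate the distinction between AMCEs and marginal means, here is a minimal sketch with simulated forced-choice data; the features, levels, and effect sizes are invented:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 10_000
profiles = pd.DataFrame({
    "gender": rng.choice(["male", "female"], n),
    "experience": rng.choice(["low", "high"], n),
})
# Simulated favorability: respondents slightly prefer female, high-experience profiles
p = (0.5 + 0.05 * (profiles.gender == "female")
         + 0.10 * (profiles.experience == "high") - 0.075)
profiles["chosen"] = rng.random(n) < p

# Marginal mean: average favorability at each level, across all other features
mm = profiles.groupby("gender")["chosen"].mean()
print(mm)

# AMCE for 'female' relative to the 'male' baseline: the difference of the
# marginal means, so its sign and size flip if the reference level is switched
amce_female = mm["female"] - mm["male"]
print(f"AMCE (female vs male): {amce_female:.3f}")
```

The marginal means are directly interpretable as levels of favorability, whereas the conditional AMCE depends on the chosen reference category, which is the reporting pitfall the article highlights.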