
Beyond the breaking point? Survey satisficing in conjoint experiments

Published online by Cambridge University Press: 08 May 2019

Kirk Bansak
Affiliation: Department of Political Science, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093, United States
Jens Hainmueller
Affiliation: Department of Political Science, Stanford University, 616 Serra Street Encina Hall West, Room 100, Stanford, CA 94305-6044, United States
Daniel J. Hopkins*
Affiliation: Department of Political Science, University of Pennsylvania, 207 S. 37th Street, Philadelphia, PA 19104, United States
Teppei Yamamoto
Affiliation: Department of Political Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, United States
*Corresponding author. Email: danhop@sas.upenn.edu

Abstract

Recent years have seen a renaissance of conjoint survey designs within social science. To date, however, researchers have lacked guidance on how many attributes they can include within conjoint profiles before survey satisficing leads to unacceptable declines in response quality. This paper addresses that question using pre-registered, two-stage experiments examining choices among hypothetical candidates for US Senate or hotel rooms. In each experiment, we use the first stage to identify filler attributes that respondents perceive as uncorrelated with the attribute of interest, so that their effects are not masked by those of the core attributes. In the second stage, we randomly assign respondents to conjoint designs with varying numbers of those filler attributes. We report the results of these experiments implemented via Amazon's Mechanical Turk and Survey Sampling International. They demonstrate that our core quantities of interest are generally stable, with relatively modest increases in survey satisficing when respondents face large numbers of attributes.
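As a concrete illustration of the second-stage design described in the abstract, the sketch below shows, in Python, how one might randomize the number of filler attributes each respondent sees and then draw paired conjoint profiles. This is not the authors' actual survey instrument: the attribute names, levels, and design arms are hypothetical placeholders chosen only to make the randomization logic explicit.

```python
# Minimal sketch of the two-part randomization described in the abstract
# (hypothetical attributes and design arms; not the authors' implementation).

import random

CORE_ATTRIBUTES = {
    "Party": ["Democrat", "Republican"],
    "Experience": ["None", "One term", "Two terms"],
}

# Hypothetical filler attributes, assumed to have been pre-screened in the
# first stage as perceived to be uncorrelated with the attribute of interest.
FILLER_ATTRIBUTES = {
    "Favorite sport": ["Baseball", "Soccer", "Tennis"],
    "Pet": ["Dog", "Cat", "None"],
    "Hometown size": ["Small town", "Suburb", "Large city"],
    "Hobby": ["Hiking", "Cooking", "Reading"],
}

DESIGN_ARMS = [0, 2, 4]  # possible numbers of filler attributes per respondent


def assign_design(rng: random.Random) -> dict:
    """Randomly assign a respondent to a design arm: the core attributes
    plus a randomly sampled subset of filler attributes."""
    n_fillers = rng.choice(DESIGN_ARMS)
    fillers = rng.sample(sorted(FILLER_ATTRIBUTES), k=n_fillers)
    design = dict(CORE_ATTRIBUTES)
    design.update({name: FILLER_ATTRIBUTES[name] for name in fillers})
    return design


def draw_profile_pair(design: dict, rng: random.Random):
    """Draw two independent profiles (the paired candidates shown in one
    choice task), sampling each attribute level uniformly at random."""
    return tuple(
        {attr: rng.choice(levels) for attr, levels in design.items()}
        for _ in range(2)
    )


if __name__ == "__main__":
    rng = random.Random(2019)
    design = assign_design(rng)
    left, right = draw_profile_pair(design, rng)
    print("Design attributes:", list(design))
    print("Profile A:", left)
    print("Profile B:", right)
```

Because respondents are randomized across design arms, differences in response quality (e.g., attention-check failure or intra-respondent inconsistency) across arms can be attributed to the number of attributes rather than to respondent characteristics.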

Type: Original Article
Copyright: © The European Political Science Association 2019


Supplementary material

Bansak et al. dataset: available via link.
Bansak et al. supplementary material: PDF, 407.9 KB.