
Survey Experiments with Google Consumer Surveys: Promise and Pitfalls for Academic Research in Social Science

Published online by Cambridge University Press: 04 January 2017

Lie Philip Santoso*
Department of Political Science, Rice University, Houston, TX 77005

Robert Stein
Department of Political Science, Rice University, Houston, TX 77005, e-mail: stein@rice.edu

Randy Stevenson
Department of Political Science, Rice University, Houston, TX 77005, e-mail: randystevenson@rice.edu

*Corresponding author e-mail: ls42@rice.edu

Abstract


In this article, we evaluate the usefulness of Google Consumer Surveys (GCS) as a low-cost tool for rigorous social scientific work. Its relative strengths and weaknesses make it most useful to researchers who identify causality through randomization into treatment groups rather than through selection on observables. This finding stems, in part, from the fact that the real cost advantage of GCS over the alternatives is limited to short surveys with a small number of questions. Based on our replication of four canonical social scientific experiments and one study of treatment heterogeneity, we find that the platform can be used effectively to achieve balance across treatment groups, to explore treatment heterogeneity, and to include manipulation checks, and that the inferred demographics it provides may be sound enough for weighting and for explorations of heterogeneity. Crucially, we replicated the usual directional finding in each experiment. Overall, GCS is likely to be a useful platform for survey experimentalists.
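As a concrete illustration of the workflow the abstract describes, the minimal Python sketch below (ours, not taken from the article) shows two routine steps: a chi-square check that inferred demographics are balanced across randomized treatment arms, and simple post-stratification weights built from population cell shares. The file name, the column names (treatment, age_bracket, gender), and the population shares are hypothetical placeholders.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical export of GCS responses, one row per respondent.
    df = pd.read_csv("gcs_responses.csv")

    # Balance check: under randomization, inferred demographics should be
    # independent of the assigned treatment arm; a chi-square test of the
    # treatment-by-covariate cross-tabulation is one common diagnostic.
    for covariate in ["age_bracket", "gender"]:
        table = pd.crosstab(df["treatment"], df[covariate])
        stat, p, _, _ = chi2_contingency(table)
        print(f"{covariate}: chi2 = {stat:.2f}, p = {p:.3f}")

    # Post-stratification weights: the ratio of a cell's population share
    # (e.g., from the Census; the numbers here are assumed) to its share
    # in the realized sample.
    population_share = {"18-24": 0.12, "25-34": 0.18, "35-54": 0.34, "55+": 0.36}
    sample_share = df["age_bracket"].value_counts(normalize=True)
    df["weight"] = df["age_bracket"].map(
        lambda cell: population_share[cell] / sample_share[cell]
    )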

Type
Articles
Copyright
Copyright © The Author 2016. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Authors’ note: Replication code and data are available at the Political Analysis Dataverse (Santoso, Stein, and Stevenson 2016); the supplementary materials for this article are available on the Political Analysis Web site. We thank Google Inc. for allowing us to ask some of the questions reported here free of charge.

References

Berinsky, Adam J., Huber, Gregory A., and Lenz, Gabriel S. 2012. Evaluating online labor markets for experimental research: Amazon.com's Mechanical Turk. Political Analysis 20:351–68.
Berinsky, Adam J., Margolis, Michelle F., and Sances, Michael W. 2014. Separating the shirkers from the workers? Making sure respondents pay attention on self-administered surveys. American Journal of Political Science 58:739–53.
Bremer, John. 2013. The interaction of sampling and weighting in producing a representative sample online: An excerpt from the ARF's “Foundations of Quality 2” initiative. Journal of Advertising Research 53:363–71.
DeRouvray, Cristel, and Couper, Mick P. 2002. Designing a strategy for reducing “no opinion” responses in web-based surveys. Social Science Computer Review 20:3–9.
Druckman, James N. 2001. Evaluating framing effects. Journal of Economic Psychology 22:91–101.
Green, Donald P., and Kern, Holger L. 2012. Modeling heterogeneous treatment effects in survey experiments with Bayesian additive regression trees. Public Opinion Quarterly 76:491–511.
Holbrook, Allyson L., and Krosnick, Jon A. 2010. Social desirability bias in voter turnout reports. Public Opinion Quarterly 74:37–67.
Holbrook, Allyson, Krosnick, Jon A., and Pfent, Alison. 2007. The causes and consequences of response rates in surveys by the news media and government contractor survey research firms. In Advances in telephone survey methodology, eds. Lepkowski, James M., Tucker, Clyde, Brick, J. Michael, de Leeuw, Edith, Japec, Lilli, Lavrakas, Paul J., Link, Michael W., and Sangster, Roberta L. New York: Wiley-Interscience, 499–528.
Huber, Gregory A., and Paris, Celia. 2013. Assessing the programmatic equivalence assumption in question wording experiments: Understanding why Americans like assistance to the poor more than welfare. Public Opinion Quarterly 77:385–97.
Imai, Kosuke, King, Gary, and Stuart, Elizabeth. 2008. Misunderstandings among experimentalists and observationalists about causal inference. Journal of the Royal Statistical Society, Series A 171(2):481–502.
Janus, Alexander L. 2010. The influence of social desirability pressures on expressed immigration attitudes. Social Science Quarterly 91:928–46.
Jou, Jerwen, Shanteau, James, and Harris, Richard. 1996. An information processing view of framing effects: The role of causal schemas in decision making. Memory & Cognition 24:1–15.
Kane, James G., Craig, Stephen C., and Wald, Kenneth D. 2004. Religion and presidential politics in Florida: A list experiment. Social Science Quarterly 85:281–93.
Keeter, Scott, and Christian, Leah. 2012. A comparison of results from surveys by the Pew Research Center and Google Consumer Surveys. Pew Research Center, Washington, DC.
Kohut, Andrew, Keeter, Scott, Doherty, Carroll, Dimock, Michael, and Christian, Leah. 2012. Assessing the representativeness of public opinion surveys. Pew Research Center, Washington, DC.
Kühberger, Anton. 1995. The framing of decisions: A new look at old problems. Organizational Behavior & Human Decision Processes 62:230–40.
McDonald, Paul, Mohebbi, Matt, and Slatkin, Brett. 2012. Comparing Google Consumer Surveys to existing probability and non-probability-based Internet surveys. Google White Paper.
Montgomery, Jacob, and Cutler, Joshua. 2013. Computerized adaptive testing for public opinion surveys. Political Analysis 21(2):141–71.
Morgan, Stephen L., and Winship, Christopher. 2007. Counterfactuals and causal inference: Methods and principles for social research. Cambridge: Cambridge University Press.
Mutz, D. C., and Pemantle, R. 2013. The perils of randomization checks in the analysis of experiments. Typescript, University of Pennsylvania.
Pearl, Judea. 2011. The structural theory of causation. In Causality in the sciences, eds. McKay Illari, P., Russo, F., and Williamson, J. Oxford: Oxford University Press, 697–727.
Rasinski, Kenneth A. 1989. The effect of question wording on public support for government spending. Public Opinion Quarterly 53:388–94.
Salganik, Matthew J., and Levy, Karen. 2015. Wiki surveys: Open and quantifiable social data collection. PLoS One 10(5):e0123483. doi:10.1371/journal.pone.0123483
Santoso, Philip, Stein, Robert, and Stevenson, Randy. 2016. Replication data for: Survey experiments with Google Consumer Surveys: Promise and pitfalls for academic research in social science. http://dx.doi.org/10.7910/DVN/FMH2IR, Harvard Dataverse, Draft version [UNF:6:lh9o42tCLawnMVwewlaPhA==].
Senn, Stephen. 1994. Testing for baseline balance in clinical trials. Statistics in Medicine 13:1715–26.
Shih, Tse-Hua, and Fan, Xitao. 2008. Comparing response rates from web and mail surveys: A meta-analysis. Field Methods 20:249–71.
Silver, Nate. 2012. Which polls fared best (and worst) in the 2012 presidential race? FiveThirtyEight (blog), New York Times, November 10.
Steeh, Charlotte G., Kirgis, Nicole, Cannon, Brian, and DeWitt, Jeff. 2001. Are they really as bad as they seem? Nonresponse rates at the end of the twentieth century. Journal of Official Statistics 17:227–47.
Streb, Matthew J., Burrell, Barbara, Frederick, Brian, and Genovese, Michael A. 2008. Social desirability effects and support for a female American president. Public Opinion Quarterly 72:76–89.
Takemura, Kazuhisa. 1994. Influence of elaboration on the framing of decision. Journal of Psychology 128:33–39.
Tanenbaum, Erin R., Krishnamurty, Parvati, and Stern, Michael. 2013. How representative are Google Consumer Surveys? Results from an analysis of a Google Consumer Survey question relative to national-level benchmarks with different survey modes and sample characteristics. Paper presented at the American Association for Public Opinion Research (AAPOR) 68th Annual Conference, Boston, MA.
Tversky, Amos, and Kahneman, Daniel. 1981. The framing of decisions and the psychology of choice. Science 211:453–58.
Wang, Wei, Rothschild, David, Goel, Sharad, and Gelman, Andrew. 2015. Forecasting elections with non-representative polls. International Journal of Forecasting 31:980–91.
Supplementary material: Santoso et al. supplementary material (PDF, 610.3 KB).