
Comparing Experimental and Matching Methods Using a Large-Scale Voter Mobilization Experiment

Published online by Cambridge University Press: 04 January 2017

Kevin Arceneaux
Affiliation:
Department of Political Science, Temple University, 453 Gladfelter Hall, 1115 West Berks Street, Philadelphia, PA 19122. e-mail: kevin.arceneaux@temple.edu (corresponding author)
Alan S. Gerber
Affiliation:
Yale University, Institution for Social and Policy Studies, P.O. Box 208209, 77 Prospect Street, New Haven, CT 06520. e-mail: alan.gerber@yale.edu
Donald P. Green
Affiliation:
Yale University, Institution for Social and Policy Studies, P.O. Box 208209, 77 Prospect Street, New Haven, CT 06520. e-mail: donald.green@yale.edu

Abstract

In the social sciences, randomized experimentation is the optimal research design for establishing causation. However, for a number of practical reasons, researchers are sometimes unable to conduct experiments and must rely on observational data. In an effort to develop estimators that can approximate experimental results using observational data, scholars have given increasing attention to matching. In this article, we test the performance of matching by gauging the success with which matching approximates experimental results. The voter mobilization experiment presented here comprises a large number of observations (60,000 randomly assigned to the treatment group and nearly two million assigned to the control group) and a rich set of covariates. This study is analyzed in two ways. The first method, instrumental variables estimation, takes advantage of random assignment in order to produce consistent estimates. The second method, matching estimation, ignores random assignment and analyzes the data as though they were nonexperimental. Matching is found to produce biased results in this application because even a rich set of covariates is insufficient to control for preexisting differences between the treatment and control group. Matching, in fact, produces estimates that are no more accurate than those generated by ordinary least squares regression. The experimental findings show that brief paid get-out-the-vote phone calls do not increase turnout, while matching and regression show a large and significant effect.
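The following minimal sketch (in Python with NumPy and scikit-learn; not the authors' code, and all variable names and parameter values are hypothetical) illustrates the contrast the abstract describes on simulated data. Random assignment serves as an instrument for actual contact in a Wald-style instrumental variables estimate, while nearest-neighbor propensity-score matching on the observed covariates ignores assignment and absorbs selection driven by an unobserved confounder.

# Hypothetical sketch: IV vs. propensity-score matching on simulated GOTV-style data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 50_000
x = rng.normal(size=(n, 3))                      # observed covariates (e.g., age, past turnout)
u = rng.normal(size=n)                           # unobserved motivation to vote
z = rng.binomial(1, 0.03, size=n)                # random assignment to receive a call
p_contact = 1 / (1 + np.exp(-(x[:, 0] + u)))     # motivated voters are easier to reach
d = z * rng.binomial(1, p_contact)               # treated only if assigned and actually contacted
true_effect = 0.0                                # abstract's finding: calls do not raise turnout
p_vote = 1 / (1 + np.exp(-(0.5 * x[:, 0] + u + true_effect * d)))
y = rng.binomial(1, p_vote)                      # turnout

# 1) Instrumental variables (Wald estimator): intent-to-treat effect divided by
#    the contact rate, using random assignment z as the instrument for contact d.
itt = y[z == 1].mean() - y[z == 0].mean()
contact_rate = d[z == 1].mean() - d[z == 0].mean()
print("IV estimate:", itt / contact_rate)

# 2) Propensity-score matching that ignores random assignment: match each
#    contacted voter to the nearest uncontacted voter on the estimated score.
score = LogisticRegression(max_iter=1000).fit(x, d).predict_proba(x)[:, 1]
treated, control = np.where(d == 1)[0], np.where(d == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(score[control].reshape(-1, 1))
_, idx = nn.kneighbors(score[treated].reshape(-1, 1))
matched = control[idx.ravel()]
print("Matching estimate:", y[treated].mean() - y[matched].mean())

In this simulation, matching adjusts only for selection on the observed covariates, so its estimate stays away from the true effect of zero, whereas the instrumental variables estimate, which exploits random assignment, recovers it. This mirrors the pattern reported in the article, where a rich set of covariates was still insufficient to remove preexisting differences between contacted and uncontacted voters.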

Type
Research Article
Copyright
Copyright © The Author 2005. Published by Oxford University Press on behalf of the Society for Political Methodology 

