
A Selection Bias Approach to Sensitivity Analysis for Causal Effects

Published online by Cambridge University Press:  04 January 2017

Matthew Blackwell*
Affiliation:
Department of Political Science, University of Rochester, 307 Harkness Hall, Rochester, NY 14627
* e-mail: m.blackwell@rochester.edu (corresponding author)

Abstract


The estimation of causal effects has a revered place in all fields of empirical political science, but a large volume of methodological and applied work ignores a fundamental fact: most people are skeptical of estimated causal effects. In particular, researchers are often worried about the assumption of no omitted variables or no unmeasured confounders. This article combines two approaches to sensitivity analysis to provide researchers with a tool to investigate how specific violations of no omitted variables alter their estimates. This approach can help researchers determine which narratives imply weaker results and which actually strengthen their claims. This gives researchers and critics a reasoned and quantitative approach to assessing the plausibility of causal effects. To demonstrate the approach, I present applications to three causal inference estimation strategies: regression, matching, and weighting.

Type
Research Article
Copyright
Copyright © The Author 2013. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Author's note: The methods used in this article are available as an open-source R package, causalsens, on the Comprehensive R Archive Network (CRAN) and the author's web site. The replication archive for this article is available at the Political Analysis Dataverse as Blackwell (2013b). Many thanks to Steve Ansolabehere, Adam Glynn, Gary King, Jamie Robins, Maya Sen, and two anonymous reviewers for helpful comments and discussions. All remaining errors are my own.
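Software note: the snippet below is a minimal sketch of how a sensitivity analysis along these lines might be run with the causalsens package mentioned above. The particular function and object names (causalsens(), the one.sided confounding function, the bundled lalonde.exp data, and the "raw" and "r.squared" plot types) reflect my reading of the package documentation rather than anything stated in this abstract, so they should be checked against the package's own help files before use.

# Minimal sketch of a confounding-function sensitivity analysis with causalsens.
# Assumed API: causalsens(model.y, model.t, cov.form, data, alpha, confound);
# verify with help(causalsens) after installing the package from CRAN.
library(causalsens)

data(lalonde.exp)  # experimental job-training data assumed to ship with the package

# Outcome model: 1978 earnings on treatment and pre-treatment covariates
y.mod <- lm(re78 ~ treat + age + education + black + hispanic +
              married + nodegree + re74 + re75,
            data = lalonde.exp)

# Treatment (propensity score) model
t.mod <- glm(treat ~ age + education + black + hispanic +
               married + nodegree + re74 + re75,
             data = lalonde.exp, family = binomial())

# Amounts of hypothesized confounding to probe, on the scale of the outcome
alphas <- seq(-4500, 4500, by = 250)

# Sensitivity analysis under a one-sided confounding function
sens <- causalsens(y.mod, t.mod, ~ age + education, data = lalonde.exp,
                   alpha = alphas, confound = one.sided)

# How the estimated effect moves as the assumed confounding grows,
# plotted against alpha and against variance explained by confounding
plot(sens, type = "raw", bty = "n")
plot(sens, type = "r.squared", bty = "n")

The resulting plots show how the estimated effect changes as the assumed amount of unmeasured confounding grows, which is the kind of quantitative check on specific confounding narratives that the article develops.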

References

Ansolabehere, Stephen, Iyengar, Shanto, and Simon, Adam. 1999. Replicating experiments using aggregate and survey data: The case of negative advertising and turnout. American Political Science Review 93(4): 901–9.
Ansolabehere, Stephen, Iyengar, Shanto, Simon, Adam, and Valentino, Nicholas. 1994. Does attack advertising demobilize the electorate? American Political Science Review 88(4): 829–38.
Blackwell, Matthew. 2013a. A framework for dynamic causal inference in political science. American Journal of Political Science 57(2): 504–20.
Blackwell, Matthew. 2013b. Replication data for: A selection bias approach to sensitivity analysis for causal effects. Dataverse Network, hdl:1902.1/21131.
Boyd, Christina L., Epstein, Lee, and Martin, Andrew D. 2010. Untangling the causal effects of sex on judging. American Journal of Political Science 54(2): 389–411.
Brooks, Deborah Jordan. 2006. The resilient voter: Moving toward closure in the debate over negative campaigning and turnout. Journal of Politics 68(3): 684–96.
Brumback, Babette A., Hernán, Miguel A., Haneuse, Sebastien J. P. A., and Robins, James M. 2004. Sensitivity analyses for unmeasured confounding assuming a marginal structural model for repeated measures. Statistics in Medicine 23(5): 749–67.
Cornfield, Jerome, Haenszel, William, Cuyler Hammond, E., Lilienfeld, Abraham M., Shimkin, Michael B., and Wynder, Ernst L. 1959. Smoking and lung cancer: Recent evidence and a discussion of some questions. Journal of the National Cancer Institute 22: 173–203.
Dawid, A. Phillip. 1979. Conditional independence in statistical theory. Journal of the Royal Statistical Society, Series B (Methodological) 41(1): 1–31.
Finkel, Steven E., and Geer, John G. 1998. A spot check: Casting doubt on the demobilizing effect of attack advertising. American Journal of Political Science 42(2): 573–95.
Glynn, Adam N., and Quinn, Kevin M. 2011. Why process matters for causal inference. Political Analysis 19(3): 273–86.
Heckman, James, Ichimura, Hidehiko, Smith, Jeffrey, and Todd, Petra. 1998. Characterizing selection bias using experimental data. Econometrica 66(5): 1017–98.
Ho, Daniel E., Imai, Kosuke, King, Gary, and Stuart, Elizabeth A. 2006. Matching as nonparametric preprocessing for reducing model dependence in parametric causal inference. Political Analysis 15(3): 199–236.
Imai, Kosuke, Keele, Luke, Tingley, Dustin, and Yamamoto, Teppei. 2011. Unpacking the black box of causality: Learning about causal mechanisms from experimental and observational studies. American Political Science Review 105(4): 765–89.
Imai, Kosuke, Keele, Luke, and Yamamoto, Teppei. 2010. Identification, inference and sensitivity analysis for causal mediation effects. Statistical Science 25(1): 51–71.
Imbens, Guido W. 2003. Sensitivity to exogeneity assumptions in program evaluation. American Economic Review 93(2): 126–32.
Imbens, Guido W. 2004. Nonparametric estimation of average treatment effects under exogeneity: A review. Review of Economics and Statistics 86(1): 4–29.
Keele, Luke. 2010. An overview of rbounds: An R package for Rosenbaum bounds sensitivity analysis with matched data. Unpublished manuscript.
LaLonde, Robert J. 1986. Evaluating the econometric evaluations of training programs with experimental data. American Economic Review 76(4): 604–20.
Lau, Richard R., Sigelman, Lee, and Rovner, Ivy Brown. 2007. The effects of negative political campaigns: A meta-analytic reassessment. Journal of Politics 69(4): 1176–209.
Manski, Charles F. 1990. Nonparametric bounds on treatment effects. American Economic Review 80(2): 319–23.
Mebane, Walter R., and Poast, Paul. 2013. Causal inference without ignorability: Identification with nonrandom assignment and missing treatment data. Political Analysis 21(2): 233–51.
Morgan, Stephen L., and Winship, Christopher. 2007. Counterfactuals and causal inference: Methods and principles for social research. Cambridge: Cambridge University Press.
Robins, James M. 1999. Association, causation, and marginal structural models. Synthese 121(1/2): 151–79.
Robins, James M., Hernán, Miguel A., and Brumback, Babette A. 2000. Marginal structural models and causal inference in epidemiology. Epidemiology 11(5): 550–60.
Rosenbaum, Paul R. 2002. Observational studies. 2nd ed. New York: Springer.
Rosenbaum, Paul R., and Rubin, Donald B. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika 70(1): 41–55.
Rubin, Donald B. 1978. Bayesian inference for causal effects: The role of randomization. Annals of Statistics 6(1): 34–58.