
Coarsening Bias: How Coarse Treatment Measurement Upwardly Biases Instrumental Variable Estimates

Published online by Cambridge University Press: 04 January 2017

John Marshall*
Affiliation:
Department of Government, Harvard University, Cambridge, MA 02138

Abstract


Political scientists increasingly use instrumental variable (IV) methods, and must often choose between operationalizing their endogenous treatment variable as discrete or continuous. For theoretical and data availability reasons, researchers frequently coarsen treatments with multiple intensities (e.g., treating a continuous treatment as binary). I show how such coarsening can substantially upwardly bias IV estimates by subtly violating the exclusion restriction assumption, and demonstrate that the extent of this bias depends upon the first stage and underlying causal response function. However, standard IV methods using a treatment where multiple intensities are affected by the instrument (even when fine-grained measurement at every intensity is not possible) recover a consistent causal estimate without requiring a stronger exclusion restriction assumption. These analytical insights are illustrated in the context of identifying the long-run effect of high school education on voting Conservative in Great Britain. I demonstrate that coarsening years of schooling into an indicator for completing high school upwardly biases the IV estimate by a factor of three.
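
The inflation mechanism described above can be seen in a short simulation: when the instrument shifts schooling at several intensities but the analyst coarsens the treatment to a completion indicator, the Wald/IV ratio divides the full reduced-form effect by only the threshold-crossing portion of the first stage. The Python sketch below is purely illustrative and is not the article's replication code; the instrument, schooling distribution, 11-year completion threshold, and effect sizes are all assumptions made for the example.

```python
# Minimal simulation sketch of coarsening bias in IV estimation.
# Illustrative only: this is not the article's replication code, and every
# parameter value below (instrument, schooling distribution, threshold,
# effect size) is an assumption made for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical binary instrument, e.g., exposure to a schooling reform.
z = rng.binomial(1, 0.5, n)

# Endogenous treatment: years of schooling. The instrument shifts schooling
# at multiple intensities (some compliers gain one year, others two).
base_years = rng.choice([9, 10, 11, 12], size=n, p=[0.3, 0.3, 0.2, 0.2])
extra_years = z * rng.choice([0, 1, 2], size=n, p=[0.4, 0.4, 0.2])
years = base_years + extra_years

# Outcome: a linear model with an assumed constant per-year effect of
# schooling (a stand-in for Conservative vote propensity). The term that
# depends on base_years confounds naive OLS but not the IV estimate.
beta_per_year = 0.02
y = 0.3 + beta_per_year * years + 0.01 * (base_years - 10) + rng.normal(0, 0.1, n)

def wald(outcome, treatment, instrument):
    """IV (Wald) estimate with a binary instrument: reduced form / first stage."""
    reduced_form = outcome[instrument == 1].mean() - outcome[instrument == 0].mean()
    first_stage = treatment[instrument == 1].mean() - treatment[instrument == 0].mean()
    return reduced_form / first_stage

# Fine-grained treatment (years of schooling): recovers roughly beta_per_year.
print("IV, years of schooling:   ", round(wald(y, years.astype(float), z), 4))

# Coarsened treatment: indicator for completing high school (here, >= 11 years).
# The instrument still moves schooling beyond this threshold, so the coarse
# treatment's exclusion restriction is violated and the estimate inflates.
completed_hs = (years >= 11).astype(float)
print("IV, completed high school:", round(wald(y, completed_hs, z), 4))
```

Under these assumed parameters the coarse estimate comes out roughly three times larger than the per-year estimate (about 0.067 versus 0.02), mirroring the qualitative pattern described above; the exact inflation factor depends entirely on the assumed first stage and causal response function.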

Type
Articles
Copyright
Copyright © The Author 2016. Published by Oxford University Press on behalf of the Society for Political Methodology 

Footnotes

Author's note: I thank Matt Blackwell, John Bullock, Anthony Fowler, Andy Hall, Torben Iversen, Horacio Larreguy, Rakeen Mabud, Daniel Moskowitz, Arthur Spirling, Brandon Stewart, Dustin Tingley, Tess Wise, the editor, and two anonymous referees for illuminating discussions or useful comments. Replication materials are available online as Marshall (2016). Supplementary materials for this article are available on the Political Analysis Web site.

References

Abadie, Alberto. 2003. Semiparametric instrumental variable estimation of treatment response models. Journal of Econometrics 113(2): 231–63.
Acemoglu, Daron, Johnson, Simon, and Robinson, James A. 2001. The colonial origins of comparative development: An empirical investigation. American Economic Review 91(5): 1369–401.
Angrist, Joshua D., and Imbens, Guido W. 1995. Two-stage least squares estimation of average causal effects in models with variable treatment intensity. Journal of the American Statistical Association 90(430): 431–42.
Angrist, Joshua D., Imbens, Guido W., and Rubin, Donald B. 1996. Identification of causal effects using instrumental variables. Journal of the American Statistical Association 91(434): 444–55.
Angrist, Joshua D., and Pischke, Jörn-Steffen. 2008. Mostly harmless econometrics: An empiricist's companion. Princeton, NJ: Princeton University Press.
Büthe, Tim, and Milner, Helen V. 2008. The politics of foreign direct investment into developing countries: Increasing FDI through international trade agreements? American Journal of Political Science 52(4): 741–62.
Dunning, Thad. 2008. Model specification in instrumental-variables regression. Political Analysis 16(3): 290–302.
Gerber, Alan. 1998. Estimating the effect of campaign spending on senate election outcomes using instrumental variables. American Political Science Review 92(2): 401–11.
Gerber, Alan S., and Green, Donald P. 2000. The effects of canvassing, telephone calls, and direct mail on voter turnout: A field experiment. American Political Science Review 94(3): 653–63.
Gerber, Alan S., Huber, Gregory A., and Washington, Ebonya. 2010. Party affiliation, partisanship, and political beliefs: A field experiment. American Political Science Review 104(4): 720–44.
Imbens, Guido W., and Kalyanaraman, Karthik. 2012. Optimal bandwidth choice for the regression discontinuity estimator. Review of Economic Studies 79(3): 933–59.
Kane, Thomas J., Rouse, Cecilia Elena, and Staiger, Douglas. 1999. Estimating returns to schooling when schooling is misreported. NBER Working Paper 7235.
Marshall, John. 2016. Replication data for: Coarsening bias: How coarse treatment measurement upwardly biases instrumental variable estimates. Harvard Dataverse, V1. http://dx.doi.org/10.7910/DVN/J7HUX3.
Marshall, John. Forthcoming. Education and voting Conservative: Evidence from a major schooling reform in Great Britain. Journal of Politics.
Milligan, Kevin, Moretti, Enrico, and Oreopoulos, Philip. 2004. Does education improve citizenship? Evidence from the United States and the United Kingdom. Journal of Public Economics 88: 1667–95.
Newey, Whitney K., and Powell, James L. 2003. Instrumental variable estimation of nonparametric models. Econometrica 71(5): 1565–78.
Pierskalla, Jan H., and Hollenbach, Florian M. 2013. Technology and collective action: The effect of cell phone coverage on political violence in Africa. American Political Science Review 107(2): 207–24.
Sovey, Allison J., and Green, Donald P. 2011. Instrumental variables estimation in political science: A readers' guide. American Journal of Political Science 55(1): 188–200.
Staiger, Douglas, and Stock, James H. 1997. Instrumental variables regression with weak instruments. Econometrica 65(3): 557–86.
Supplementary material: Marshall supplementary material, Appendix (PDF, 1.5 MB)