
School System Evaluation by Value Added Analysis Under Endogeneity

Published online by Cambridge University Press:  01 January 2025

Jorge Manzi
Affiliation:
School of Psychology, Pontificia Universidad Católica de Chile; Measurement Center MIDE UC
Ernesto San Martín*
Affiliation:
Measurement Center MIDE UC; Department of Statistics, Pontificia Universidad Católica de Chile; Faculty of Education, Pontificia Universidad Católica de Chile; Center for Operations Research and Econometrics, Université catholique de Louvain
Sébastien Van Bellegem
Affiliation:
Center for Operations Research and Econometrics, Université catholique de Louvain
*
Requests for reprints should be sent to Ernesto San Martín, Department of Statistics, Pontificia Universidad Católica de Chile, Vicuña Mackenna 4860, Macul, Santiago, Chile. E-mail: esanmart@mat.puc.cl

Abstract

Value added is a common tool in educational research on effectiveness. It is often modeled as (the prediction of) a random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression for the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach, and the impact of endogeneity on both the value added and its estimate is calculated exactly. The approach is also illustrated on a large data set of individual scores of about 200,000 students in Chile.
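The core phenomenon the abstract describes — a random effect correlated with a covariate (such as a prior score) biasing the naive estimate, with an instrumental variable restoring consistency — can be sketched in a small simulation. This is an illustrative toy model, not the authors' specification: the sample sizes, coefficient values, and the instrument `z` are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_pupils = 200, 50
n = n_schools * n_pupils
school = np.repeat(np.arange(n_schools), n_pupils)

# School-level random effect ("value added"), one draw per school
eta = rng.normal(0.0, 1.0, n_schools)

# Hypothetical instrument: exogenous, affects the prior score only
z = rng.normal(0.0, 1.0, n)

# Endogenous covariate: the prior score x is correlated with eta
x = z + 0.8 * eta[school] + rng.normal(0.0, 1.0, n)

beta = 0.5  # true effect of the prior score on the outcome
y = beta * x + eta[school] + rng.normal(0.0, 1.0, n)

# Naive OLS of y on x is biased because Cov(x, eta) != 0
X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Two-stage least squares: project x on z, regress y on the fitted values
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
X_iv = np.column_stack([np.ones(n), x_hat])
b_iv = np.linalg.lstsq(X_iv, y, rcond=None)[0][1]

print(f"true beta = {beta}, OLS = {b_ols:.3f}, IV = {b_iv:.3f}")
```

The OLS slope is pulled upward by the correlation between the prior score and the school effect, while the 2SLS estimate stays close to the true coefficient, mirroring the bias-versus-consistency contrast the paper analyzes.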

Type: Original Paper
Copyright © 2013 The Psychometric Society

