
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

Published online by Cambridge University Press:  01 January 2025

Robert I. Jennrich*
Affiliation: University of California at Los Angeles
*Requests for reprints should be sent to Robert I. Jennrich, Department of Mathematics, University of California, Los Angeles, CA 90095, USA. E-mail: rij@stat.ucla.edu

Abstract

The infinitesimal jackknife provides a simple, general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality, what makes the infinitesimal jackknife attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the population sampled has the covariance structure assumed. Commonly used covariance structure analysis software uses parametric methods for estimating parameters and standard errors. When the population sampled has the covariance structure assumed, but fails to have the distributional form assumed, the parameter estimates usually remain consistent, but the standard error estimates do not. This has motivated the introduction of a variety of nonparametric standard error estimates that are consistent when the population sampled fails to have the distributional form assumed. The only distributional assumption these require is that the covariance structure be correctly specified. As noted, even this assumption is not required for the infinitesimal jackknife. The relation between the infinitesimal jackknife and other nonparametric standard error estimators is discussed. An advantage of the infinitesimal jackknife over the jackknife and the bootstrap is that it requires only one analysis to produce standard error estimates rather than one for every jackknife or bootstrap sample.
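The flavor of the method can be conveyed with a toy scalar case rather than a full covariance structure model. The following sketch (not the paper's own code; the data-generating choices are illustrative) uses the empirical influence values of the maximum likelihood variance estimate, (x_i − x̄)² − σ̂², to form an infinitesimal-jackknife standard error in a single pass over the data, with no resampling and no normality assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5000)  # deliberately non-normal data

n = x.size
s2 = x.var()  # ML variance estimate (divisor n)

# Empirical influence value of the variance estimate at each observation
infl = (x - x.mean()) ** 2 - s2

# Infinitesimal-jackknife SE: one pass over the data, no resampling
se_ij = np.sqrt((infl ** 2).sum()) / n

# Normal-theory SE, valid only if the data are normal: sqrt(2) * s2 / sqrt(n)
se_normal = np.sqrt(2.0) * s2 / np.sqrt(n)

print(se_ij, se_normal)
```

For these heavy-tailed exponential data the infinitesimal-jackknife estimate is noticeably larger than the normal-theory one, illustrating the abstract's point: the parametric standard error is inconsistent when the distributional assumption fails, while the influence-based estimate remains valid.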

Type
Theory and Methods
Copyright
Copyright © 2008 The Psychometric Society


References

Bentler, P. M., & Dijkstra, T. K. (1985). Efficient estimation via linearization in structural models. In P. R. Krishnaiah (Ed.), Multivariate analysis VI (pp. 9–42). Amsterdam: North-Holland.
Browne, M. W. (1984). Asymptotically distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 62–83.
Chou, C.-P., Bentler, P. M., & Satorra, A. (1991). Scaled test statistics and robust standard errors for nonnormal data in covariance structure analysis: A Monte Carlo study. British Journal of Mathematical and Statistical Psychology, 44, 347–357.
Efron, B. (1982). The jackknife, the bootstrap and other resampling plans. Philadelphia: Society for Industrial and Applied Mathematics.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman & Hall.
Godambe, V. P. (1960). An optimum property of regular maximum likelihood estimation. Annals of Mathematical Statistics, 31, 1208–1211.
Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. Chicago: University of Chicago.
Jaeckel, L. (1972). The infinitesimal jackknife (Bell Laboratories Memorandum #MM 72-1215-11).
Jennrich, R. I., & Clarkson, D. B. (1980). A feasible method for standard errors of estimate in maximum likelihood factor analysis. Psychometrika, 45, 237–247.
Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183–202.
Quenouille, M. (1949). Notes on bias in estimation. Biometrika, 43, 68–84.
Satorra, A. (1989). Alternative test criteria in covariance structure analysis: A unified approach. Psychometrika, 54, 131–151.
Satorra, A., & Bentler, P. M. (1994). Corrections to test statistics and standard errors in covariance structure analysis. In A. von Eye & C. C. Clogg (Eds.), Latent variables analysis: Applications for developmental research (pp. 399–419). Newbury Park: Sage.
Tukey, J. W. (1958). Bias and confidence in not quite large samples (abstract). Annals of Mathematical Statistics, 29, 614.
Yuan, K.-H., & Bentler, P. M. (1997). Improving parameter tests in covariance structure analysis. Computational Statistics and Data Analysis, 26, 177–198.
Yuan, K.-H., & Hayashi, K. (2006). Standard errors in covariance structure models: Asymptotics versus bootstrap. British Journal of Mathematical and Statistical Psychology, 59, 397–417.