
A Penalized Likelihood Method for Structural Equation Modeling

Published online by Cambridge University Press:  01 January 2025

Po-Hsien Huang
Affiliation:
National Taiwan University; National Cheng Kung University
Hung Chen
Affiliation:
National Taiwan University
Li-Jen Weng*
Affiliation:
National Taiwan University
Correspondence should be made to Li-Jen Weng, Department of Psychology, National Taiwan University, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan. Email: ljweng@ntu.edu.tw

Abstract

A penalized likelihood (PL) method for structural equation modeling (SEM) was proposed as a methodology for exploring the underlying relations among both observed and latent variables. Compared to the usual likelihood method, PL includes a penalty term to control the complexity of the hypothesized model. When the penalty level is appropriately chosen, PL can yield an SEM model that balances model goodness-of-fit and model complexity. In addition, PL results in a sparse estimate that enhances the interpretability of the final model. The proposed method is especially useful when limited substantive knowledge is available for model specification. The PL method can also be understood as a methodology that links traditional SEM to exploratory SEM (Asparouhov & Muthén in Struct Equ Model Multidiscipl J 16:397–438, 2009). An expectation-conditional maximization algorithm was developed to maximize the PL criterion. The asymptotic properties of the proposed PL estimator were also derived. The performance of PL was evaluated through a numerical experiment, and two real data illustrations were presented to demonstrate its utility in psychological research.
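The sparsity mechanism the abstract describes — a penalty that drives small parameters exactly to zero rather than merely shrinking them — can be illustrated with a lasso-type penalty in ordinary regression. The Python sketch below is an illustrative analogue only, not the paper's expectation-conditional maximization algorithm for SEM; the function names (`soft_threshold`, `lasso_cd`) and the coordinate-descent setup are hypothetical choices for this example.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator S(z, gamma): the closed-form minimizer of
    (1/2)*(b - z)**2 + gamma*|b|; any z with |z| <= gamma maps to exactly 0."""
    return np.sign(z) * max(abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for the lasso criterion
    (1/(2n)) * ||y - X b||^2 + lam * ||b||_1
    (a hypothetical helper written for this illustration)."""
    n, p = X.shape
    b = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # partial residual excluding coordinate j
            z_j = X[:, j] @ r_j / n            # unpenalized univariate solution
            b[j] = soft_threshold(z_j, lam) / (X[:, j] @ X[:, j] / n)
    return b

# A sparse ground truth: with an appropriate penalty level, the zero pattern
# is recovered exactly, whereas an unpenalized fit would estimate every
# coefficient as nonzero.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
beta_true = np.array([2.0, 0.0, 0.0, 1.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(200)
b_hat = lasso_cd(X, y, lam=0.2)
```

The same soft-thresholding step appears inside the conditional-maximization updates of many penalized likelihood methods; the penalty level `lam` plays the role of the penalty level discussed above, trading goodness-of-fit against model complexity.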

Type
Original Paper
Copyright
Copyright © 2017 The Psychometric Society


Footnotes

Electronic supplementary material The online version of this article (doi:10.1007/s11336-017-9566-9) contains supplementary material, which is available to authorized users.

References

Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19, 716–723.
Asparouhov, T., & Muthén, B. (2009). Exploratory structural equation modeling. Structural Equation Modeling: A Multidisciplinary Journal, 16, 397–438.
Arminger, G., & Schoenberg, R. (1989). Pseudo maximum likelihood estimation and a test for misspecification in mean and covariance structure models. Psychometrika, 54, 409–426.
Baer, R. A., Smith, G. T., Hopkins, J., Krietemeyer, J., & Toney, L. (2006). Using self-report assessment methods to explore facets of mindfulness. Assessment, 13, 27–45.
Bentler, P. M., & Mooijaart, A. (1989). Choice of structural model via parsimony: A rationale based on precision. Psychological Bulletin, 106, 315–317.
Breheny, P., & Huang, J. (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5, 232–253.
Breiman, L. (1996). Heuristics of instability and stabilization in model selection. The Annals of Statistics, 24, 2350–2383.
Browne, M. W. (1972). Oblique rotation to a partially specified target. British Journal of Mathematical and Statistical Psychology, 25, 207–212.
Browne, M. W. (1984). Asymptotic distribution-free methods for the analysis of covariance structures. British Journal of Mathematical and Statistical Psychology, 37, 62–83.
Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research, 24, 445–455.
Browne, M. W., & Cudeck, R. (1993). Alternative ways of assessing model fit. In K. A. Bollen & J. S. Long (Eds.), Testing structural equation models (pp. 136–162). Beverly Hills, CA: Sage.
Bühlmann, P., & van de Geer, S. (2011). Statistics for high-dimensional data: Methods, theory and applications. Heidelberg, Berlin: Springer.
Chang, J. H., Lin, Y. C., & Huang, C. L. (2010). Exploring the mechanism of mindfulness: From attention to self-integration. Paper presented at the 11th annual meeting of the Society for Personality and Social Psychology, Las Vegas, USA.
Chaudhuri, S., Drton, M., & Richardson, T. S. (2007). Estimation of a covariance matrix with zeros. Biometrika, 94, 199–216.
Chen, Y., Liu, J., Xu, G., & Ying, Z. (2015). Statistical analysis of Q-matrix based diagnostic classification models. Journal of the American Statistical Association, 110, 850–866.
Choi, J., Zou, H., & Oehlert, G. (2011). A penalized maximum likelihood approach to sparse factor analysis. Statistics and Its Interface, 3, 429–436.
Chou, C.-P., & Bentler, P. M. (1990). Model modification in covariance structure modeling: A comparison among likelihood ratio, Lagrange multiplier, and Wald tests. Multivariate Behavioral Research, 25, 115–136.
Cudeck, R., & Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioral Research, 18, 147–167.
Cudeck, R., & Henly, S. J. (1991). Model selection in covariance structures analysis and the “problem” of sample size: A clarification. Psychological Bulletin, 109, 512–519.
Dempster, A. P., Laird, N. M., & Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). Journal of the Royal Statistical Society, Series B, 39, 1–38.
Donoho, D. L., & Johnstone, J. M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81, 425–455.
Fan, J., & Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96, 1348–1360.
Fan, J., & Lv, J. (2011). Nonconcave penalized likelihood with NP-dimensionality. IEEE Transactions on Information Theory, 57, 5467–5484.
Fan, Y., & Li, R. (2012). Variable selection in linear mixed effects models. The Annals of Statistics, 40, 2043–2068.
Fan, J., & Peng, H. (2004). Nonconcave penalized likelihood with a diverging number of parameters. The Annals of Statistics, 32, 928–961.
Friedman, J., Hastie, T., Höfling, H., & Tibshirani, R. (2007). Pathwise coordinate optimization. The Annals of Applied Statistics, 1, 302–332.
Garcia, R. I., Ibrahim, J. G., & Zhu, H. (2010). Variable selection for regression models with missing covariate data. Statistica Sinica, 20, 149–165.
Groll, A., & Tutz, G. (2014). Variable selection for generalized linear mixed models by ℓ1-penalized estimation. Statistics and Computing, 24, 137–154.
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning (2nd ed.). New York, NY: Springer.
Hastie, T., Tibshirani, R., & Wainwright, M. (2015). Statistical learning with sparsity: The lasso and generalizations. London: CRC Press.
Hirose, K., & Yamamoto, M. (2014). Estimation of oblique structure via penalized likelihood factor analysis. Computational Statistics and Data Analysis, 79, 120–132.
Hirose, K., & Yamamoto, M. (2015). Sparse estimation via nonconcave penalized likelihood in a factor analysis model. Statistics and Computing, 25, 863–875.
Holzinger, K., & Swineford, F. (1939). A study in factor analysis: The stability of a bifactor solution. Supplementary Educational Monograph, No. 48. Chicago: University of Chicago Press.
Hoerl, A. E., & Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12, 55–67.
Huang, P.-H. (2015). lsl: Latent Structure Learning. R package version 0.5.0.
Ibrahim, J. G., Zhu, H., Garcia, R. I., & Guo, R. (2011). Fixed and random effects selection in mixed effects models. Biometrics, 67, 495–503.
Jöreskog, K. G. (1969). A general approach to confirmatory maximum likelihood factor analysis. Psychometrika, 34, 183–202.
Jung, S. (2012). Structural equation modeling with small sample sizes using two-stage ridge least-squares estimation. Behavior Research Methods, 45, 75–81.
Kaplan, D. (1988). The impact of specification error on the estimation, testing, and improvement of structural equation models. Multivariate Behavioral Research, 23, 69–86.
Kwon, S., & Kim, Y. (2012). Large sample properties of the SCAD-penalized maximum likelihood estimation on high dimensions. Statistica Sinica, 22, 629–653.
Lee, S. Y., & Zhu, H. T. (2002). Maximum likelihood estimation of nonlinear structural equation models. Psychometrika, 67, 189–210.
Leeb, H., & Pötscher, B. M. (2006). Can one estimate the conditional distribution of post-model-selection estimators? The Annals of Statistics, 34, 2554–2591.
MacCallum, R. C. (1986). Specification searches in covariance structure modeling. Psychological Bulletin, 100, 107–120.
MacCallum, R. C. (2003). Working with imperfect models. Multivariate Behavioral Research, 38, 113–139.
Marsh, H. W., Morin, A. J. S., Parker, P. D., & Kaur, G. (2014). Exploratory structural equation modeling: An integration of the best features of exploratory and confirmatory factor analysis. Annual Review of Clinical Psychology, 10, 85–110.
Mazumder, R., Friedman, J., & Hastie, T. (2011). SparseNet: Coordinate descent with nonconvex penalties. Journal of the American Statistical Association, 106, 1125–1138.
McDonald, R. P. (1982). A note on the investigation of local and global identifiability. Psychometrika, 47, 101–103.
Meng, X.-L. (2008). Discussion: One-step sparse estimates in nonconcave penalized likelihood models: Who cares if it is a white cat or a black cat? The Annals of Statistics, 36, 1542–1552.
Meng, X.-L., & Rubin, D. B. (1993). Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika, 80, 267–278.
Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156–166.
Ning, L., & Georgiou, T. T. (2011). Sparse factor analysis via likelihood and ℓ1 regularization. In 50th IEEE Conference on Decision and Control and European Control Conference (pp. 5188–5192).
Preacher, K. J. (2006). Quantifying parsimony in structural equation modeling. Multivariate Behavioral Research, 41, 227–259.
R Core Team. (2016). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48, 1–36.
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6, 461–464.
Shao, J. (1997). An asymptotic theory for linear model selection. Statistica Sinica, 7, 221–242.
Shapiro, A., & Browne, M. W. (1983). On the investigation of local identifiability: A counterexample. Psychometrika, 48, 303–304.
Schelldorfer, J., Bühlmann, P., & van de Geer, S. (2011). Estimation for high-dimensional linear mixed-effects models using ℓ1-penalization. The Scandinavian Journal of Statistics, 38, 197–214.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. Journal of the Royal Statistical Society, Series B, 36, 111–147.
Strawderman, R. L., Wells, M. T., & Schifano, E. D. (2013). Hierarchical Bayes, maximum a posteriori estimators, and minimax concave penalized likelihood estimation. Electronic Journal of Statistics, 7, 973–990.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58, 267–288.
Trendafilov, N. T., & Adachi, K. (2015). Sparse versus simple structure loadings. Psychometrika, 80, 776–790.
Tutz, G., & Schauberger, G. (2015). A penalty approach to differential item functioning in Rasch models. Psychometrika, 80, 21–43.
Vrieze, S. I. (2012). Model selection and psychological theory: A discussion of the differences between the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Psychological Methods, 17, 228–243.
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54, 1063–1070.
Yuan, K.-H., Marshall, L. L., & Bentler, P. M. (2003). Assessing the effect of model misspecifications on parameter estimates in structural equation models. Sociological Methodology, 33, 241–265.
Yuan, K.-H., & Hayashi, K. (2006). Standard errors in covariance structure models: Asymptotics versus bootstrap. The British Journal of Mathematical and Statistical Psychology, 59, 397–417.
Zhang, C.-H. (2010). Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38, 894–942.
Zhang, Y., Li, R., & Tsai, C.-L. (2012). Regularization parameter selections via generalized information criterion. Journal of the American Statistical Association, 105, 312–323.
Zhao, P., & Yu, B. (2006). On model selection consistency of lasso. Journal of Machine Learning Research, 7, 2541–2563.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101, 1418–1429.
Zou, H., Hastie, T., & Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15, 265–286.