
Sparse Exploratory Factor Analysis

Published online by Cambridge University Press:  01 January 2025

Nickolay T. Trendafilov*
Affiliation:
Open University
Sara Fontanella
Affiliation:
Imperial College London
Kohei Adachi
Affiliation:
Osaka University
*Correspondence should be made to Nickolay T. Trendafilov, School of Mathematics and Statistics, Open University, Milton Keynes, UK. Email: Nickolay.Trendafilov@open.ac.uk

Abstract

Sparse principal component analysis has been a very active research area over the last decade. It produces component loadings with many zero entries, which facilitates their interpretation and helps avoid redundant variables. Classic factor analysis is another popular dimension-reduction technique that shares similar interpretation problems and could greatly benefit from sparse solutions. Unfortunately, very few works consider sparse versions of classic factor analysis. Our goal is to contribute further in this direction. We revisit the most popular procedures for exploratory factor analysis: maximum likelihood and least squares. Sparse factor loadings are obtained for them by, first, adopting a special reparameterization and, second, introducing additional ℓ1-norm penalties into the standard factor analysis problems. As a result, we propose sparse versions of the major factor analysis procedures. We illustrate the developed algorithms on well-known psychometric problems. Our sparse solutions are critically compared to those obtained by other existing methods.
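The abstract describes adding an ℓ1-norm penalty to a standard factor analysis objective so that the estimated loadings contain exact zeros. A minimal sketch of that general idea for the least-squares case, using a plain proximal-gradient (soft-thresholding) scheme in NumPy — this is an illustrative stand-in, not the authors' reparameterized algorithm; the function name, step size, and penalty weight are assumptions:

```python
# Illustrative sketch (not the paper's algorithm): minimise
#   ||R - L L' - Psi||_F^2 + lam * ||L||_1
# over a p x k loading matrix L and diagonal uniquenesses Psi,
# alternating a proximal-gradient step on L (soft-thresholding,
# the prox operator of the l1 penalty) with an exact update of Psi.
import numpy as np

def sparse_ls_fa(R, k, lam=0.05, step=0.01, n_iter=2000, seed=0):
    p = R.shape[0]
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((p, k))   # small random start
    psi = np.ones(p)
    for _ in range(n_iter):
        resid = R - L @ L.T - np.diag(psi)          # symmetric fit residual
        grad = -4.0 * resid @ L                     # gradient of the squared Frobenius loss in L
        L = L - step * grad
        # soft-threshold: prox of lam*||L||_1 with step size `step`
        L = np.sign(L) * np.maximum(np.abs(L) - step * lam, 0.0)
        # exact minimiser over the diagonal Psi, kept strictly positive
        psi = np.maximum(np.diag(R - L @ L.T), 1e-4)
    return L, psi
```

Larger values of `lam` drive more loadings exactly to zero at the cost of fit, which is the trade-off the paper's sparse procedures navigate (there via a special reparameterization rather than this generic scheme).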

Type
Original paper
Copyright
Copyright © 2017 The Psychometric Society

