
A General Theorem and Proof for the Identification of Composed CFA Models

Published online by Cambridge University Press:  01 January 2025

R. Maximilian Bee*
Affiliation:
Friedrich Schiller University Jena
Tobias Koch
Affiliation:
Friedrich Schiller University Jena
Michael Eid
Affiliation:
Freie Universität Berlin
*
Correspondence should be made to R. Maximilian Bee, Psychological Methods Division, Institute for Psychology, Friedrich Schiller University Jena, Am Steiger 3/Haus 1, 07743 Jena, Germany. Email: richard.maximilian.bee@uni-jena.de

Abstract

In this article, we present a general theorem and proof for the global identification of composed CFA models. They consist of identified submodels that are related only through covariances between their respective latent factors. Composed CFA models are frequently used in the analysis of multimethod data, longitudinal data, or multidimensional psychometric data. Firstly, our theorem enables researchers to reduce the problem of identifying the composed model to the problem of identifying the submodels and verifying the conditions given by our theorem. Secondly, we show that composed CFA models are globally identified if the primary models are reduced models such as the CT-C$(M-1)$ model or similar types of models. In contrast, composed CFA models that include non-reduced primary models can be globally underidentified for certain types of cross-model covariance assumptions. We discuss necessary and sufficient conditions for the global identification of arbitrary composed CFA models and provide Python code to check the identification status for an illustrative example. The code we provide can be easily adapted to more complex models.

Type
Theory and Methods
Creative Commons
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Copyright
Copyright © 2023 The Author(s)

Confirmatory factor analysis [CFA] models are ubiquitous in the social sciences. They are frequently applied in multitrait-multimethod [MT-MM] research (e.g., Eid, 2000; Eid et al., 2003; Jeon et al., 2018; Kenny, 1976; Kenny & Kashy, 1992), in longitudinal or measurement-of-change research (e.g., Courvoisier et al., 2008; Hedeker & Gibbons, 2006; Koch et al., 2018; Little, 2013; McArdle & Nesselroade, 2014; Newsom, 2015), as well as in psychometrics (e.g., bifactor models and applications; Cai et al., 2011; Gibbons et al., 2007; Gibbons & Hedeker, 1992; Jeon et al., 2013; Rijmen, 2010), among many others.

For any statistical inference based on a CFA model to be meaningful, it is imperative that the given model is identified. A frequently encountered definition of model identification can be stated as follows. Consider a model with model-implied covariance matrix $\Sigma = \Sigma(\theta)$ as a function of model parameters $\theta$ belonging to this model's space of permissible parameter values. The model is identified if two different parameter vectors from this model's parameter space, say $\theta_1$ and $\theta_2$, cannot produce the same model-implied covariance matrix.
In other words, the model is identified if $\Sigma(\theta_1) \ne \Sigma(\theta_2)$ whenever $\theta_1 \ne \theta_2$ (Bollen, 1989; Jöreskog, 1978). If this property holds for the whole parameter space, the model is identified everywhere. This is in contrast to generic identification, which allows this property not to hold on negligible sets (more formally, sets of measure zero; see Bekker & ten Berge, 1997).

Identification is a necessary condition for the model parameters to be uniquely estimable from the data (Bollen, 1989). That is, repeated applications of the same model to the same data may result in differing conclusions if the model is not identified. Empirically, the application of underidentified models has been found to give misleading parameter estimates, which cannot be replicated in different samples (Kenny & Kashy, 1992), as well as divergent solutions and improper estimates (Geiser et al., 2014).

There is a range of well-known necessary conditions for the identification of CFA models and structural equation models [SEM] (Anderson & Rubin, 1956; Bekker et al., 1994; Bollen, 1989). Moreover, there is a large body of research establishing rules for the local identification of CFA models (e.g., Anderson & Rubin, 1956; Bekker, 1989; Bekker et al., 1994; Bollen, 1989; Reilly, 1995; Shapiro, 1985; Wegge, 1996). These rules determine the identification of a model for all parameters in an open neighborhood of some point of the parameter space.
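Local identification at a point can be checked mechanically by verifying that the Jacobian of the non-redundant model-implied covariances with respect to the parameters has full column rank there. The following SymPy sketch does this for a hypothetical one-factor model with three indicators (the model and the parameter values are ours, purely for illustration, not from this article):

```python
import sympy as sp

# Toy one-factor model: first loading fixed to 1, free loadings l2, l3,
# factor variance phi, error variances psi1..psi3 -- six free parameters.
l2, l3, phi, p1, p2, p3 = sp.symbols('lambda2 lambda3 phi psi1 psi2 psi3')
theta = [l2, l3, phi, p1, p2, p3]

Lam = sp.Matrix([[1], [l2], [l3]])
Sigma = Lam * phi * Lam.T + sp.diag(p1, p2, p3)

# Stack the non-redundant (lower-triangular) elements of Sigma as a vector
vech = sp.Matrix([Sigma[i, j] for i in range(3) for j in range(i + 1)])
J = vech.jacobian(theta)

# Full column rank of the Jacobian at a point => local identification there
point = {l2: sp.Rational(4, 5), l3: sp.Rational(6, 5), phi: 1,
         p1: sp.Rational(1, 2), p2: sp.Rational(2, 5), p3: sp.Rational(3, 10)}
print(J.subs(point).rank())  # 6 = number of parameters -> locally identified
```

This only establishes identification in a neighborhood of the chosen point, which is exactly the limitation discussed next.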

Nevertheless, even if a model is locally identified everywhere, it can still be globally underidentified (Bollen, 1989; Reilly, 1995), and there exist neither general sufficient nor general necessary conditions for the global identification of an arbitrary CFA model (Bollen, 1989; Grayson & Marsh, 1994). Hence, practitioners often resort to proving the identification of a given model algebraically. For the parameter vector $\theta$, this is done by finding the inverse function of the model-implied covariance matrix $\Sigma(\theta)$. This process can become practically infeasible if it involves solving multiple (generally nonlinear) equations simultaneously (Bollen, 1989).
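For small models, this algebraic inversion can be carried out symbolically. A minimal sketch, again for a hypothetical one-factor model with three indicators: we generate "observed" covariances from known parameter values and ask SymPy for all admissible solutions of the covariance equations; a unique solution means the model is globally identified on the positive parameter space.

```python
import sympy as sp

# Same toy one-factor, three-indicator model; all parameters positive
l2, l3, phi, p1, p2, p3 = sp.symbols('lambda2 lambda3 phi psi1 psi2 psi3',
                                     positive=True)
Lam = sp.Matrix([[1], [l2], [l3]])
Sigma = Lam * phi * Lam.T + sp.diag(p1, p2, p3)

# "Observed" covariances generated from a known parameter vector
true = {l2: sp.Rational(4, 5), l3: sp.Rational(6, 5), phi: 1,
        p1: sp.Rational(1, 2), p2: sp.Rational(2, 5), p3: sp.Rational(3, 10)}
S = Sigma.subs(true)

# Invert Sigma(theta): solve the six covariance equations for the parameters
eqs = [sp.Eq(Sigma[i, j], S[i, j]) for i in range(3) for j in range(i + 1)]
sols = sp.solve(eqs, [l2, l3, phi, p1, p2, p3], dict=True)
print(len(sols))  # a single solution -> globally identified for these values
```

For larger models, the number and nonlinearity of these equations quickly make the symbolic approach infeasible, which motivates the pattern-based rules discussed below.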

However, models can be grouped by the patterns they share in their respective loading, factor, or error covariance matrices. These patterns can then be exploited to derive rules that determine the global identification status of a whole subclass of models. Davis (1993) showed which residual covariance and loading patterns imply identification of models in which each item has factor complexity one, that is, a nonzero loading on one factor only. These sufficient conditions were extended by Reilly (1995) to conditions that are both necessary and sufficient for the same class of models. In another work, Reilly and O'Brien (1996) stated identification conditions for the factor loadings in models where each factor has at least one item of factor complexity one.

As another example, Grayson and Marsh (1994) showed that any CFA model with diagonal error covariance matrix $\Psi$ and block-diagonal factor covariance matrix $\Phi$ with all blocks saturated is not identified if its loading matrix, $\Lambda = (\Lambda_1 \vert \Lambda_2 \vert \cdots)$, has one or more submatrices $\Lambda_i$, or one or more pairs of submatrices $(\Lambda_j \vert \Lambda_k)$, with linearly dependent columns. Based on this result, Grayson and Marsh (1994) provided necessary and sufficient conditions for the identification of MT-MM models.
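The linear-dependence condition in this rule can be checked numerically with a rank computation. A small sketch (the loading blocks below are our own illustrative values, not taken from the article):

```python
import numpy as np

def violates_grayson_marsh(blocks):
    """Return True if any loading block, or any pair of blocks stacked
    side by side, has linearly dependent columns (rank < column count)."""
    def dependent(M):
        return np.linalg.matrix_rank(M) < M.shape[1]
    if any(dependent(B) for B in blocks):
        return True
    return any(dependent(np.hstack((blocks[j], blocks[k])))
               for j in range(len(blocks)) for k in range(j + 1, len(blocks)))

# Two single-column blocks for six items; each block alone has full column
# rank, but the pair (Lambda_1 | Lambda_2) has proportional columns.
L1 = np.array([[1.0], [1.0], [1.0], [0.0], [0.0], [0.0]])
L2 = np.array([[2.0], [2.0], [2.0], [0.0], [0.0], [0.0]])
print(violates_grayson_marsh([L1, L2]))  # True -> not identified by the rule
```

Note that this only flags the rank deficiency the rule names; the other premises of the rule (diagonal $\Psi$, block-diagonal saturated $\Phi$) must hold for the conclusion to apply.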

Fang et al. (2021), building on the work of Anderson and Rubin (1956), give identification conditions for so-called two-tier bifactor models. These models consist of two submodels with diagonal factor covariance matrices, that is, uncorrelated factors. Across submodels, only the general factors are allowed to correlate.

In the present article, we are equally concerned with the process of combining submodels to form a larger model, but we abstract from any specific modeling framework, such as MT-MM or bifactor models. Instead, we broaden the scope and introduce a subclass of CFA models we denote composed models. We give a theorem stating not only necessary but also sufficient conditions for their global identification and offer recommendations on how to deal with the challenges presented by these models. The aim of this study is to supply researchers with the theoretical foundations and practical guidelines that enable them to better understand why certain models behave the way they do and to ensure the identification of the models they work with.

In order to delineate the notion of a composed model, we consider the following scenario. A researcher investigates the relationship between two constructs of interest in a population, expressed by two separate CFA models, which we call the primary models. Beyond the covariance matrices of the individual models, the researcher is particularly interested in the covariances between factors of the first and factors of the second primary model. To estimate these covariances, the two primary models must be combined into a larger model, incorporating all items from both primary models, which we refer to as the composed model. However, items from one primary model are assumed not to load on factors of the other primary model, such that there are no new loading parameters introduced in the composed model. Put differently, composed models can be considered general structural equation models, in which factors defined in distinct measurement models are correlated or related in a latent path model (for an overview, see Wang & Wang, 2012).

Figure 1. A bifactor ESEM model given by items $X_{ji}$, general factor NF, and specific factors $S_{Xi}$, $j \in \{1,2\}$, $i \in \{1,2,3\}$, on the one hand, and a two-factor model with factors P and N indicated by their respective items, $P_i$, $N_i$, $i \in \{1,2\}$, on the other. In the bifactor ESEM model, the exploratory loadings are shown with dotted lines. There are no cross-loadings, but all factors are correlated across models. Cross-model covariances are represented by dashed lines. The nomenclature is chosen to resemble the model employed by Tóth-Király et al. (2017) relating need fulfillment to positive and negative affect. Analogously to Figs. 2 and 3, the parameter labels have been omitted.

Numerous examples can be found in the literature where the factors in some CFA model are related to a single criterion such as, for example, task performance (Debusscher et al., 2017), humor (Christensen et al., 2018), or life satisfaction (Chen et al., 2012). A specific instance of this type of model is depicted in Fig. 1. It represents the approach employed by Tóth-Király et al. (2017) to relate need fulfillment, measured by a bifactor exploratory structural equation model [ESEM] (see, e.g., Marsh et al., 2014), to positive and negative affect, measured by a model with two correlated factors.

Figure 2. A composed model with a CT-CU model and a multiprocess IRT model as primary models. The (restricted) CT-CU model consists of factors ERS and MRS indicated by their respective items. Residual variables of items $ERS_i$ and $MRS_i$ are correlated for every i, $i \in \{1,2,3\}$. The multiprocess IRT model is given by factors PI, PII, PIII. The auxiliary latent variables $Ps_i^{*}$ are indicated by dichotomous manifest variables $Ps_i$ with unit loadings. ($Ps_2^{*}$ and $Ps_3^{*}$ are pseudo-variables and can be identified with PII and PIII, respectively.) There are no cross-loadings, but all factors are correlated across models. Cross-model covariances are represented by dashed lines. This is a simplified version of the model employed by Plieninger and Meiser (2014) to relate response styles and IRT processes, in which the factors are not simply correlated across models; rather, the process factors are regressed on the response style factors, and there is an additional criterion variable. Analogously to Figs. 1 and 3, the parameter labels have been omitted.

As another example, Plieninger and Meiser (2014) validated the response processes in a multiprocess item response theory [IRT] model with response style scales. With the multiprocess IRT model as the first primary model, the second one was obtained by analyzing the scales for extreme as well as mid response styles, forming parcels, and modeling them with a correlated traits-correlated uniqueness [CT-CU] model. In the original article, the authors used latent regression between the primary models based on theoretical considerations. Latent regression, like any other linear dependency that can be represented in a path model, is a function of the latent variances and covariances. Because we wish to abstract from any specific path model, we state our theorem in terms of these fundamental parameters. This article thus lays the groundwork for a discussion of arbitrary path models in composed models, although such a discussion is beyond its scope. The correlational composed model underlying the latent regression employed by Plieninger and Meiser (2014) is shown in Fig. 2.

Our theorem also covers combinations of more complex models, such as multiconstruct growth curve models (Bollen & Curran, 2005) or latent state-trait [LST] models (e.g., Schmitt, 2000; Steyer et al., 2012). A schematic example of the former is given in Fig. 3: Two constructs, measured by two distinct growth curve models, are related to one another via occasion-specific correlations on the one hand and correlated intercept as well as slope factors on the other. However, in some applications of these kinds of longitudinal models the problem of autocorrelated errors might arise. In the Discussion section, we explain how correlated errors can be handled within our approach.

Figure 3. A composed model with two growth curve models (see, e.g., Bollen & Curran, 2005) given by items $X_{ji}$, occasions $O_{Xi}$, intercept factor $Int_X$, and slope factor $Slo_X$, as well as items $Y_{ji}$, occasions $O_{Yi}$, intercept factor $Int_Y$, and slope factor $Slo_Y$, $j \in \{1,2\}$ and $i \in \{1,2,3\}$, respectively. As in Fig. 4, there are no cross-loadings. All factors are correlated across models, and cross-model covariances are represented by dashed lines. Loadings, (co-)variances, and errors have not been labeled for the sake of readability because the model is not discussed further in this article.

Figure 4. A composed model with two bifactor models as primary models, given by items $X_{ji}$, general factor $G_X$, and specific factors $S_{Xi}$, as well as items $Y_{ji}$, general factor $G_Y$, and specific factors $S_{Yi}$, $j \in \{1,2\}$ and $i \in \{1,2,3\}$, respectively. Items of the first primary (bifactor) model do not load on the second primary (bifactor) model and vice versa. Black dashed lines represent covariances on the diagonal of the cross-model factor covariance matrix $\Phi_{YX}$ as given in Eq. (9), and gray dashed lines represent off-diagonal covariances in $\Phi_{YX}$. If the covariances represented by the gray dashed lines are fixed to zero as in Eq. (10), the model is a multiconstruct bifactor model (see Koch et al., 2018).

Finally, as another example of a composed model consisting of more complex primary models that are prominently applied in the social sciences, consider Fig. 4. The model consists of two bifactor models with one general and three specific factors each (Reise, 2012). The composed model obtained by considering only the black lines is a multiconstruct bifactor model (Eid et al., in preparation; Koch et al., 2018). Because we will reference this example throughout the article and use it to illustrate how our theorem can be applied in practice, all model parameters (i.e., loadings, variances, and covariances as well as errors) are explicitly given.

In the specific model chosen, both primary models are bifactor reformulations of hierarchical G-factor models (Markon, 2019; Reise, 2012). That is, their loading structures are derived from the Schmid–Leiman solution of a hierarchical model (Schmid & Leiman, 1957; Yung et al., 1999). To see what this implies, note that the loadings of the items on the specific factors need to be set to unity because there are only two items per factor. Then, because loadings on the G-factor in the bifactor model are equal to the product of the path coefficients along the respective paths in the hierarchical model, for each facet (i.e., for each i) the loadings of the items $X_{ji}$ and $Y_{ji}$ on $G_X$ and $G_Y$, respectively, are equal and therefore linearly dependent on the corresponding loadings on the specific factors $S_{Xi}$ and $S_{Yi}$.

We emphasize, however, that the necessary and sufficient conditions we state in our theorem hold not only for the loading structure given in this example, but for all possible loading structures, such as essentially $\tau$-equivalent models (Steyer, 2015), the Green–Yang factor structure (Green and Yang, 2018), or trifactor models (Jeon et al., 2018). In fact, as the variety of these examples indicates, our theorem holds not only for bifactor models but for all models that share the present form: two primary models are combined to form a composed model, and loadings as well as error covariances are set to zero across the primary models.

However, as Eid et al. (2018) showed, identification issues can arise even in the simple case of a construct validation study. Specifically, a composed model predicting academic achievement by all factors of a bifactor model measuring intelligence is not identified if the bifactor model has an essentially $\tau$-equivalent loading structure. That is, although both primary models are identified, the composed model is not. In the present article, we replicate this counterintuitive finding for a much larger class of models under which the model considered by Eid et al. (2018) can be subsumed.

The practical relevance of our theorem lies in the fact that it not only provides simple rules to determine the identification status of a given CFA model of the type discussed in this article, but also describes the pattern of underidentified parameters and shows how identification can be achieved algorithmically. Lastly, we conclude the article by introducing another class of CFA models, which we denote reduced models, and by explaining how they circumvent identification issues for the composed model. Throughout the article, we illustrate important results with the help of our introductory example.

A mathematically rigorous treatment of our results is provided in the Supplementary Material so that interested readers can retrace all steps of our theorem and proofs. It gives exact definitions of the concepts presented more colloquially in the main body of the text and states the central result, its proof, and its corollaries in terms of these definitions. The Supplementary Material also provides a more detailed definition of reduced models than is given in the main body of the text. Lastly, along with this article we provide Python code that performs the identification analysis for the introductory example and is easily adapted to other situations.

1. Central Result

Consider a CFA model given by

(1) $$\Sigma_{c}=\Lambda_{c}\Phi_{c}\Lambda_{c}^{T}+\Psi_{c}$$

with (block) loading matrix

(2) $$\Lambda_{c}:=\begin{pmatrix} \Lambda_{1} & \mathbf{0} \\ \mathbf{0} & \Lambda_{2} \end{pmatrix},$$

(block) latent variable covariance matrix

(3) $$\Phi_{c}=\begin{pmatrix} \Phi_{1} & \Phi_{21}^{T} \\ \Phi_{21} & \Phi_{2} \end{pmatrix},$$

and (block) error covariance matrix

(4) $$\Psi_{c}=\begin{pmatrix} \Psi_{1} & \mathbf{0} \\ \mathbf{0} & \Psi_{2} \end{pmatrix}.$$

We refer to this model as the composed model. The submatrices on the diagonals in Eqs. (2)–(4) in turn constitute CFA models given by the model equations

(5) $$\Sigma_{1}=\Lambda_{1}\Phi_{1}\Lambda_{1}^{T}+\Psi_{1}$$

and

(6) $$\Sigma_{2}=\Lambda_{2}\Phi_{2}\Lambda_{2}^{T}+\Psi_{2}.$$

We refer to the two models defined by Eqs. (5) and (6) as the primary models.
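The block structure in Eqs. (1)–(6) can be assembled directly. The following minimal NumPy sketch (the function name is ours, not part of the accompanying code) builds the composed model's implied covariance matrix from the primary models' matrices and an arbitrary cross-model block:

```python
import numpy as np

def composed_sigma(L1, Phi1, Psi1, L2, Phi2, Psi2, Phi21):
    """Assemble the composed model's implied covariance matrix, Eq. (1)."""
    n1, k1 = L1.shape
    n2, k2 = L2.shape
    # Block-diagonal loading matrix, Eq. (2)
    Lc = np.block([[L1, np.zeros((n1, k2))],
                   [np.zeros((n2, k1)), L2]])
    # Full latent covariance matrix with cross-model block, Eq. (3)
    Phic = np.block([[Phi1, Phi21.T],
                     [Phi21, Phi2]])
    # Block-diagonal error covariance matrix, Eq. (4)
    Psic = np.block([[Psi1, np.zeros((n1, n2))],
                     [np.zeros((n2, n1)), Psi2]])
    return Lc @ Phic @ Lc.T + Psic
```

Because loadings and error covariances are zero across the primary models, the lower-left block of the resulting $\Sigma_{c}$ equals $\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}$, the only part of $\Sigma_{c}$ that involves $\Phi_{21}$.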

The only submatrix not contained in either primary model is $\Phi_{21}$. It contains all covariances between any factor from the first primary model and any factor from the second primary model. We subsequently refer to these covariances as the cross-model factor covariances and to $\Phi_{21}$ as the cross-model factor covariance matrix.

Note that $\Phi_{21}$ may be constrained, which our theorem explicitly takes into account. In the present context, we limit ourselves to linear constraints that do not introduce new parameters (such as latent regression weights, which can be identified once the covariances have been identified). Such constraints make up the overwhelming majority of commonly applied constraints on covariances (Bekker, 1989; Bollen, 1989), since they include, for example, setting a covariance to zero or equal to some other covariance.
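Linear constraints of this kind can be encoded by basis matrices spanning the admissible subspace for $\Phi_{21}$: every admissible cross-model block is a linear combination of the basis matrices, so no new parameters are introduced. A minimal sketch with a hypothetical $2\times 2$ cross-model block:

```python
import numpy as np

# Hypothetical constraints on a 2x2 cross-model block: phi_11 free,
# phi_12 = phi_21 (equality constraint), phi_22 fixed to zero.
B1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])   # carries the free parameter phi_11
B2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])   # enforces phi_12 = phi_21

# Any admissible Phi21 is a linear combination of the basis matrices,
# here with (hypothetical) parameter values 0.5 and 0.3.
phi21 = 0.5 * B1 + 0.3 * B2
```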

The terminology introduced for our introductory example is in line with this definition: The model depicted in Fig. 4 is a composed model with two bifactor models as primary models, each with one general and three specific factors. Its primary models have respective loading matrices

(7) $$\Lambda_{X}:=\begin{pmatrix} 1 & 0 & 0 & 1\\ 1 & 0 & 0 & 1\\ 0 & 1 & 0 & \gamma_{X2}\\ 0 & 1 & 0 & \gamma_{X2}\\ 0 & 0 & 1 & \gamma_{X3}\\ 0 & 0 & 1 & \gamma_{X3} \end{pmatrix}\quad\text{and}\quad \Lambda_{Y}:=\begin{pmatrix} 1 & 0 & 0 & 1\\ 1 & 0 & 0 & 1\\ 0 & 1 & 0 & \gamma_{Y2}\\ 0 & 1 & 0 & \gamma_{Y2}\\ 0 & 0 & 1 & \gamma_{Y3}\\ 0 & 0 & 1 & \gamma_{Y3} \end{pmatrix}$$

as well as factor covariance matrices

(8) $$\Phi_{X}:=\begin{pmatrix} \sigma_{S_{X1}}^{2} & 0 & 0 & 0\\ 0 & \sigma_{S_{X2}}^{2} & 0 & 0\\ 0 & 0 & \sigma_{S_{X3}}^{2} & 0\\ 0 & 0 & 0 & \sigma_{G_{X}}^{2} \end{pmatrix}\quad\text{and}\quad \Phi_{Y}:=\begin{pmatrix} \sigma_{S_{Y1}}^{2} & 0 & 0 & 0\\ 0 & \sigma_{S_{Y2}}^{2} & 0 & 0\\ 0 & 0 & \sigma_{S_{Y3}}^{2} & 0\\ 0 & 0 & 0 & \sigma_{G_{Y}}^{2} \end{pmatrix}.$$

Lastly, all error variables are assumed to be uncorrelated, such that the error covariance matrices $\Psi_{X}$ and $\Psi_{Y}$ of both bifactor models are diagonal. The only submatrix in the composed model yet to be defined is the cross-model factor covariance matrix $\Phi_{YX}$ (i.e., $\Phi_{21}$ in the general definition, the lower-left submatrix of $\Phi_{c}$), which is given by

(9) $$\Phi_{YX}:=\begin{pmatrix} \sigma_{S_{X1}S_{Y1}} & 0 & 0 & \sigma_{G_{X}S_{Y1}}\\ 0 & \sigma_{S_{X2}S_{Y2}} & 0 & \sigma_{G_{X}S_{Y2}}\\ 0 & 0 & \sigma_{S_{X3}S_{Y3}} & \sigma_{G_{X}S_{Y3}}\\ \sigma_{S_{X1}G_{Y}} & \sigma_{S_{X2}G_{Y}} & \sigma_{S_{X3}G_{Y}} & \sigma_{G_{X}G_{Y}} \end{pmatrix}.$$
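For later reference, note that the loading matrices in Eq. (7) have linearly dependent columns: the G-factor column is the weighted sum of the specific-factor columns. A quick numerical check (with arbitrary nonzero values chosen by us for the free loadings) confirms the rank deficiency:

```python
import numpy as np

gX2, gX3 = 0.8, 1.2   # arbitrary nonzero values for the free loadings
LX = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, gX2],
    [0, 1, 0, gX2],
    [0, 0, 1, gX3],
    [0, 0, 1, gX3],
], dtype=float)

# The fourth (G-factor) column equals c1 + gX2*c2 + gX3*c3, so the
# columns are linearly dependent: rank 3, although LX has 4 columns.
rank = np.linalg.matrix_rank(LX)
```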

As bifactor models, the primary models are identified (see Steyer et al., 2015, for details). The central question thus becomes: under what conditions is the composed model identified, given that both primary models are identified by themselves? The answer is provided by the following theorem.

Theorem 1

Let the primary models be globally identified, and let all constraints on $\Phi_{21}$, if any, be linear and introduce no new parameters. Then, the following holds.

  • (a) The composed model is [generically] globally identified if and only if $\Lambda_{1}\otimes\Lambda_{2}=\Lambda_{1}(\theta_{\Lambda_{1}})\otimes\Lambda_{2}(\theta_{\Lambda_{2}})$ is injective for [almost] all $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$ on the parameter space containing the cross-model covariance parameters arranged in $\Phi_{21}$. In particular, this implies the following.

  • (b) $\Lambda_{1}=\Lambda_{1}(\theta_{\Lambda_{1}})$ and $\Lambda_{2}=\Lambda_{2}(\theta_{\Lambda_{2}})$ having full rank for [almost] all $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$ is sufficient for the composed model to be [generically] globally identified.

  • (c) If $\Phi_{21}$ is unrestricted, then $\Lambda_{1}=\Lambda_{1}(\theta_{\Lambda_{1}})$ and $\Lambda_{2}=\Lambda_{2}(\theta_{\Lambda_{2}})$ having full rank for [almost] all $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$ is necessary and sufficient for the composed model to be [generically] globally identified.

If the primary models are globally identified everywhere, then both variants of Items (a)–(c) hold. If the primary models are only generically globally identified, then only the generic variants of Items (a)–(c) hold.

Proof

Theorem 1 summarizes Theorem S.3 and its corollaries in the Supplementary Material. Here, we give a concise but self-contained proof with pointers to the Supplementary Material (S) so that the interested reader can find a formal justification for each step.

Recall that a map $f:U\rightarrow V$ is injective on $U$ if and only if $f(x)=f(y)$ implies $x=y$ for all $x,y\in U$. The kernel of a linear map $A$, denoted $\ker A$, is the subspace containing all vectors that are mapped to the zero vector under $A$. Lastly, the rank of a linear map is the dimension of its image, which, in the case of matrices, corresponds to the maximal number of linearly independent columns (equivalently, rows). Relevant references can be found in the introduction to the Supplementary Material.
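The way the kernel of a Kronecker product inherits the kernels of its factors, which the proof below relies on, is visible numerically through the rank identity $\operatorname{rank}(A\otimes B)=\operatorname{rank}(A)\operatorname{rank}(B)$: the product has trivial kernel only if both factors do. A small sketch with matrices chosen by us:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # full column rank (rank 2)
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rank-deficient (rank 1)

# rank(A ⊗ B) = rank(A) * rank(B): the Kronecker product inherits any
# rank deficiency of its factors, so its kernel is trivial only when
# both A and B have trivial kernels.
rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(np.kron(A, B))
```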

We start by proving Item (a) in Theorem 1 [which is a restatement of Eq. (S.20)]. For any fixed parameters $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$, the tensor (Kronecker) product $\Lambda_{1}\otimes\Lambda_{2}=\Lambda_{1}(\theta_{\Lambda_{1}})\otimes\Lambda_{2}(\theta_{\Lambda_{2}})$ is a vector-valued representation of the matrix-valued linear map $\Phi_{21}\mapsto\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}$, since $\operatorname{vec}(\Lambda_{2}\Phi_{21}\Lambda_{1}^{T})=(\Lambda_{1}\otimes\Lambda_{2})\operatorname{vec}(\Phi_{21})$ [see Eq. (S.24a)]. The matrix $\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}$, however, is a submatrix of the model-implied covariance matrix $\Sigma_{c}$; thus, uniquely determining the corresponding parameters is necessary for the unique determination of $\Sigma_{c}$ and hence for the identification of the model [see Eq. (S.21b)].

By the definition of injectivity, if the condition given in Item (a) is fulfilled, no two distinct sets of cross-model factor covariance parameters can result in the same matrix $\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}$, and thus $\Phi_{21}$ is uniquely determined. If the condition fails only on a set of parameters $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$ of measure zero, this set has measure zero in the whole parameter space as well, such that $\Phi_{21}$ is uniquely determined for almost all parameter vectors.

To complete the proof of Item (a), we observe that three crucial assumptions (that the primary models are [generically] globally identified, that items in the composed model load only on their respective primary model's factors, and that errors are uncorrelated across models) imply that the only yet undetermined parameters introduced in the composed model are the cross-model factor covariances in $\Phi_{21}$. The unique determination of the parameters contained in $\Phi_{21}$ is thus also sufficient to identify the full model.

Because $\Lambda_{1}\otimes\Lambda_{2}$ is a linear map for any fixed $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$, it is injective on some subspace if and only if its kernel intersected with this subspace consists solely of the zero vector (we then call the intersection trivial). This condition, in turn, is determined by the column ranks of $\Lambda_{1}$ and $\Lambda_{2}$ as well as by the restrictions on $\Phi_{21}$ (i.e., the equations governing the corresponding subspace). Specifically, if the loading matrices have full column rank, their kernels are trivial. Since $\ker(\Lambda_{1}\otimes\Lambda_{2})$ is composed of the kernels of the two matrices [cf. Lemma S.7], it is then trivial as well. But then $\Lambda_{1}\otimes\Lambda_{2}$ is injective, giving Item (b) by application of Item (a).

On the other hand, if either of the kernels of $\Lambda_{1}$ and $\Lambda_{2}$ is non-trivial and $\Phi_{21}$ is unrestricted, then $\ker(\Lambda_{1}\otimes\Lambda_{2})$ cannot be trivial either, and therefore $\Lambda_{1}\otimes\Lambda_{2}$ cannot be injective. Again by Item (a), the model is not identified, and we obtain Item (c), which concludes the proof. $\square$

To elaborate, Item (b) in Theorem 1 states that working with primary models whose loading matrices have full rank (and therefore linearly independent columns) for [almost] all loading parameter values ensures identification of the composed model, no matter how, if at all, the covariances in $\Phi_{21}$ have been linearly restricted. Conversely, and somewhat surprisingly, Item (c) shows that even when both primary models are identified, if there are no restrictions on the cross-model factor covariances in $\Phi_{21}$, the composed model is [generically] identified only if both loading matrices $\Lambda_{1}$ and $\Lambda_{2}$ have full rank for [almost] all loading parameter values.

However, in applied research, rank-deficient loading matrices of the primary models often arise for theoretical reasons and thus cannot be modified without modifying the underlying theory. Instead, researchers must restrict the cross-model covariances to obtain an identified model. As can be seen from rearranging the product $\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}=\Lambda_{2}(\Lambda_{1}\Phi_{21}^{T})^{T}$, $\Lambda_{2}$ acts on the columns of $\Phi_{21}$, whereas $\Lambda_{1}$ acts on its rows. This implies, firstly, that if $\Lambda_{1}=\Lambda_{1}(\theta_{\Lambda_{1}})$, respectively $\Lambda_{2}=\Lambda_{2}(\theta_{\Lambda_{2}})$, is rank-deficient for [almost] all $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$, then no row, respectively column, of $\Phi_{21}$ can be fully unrestricted. In other words, no factor from a primary model with a rank-deficient loading matrix can covary freely with all factors of the other primary model. Moreover, in terms of structural requirements for $\Lambda_{1}$, $\Lambda_{2}$, and $\Phi_{21}$, this means that, loosely speaking, if there are linearly dependent columns in $\Lambda_{1}$, respectively $\Lambda_{2}$, then $\Phi_{21}$ should not reflect this dependency in its rows, respectively columns. More precisely, to obtain an identified composed model, $\Phi_{21}$ must not be decomposable into a matrix whose rows are vectors from $\ker \Lambda_{1}$ added to a matrix whose columns are vectors from $\ker \Lambda_{2}$ [cf. Proposition S.8].
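To make this concrete, the following SymPy sketch shows that perturbing a cross-model covariance matrix by an outer product of kernel vectors leaves the implied cross-covariances unchanged. The loading matrix used here is a Schmid–Leiman-type bifactor pattern serving as a stand-in for both primary models, with arbitrary generic values for the free loadings; it is an illustration, not the exact matrix of Eq. (7).

```python
import sympy as sp

g2, g3 = sp.Rational(2), sp.Rational(3)  # generic stand-ins for free loadings

# Schmid-Leiman-type bifactor pattern: three specific factors plus a
# general factor whose column is a linear combination of the others
L = sp.Matrix([
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, g2],
    [0, 1, 0, g2],
    [0, 0, 1, g3],
    [0, 0, 1, g3],
])

k = sp.Matrix([-1, -g2, -g3, 1])          # L * k == 0, so k spans ker(L)
assert L * k == sp.zeros(6, 1)

Phi = sp.Matrix(4, 4, lambda i, j: (i + 1) * (j + 1))  # some cross-covariances
Phi_alt = Phi + k * k.T                   # perturbed by kernel outer product

# Both parameter matrices imply identical observed cross-covariances:
print(L * Phi * L.T == L * Phi_alt * L.T)  # True
```

Since the two distinct parameter matrices `Phi` and `Phi_alt` are observationally equivalent, an unrestricted $\Phi_{21}$ cannot be identified here.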

In the case of restricted cross-model covariances and rank-deficient loading matrices, neither Item (b) nor Item (c) applies, and the condition in Item (a) must be checked directly. This can be achieved by purely algebraic deliberations, such as calculating the kernel of $\Lambda_{1}\otimes \Lambda_{2}$ and comparing the resulting equations with those governing the restrictions on $\Phi_{21}$. Alternatively, it can be verified computationally. Indeed, $\Lambda_{1}\otimes \Lambda_{2}$ is injective on the subspace of cross-model covariance parameters precisely when the generating set of $\ker (\Lambda_{1}\otimes \Lambda_{2})$ cannot produce vectors in this subspace and vice versa; that is, when the two spaces intersect only in the zero vector.
This is the case if and only if the sum of the dimensions of these spaces equals the dimension of their sum space. This criterion is applicable because in Theorem 1 all restrictions on $\Phi_{21}$ are assumed to be linear, so that the set of admissible cross-model covariance parameters forms a linear subspace. The condition, in turn, can be translated into an equation involving ranks of matrices containing the respective generating sets [see Corollary S.9]. Such an equation can be checked by a modern computer algebra system [CAS], such as the free SymPy library for Python (Meurer et al., 2017).
An immediate corollary is that, since the rank of $\Lambda_{1}\otimes \Lambda_{2}$ equals the product of the ranks of $\Lambda_{1}$ and $\Lambda_{2}$, the number of free covariances in $\Phi_{21}$ cannot exceed this product.
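The rank identity $\textrm{rank}(A\otimes B)=\textrm{rank}(A)\cdot \textrm{rank}(B)$ underlying this corollary is easy to confirm with SymPy; the sketch below uses two small arbitrary matrices and builds the Kronecker product explicitly.

```python
import sympy as sp

def kron(A, B):
    """Explicit Kronecker product A (x) B."""
    m, n = A.shape
    p, q = B.shape
    return sp.Matrix(m * p, n * q,
                     lambda i, j: A[i // p, j // q] * B[i % p, j % q])

A = sp.Matrix([[1, 0, 1], [0, 1, 1], [1, 1, 2]])  # rank 2 (row 3 = row 1 + row 2)
B = sp.Matrix([[1, 2], [2, 4], [0, 1]])           # rank 2 (row 2 = 2 * row 1)
print(kron(A, B).rank(), A.rank() * B.rank())     # 4 4
```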

In summary, we obtain the following algorithm to resolve underidentification.

1. Calculate the ranks of both primary models’ loading matrices for [almost] all loading parameter vectors $(\theta_{\Lambda_{1}},\theta_{\Lambda_{2}})$.

2. The product of these ranks equals the maximum number of “free” elements permitted in $\Phi_{21}$.

3. Restrict the remaining covariances in $\Phi_{21}$ in a way that leads to identification of the composed model, which can be checked, for example, by verifying Eq. (S.38) algorithmically, as demonstrated in the Python code provided as Supplementary Material to this article.

To give an illustration, we turn to our introductory example again. By considering Eq. (7), we find that $\textrm{rank}\,\Lambda_{X}=\textrm{rank}\,\Lambda_{Y}=3$ for all $(\gamma_{X2},\gamma_{X3},\gamma_{Y2},\gamma_{Y3})$; that is, both loading matrices are rank-deficient. Note that this still holds if there are more than two items per specific factor with additional loadings, as long as the models are Schmid–Leiman solutions to hierarchical models (Yung et al., 1999).

As explicated above, this implies, firstly, that there can be neither fully unrestricted rows nor fully unrestricted columns in $\Phi_{YX}$. Secondly, the maximum number of free parameters permitted in $\Phi_{YX}$ is $\textrm{rank}\,\Lambda_{X}\cdot \textrm{rank}\,\Lambda_{Y}=3\cdot 3=9$. Inspecting $\Phi_{YX}$ as defined in Eq. (9), it follows from both preceding deliberations that the model in Fig. 4 is not identified: Both the last row and the last column are saturated, and there are $10>9$ free covariances in $\Phi_{YX}$.
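These counts can be reproduced with SymPy. The sketch below assumes loading matrices of the Schmid–Leiman form implied by Eq. (11) with the first specific factor restored, and substitutes arbitrary generic numeric values for the free loadings to stand in for “[almost] all” parameter values.

```python
import sympy as sp

def loading_matrix(g2, g3):
    # Bifactor loading matrix: three specific factors plus a general
    # factor (last column); the general-factor column is a linear
    # combination of the specific-factor columns, hence rank 3, not 4
    return sp.Matrix([
        [1, 0, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 0, g2],
        [0, 1, 0, g2],
        [0, 0, 1, g3],
        [0, 0, 1, g3],
    ])

# Generic numeric values stand in for "[almost] all" loading parameters
LX = loading_matrix(sp.Rational(2), sp.Rational(3))
LY = loading_matrix(sp.Rational(5), sp.Rational(7))

max_free = LX.rank() * LY.rank()
print(LX.rank(), LY.rank(), max_free)  # 3 3 9
# Phi_YX of Eq. (9) has 10 free covariances, and 10 > 9, so the
# composed model cannot be identified.
```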

To identify the model without changing the loading structure, we set all covariances except those on the diagonal of $\Phi_{YX}$ to zero. Then only $G_{X}$ and $G_{Y}$, as well as each $k$th pair of specific factors $S_{Xk}$ and $S_{Yk}$ from the two models, are allowed to have nonzero covariance, such that we obtain the altered (now diagonal) cross-model factor covariance matrix

$$\tilde{\Phi }_{YX} := \begin{pmatrix} \sigma _{S_{X1}S_{Y1}} &{} 0 &{} 0 &{} 0\\ 0 &{} \sigma _{S_{X2}S_{Y2}} &{} 0 &{} 0\\ 0 &{} 0 &{} \sigma _{S_{X3}S_{Y3}} &{} 0\\ 0 &{} 0 &{} 0 &{} \sigma _{G_{X}G_{Y}} \end{pmatrix}. \tag{10}$$

We use the tilde to indicate that this covariance matrix belongs to a new model that differs from the one with covariance matrix $\Phi_{YX}$. Restricting $\Phi_{YX}$ to obtain $\tilde{\Phi }_{YX}$ corresponds to removing all gray dashed lines from Fig. 4. As mentioned above, this cross-model factor covariance matrix renders the model in Fig. 4 a multiconstruct bifactor model (see Koch et al., 2018) with a loading structure according to the hierarchical G-factor model.

It is clear that $\tilde{\Phi }_{YX}$ fulfills the necessary identification conditions given above; that is, there are $4\le 9$ free covariances and neither saturated rows nor saturated columns. Sufficiency of the structure in $\tilde{\Phi }_{YX}$ is verified in the aforementioned Python script.
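A self-contained version of such a sufficiency check can be sketched as follows: verify the rank condition of Corollary S.9, namely that the free-parameter subspace of the diagonal $\tilde{\Phi }_{YX}$ meets $\ker (\Lambda_{X}\otimes \Lambda_{Y})$ only in the zero vector. As before, the loading matrices are assumed to have the Schmid–Leiman form implied by Eq. (11) with the first specific factor restored, with generic numeric values for the free loadings.

```python
import sympy as sp

def kron(A, B):
    m, n = A.shape
    p, q = B.shape
    return sp.Matrix(m * p, n * q,
                     lambda i, j: A[i // p, j // q] * B[i % p, j % q])

def loading_matrix(g2, g3):
    return sp.Matrix([
        [1, 0, 0, 1], [1, 0, 0, 1],
        [0, 1, 0, g2], [0, 1, 0, g2],
        [0, 0, 1, g3], [0, 0, 1, g3],
    ])

LX = loading_matrix(sp.Rational(2), sp.Rational(3))  # generic values
LY = loading_matrix(sp.Rational(5), sp.Rational(7))

# vec(LY * Phi_YX * LX^T) = (LX kron LY) vec(Phi_YX), column-major vec
K = kron(LX, LY)
null_basis = K.nullspace()                 # dimension 16 - 9 = 7

# Free parameters of the diagonal Phi~_YX occupy positions 0, 5, 10, 15
# of the column-major vec of a 4x4 matrix
free_basis = [sp.eye(16).col(i) for i in (0, 5, 10, 15)]

stacked = sp.Matrix.hstack(*null_basis, *free_basis)
identified = stacked.rank() == len(null_basis) + len(free_basis)
print(identified)  # True: free subspace meets the kernel only in 0
```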

Alternatively, the loading matrices can be modified to have full rank for almost all loading parameter values by fixing the loading of the first item on the general factor to one and setting the other loadings free (i.e., imposing a $\tau$-congeneric loading structure; Steyer, 2015). Note that the loadings of the items on the specific factors can be set free only if there are at least three items per factor (Steyer et al., 2015). Identification of the composed model then follows from Item (b) in Theorem 1. For completeness, we remark that a composed model with only two indicators per specific factor but unrestricted loadings can be identified through cross-model covariances of the specific factors. However, because our intent is to show how identification of a composed model is achieved with identified primary models, we do not discuss this case here.

If a $\tau$-congeneric loading structure is used, identification is achieved by adding parameters to the model. It is a well-known fact that there are CFA models for which the more general version is identified, whereas the more parsimonious one is not (Bekker et al., 1994). As a simple example, consider a model with two factors indicated by two items each and unrestricted loadings. Bollen (1989, pp. 244–245) showed that this model is identified only if the factors are correlated, whereas the restricted and therefore more parsimonious model with uncorrelated factors is not. The same problem can arise in testing for measurement invariance. For example, Wu and Estabrook (2016) show that researchers can obtain unidentified models when simply adding measurement invariance constraints to a baseline model for ordered categorical outcomes.
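Bollen's two-factor example can be checked numerically by comparing the rank of the Jacobian of the model-implied covariances with the number of free parameters at a generic point, a standard local-identification check (the sketch below is our own illustration, not the cited authors' procedure; one loading per factor is fixed to one for scaling).

```python
import sympy as sp

l2, l4 = sp.symbols('lambda2 lambda4')
p11, p22, p21 = sp.symbols('phi11 phi22 phi21')
t1, t2, t3, t4 = sp.symbols('theta1:5')

Lam = sp.Matrix([[1, 0], [l2, 0], [0, 1], [0, l4]])  # first loadings fixed to 1
Theta = sp.diag(t1, t2, t3, t4)

def jacobian_rank(Phi, params):
    Sigma = Lam * Phi * Lam.T + Theta
    # unique (lower-triangular) elements of the implied covariance matrix
    sig = sp.Matrix([Sigma[i, j] for i in range(4) for j in range(i + 1)])
    J = sig.jacobian(sp.Matrix(params))
    point = {s: sp.Rational(k + 2) for k, s in enumerate(params)}  # generic point
    return J.subs(point).rank()

base = [l2, l4, p11, p22, t1, t2, t3, t4]
Phi_corr = sp.Matrix([[p11, p21], [p21, p22]])  # correlated factors
Phi_diag = sp.diag(p11, p22)                    # uncorrelated factors

print(jacobian_rank(Phi_corr, base + [p21]), len(base) + 1)  # 9 9: identified
print(jacobian_rank(Phi_diag, base), len(base))              # 6 8: underidentified
```

The more general model attains full Jacobian rank, whereas the restricted model has rank 6 with 8 free parameters and is therefore locally underidentified.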

If the loading matrix is specified to have full rank for not all, but almost all, parameter values (i.e., yielding a generically identified composed model), there are regions of measure zero in the parameter space where the model is not identified. When parameter values lie close to such a region in an empirical application, the result is an empirically underidentified model (see, e.g., Eid et al., 2017; Grayson & Marsh, 1994; Kenny & Kashy, 1992). This, however, is not the case for models whose loading matrices have full rank simply due to their configuration (i.e., their pattern of zero and nonzero entries). The main advantage of these models is that they allow researchers to restrict loadings at will (e.g., to test for measurement invariance) while specifying any cross-model covariance [cf. Item (b)]. We discuss such models in the next section.

2. Reduced Models

In the following, we discuss a specific type of model, termed reduced model, and show that taking identified reduced models as primary models results in an identified composed model. We illustrate our findings with the aid of our running example. In the Supplementary Material, we give exact definitions and proofs for the deliberations in this section and further distinguish two types of reduced models in order to determine in which case the composed model is only generically identified.

Recall that an item of factor complexity one is an item that has a nonzero loading on exactly one factor (Reilly, 1995). Furthermore, we say that a factor is associated with an item if this item has a nonzero loading on this factor. Consider some CFA model that contains a factor associated with one or more items of factor complexity one. These items can be considered reference items of this factor, since their variances are assumed to be fully determined by this factor's variance and the residual (error) variance. In other words, the psychometric meaning of this factor depends on these reference items.

Additionally, assume that there are items of factor complexity two that load on this factor plus one other factor. Although they cannot be considered reference items for the first factor, variation in these items is solely due to variation in the two factors and the residual terms. This variation, in turn, can be interpreted in relation to the variation in the reference items of the first factor.

Put differently, if we disregard the first factor, then variation in these items is solely due to the second factor they are associated with, plus the residual. Holding the first factor constant thus renders these items reference items for the second factor. By the same logic, we can examine the remaining factors for reference items holding previously considered factors constant until we have checked every factor in the model. In summary, we can identify unique contributions to every factor if it is possible to sequentially find factors with reference items, that is, items that load on one factor and not others, disregarding factors already equipped with such items. This is the case if the loading matrix of a model is structured in a way that there is the hypothetical possibility of sequentially removing factors with items of factor complexity one.

It can happen that no such sequence exists because the factor complexity of all items is greater than one. Nevertheless, it might be possible that dropping a factor from the model altogether, and thus reducing it, allows such sequences to exist. For this reason, we call models for which these sequences exist reduced models.

To illustrate, consider the loading matrix of either of the two primary models depicted in Fig. 4 and defined in Eq. (7), say $\Lambda_{X}$. All items have factor complexity two; that is, they have nonzero loadings (except for a null set of parameter values) on exactly two factors of this primary model. This means that it is not possible to begin any sequence assigning reference items to every factor, because that would require at least one item of factor complexity one to start with. Consequently, the primary models as defined above and depicted in Fig. 4 are not reduced models.

Figure 5. A composed model with two reduced bifactor models as primary models. A reduced bifactor model with items $X_{ji}$, general factor $G_{X}$, and specific factors $S_{X2}$ and $S_{X3}$, composed with another reduced bifactor model with items $Y_{ji}$, general factor $G_{Y}$, and specific factors $S_{Y2}$ and $S_{Y3}$, for $j\in \{1,2\}$ and $i\in \{1,2,3\}$. As in the non-reduced version depicted in Fig. 4, items of the first primary (bifactor) model do not load on the second primary (bifactor) model and vice versa. In comparison with the model depicted in Fig. 4, note the additional covariances (black solid lines) that are allowed to be nonzero in the bifactor $(S-1)$ model (Eid et al., 2017). The gray dashed lines between the specific factors across the primary models are additional covariances that are allowed to be nonzero while still obtaining an identified composed model.

Nevertheless, dropping any specific factor renders this model a reduced model. Indeed, first observe that this reduces the factor complexity of all items that previously loaded on the dropped specific factor to one. For example, pick the first specific factor to be dropped from the model and redefine the bifactor model as the model with reduced loading matrix

$$\Lambda ^{\text {R}}_{X} := \begin{pmatrix} 0 &{}\quad 0 &{}\quad 1\\ 0 &{}\quad 0 &{}\quad 1\\ 1 &{}\quad 0 &{}\quad \gamma _{X2}\\ 1 &{}\quad 0 &{}\quad \gamma _{X2}\\ 0 &{}\quad 1 &{}\quad \gamma _{X3}\\ 0 &{}\quad 1 &{}\quad \gamma _{X3} \end{pmatrix}. \tag{11}$$

We use the superscript $\text{R}$ to denote that this loading matrix belongs to a new (reduced) model. It is depicted in Fig. 5, together with the analogous reduced model for items $Y_{ji}$, $j\in \{1,2\}$ and $i\in \{1,2,3\}$.
Now, the general factor is associated with items $X_{11}$ and $X_{12}$, corresponding to rows 1 and 2 of $\Lambda ^{\text {R}}_{X}$, as items of factor complexity one. A sequence assigning reference items to every factor while disregarding factors already equipped with such items must therefore start with the general factor, since it is the only factor in $\Lambda ^{\text {R}}_{X}$ associated with items of factor complexity one. Disregarding the general factor, the remaining specific factors both contain only items of factor complexity one, which concludes the sequence. In other words, we determine that the primary models in Fig. 5 are reduced models by scanning their loading matrices for items of factor complexity one, disregarding the corresponding factors, and repeating this process until all columns have been checked.
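This scanning procedure can be sketched as a small Python function operating on the zero/nonzero pattern of a loading matrix (the function name and the 0/1 encoding are our own illustration):

```python
def is_reduced(pattern):
    """Scan a loading pattern (rows of 0/1 entries, 1 = nonzero loading)
    for a sequence of factors with items of factor complexity one,
    disregarding factors already equipped with reference items."""
    remaining = set(range(len(pattern[0])))
    progress = True
    while remaining and progress:
        progress = False
        for j in list(remaining):
            # an item of factor complexity one (w.r.t. the remaining
            # factors) loads on factor j and on no other remaining factor
            if any(row[j] == 1 and
                   all(row[k] == 0 for k in remaining if k != j)
                   for row in pattern):
                remaining.discard(j)
                progress = True
    return not remaining  # True iff every factor received reference items

# Reduced bifactor pattern of Eq. (11): columns S_X2, S_X3, G_X
reduced = [[0, 0, 1], [0, 0, 1],
           [1, 0, 1], [1, 0, 1],
           [0, 1, 1], [0, 1, 1]]

# Non-reduced pattern with the first specific factor included:
# every item has factor complexity two, so no sequence can start
full = [[1, 0, 0, 1], [1, 0, 0, 1],
        [0, 1, 0, 1], [0, 1, 0, 1],
        [0, 0, 1, 1], [0, 0, 1, 1]]

print(is_reduced(reduced), is_reduced(full))  # True False
```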

In the given example, assume that the (remaining) specific factors are correlated and assume that the general factor is uncorrelated with the specific factors, as depicted in Fig. 5. Then, the model with loading matrix Λ X R \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\Lambda ^{\text {R}}_{X}$$\end{document} can be denoted a bifactor ( S - 1 ) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(S-1)$$\end{document} model (Eid et al., Reference Eid, Geiser, Koch and Heene2017), which, in the context of MT-MM research, is known as a correlated traits-correlated methods minus one [CT-C ( M - 1 ) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(M-1)$$\end{document} ] model (Eid, Reference Eid2000). As Geiser et al. (Reference Geiser, Eid and Nussbeck2008) pointed out, there are always as many different possible CT-C ( M - 1 ) \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$(M-1)$$\end{document} models as there are methods for any given data. 
Another example of a reduced model is the bifactor $(S\cdot I-1)$ model (Eid et al., 2017). These models have been shown to possess good psychometric properties: high convergence rates, a low number of improper solutions, and a clear psychometric meaning of the latent factors (Eid, 2000; Eid et al., 2017, 2023; Geiser et al., 2008). Which specific factor is dropped from the model, that is, which items loading on the general factor have factor complexity one and thus serve as reference items for the general factor, has implications for the meaning of the general and the remaining specific factors (Geiser et al., 2008). However, these considerations go beyond the scope of the present article.

In any case, reduced models are not limited to a particular conceptualization of latent variables in the associated measure-theoretic framework. The concept of a reduced model is concerned solely with the configuration of the loading matrix of the model of interest. To see how reduced models lead to identified composed models, recall from the previous section that $\Lambda_{X}$ is rank-deficient. In contrast, it is easily verified that $\Lambda^{\text{R}}_{X}$ has full rank. In fact, every reduced model has this property. Each column that is removed in the iterative process corresponds to a factor with items of factor complexity one; such a column is nonzero in rows for which the other columns are zero and therefore cannot be represented as a linear combination of the other columns. Since this holds for every column, the loading matrix has full column rank. From Item (b) of Theorem 1, it then follows immediately that composed models consisting of identified reduced primary models are always identified.
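The rank claims are easy to verify numerically. A minimal check (unit loadings stand in, purely for illustration, for the equal loadings implied by measurement invariance in the introductory example):

```python
import numpy as np

# Full bifactor loading matrix with all four factors (G, S1, S2, S3).
Lambda_X = np.array([[1, 1, 0, 0],
                     [1, 1, 0, 0],
                     [1, 0, 1, 0],
                     [1, 0, 1, 0],
                     [1, 0, 0, 1],
                     [1, 0, 0, 1]], dtype=float)

# Reduced bifactor (S-1) matrix: the first specific factor is dropped.
Lambda_X_R = Lambda_X[:, [0, 2, 3]]

# The G column equals the sum of the specific columns, so the full matrix
# is rank-deficient, while the reduced matrix has full column rank.
print(np.linalg.matrix_rank(Lambda_X))    # 3 < 4 columns
print(np.linalg.matrix_rank(Lambda_X_R))  # 3 = 3 columns
```

With freely varying loadings, the full matrix can be generically full rank; it is the equality constraints across rows that produce the deficiency discussed here.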

To see how the introductory example can be identified by the use of reduced models, define $\Lambda^{\text{R}}_{Y}$ analogously to $\Lambda^{\text{R}}_{X}$, that is, by removing the first column from $\Lambda_{Y}$. The resulting bifactor $(S-1)$ models are still identified (Eid et al., 2017).
Then, apart from removing the rows and columns of $\Phi_{X}$, $\Phi_{Y}$, and $\Phi_{YX}$ that correspond to the dropped specific factors in the bifactor models, no change is necessary to obtain an identified composed model. The final composed model is depicted in Fig. 5. Moreover, in addition to the covariances between the general factors of one primary model and the specific factors of the other primary model, which were already included in the model depicted in Fig. 4, we can also allow for correlations between the specific factors across the primary models, associated with the gray dashed lines in Fig. 5. This results in the reduced but fully unrestricted cross-model covariance matrix

(12) $$\Phi^{\text{R}}_{YX} := \begin{pmatrix} \sigma_{S_{X2}S_{Y2}} & \sigma_{S_{X3}S_{Y2}} & \sigma_{G_{X}S_{Y2}}\\ \sigma_{S_{X2}S_{Y3}} & \sigma_{S_{X3}S_{Y3}} & \sigma_{G_{X}S_{Y3}}\\ \sigma_{S_{X2}G_{Y}} & \sigma_{S_{X3}G_{Y}} & \sigma_{G_{X}G_{Y}} \end{pmatrix}.$$

Again, this is because the primary models are reduced models. Thus, the composed model is identified regardless of the structure of $\Phi^{\text{R}}_{YX}$, such that allowing additional covariances in $\Phi^{\text{R}}_{YX}$ to be nonzero preserves identification.
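This can be checked directly: vectorizing $\Sigma_{YX} = \Lambda^{\text{R}}_{Y}\Phi^{\text{R}}_{YX}(\Lambda^{\text{R}}_{X})^{T}$ gives $\mathrm{vec}(\Sigma_{YX}) = (\Lambda^{\text{R}}_{X}\otimes\Lambda^{\text{R}}_{Y})\,\mathrm{vec}(\Phi^{\text{R}}_{YX})$, so the unrestricted $\Phi^{\text{R}}_{YX}$ is identified whenever the Kronecker product has full column rank. A sketch with unit loadings assumed for illustration:

```python
import numpy as np

# Reduced bifactor (S-1) loading matrices for both primary models.
Lambda_X_R = np.array([[1, 0, 0], [1, 0, 0],
                       [1, 1, 0], [1, 1, 0],
                       [1, 0, 1], [1, 0, 1]], dtype=float)
Lambda_Y_R = Lambda_X_R.copy()

# vec(Sigma_YX) = (Lambda_X_R kron Lambda_Y_R) @ vec(Phi_YX_R):
# all nine cross-model covariances are identified iff K has full column rank.
K = np.kron(Lambda_X_R, Lambda_Y_R)  # 36 x 9
print(np.linalg.matrix_rank(K))      # 9: full column rank
```

Because the rank of a Kronecker product is the product of the ranks, full column rank of both reduced loading matrices ($3 \times 3 = 9$) suffices, which is exactly why reduced primary models tolerate a fully unrestricted cross-model covariance matrix.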

In summary, working with reduced models simplifies the problem of identifying the composed model to that of identifying the primary models. There exists a body of research devoted to such a modeling framework and the identification of reduced models, for example, in the context of MT-MM analysis and bifactor applications with a reduced number of specific factors (e.g., Geiser et al., 2008; Koch, Holtmann, et al., 2018). Reduced models permit researchers to leave the cross-model factor covariances of the primary models unrestricted, which may be beneficial for the interpretation of the parameters in the composed model (e.g., Eid et al., 2018).

3. Discussion

In the present article, we considered the class of composed CFA models. They consist of identified submodels such that, in the complete model, items of one submodel show no cross-loadings on the factors of the other model and error variables are uncorrelated across models. Thus, the submodels only relate to one another via the covariances of their respective common factors.

Although the assumptions of no cross-model loadings and of uncorrelated residuals across the primary models may be frequently violated in applied research and may appear strict at first glance (Footnote 2), note that, with the aid of auxiliary factors, the latter can be made without loss of generality and the former can be weakened in some cases. For example, it is straightforward to redefine residual variables as latent variables and incorporate them into the structural part of the model, such that residual covariances across models become cross-model (factor) covariances. For this purpose, an auxiliary factor is defined with the corresponding item as its unique indicator. For a composed model with nonzero cross-model error covariance matrix $\Psi_{21}$, redefine the primary models' matrices

(13a) $$\hat{\Lambda}_i := \begin{pmatrix} \Lambda_i&I_i \end{pmatrix},$$
(13b) $$\hat{\Phi}_i := \begin{pmatrix} \Phi_i & 0 \\ 0 & \Psi_i \end{pmatrix},$$
(13c) $$\hat{\Psi}_i := 0$$

for $i\in\{1,2\}$ and appropriately sized identity matrices $I_i$. Moreover, let

(13d) $$\hat{\Phi}_{21} := \begin{pmatrix} \Phi_{21} & 0 \\ 0 & \Psi_{21} \end{pmatrix}$$

and define $\hat{\Lambda}_c$, $\hat{\Psi}_c$, and $\hat{\Phi}_c$ accordingly. Then,

(14a) $$\Sigma_i = \hat{\Lambda}_i\hat{\Phi}_i\hat{\Lambda}_i^T$$
(14b) $$\phantom{\Sigma_i} = \Lambda_i\Phi_i\Lambda_i^T+\Psi_i$$

for $i\in\{1,2\}$. Hence, the primary models' status of identification is unchanged by this procedure.
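The equality of Eqs. (14a) and (14b) can be confirmed numerically for a toy primary model. The block construction below (appending an identity block to the loading matrix and placing $\Phi_i$ and $\Psi_i$ on the block diagonal of the augmented factor covariance matrix) is our assumed realization of the redefinition; all numerical values are illustrative:

```python
import numpy as np

# Toy one-factor primary model with three indicators (values illustrative).
Lambda = np.array([[1.0], [0.8], [0.6]])
Phi = np.array([[1.0]])
Psi = np.diag([0.4, 0.5, 0.6])

# Auxiliary-factor redefinition: every residual becomes a factor with its
# item as unique indicator; factors and auxiliary factors are uncorrelated
# within the model, so the augmented covariance matrix is block-diagonal.
Lambda_hat = np.hstack([Lambda, np.eye(3)])
Phi_hat = np.block([[Phi, np.zeros((1, 3))],
                    [np.zeros((3, 1)), Psi]])

Sigma = Lambda @ Phi @ Lambda.T + Psi            # Eq. (14b)
Sigma_hat = Lambda_hat @ Phi_hat @ Lambda_hat.T  # Eq. (14a), no error term
print(np.allclose(Sigma, Sigma_hat))             # True
```

The cross-block zeros cancel in the product, leaving exactly $\Lambda_i\Phi_i\Lambda_i^T + \Psi_i$, so the model-implied covariance matrix, and with it the identification status, is untouched.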

On the other hand, the equation

(15) $$\hat{\Sigma}_c = \hat{\Lambda}_c\hat{\Phi}_c\hat{\Lambda}_c^T + 0$$

defines a new composed model without error variables. In this way, the residual covariances can be considered cross-model covariances and are therefore subject to the identification conditions given in our theorem. Of course, the modified counterparts of the loading and cross-model covariance matrices must now be used to determine identification. Because the concatenation of the original primary models' loading matrices with identity matrices renders them rank-deficient, restrictions on the (augmented) cross-model covariance matrix, which now also contains the residual covariances, are necessary. This implies that the error variables cannot all be correlated across models, even if the (original) loading matrices have full rank. Prospective research could determine which types of cross-model error covariances are possible for which kinds of composed models. In any case, researchers should be aware that allowing for correlated residuals might change the psychometric meaning of some, or even all, factors in the model.
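The rank-deficiency noted here is immediate: appending a $p\times p$ identity to a $p\times k$ loading matrix yields a matrix with $p$ rows and $k+p$ columns, whose rank can be at most $p$. A quick confirmation (unit loadings assumed for illustration):

```python
import numpy as np

# Reduced bifactor (S-1) loading matrix: 6 items, 3 factors, full column rank.
Lambda_R = np.array([[1, 0, 0], [1, 0, 0],
                     [1, 1, 0], [1, 1, 0],
                     [1, 0, 1], [1, 0, 1]], dtype=float)

# Augmentation with the identity: 6 x 9, so rank <= 6 < 9 columns.
Lambda_aug = np.hstack([Lambda_R, np.eye(6)])

print(np.linalg.matrix_rank(Lambda_R))    # 3: full column rank
print(np.linalg.matrix_rank(Lambda_aug))  # 6: column-rank-deficient by construction
```

This is why the augmented cross-model covariance matrix cannot be left fully unrestricted even when the original loading matrices have full rank.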

In some cases, cross-model loadings can be dealt with by latent regression on auxiliary factors, which is more involved and beyond the scope of this article. Nevertheless, we sketch it here to show that certain instances of composed models with cross-model loadings are still covered by our theorem. Items for which cross-model loadings are assumed must have their measurement error variables redefined as factors. Former factor loadings are replaced with latent regressions on these new factors within the respective primary model (and thus the regression residuals take the role of the measurement error), again without changing its identification status. In the composed model, cross-model covariances are specified between the redefined item-specific residual variables in one primary model and the factors of the other primary model on which the items are assumed to show cross-loadings. These cross-model covariances, which are identified under the conditions given in our theorem, can be used in a cross-model latent regression analysis. Then, any cross-model loading of interest is given by the regression coefficient of the path connecting the redefined residual variable in the first primary model with the associated factor of the other primary model.

Nevertheless, if cross-model loadings for a majority of items are included, we no longer deem it appropriate to denote the resulting model a composed model. In our view, the two constructs assessed by the two primary models are then no longer sufficiently separable and simply constitute a different, unified type of model. Other modes of identification must be employed, even though a general theorem and proof for the identification of these kinds of models might not be available. In any case, it is important that researchers report any data-driven changes to their model to avoid engaging in questionable research practices (Flake and Fried, 2020; Crede and Harms, 2019).

We only considered linear constraints on the cross-factor covariances, which make up the overwhelming majority of restrictions on covariances in CFA models (Footnote 3). Nevertheless, the cross-model covariance matrix might be subject to a nonlinear transformation, such as restricting a covariance to be strictly positive, setting a covariance equal to some value other than zero (Footnote 4), or defining a covariance to be the square of another, just to name a few. However, an extension of our theorem to the nonlinear case is straightforward, since the loading matrices of the primary models act linearly on the (possibly nonlinearly transformed) cross-model covariance matrix. The properties of the nonlinear transformation then determine the status of identification.

Note that no linearity assumptions are made with respect to the constraints in the primary models. The proof of Theorem S.3 relies on the fact that for identified, and therefore specific, loading matrices $\Lambda_{1}$ and $\Lambda_{2}$, the matrix product $\Lambda_{2}\Phi_{21}\Lambda_{1}^{T}$ is a linear map, regardless of how the loading parameters of the primary models have been mapped into $\Lambda_{1}$ and $\Lambda_{2}$. Thus, we are confident that our theorem applies to the vast majority of composed CFA models encountered in applied research.

Our theorem may also apply to CFA models that initially have not been thought of as composed CFA models, but for which the question of their identification is nevertheless of high practical relevance. One class of examples are multiconstruct LST models (Eid et al., 1994; Schermelleh-Engel et al., 2004). The single-construct LST models they consist of can be considered the primary models, and the status of identification for their composition, the multiconstruct LST model, can then be determined by our theorem. Even though identification for these types of models has already been shown (Steyer, 1989), they are just one of many conceivable examples of models that are covered by our theorem.

We showed that composed models consisting of reduced primary models must be identified. Working with non-reduced primary models, however, can lead to identification issues with the composed model if the loading matrices are rank-deficient, as we have shown by means of the introductory example. Note that this follows strictly from algebraic considerations and is not related to the way latent variables are conceptualized. Moreover, it is important to emphasize that non-reduced models, such as the classical bifactor model by Holzinger and Swineford (1939), can still constitute an identified composed model; it just requires additional assumptions. Concretely, researchers must decide whether they want to free up parameters and find a suitable full-rank loading structure for the primary models, or whether there are solid theoretical reasons why some cross-model covariances should be restricted and whether these restrictions necessarily lead to an identified composed model.

We already pointed out the issues of parsimony and empirical underidentification (which we discuss in the Supplementary Material) that arise in the former scenario. With this option, measurement invariance assumptions are not testable if they lead to rank-deficiency in the loading matrices of the primary models. As for the latter option, theoretical justification for the restriction of cross-model covariances is given, for example, for interchangeable methods in the LST context (Geiser et al., 2014). Conversely, for structurally different methods, this is not the case. For these kinds of research scenarios, however, there is extensive literature recommending reduced models, such as the CT-C$(M-1)$ model (Nussbeck et al., 2009; Eid et al., 2008).

It is worth noting that there are models that do not fall under the definition of reduced models as given in the present article but nevertheless share their favorable properties. One such example is the latent means model, which can also be used for analyzing designs with structurally different methods (Pohl and Steyer, 2010; Koch, Eid, et al., 2018). Like the CT-C$(M-1)$ model, it is based on the idea of defining a reduced number of method (or specific) factors. In the latent means model, however, the trait factor is not parameterized via reference items but as the overall mean of the true score variables pertaining to all structurally different methods. Therefore, the latent means model is not a reduced model. Nevertheless, its loading matrix has full rank under every loading structure and, thus, under every measurement invariance assumption as well. The latent means parameterization of our running example is depicted in Fig. 6.
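To see why the latent means model escapes the rank problem, consider one possible loading pattern for a model like that in Fig. 6. The pattern below is an illustrative assumption: because the general factor is the mean of the method-specific true scores, the deviation of the dropped method equals minus the sum of the two retained specific factors, so its items load $-1$ on both:

```python
import numpy as np

# Illustrative latent means loading pattern: columns (G, S1, S3), two items
# per method; method 2's deviation is -(S1 + S3), hence the -1 loadings.
Lambda_LM = np.array([[1,  1,  0],
                      [1,  1,  0],
                      [1, -1, -1],
                      [1, -1, -1],
                      [1,  0,  1],
                      [1,  0,  1]], dtype=float)

# Full column rank even though no factor has a reference item of
# factor complexity one, so the model is not "reduced" in our sense.
print(np.linalg.matrix_rank(Lambda_LM))  # 3
```

The general-factor column is not a linear combination of the specific-factor columns here, in contrast to the classical bifactor pattern under measurement invariance.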

Figure 6. A composed model with two latent means models as primary models. A latent means model with items $X_{ji}$, general factor $G_X$, and specific factors $S_{X1}$ and $S_{X3}$, composed with another latent means model with items $Y_{ji}$, general factor $G_Y$, and specific factors $S_{Y1}$ and $S_{Y3}$, $j\in\{1,2\}$ and $i\in\{1,2,3\}$. All factors are correlated.

Regardless, our results cover a wide range of CFA models, which cannot possibly be discussed individually and in full detail. Our theorem allows researchers to easily determine whether a composed CFA model is identified via the necessary and sufficient conditions provided in this article: it reduces the problem of identifying the composed model to the problem of identifying the submodels and verifying the conditions stated in Theorem 1. The obstacles to identifying the primary models are precisely those for identifying any arbitrary CFA model: one must either find the specific class the model belongs to and use identification results for this class, or explicitly solve for the known parameters algebraically, as discussed in the introduction. The identification of the primary models hence leads back to the general problem of identifying CFA models, which can only be solved for classes of models sharing a certain structure, but not in general.

On the other hand, as the example of a model with two factors and two indicators per factor with free loadings shows, unidentified primary models can be combined in a way that results in a model that is identified because of its free cross-model factor covariances (Bollen, 1989). Then, our theorem does not apply, and further research is needed to determine in which cases this is possible. However, we take it that, in applied research, models subject to composition are those already employed in practice and extensively studied, such that identification conditions for the primary models are readily available.

Lastly, for complicated primary models that are neither reduced nor equipped with trivially full-rank loading matrices, identification of the composed model can be checked algorithmically. The Python code we supply for the identification of the bifactor models discussed in this article is easily generalized to more sophisticated models. No Python skills are required beyond defining variables and using the respective libraries' methods.

We thus supply researchers with a powerful tool to determine their models’ status of identification and lay the mathematical groundwork for discussing a wide range of specific types of composed models that commonly suffer from identification issues.

Funding

Open Access funding enabled and organized by Projekt DEAL. The research was funded by the German Research Foundation (DFG), project number KO-4770/2-1.

Footnotes

We thank PD Dr. Werner Nagel, formerly Institute of Mathematics, Friedrich Schiller University Jena, for checking an early version of the theorems and proofs in the Supplementary Material and for providing valuable suggestions.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s11336-023-09933-6.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 In line with the established literature, we assume that the mean structure is identified and do not consider it in the present article (see, e.g., Bollen, Reference Bollen1989).

2 We thank an anonymous reviewer for rightly pointing out this fact which ultimately led to the subsequent addendum.

3 So much so that many authors of papers discussing CFA models even implicitly limit themselves to only considering restrictions in the form of setting some covariances to zero.

4 Note that this restriction can be denoted affine linear, but it is not strictly linear in the algebraic sense that is used in the assumptions of our theorem.

References


Anderson, T. W., & Rubin, H. (1956). Statistical inference in factor analysis. In J. Neyman (Ed.), Proceedings of the third Berkeley symposium on mathematical statistics and probability (pp. 111–150). University of California Press.
Bekker, P. A. (1989). Identification in restricted factor models and the evaluation of rank conditions. Journal of Econometrics, 41(1), 5–16. https://doi.org/10.1016/0304-4076(89)90040-7
Bekker, P. A., Merckens, A., & Wansbeek, T. J. (1994). Identification, equivalent models and computer algebra. Academic Press. https://doi.org/10.1016/C2013-0-07176-9
Bekker, P. A., & ten Berge, J. M. F. (1997). Generic global identification in factor analysis. Linear Algebra and its Applications, 264, 255–263. https://doi.org/10.1016/s0024-3795(96)00363-1
Bollen, K. A. (1989). Structural equations with latent variables. Wiley. https://doi.org/10.1002/9781118619179
Bollen, K. A., & Curran, P. J. (2005). Latent curve models: A structural equation perspective. Wiley. https://doi.org/10.1002/0471746096
Cai, L., Yang, J. S., & Hansen, M. (2011). Generalized full-information item bifactor analysis. Psychological Methods, 16(3), 221–248. https://doi.org/10.1037/a0023350
Chen, F. F., Hayes, A., Carver, C. S., Laurenceau, J.-P., & Zhang, Z. (2012). Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. Journal of Personality, 80(1), 219–251. https://doi.org/10.1111/j.1467-6494.2011.00739.x
Christensen, A. P., Silvia, P. J., Nusbaum, E. C., & Beaty, R. E. (2018). Clever people: Intelligence and humor production ability. Psychology of Aesthetics, Creativity, and the Arts, 12(2), 136–143. https://doi.org/10.1037/aca0000109
Courvoisier, D. S., Nussbeck, F. W., Eid, M., Geiser, C., & Cole, D. A. (2008). Analyzing the convergent and discriminant validity of states and traits: Development and applications of multimethod latent state-trait models. Psychological Assessment, 20(3), 270–280. https://doi.org/10.1037/a0012812
Crede, M., & Harms, P. (2019). Questionable research practices when using confirmatory factor analysis. Journal of Managerial Psychology, 34(1), 18–30. https://doi.org/10.1108/JMP-06-2018-0272
Davis, W. R. (1993). The FC1 rule of identification for confirmatory factor analysis. Sociological Methods and Research, 21(4), 403–437. https://doi.org/10.1177/0049124193021004001
Debusscher, J., Hofmans, J., & De Fruyt, F. (2017). The multiple face(t)s of state conscientiousness: Predicting task performance and organizational citizenship behavior. Journal of Research in Personality, 69, 78–85. https://doi.org/10.1016/j.jrp.2016.06.009
Eid, M. (2000). A multitrait-multimethod model with minimal assumptions. Psychometrika, 65(2), 241–261. https://doi.org/10.1007/bf02294377
Eid, M., Geiser, C., & Koch, T. (in preparation). Structural equation modeling of multiple rater data. Guilford.
Eid, M., Geiser, C., Koch, T., & Heene, M. (2017). Anomalous results in G-factor models: Explanations and alternatives. Psychological Methods, 22(3), 541–562. https://doi.org/10.1037/met0000083
Eid, M., Koch, T., & Geiser, C. (2023). Multitrait-multimethod models. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed., pp. 349–366). The Guilford Press.
Eid, M., Krumm, S., Koch, T., & Schulze, J. (2018). Bifactor models for predicting criteria by general and specific factors: Problems of non-identifiability and alternative solutions. Journal of Intelligence, 6(3), 42. https://doi.org/10.3390/jintelligence6030042
Eid, M., Lischetzke, T., Nussbeck, F. W., & Trierweiler, L. I. (2003). Separating trait effects from trait-specific method effects in multitrait-multimethod models: A multiple-indicator CT-C(M-1) model. Psychological Methods, 8(1), 38–60. https://doi.org/10.1037/1082-989x.8.1.38
Eid, M., Notz, P., Steyer, R., & Schwenkmezger, P. (1994). Validating scales for the assessment of mood level and variability by latent state-trait analyses. Personality and Individual Differences, 16(1), 63–76. https://doi.org/10.1016/0191-8869(94)90111-2
Eid, M., Nussbeck, F. W., Geiser, C., Cole, D. A., Gollwitzer, M., & Lischetzke, T. (2008). Structural equation modeling of multitrait-multimethod data: Different models for different types of methods. Psychological Methods, 13(3), 230–253. https://doi.org/10.1037/a0013219
Fang, G., Guo, J., Xu, X., Ying, Z., & Zhang, S. (2021). Identifiability of bifactor models. Statistica Sinica, 31, 2309–2330. https://doi.org/10.5705/ss.202020.0386
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Geiser, C., Eid, M., & Nussbeck, F. W. (2008). On the meaning of the latent variables in the CT-C(M-1) model: A comment on Maydeu-Olivares and Coffman (2006). Psychological Methods, 13(1), 49–57. https://doi.org/10.1037/1082-989X.13.1.49
Geiser, C., Koch, T., & Eid, M. (2014). Data-generating mechanisms versus constructively defined latent variables in multitrait-multimethod analysis: A comment on Castro-Schilo, Widaman, and Grimm (2013). Structural Equation Modeling: A Multidisciplinary Journal, 21(4), 509523. https://doi.org/10.1080/10705511.2014.919816CrossRefGoogle ScholarPubMed
Gibbons, R. D., Bock, R. D., Hedeker, D., Weiss, D. J., Segawa, E., Bhaumik, D. K., Kupfer, D. J., Frank, E., Grochocinski, V. J., & Stover, A. (2007). Full-information item bifactor analysis of graded response data. Applied Psychological Measurement, 31(1), 419. https://doi.org/10.1177/0146621606289485CrossRefGoogle Scholar
Gibbons, R. D., & Hedeker, D. R. (1992). Full-information item bi-factor analysis. Psychometrika, 57(3), 423436. https://doi.org/10.1007/bf02295430CrossRefGoogle Scholar
Grayson, D., & Marsh, H. W. (1994). Identification with deficient rank loading matrices in confirmatory factor analysis: Multitrait-multimethod models. Psychometrika, 59(1), 121134. https://doi.org/10.1007/bf02294271CrossRefGoogle Scholar
Green, S., & Yang, Y. (2018). Empirical underidentification with the bifactor model: A case study. Educational and Psychological Measurement, 78(5), 717736. https://doi.org/10.1177/0013164417719947CrossRefGoogle ScholarPubMed
Hedeker, D., & Gibbons, R. D. (2006). Longitudinal data analysis. Wiley. https://doi.org/10.1002/0470036486 Google Scholar
Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. University of Chicago Press.Google Scholar
Jeon, M., Rijmen, F., & Rabe-Hesketh, S. (2013). Modeling differential item functioning using a generalization of the multiple-group bifactor model. Journal of Educational and Behavioral Statistics, 38(1), 3260. https://doi.org/10.3102/1076998611432173CrossRefGoogle Scholar
Jeon, M., Rijmen, F., & Rabe-Hesketh, S. (2018). CFA models with a general factor and multiple sets of secondary factors. Psychometrika, 83(4), 785808. https://doi.org/10.1007/s11336-018-9633-xCrossRefGoogle ScholarPubMed
Jöreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychometrika, 43(4), 443477. https://doi.org/10.1007/bf02293808CrossRefGoogle Scholar
Kenny, D. A. (1976). An empirical application of confirmatory factor analysis to the multitrait-multimethod matrix. Journal of Experimental Social Psychology, 12(3), 247252. https://doi.org/10.1016/0022-1031(76)90055-xCrossRefGoogle Scholar
Kenny, D. A., & Kashy, D. A. (1992). Analysis of the multitrait-multimethod matrix by confirmatory factor analysis. Psychological Bulletin, 112(1), 165172. https://doi.org/10.1037/0033-2909.112.1.165CrossRefGoogle Scholar
Koch, T., Eid, M., & Lochner, K. (2018). Multitrait-multimethod-analysis: The psychometric foundation of CFA-MTMM models. In Irwing, P., Booth, T., & Hughes, D. J. (Eds.), The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 781846). Wiley Online Library. https://doi.org/10.1002/9781118489772.ch25 CrossRefGoogle Scholar
Koch, T., Holtmann, J., Bohn, J., & Eid, M. (2018). Explaining general and specific factors in longitudinal, multimethod, and bifactor models: Some caveats and recommendations. Psychological Methods, 23(3), 505523. https://doi.org/10.1037/met0000146CrossRefGoogle ScholarPubMed
Little, T. D. (2013). Longitudinal structural equation modeling. Guilford Press.Google Scholar
Markon, K. E. (2019). Bifactor and hierarchical models: Specification, inference, and interpretation. Annual Review of Clinical Psychology, 15(1), 5169. https://doi.org/10.1146/annurev-clinpsy-050718-095522CrossRefGoogle ScholarPubMed
Marsh, H. W., Morin, A. J. S., Parker, P. D., & Kaur, G. (2014). Exploratory structural equation modeling: An integration of the best features of exploratory and confirmatory factor analysis. Annual Review of Clinical Psychology, 10(1), 85110. https://doi.org/10.1146/annurev-clinpsy-032813-153700CrossRefGoogle ScholarPubMed
McArdle, J. J., & Nesselroade, J. R. (2014). Longitudinal data analysis using structural equation models. American Psychological Association. https://doi.org/10.1037/14440-000 CrossRefGoogle Scholar
Meurer, A., Smith, C. P., Paprocki, M., Èertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A. R., Rouǩa, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., & Scopatz, A. (2017). SymPy: Symbolic computing in Python. PeerJ Computer Science, 3, e103. https://doi.org/10.7717/peerj-cs.103 CrossRefGoogle Scholar
Newsom, J. T. (2015). Longitudinal structural equation modeling: A comprehensive introduction. Routledge. https://doi.org/10.4324/9781315871318 CrossRefGoogle Scholar
Nussbeck, F. W., Eid, M., Geiser, C., Courvoisier, D. S., & Lischetzke, T. (2009). A CTC(M-1) model for different types of raters. Methodology, 5(3), 8898. https://doi.org/10.1027/1614-2241.5.3.88CrossRefGoogle Scholar
Plieninger, H., & Meiser, T. (2014). Validity of multiprocess IRT models for separating content and response styles. Educational and Psychological Measurement, 74(5), 875899. https://doi.org/10.1177/0013164413514998CrossRefGoogle Scholar
Pohl, S., & Steyer, R. (2010). Modeling common traits and method effects in multitrait-multimethod analysis. Multivariate Behavioral Research, 45(1), 4572. https://doi.org/10.1080/00273170903504729CrossRefGoogle ScholarPubMed
Reilly, T. (1995). A necessary and sufficient condition for identification of confirmatory factor analysis models of factor complexity one. Sociological Methods & Research, 23(4), 421441. https://doi.org/10.1177/0049124195023004002CrossRefGoogle Scholar
Reilly, T., & O’Brien, R. M. (1996). Identification of confirmatory factor analysis models of arbitrary complexity: The side-by-side rule. Sociological Methods & Research, 24(4), 473491. https://doi.org/10.1177/0049124196024004003CrossRefGoogle Scholar
Reise, S. P. (2012). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 47(5), 667696. https://doi.org/10.1080/00273171.2012.715555CrossRefGoogle ScholarPubMed
Rijmen, F. (2010). Formal relations and an empirical comparison among the bi-factor, the testlet, and a second-order multidimensional IRT model. Journal of Educational Measurement, 47(3), 361372. https://doi.org/10.1111/j.1745-3984.2010.00118.xCrossRefGoogle Scholar
Schermelleh-Engel, K., Keith, N., Moosbrugger, H., & Hodapp, V. (2004). Decomposing person and occasion-specific effects: An extension of latent state-trait (LST) theory to hierarchical LST models. Psychological Methods, 9(2), 198219. https://doi.org/10.1037/1082-989x.9.2.198CrossRefGoogle ScholarPubMed
Schmid, J., & Leiman, J. M. (1957). The development of hierarchical factor solutions. Psychometrika, 22(1), 5361. https://doi.org/10.1007/BF02289209CrossRefGoogle Scholar
Schmitt, M. (2000). Mother-daughter attachment and family cohesion: Single-and multi-construct latent state-trait models of current and retrospective perceptions. European Journal of Psychological Assessment, 16(2), 115124. https://doi.org/10.1027//1015-5759.16.2.115CrossRefGoogle Scholar
Shapiro, A. (1985). Identifiability of factor analysis: Some results and open problems. Linear Algebra and its Applications, 70, 17. https://doi.org/10.1016/0024-3795(85)90038-2CrossRefGoogle Scholar
Steyer, R. (1989). Models of classical psychometric test theory as stochastic measurement models: Representation, uniqueness, meaningfulness, identifiability, and testability. Methodika, 3, 2560.Google Scholar
Steyer, R. (2015). Classical (psychometric) test theory. In International encyclopedia of the social and behavioral sciences (pp. 785791). Elsevier. https://doi.org/10.1016/b978-0-08-097086-8.44006-7 CrossRefGoogle Scholar
Steyer, R., Geiser, C., & Fiege, C. (2012). Latent state-trait models. In APA Handbook of research methods in psychology, Vol 3: Data analysis and research publication (pp. 291308). American Psychological Association. https://doi.org/10.1037/13621-014.Google Scholar
Steyer, R., Mayer, A., Geiser, C., & Cole, D. A. (2015). A theory of states and traits-Revised. Annual Review of Clinical Psychology, 11(1), 7198. https://doi.org/10.1146/annurev-clinpsy-032813-153719CrossRefGoogle ScholarPubMed
Tóth-Király, I., Morin, A. J. S., Böthe, B., Orosz, G., & Rigó, A. (2017). Investigating the multidimensionality of need fulfillment: A bifactor exploratory structural equation modeling representation. Structural Equation Modeling: A Multidisciplinary Journal, 25(2), 267286. https://doi.org/10.1080/10705511.2017.1374867CrossRefGoogle Scholar
Wang, J., & Wang, X. (2012). Structural equation modeling: Applications using Mplus. Wiley. https://doi.org/10.1002/9781118356258 CrossRefGoogle Scholar
Wegge, L. L. (1996). Local identifiability of the factor analysis and measurement error model parameter. Journal of Econometrics, 70(2), 351382. https://doi.org/10.1016/0304-4076(94)01676-3CrossRefGoogle Scholar
Wu, H., & Estabrook, R. (2016). Identification of confirmatory factor analysis models of different levels of invariance for ordered categorical outcomes. Psychometrika, 81(4), 10141045. https://doi.org/10.1007/s11336-016-9506-0CrossRefGoogle ScholarPubMed
Yung, Y.-F., Thissen, D., & McLeod, L. D. (1999). On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika, 64(2), 113128. https://doi.org/10.1007/BF02294531CrossRefGoogle Scholar
Anderson, T. W., & Rubin, H. (1956). Statistical inference in factor analysis. In Neyman, J. (Ed.), Proceedings of the third Berkeley symposium on mathematical statistics and probability (pp. 111–150). University of California Press.
Bekker, P. A. (1989). Identification in restricted factor models and the evaluation of rank conditions. Journal of Econometrics, 41(1), 5–16. https://doi.org/10.1016/0304-4076(89)90040-7
Bekker, P. A., Merckens, A., & Wansbeek, T. J. (1994). Identification, equivalent models and computer algebra. Academic Press. https://doi.org/10.1016/C2013-0-07176-9
Bekker, P. A., & ten Berge, J. M. F. (1997). Generic global identification in factor analysis. Linear Algebra and its Applications, 264, 255–263. https://doi.org/10.1016/s0024-3795(96)00363-1
Bollen, K. A. (1989). Structural equations with latent variables. Wiley. https://doi.org/10.1002/9781118619179
Bollen, K. A., & Curran, P. J. (2005). Latent curve models: A structural equation perspective. Wiley. https://doi.org/10.1002/0471746096
Cai, L., Yang, J. S., & Hansen, M. (2011). Generalized full-information item bifactor analysis. Psychological Methods, 16(3), 221–248. https://doi.org/10.1037/a0023350
Chen, F. F., Hayes, A., Carver, C. S., Laurenceau, J.-P., & Zhang, Z. (2012). Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. Journal of Personality, 80(1), 219–251. https://doi.org/10.1111/j.1467-6494.2011.00739.x
Christensen, A. P., Silvia, P. J., Nusbaum, E. C., & Beaty, R. E. (2018). Clever people: Intelligence and humor production ability. Psychology of Aesthetics, Creativity, and the Arts, 12(2), 136–143. https://doi.org/10.1037/aca0000109
Courvoisier, D. S., Nussbeck, F. W., Eid, M., Geiser, C., & Cole, D. A. (2008). Analyzing the convergent and discriminant validity of states and traits: Development and applications of multimethod latent state-trait models. Psychological Assessment, 20(3), 270–280. https://doi.org/10.1037/a0012812
Crede, M., & Harms, P. (2019). Questionable research practices when using confirmatory factor analysis. Journal of Managerial Psychology, 34(1), 18–30. https://doi.org/10.1108/JMP-06-2018-0272
Davis, W. R. (1993). The FC1 rule of identification for confirmatory factor analysis. Sociological Methods & Research, 21(4), 403–437. https://doi.org/10.1177/0049124193021004001
Debusscher, J., Hofmans, J., & De Fruyt, F. (2017). The multiple face(t)s of state conscientiousness: Predicting task performance and organizational citizenship behavior. Journal of Research in Personality, 69, 78–85. https://doi.org/10.1016/j.jrp.2016.06.009
Eid, M. (2000). A multitrait-multimethod model with minimal assumptions. Psychometrika, 65(2), 241–261. https://doi.org/10.1007/bf02294377
Eid, M., Geiser, C., & Koch, T. (in preparation). Structural equation modeling of multiple rater data. Guilford.
Eid, M., Geiser, C., Koch, T., & Heene, M. (2017). Anomalous results in G-factor models: Explanations and alternatives. Psychological Methods, 22(3), 541–562. https://doi.org/10.1037/met0000083
Eid, M., Koch, T., & Geiser, C. (2023). Multitrait-multimethod models. In Hoyle, R. H. (Ed.), Handbook of structural equation modeling (2nd ed., pp. 349–366). The Guilford Press.
Eid, M., Krumm, S., Koch, T., & Schulze, J. (2018). Bifactor models for predicting criteria by general and specific factors: Problems of non-identifiability and alternative solutions. Journal of Intelligence, 6(3), 42. https://doi.org/10.3390/jintelligence6030042
Eid, M., Lischetzke, T., Nussbeck, F. W., & Trierweiler, L. I. (2003). Separating trait effects from trait-specific method effects in multitrait-multimethod models: A multiple-indicator CT-C(M-1) model. Psychological Methods, 8(1), 38–60. https://doi.org/10.1037/1082-989x.8.1.38
Eid, M., Notz, P., Steyer, R., & Schwenkmezger, P. (1994). Validating scales for the assessment of mood level and variability by latent state-trait analyses. Personality and Individual Differences, 16(1), 63–76. https://doi.org/10.1016/0191-8869(94)90111-2
Eid, M., Nussbeck, F. W., Geiser, C., Cole, D. A., Gollwitzer, M., & Lischetzke, T. (2008). Structural equation modeling of multitrait-multimethod data: Different models for different types of methods. Psychological Methods, 13(3), 230–253. https://doi.org/10.1037/a0013219
Fang, G., Guo, J., Xu, X., Ying, Z., & Zhang, S. (2021). Identifiability of bifactor models. Statistica Sinica, 31, 2309–2330. https://doi.org/10.5705/ss.202020.0386
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Geiser, C., Eid, M., & Nussbeck, F. W. (2008). On the meaning of the latent variables in the CT-C(M-1) model: A comment on Maydeu-Olivares and Coffman (2006). Psychological Methods, 13(1), 49–57. https://doi.org/10.1037/1082-989X.13.1.49
Geiser, C., Koch, T., & Eid, M. (2014). Data-generating mechanisms versus constructively defined latent variables in multitrait-multimethod analysis: A comment on Castro-Schilo, Widaman, and Grimm (2013). Structural Equation Modeling: A Multidisciplinary Journal, 21(4), 509–523. https://doi.org/10.1080/10705511.2014.919816
Gibbons, R. D., Bock, R. D., Hedeker, D., Weiss, D. J., Segawa, E., Bhaumik, D. K., Kupfer, D. J., Frank, E., Grochocinski, V. J., & Stover, A. (2007). Full-information item bifactor analysis of graded response data. Applied Psychological Measurement, 31(1), 4–19. https://doi.org/10.1177/0146621606289485
Gibbons, R. D., & Hedeker, D. R. (1992). Full-information item bi-factor analysis. Psychometrika, 57(3), 423–436. https://doi.org/10.1007/bf02295430
Grayson, D., & Marsh, H. W. (1994). Identification with deficient rank loading matrices in confirmatory factor analysis: Multitrait-multimethod models. Psychometrika, 59(1), 121–134. https://doi.org/10.1007/bf02294271
Green, S., & Yang, Y. (2018). Empirical underidentification with the bifactor model: A case study. Educational and Psychological Measurement, 78(5), 717–736. https://doi.org/10.1177/0013164417719947
Hedeker, D., & Gibbons, R. D. (2006). Longitudinal data analysis. Wiley. https://doi.org/10.1002/0470036486
Holzinger, K. J., & Swineford, F. (1939). A study in factor analysis: The stability of a bi-factor solution. University of Chicago Press.
Jeon, M., Rijmen, F., & Rabe-Hesketh, S. (2013). Modeling differential item functioning using a generalization of the multiple-group bifactor model. Journal of Educational and Behavioral Statistics, 38(1), 32–60. https://doi.org/10.3102/1076998611432173
Jeon, M., Rijmen, F., & Rabe-Hesketh, S. (2018). CFA models with a general factor and multiple sets of secondary factors. Psychometrika, 83(4), 785–808. https://doi.org/10.1007/s11336-018-9633-x
Jöreskog, K. G. (1978). Structural analysis of covariance and correlation matrices. Psychometrika, 43(4), 443–477. https://doi.org/10.1007/bf02293808
Kenny, D. A. (1976). An empirical application of confirmatory factor analysis to the multitrait-multimethod matrix. Journal of Experimental Social Psychology, 12(3), 247–252. https://doi.org/10.1016/0022-1031(76)90055-x
Kenny, D. A., & Kashy, D. A. (1992). Analysis of the multitrait-multimethod matrix by confirmatory factor analysis. Psychological Bulletin, 112(1), 165–172. https://doi.org/10.1037/0033-2909.112.1.165
Koch, T., Eid, M., & Lochner, K. (2018). Multitrait-multimethod-analysis: The psychometric foundation of CFA-MTMM models. In Irwing, P., Booth, T., & Hughes, D. J. (Eds.), The Wiley handbook of psychometric testing: A multidisciplinary reference on survey, scale and test development (pp. 781–846). Wiley Online Library. https://doi.org/10.1002/9781118489772.ch25
Koch, T., Holtmann, J., Bohn, J., & Eid, M. (2018). Explaining general and specific factors in longitudinal, multimethod, and bifactor models: Some caveats and recommendations. Psychological Methods, 23(3), 505–523. https://doi.org/10.1037/met0000146
Little, T. D. (2013). Longitudinal structural equation modeling. Guilford Press.
Markon, K. E. (2019). Bifactor and hierarchical models: Specification, inference, and interpretation. Annual Review of Clinical Psychology, 15(1), 51–69. https://doi.org/10.1146/annurev-clinpsy-050718-095522
Marsh, H. W., Morin, A. J. S., Parker, P. D., & Kaur, G. (2014). Exploratory structural equation modeling: An integration of the best features of exploratory and confirmatory factor analysis. Annual Review of Clinical Psychology, 10(1), 85–110. https://doi.org/10.1146/annurev-clinpsy-032813-153700
McArdle, J. J., & Nesselroade, J. R. (2014). Longitudinal data analysis using structural equation models. American Psychological Association. https://doi.org/10.1037/14440-000
Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., Rathnayake, T., Vig, S., Granger, B. E., Muller, R. P., Bonazzi, F., Gupta, H., Vats, S., Johansson, F., Pedregosa, F., Curry, M. J., Terrel, A. R., Roučka, Š., Saboo, A., Fernando, I., Kulal, S., Cimrman, R., & Scopatz, A. (2017). SymPy: Symbolic computing in Python. PeerJ Computer Science, 3, e103. https://doi.org/10.7717/peerj-cs.103
Newsom, J. T. (2015). Longitudinal structural equation modeling: A comprehensive introduction. Routledge. https://doi.org/10.4324/9781315871318
Nussbeck, F. W., Eid, M., Geiser, C., Courvoisier, D. S., & Lischetzke, T. (2009). A CTC(M-1) model for different types of raters. Methodology, 5(3), 88–98. https://doi.org/10.1027/1614-2241.5.3.88
Plieninger, H., & Meiser, T. (2014). Validity of multiprocess IRT models for separating content and response styles. Educational and Psychological Measurement, 74(5), 875–899. https://doi.org/10.1177/0013164413514998
Pohl, S., & Steyer, R. (2010). Modeling common traits and method effects in multitrait-multimethod analysis. Multivariate Behavioral Research, 45(1), 45–72. https://doi.org/10.1080/00273170903504729
Reilly, T. (1995). A necessary and sufficient condition for identification of confirmatory factor analysis models of factor complexity one. Sociological Methods & Research, 23(4), 421–441. https://doi.org/10.1177/0049124195023004002
Reilly, T., & O’Brien, R. M. (1996). Identification of confirmatory factor analysis models of arbitrary complexity: The side-by-side rule. Sociological Methods & Research, 24(4), 473–491. https://doi.org/10.1177/0049124196024004003
Reise, S. P. (2012). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 47(5), 667–696. https://doi.org/10.1080/00273171.2012.715555
Rijmen, F. (2010). Formal relations and an empirical comparison among the bi-factor, the testlet, and a second-order multidimensional IRT model. Journal of Educational Measurement, 47(3), 361–372. https://doi.org/10.1111/j.1745-3984.2010.00118.x
Schermelleh-Engel, K., Keith, N., Moosbrugger, H., & Hodapp, V. (2004). Decomposing person and occasion-specific effects: An extension of latent state-trait (LST) theory to hierarchical LST models. Psychological Methods, 9(2), 198–219. https://doi.org/10.1037/1082-989x.9.2.198
Schmid, J., & Leiman, J. M. (1957). The development of hierarchical factor solutions. Psychometrika, 22(1), 53–61. https://doi.org/10.1007/BF02289209
Schmitt, M. (2000). Mother-daughter attachment and family cohesion: Single- and multi-construct latent state-trait models of current and retrospective perceptions. European Journal of Psychological Assessment, 16(2), 115–124. https://doi.org/10.1027//1015-5759.16.2.115
Shapiro, A. (1985). Identifiability of factor analysis: Some results and open problems. Linear Algebra and its Applications, 70, 1–7. https://doi.org/10.1016/0024-3795(85)90038-2
Steyer, R. (1989). Models of classical psychometric test theory as stochastic measurement models: Representation, uniqueness, meaningfulness, identifiability, and testability. Methodika, 3, 25–60.
Steyer, R. (2015). Classical (psychometric) test theory. In International encyclopedia of the social and behavioral sciences (pp. 785–791). Elsevier. https://doi.org/10.1016/b978-0-08-097086-8.44006-7
Steyer, R., Geiser, C., & Fiege, C. (2012). Latent state-trait models. In APA handbook of research methods in psychology, Vol. 3: Data analysis and research publication (pp. 291–308). American Psychological Association. https://doi.org/10.1037/13621-014
Steyer, R., Mayer, A., Geiser, C., & Cole, D. A. (2015). A theory of states and traits—Revised. Annual Review of Clinical Psychology, 11(1), 71–98. https://doi.org/10.1146/annurev-clinpsy-032813-153719
Tóth-Király, I., Morin, A. J. S., Böthe, B., Orosz, G., & Rigó, A. (2017). Investigating the multidimensionality of need fulfillment: A bifactor exploratory structural equation modeling representation. Structural Equation Modeling: A Multidisciplinary Journal, 25(2), 267–286. https://doi.org/10.1080/10705511.2017.1374867
Wang, J., & Wang, X. (2012). Structural equation modeling: Applications using Mplus. Wiley. https://doi.org/10.1002/9781118356258
Wegge, L. L. (1996). Local identifiability of the factor analysis and measurement error model parameter. Journal of Econometrics, 70(2), 351–382. https://doi.org/10.1016/0304-4076(94)01676-3
Wu, H., & Estabrook, R. (2016). Identification of confirmatory factor analysis models of different levels of invariance for ordered categorical outcomes. Psychometrika, 81(4), 1014–1045. https://doi.org/10.1007/s11336-016-9506-0
Yung, Y.-F., Thissen, D., & McLeod, L. D. (1999). On the relationship between the higher-order factor model and the hierarchical factor model. Psychometrika, 64(2), 113–128. https://doi.org/10.1007/BF02294531

Figure 1. A bifactor ESEM model given by items $X_{ji}$, general factor NF, and specific factors $S_{Xi}$, $j\in \{1,2\}$, $i\in \{1,2,3\}$, on the one hand, and a two-factor model with factors P and N indicated by their respective items, $P_{i}$, $N_{i}$, $i\in \{1,2\}$, on the other. In the bifactor ESEM model, the exploratory loadings are shown with dotted lines. There are no cross-loadings, but all factors are correlated across models. Cross-model covariances are represented by dashed lines.
The nomenclature is chosen to resemble the model employed by Tóth-Király et al. (2017) relating need fulfillment to positive and negative affect. Analogously to Figs. 2 and 3, the parameter labels have been omitted.


Figure 2. A composed model with a CT-CU model and a multiprocess IRT model as primary models. The (restricted) CT-CU model consists of factors ERS and MRS indicated by their respective items. Residual variables of items $ERS_{i}$ and $MRS_{i}$ are correlated for every $i$, $i\in \{1,2,3\}$. The multiprocess IRT model is given by factors PI, PII, PIII. The auxiliary latent variables $Ps_{i}^{*}$ are indicated by dichotomous manifest variables $Ps_{i}$ with unit loadings.
($Ps_{2}^{*}$ and $Ps_{3}^{*}$ are pseudo-variables and can be identified with PII and PIII, respectively.) There are no cross-loadings, but all factors are correlated across models. Cross-model covariances are represented by dashed lines. This is a simplified version of the model employed by Plieninger and Meiser (2014) to relate response styles and IRT processes, in which the factors are not simply correlated across models, but the process factors are regressed on the response style factors, and there is an additional criterion variable. Analogously to Figs. 1 and 3, the parameter labels have been omitted.
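The identification checks described in the abstract rely on solving the model-implied covariance equations symbolically with SymPy (Meurer et al., 2017). The following sketch is a hypothetical, minimal illustration of that approach — it is not the code accompanying the article, and the three-indicator single-factor submodel with a unit loading on the first indicator is an illustrative assumption, not one of the models shown in the figures:

```python
import sympy as sp

# Hypothetical single-factor submodel with three indicators Y1, Y2, Y3.
# The first loading is fixed to 1 for scaling; lambda2, lambda3 and the
# factor variance phi are the free parameters to be identified.
l2, l3, phi = sp.symbols('lambda2 lambda3 phi', positive=True)

# Observable covariances between distinct indicators (error variances
# drop out off the diagonal).
s12, s13, s23 = sp.symbols('sigma12 sigma13 sigma23', positive=True)

# Model-implied covariance equations: Cov(Yi, Yj) = li * lj * phi.
equations = [
    sp.Eq(s12, l2 * phi),        # Cov(Y1, Y2), with l1 = 1
    sp.Eq(s13, l3 * phi),        # Cov(Y1, Y3)
    sp.Eq(s23, l2 * l3 * phi),   # Cov(Y2, Y3)
]

# Exactly one solution in terms of the observable covariances means the
# free parameters are globally identified from these equations.
solutions = sp.solve(equations, [l2, l3, phi], dict=True)
print(solutions)
```

Here SymPy recovers the unique solution $\varphi = \sigma_{12}\sigma_{13}/\sigma_{23}$ (with the loadings following analogously), so the submodel's free parameters are identified; a composed model would add cross-model covariance equations to such systems.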


Figure 3. A composed model with two growth curve models (see, e.g., Bollen & Curran, 2005) given by items $X_{ji}$, occasions $O_{Xi}$, intercept factor $Int_{X}$, slope factor $Slo_{X}$ as well as items $Y_{ji}$, occasions $O_{Yi}$, intercept factor $Int_{Y}$, slope factor $Slo_{Y}$, $j\in \{1,2\}$ and $i\in \{1,2,3\}$, respectively. As in Fig. 4, there are no cross-loadings. All factors are correlated across models, and cross-model covariances are represented by dashed lines. Loadings, (co-)variances, and errors have not been labeled for the sake of readability because the model is not discussed further in this article.


Figure 4. A composed model with two bifactor models as primary models, given by items $X_{ji}$, general factor $G_{X}$, and specific factors $S_{Xi}$, as well as items $Y_{ji}$, general factor $G_{Y}$, and specific factors $S_{Yi}$, for $j\in \{1,2\}$ and $i\in \{1,2,3\}$, respectively. Items for the first primary (bifactor) model do not load on the second primary (bifactor) model and vice versa. Black dashed lines represent covariances on the diagonal of the cross-model factor covariance matrix $\Phi_{YX}$ as given in Eq. (9), and gray dashed lines represent off-diagonal covariances in $\Phi_{YX}$. If the covariances represented by the gray dashed lines are fixed to zero as in Eq. (10), the model is a multiconstruct bifactor model (see Koch et al., 2018).
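The two cross-model covariance structures contrasted in this caption can be made concrete with a short sketch (illustrative only, not the authors' supplementary code; the numeric values are arbitrary assumptions). Each bifactor primary model in Fig. 4 has one general and three specific factors, so $\Phi_{YX}$ is a $4\times 4$ block; fixing its off-diagonal entries to zero turns the fully free structure of Eq. (9) into the diagonal structure of Eq. (10):

```python
import numpy as np

# Arbitrary illustrative values for a fully free cross-model
# factor covariance matrix Phi_YX (Eq. 9). Rows: G_Y, S_Y1, S_Y2, S_Y3;
# columns: G_X, S_X1, S_X2, S_X3.
phi_yx_full = np.array([
    [0.30, 0.05, 0.04, 0.02],
    [0.06, 0.20, 0.03, 0.01],
    [0.02, 0.04, 0.25, 0.05],
    [0.01, 0.03, 0.02, 0.15],
])

# Eq. (10): fixing the off-diagonal covariances (gray dashed lines in
# Fig. 4) to zero leaves only the counterpart covariances on the
# diagonal -- the multiconstruct bifactor structure.
phi_yx_diag = np.diag(np.diag(phi_yx_full))

print(phi_yx_diag)
```

Note that `np.diag` applied twice first extracts the diagonal as a vector and then rebuilds a matrix that is zero everywhere else.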


Figure 5. A composed model with two reduced bifactor models as primary models. A reduced bifactor model with items $X_{ji}$, general factor $G_{X}$, and specific factors $S_{X2}$ and $S_{X3}$ is composed with another reduced bifactor model with items $Y_{ji}$, general factor $G_{Y}$, and specific factors $S_{Y2}$ and $S_{Y3}$, for $j\in \{1,2\}$ and $i\in \{1,2,3\}$. As in the non-reduced version depicted in Fig. 4, items for the first primary (bifactor) model do not load on the second primary (bifactor) model and vice versa. In comparison with the model depicted in Fig. 4, note the additional covariances (black solid lines) that are allowed to be nonzero in the bifactor$(S-1)$ model (Eid et al., 2017). The gray dashed lines between the specific factors across the primary models are additional covariances that are allowed to be nonzero while still obtaining an identified composed model.


Figure 6. A composed model with two latent means models as primary models. A latent means model with items $X_{ji}$, general factor $G_X$, and specific factors $S_{X1}$ and $S_{X3}$ is composed with another latent means model with items $Y_{ji}$, general factor $G_Y$, and specific factors $S_{Y1}$ and $S_{Y3}$, for $j\in \{1,2\}$ and $i\in \{1,2,3\}$. All factors are correlated.
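The identification checks underlying these composed models can be illustrated with a short symbolic computation. The following sketch (not the authors' supplementary code; the one-factor submodel, the parameter names, and the fixing of the first loading to 1 are illustrative assumptions) verifies local identification of a minimal CFA submodel by checking that the Jacobian of the model-implied covariances with respect to the free parameters has full column rank:

```python
import sympy as sp

# Toy one-factor CFA submodel: three indicators, loadings (1, l2, l3)
# with the first loading fixed for scaling, factor variance phi,
# and error variances e1..e3.
l2, l3, phi, e1, e2, e3 = sp.symbols('l2 l3 phi e1 e2 e3', positive=True)
lam = sp.Matrix([1, l2, l3])                      # loading vector
Sigma = lam * phi * lam.T + sp.diag(e1, e2, e3)   # implied covariance matrix

# Stack the 6 non-redundant (lower-triangular) elements of Sigma.
sigma_vec = sp.Matrix([Sigma[i, j] for i in range(3) for j in range(i + 1)])
params = sp.Matrix([l2, l3, phi, e1, e2, e3])     # 6 free parameters

# Full column rank of the Jacobian indicates local identification.
J = sigma_vec.jacobian(params)
print(J.rank())
```

The same rank check extends to a composed model by stacking the submodel covariance equations together with the cross-model covariance equations and differentiating with respect to all free parameters at once.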

Supplementary material: Bee et al. supplementary material (File, 242.9 KB).