As a multivariate model of the number of events, Rasch's multiplicative Poisson model is extended such that the parameters for individuals in the prior gamma distribution have continuous covariates. The parameters for individuals are integrated out and the hyperparameters in the prior distribution are estimated by a numerical method separately from difficulty parameters that are treated as fixed parameters or random variables. In addition, a method is presented for estimating parameters in Rasch's model with missing values.
A method for estimation in factor analysis is presented. The method is based on the assumption that the residual (specific and error) variances are proportional to the reciprocal values of the diagonal elements of the inverted covariance (correlation) matrix. The estimation is performed by a modification of Whittle's least squares technique. The method is independent of the unit of scoring in the tests. Applications are given in the form of nine reanalyses of data of various kinds found in earlier literature.
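The core assumption above, residual variances proportional to the reciprocals of the diagonal elements of the inverted correlation matrix, can be written in a few lines (a sketch; the function name and the proportionality constant `theta` are illustrative):

```python
import numpy as np

def residual_variances(R, theta=1.0):
    """Residual variances under the assumption psi_j = theta / (R^{-1})_{jj}.

    Note that 1 / (R^{-1})_{jj} equals the residual variance of variable j
    regressed on all the other variables, which gives the assumption a
    natural regression interpretation.
    """
    return theta / np.diag(np.linalg.inv(R))
```

For a 2-variable correlation matrix with correlation r, this returns theta * (1 - r**2) for each variable, i.e., the variance unexplained by the other variable.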
Horn’s parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy–Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy–Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy–Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy–Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
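As background, Horn's parallel analysis in its common form compares sample eigenvalues with quantiles of eigenvalues obtained from random normal data of the same dimensions, retaining leading components until one fails the comparison. A minimal sketch based on the correlation matrix (function name and defaults are illustrative):

```python
import numpy as np

def parallel_analysis(X, n_sims=200, quantile=0.95, seed=0):
    """Horn's parallel analysis for principal components.

    Retains leading components whose sample eigenvalues exceed the given
    quantile of eigenvalues from random normal data of the same shape.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for b in range(n_sims):
        Z = rng.standard_normal((n, p))
        sim[b] = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]
    thresh = np.quantile(sim, quantile, axis=0)
    k = 0
    for o, t in zip(obs, thresh):
        if o <= t:
            break
        k += 1
    return k
```

The abstract's point is precisely that this eigenvalue-by-eigenvalue comparison is well calibrated only for the first component; for higher-order components the reference distribution is wrong, which is what motivates the Tracy–Widom alternative.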
Certain questions about the shape of item-test regressions are answered on the basis of data obtained from 103,275 examinees. Ten of the regressions obtained are shown in illustrative plots.
The relationship between variables in applied and experimental research is often investigated by the use of extreme (i.e., upper and lower) groups. Earlier analytical work has demonstrated that the extreme groups procedure is more powerful than the standard correlational approach for some values of the correlation and extreme group size. The present article provides methods for using the covariance information that is usually discarded in the classical extreme groups approach. Essentially, then, the new procedure combines the extreme groups approach and the correlational approach. Consequently, it includes the advantages of each and is shown to be more powerful than either approach used alone.
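The classical extreme-groups procedure referred to above can be sketched as follows (a sketch of the standard procedure only, not the article's new combined estimator; the function name and the common 27% tail size are illustrative):

```python
import numpy as np

def extreme_groups_t(x, y, prop=0.27):
    """Classical extreme-groups analysis: select the upper and lower tails
    on x, then compare the y means of the two tail groups with a two-sample
    t statistic. The middle of the x distribution is discarded."""
    x, y = np.asarray(x), np.asarray(y)
    k = max(2, int(prop * len(x)))
    order = np.argsort(x)
    lo, hi = y[order[:k]], y[order[-k:]]
    se = np.sqrt(lo.var(ddof=1) / k + hi.var(ddof=1) / k)
    return (hi.mean() - lo.mean()) / se
```

The covariance information the article recovers is exactly what this sketch throws away: the x scores within each tail (and the discarded middle cases) carry information about the x–y relationship beyond the two group means.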
This article analyzes the phenomenology and semiotics of religious processions. On the one hand, these rituals succeed in congregating several individual agencies, thus helping them to obliterate the frontier between the sacred environment of the place of worship and the profane environment of the space surrounding it. Consequently, in religious processions, subjects experience an enlargement of the environment of the sacred that encourages them to believe in its omnipresence, in the reassuring idea that their entire existence takes place (literally and metaphorically) under the protection of transcendence. On the other hand, “accidents” caused by the persistence of individual agencies within the collective one constantly “threaten” the symbolic efficacy of religious processions: the tentative expansion of the sacred environment into the profane results in a symmetrical expansion of the latter into the former.
This article examines the role of referential directness and community emblematization in the documentation of Libyan sign processes construed by Italian colonial ethnographers as secretive. I examine the key texts on these practices to show that colonial ethnographers metasemiotically framed the so-called argots of Libya in terms of what was understood to be their occulting function of hiding one’s intentions and their anti-language function of opposing established society. I show that Italian colonial-ethnological preoccupations with clarity and moral unity were articulated against the discursive background of French colonial ethnology of Algeria as well as Italian racist criminology anchored in the metaphor of relative opacity.
In a recent article Bentler and Woodward (1983) discussed computational and statistical issues related to the greatest lower bound ρ+ to reliability. Although my work (Shapiro, 1982) was cited frequently, some of the results presented were misunderstood. A sample estimate ρ̂+ of ρ+ was considered, and it was claimed (Bentler & Woodward) that: “Since ρ̂+ is not a closed form expression ... an exact analytic expression for h has not been found” (p. 247). (h is the vector of partial derivatives of ρ+ as a function of the covariance matrix.) Therefore Bentler and Woodward proposed to use numerical derivatives in order to evaluate the asymptotic variance avar(ρ̂+) of ρ̂+.
The basic models of signal detection theory involve the parametric measure, d′, generally interpreted as a detectability index. Given two observers, one might wish to know whether their detectability indices are equal or unequal. Gourevitch and Galanter (1967) proposed a large-sample statistical test that could be used to test the hypothesis of equal d′ values. In this paper, their large-sample, two-sample test is extended to a K-sample detection test. If the null hypothesis d1′ = d2′ = ... = dK′ is rejected, one can employ the post hoc confidence interval procedure described in this paper to locate statistically significant sources of the differences. In addition, it is shown how one can use the Gourevitch and Galanter statistics to test d′ = 0 for a single individual.
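A test of this kind rests on the asymptotic normality of d′ estimates. The sketch below combines the standard large-sample variance of a d′ estimate with a weighted chi-square-type heterogeneity statistic; treat the exact formulas and names as an illustrative reconstruction of a common form of such a test, not Gourevitch and Galanter's exact expressions:

```python
from statistics import NormalDist

_SND = NormalDist()

def dprime_and_var(hits, n_signal, fas, n_noise):
    """d' = z(H) - z(F) and its standard large-sample variance estimate."""
    H, F = hits / n_signal, fas / n_noise
    zH, zF = _SND.inv_cdf(H), _SND.inv_cdf(F)
    var = (H * (1 - H) / (n_signal * _SND.pdf(zH) ** 2)
           + F * (1 - F) / (n_noise * _SND.pdf(zF) ** 2))
    return zH - zF, var

def k_sample_dprime_stat(counts):
    """Heterogeneity statistic for H0: d'_1 = ... = d'_K.

    counts: list of (hits, n_signal_trials, false_alarms, n_noise_trials),
    one tuple per observer. Under H0 the statistic is approximately
    chi-square with K - 1 degrees of freedom.
    """
    est = [dprime_and_var(*c) for c in counts]
    ds = [d for d, _ in est]
    ws = [1.0 / v for _, v in est]
    dbar = sum(w * d for w, d in zip(ws, ds)) / sum(ws)
    stat = sum(w * (d - dbar) ** 2 for w, d in zip(ws, ds))
    return stat, len(counts) - 1
```

The returned statistic is compared against a chi-square critical value with K − 1 degrees of freedom; identical observed hit and false-alarm rates give a statistic of zero.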
This paper brings together and compares two developments in the analysis of Likert attitude scales. The first is the generalization of latent class models to ordered response categories. The second is the introduction of latent trait models with multiplicative parameter structures for the analysis of rating scales. Key similarities and differences between these two methods are described and illustrated by applying a latent trait model and a latent class model to the analysis of a set of “life satisfaction” data. The way in which the latent trait model defines a unit of measurement, takes into account the order of the response categories, and scales the latent classes, is discussed. While the latent class model provides better fit to these data, this is achieved at the cost of a logically inconsistent assignment of individuals to latent classes.
A method is developed for estimating the response time distribution of an unobserved component in a two-component serial model, assuming the components are stochastically independent. The estimate of the component’s density function is constrained only to be unimodal and non-negative. Numerical examples suggest that the method can yield reasonably accurate estimates with sample sizes of 300 and, in some cases, with sample sizes as small as 100.
Common applications of latent variable analysis fail to recognize that data may be obtained from several populations with different sets of parameter values. This article describes the problem and gives an overview of methodology that can address heterogeneity. Artificial mixture examples are given in which failure to recognize the mixture leads to strongly distorted results. MIMIC structural modeling is shown to be a useful method for detecting and describing heterogeneity that cannot be handled in regular multiple-group analysis. Other useful methods instead take a random effects approach, describing heterogeneity in terms of random parameter variation across groups. These random effects models connect with emerging methodology for multilevel structural equation modeling of hierarchical data. Examples are drawn from educational achievement testing, psychopathology, and sociology of education. Estimation is carried out by the LISCOMP program.
One of the intriguing questions of factor analysis is the extent to which one can reduce the rank of a symmetric matrix by changing only its diagonal entries. We show in this paper that the set of matrices which can be reduced to rank r has positive (Lebesgue) measure if and only if r is greater than or equal to the Ledermann bound. In other words, the Ledermann bound is shown to be almost surely the greatest lower bound to a reduced rank of the sample covariance matrix. Afterwards, an asymptotic sampling theory of so-called minimum trace factor analysis (MTFA) is proposed. The theory is based on continuity and differentiability properties of the functions involved in MTFA. Convex analysis techniques are utilized to obtain conditions for differentiability of these functions.
Psychologists often use latent transition analysis (LTA) to investigate state-to-state change in discrete latent constructs involving delinquent or risky behaviors. In this setting, latent-state-dependent nonignorable missingness is a potential concern. For some longitudinal models (e.g., growth models), a large literature has addressed extensions to accommodate nonignorable missingness. In contrast, little research has addressed how to extend the LTA to accommodate nonignorable missingness. Here we present a shared parameter LTA that can reduce bias due to latent-state-dependent nonignorable missingness: a parallel-process missing-not-at-random (MNAR-PP) LTA. The MNAR-PP LTA allows outcome process parameters to be interpreted as in the conventional LTA, which facilitates sensitivity analyses assessing changes in estimates between LTA and MNAR-PP LTA. In a sensitivity analysis for our empirical example, previous and current membership in high-delinquency states predicted adolescents’ membership in missingness states that had high nonresponse probabilities for some or all items. A conventional LTA overestimated the proportion of adolescents ending up in a low-delinquency state, compared to an MNAR-PP LTA.
It is shown that measurement error in predictor variables can be modeled using item response theory (IRT). The predictor variables, which may be defined at any level of a hierarchical regression model, are treated as latent variables. The normal ogive model is used to describe the relation between the latent variables and dichotomous observed variables, which may be responses to tests or questionnaires. It is shown that the multilevel model with measurement error in the observed predictor variables can be estimated in a Bayesian framework using Gibbs sampling. In this article, handling measurement error via the normal ogive model is compared with alternative approaches using the classical true score model. Examples using real data are given.
A procedure is derived for obtaining an orthogonal transformation which most nearly transforms one given matrix into another given matrix, according to some least-squares criterion of fit. From this procedure, three analytic methods are derived for obtaining an orthogonal factor matrix which closely approximates a given oblique factor matrix. The case is considered of approximating a specified subset of oblique vectors by orthogonal vectors.
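The least-squares orthogonal transformation problem described above has a well-known closed-form solution via the singular value decomposition. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def nearest_orthogonal_transform(A, B):
    """Return the orthogonal matrix T minimizing ||A @ T - B|| in the
    least-squares (Frobenius) sense, via the SVD of A^T B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```

When B is an exact orthogonal rotation of A, the procedure recovers that rotation; more generally it gives the best-fitting orthogonal approximation, which is the building block for approximating an oblique factor matrix by an orthogonal one.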