Component loss functions (CLFs) similar to those used in orthogonal rotation are introduced to define criteria for oblique rotation in factor analysis. It is shown how the shape of the CLF affects the performance of the criterion it defines. For example, monotone concave CLFs give criteria that are minimized by loadings with perfect simple structure when such loadings exist. Moreover, if the CLFs are strictly concave, minimizing the criterion must produce perfect simple structure whenever it exists. Examples show that methods defined by concave CLFs perform well much more generally. While it appears important to use a concave CLF, the specific CLF used is less important. For example, the very simple linear CLF gives a rotation method that can easily outperform the most popular oblique rotation methods, promax and quartimin, and is competitive with the more complex simplimax and geomin methods.
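A hedged illustration of the criterion family discussed here (notation ours, following the standard rotation literature rather than taken from this paper): given an initial loading matrix A, an oblique rotation seeks a matrix T, with columns of unit length so that the factor correlation matrix is \Phi = T'T, minimizing

\[
Q(\Lambda) = \sum_{i=1}^{p} \sum_{j=1}^{m} h\!\left(\lambda_{ij}^{2}\right),
\qquad \Lambda = A\,(T')^{-1},
\]

where h is the component loss function applied to each squared loading. The linear CLF corresponds to h(x) = x, and the concavity results above concern the shape of h.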
Ideal points are widely used to model choices when preferences are single-peaked. Ideal-point choice models have typically been estimated at the individual level, or have been based on the assumption that ideal points are normally distributed over the population of choice makers. We propose two probabilistic ideal-point choice models for the external analysis of preferences that allow for more flexible, multimodal distributions of ideal points, thus acknowledging the existence of subpopulations with distinct preferences. The first model extends the ideal-point probit model for heterogeneous preferences to accommodate a mixture of multivariate normal distributions of ideal points. The second model assumes that ideal points are uniformly distributed within finite ranges of the attribute space, leading to a simpler formulation and a more flexible distribution. The two models are applied to simulated and actual choice data and compared to the ideal-point probit model.
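As a hedged sketch (our notation, not the authors'), the common core of such ideal-point choice models is a utility that falls off with distance from the ideal point: for chooser i and alternative j with attribute vector x_j,

\[
U_{ij} = -\left\lVert \mathbf{x}_{j} - \mathbf{z}_{i} \right\rVert^{2} + \varepsilon_{ij},
\]

where z_i is the chooser's ideal point. The two proposals then differ in the assumed heterogeneity distribution: a finite mixture of multivariate normals, \(\mathbf{z}_i \sim \sum_{k} \pi_k\, N(\boldsymbol{\mu}_k, \Sigma_k)\), versus a uniform distribution of z_i over a finite region of the attribute space.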
The present paper proposes a hierarchical, multi-unidimensional two-parameter logistic item response theory (2PL-MUIRT) model extended to a large number of groups. The proposed model was motivated by a large-scale integrative data analysis (IDA) study which combined data (N = 24,336) from 24 independent alcohol intervention studies. IDA projects face unique challenges not encountered in individual studies, such as the need to establish a common scoring metric across studies and to handle missingness in the pooled data. To address these challenges, we developed a Markov chain Monte Carlo (MCMC) algorithm for a hierarchical 2PL-MUIRT model for multiple groups, in which not only the item parameters and latent traits but also the means and covariance structures for multiple dimensions were estimated across the different groups. Unlike existing MCMC algorithms for multidimensional IRT models, which constrain the item parameters to facilitate estimation of the covariance matrix, our algorithm directly estimates the correlation matrix for the anchor group without any constraints on the item parameters. The feasibility of the MCMC algorithm and the validity of the basic calibration procedure were examined in a simulation study. Results showed that model parameters were adequately recovered and that estimated latent trait scores closely approximated the true latent trait scores. The algorithm was then applied to real data (69 items across 20 studies for 22,608 participants). The posterior predictive model check showed that the model fit all items well, and the correlations between the MCMC scores and the original scores were overall quite high. An additional simulation study demonstrated the robustness of the MCMC procedures under a high proportion of missing data. The Bayesian hierarchical IRT model and the MCMC algorithms developed in the current study have the potential to be widely implemented in IDA or multi-site studies, and can be further refined to meet more complicated needs in applied research.
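A hedged sketch of the measurement model (our notation): in a multi-unidimensional structure each item i measures a single dimension d(i), so for person p in group g the 2PL response function is

\[
P\left(y_{pi} = 1 \mid \boldsymbol{\theta}_{p}\right) = \frac{1}{1 + \exp\!\left[-a_{i}\left(\theta_{p,d(i)} - b_{i}\right)\right]},
\qquad \boldsymbol{\theta}_{p} \sim N\!\left(\boldsymbol{\mu}_{g}, \Sigma_{g}\right),
\]

with discriminations a_i, difficulties b_i, and group-specific latent means \(\boldsymbol{\mu}_g\) and covariance matrices \(\Sigma_g\) all sampled in the MCMC scheme; as noted above, the correlation matrix of the anchor group is estimated directly rather than induced by constraints on the item parameters.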
Wu and Browne (Psychometrika 80(3):571–600, 2015. https://doi.org/10.1007/s11336-015-9451-3; henceforth W & B) introduced the notion of adventitious error to explicitly take into account approximate goodness of fit of covariance structure models (CSMs). Adventitious error supposes that observed covariance matrices are not directly sampled from a theoretical population covariance matrix but from an operational population covariance matrix. This operational matrix is randomly distorted from the theoretical matrix due to differences in study implementations. W & B showed how adventitious error is linked to the root mean square error of approximation (RMSEA) and how the standard errors (SEs) of parameter estimates are augmented. Our contribution is to consider adventitious error as a general phenomenon and to illustrate its consequences. Using simulations, we illustrate that its impact on SEs can be generalized to pairwise relations between variables beyond the CSM context. Using derivations, we conjecture that heterogeneity of effect sizes across studies and overestimation of statistical power can both be interpreted as stemming from adventitious error. We also show that adventitious error, if it occurs, has an impact on the uncertainty of composite measurement outcomes such as factor scores and summed scores. The results of a simulation study show that the impact on measurement uncertainty is rather small although larger for factor scores than for summed scores. Adventitious error is an assumption about the data generating mechanism; the notion offers a statistical framework for understanding a broad range of phenomena, including approximate fit, varying research findings, heterogeneity of effects, and overestimates of power.
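A hedged paraphrase of the data-generating mechanism (our notation; see W & B for the exact formulation): instead of sampling the observed covariance matrix S directly from the model-implied matrix \(\Sigma(\boldsymbol{\theta})\), adventitious error first draws an operational population matrix around it,

\[
\Sigma_{\mathrm{op}} \sim \text{Inverse-Wishart, with } E\left(\Sigma_{\mathrm{op}}\right) = \Sigma(\boldsymbol{\theta}),
\]

and S is then obtained by ordinary Wishart sampling from \(\Sigma_{\mathrm{op}}\). The concentration parameter of the inverse Wishart governs the size of the distortion, which W & B link to the RMSEA; the extra layer of randomness is what inflates the SEs beyond their conventional sampling-theory values.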
The High Court has often said that the common law must conform to the Constitution, but it has never fully explained why; the requirement is not explicitly mentioned anywhere in the Constitution itself. A number of scholars have suggested possible answers. One is that the Constitution is the supreme law and binding on everyone. Another is that the common law must conform because the Constitution constrains ‘state action’: something more than just an exercise of constitutionally conferred power. This latter explanation appears to deviate from the High Court's exposition of the common law's relationship with the Constitution in Lange v Australian Broadcasting Corporation. This article suggests that the Constitution has a broader application to the common law, in that it constrains all uses of judicial power, not just those considered to be ‘state action’. It contends that it is implicit in s 71 of the Constitution that the power to develop the common law yields to constitutional imperatives. This theory is more descriptively consistent with the High Court's practice and its observations about the relationship between the common law and the Constitution.
Papers appearing in the current literature often misuse factorial methods. Meaningless results are reached because certain conditions basic to factor theory are neglected. The conditions are:
(1) The number of basic factors must be smaller than the number of tests.
(2) The diagonals of the correlation matrix must be regarded as unknown.
(3) The axes must be rotated into a simple configuration.
(4) Each factor must be overdetermined by appearance in several tests.
(5) Tests should have simple factorial composition.
A new paired comparison method, based upon choices between lotteries, is developed for the measurement of utilities of objects with respect to the utility of receiving nothing, i.e., the status quo. The method is used to estimate the utilities of four birthday gifts. These objects had also been studied in an earlier experiment which used choices between single objects and pairs of objects to determine a rational origin. A comparison of the results of the two experiments indicates that both methods scale objects with respect to the same rational origin and unit of measurement.
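To illustrate how lottery choices can fix a rational origin (our example, not the paper's actual design): under expected utility with the status quo scaled to u(nothing) = 0, a choice between lottery A, which yields object a with probability p and nothing otherwise, and lottery B, which yields object b with probability q and nothing otherwise, reveals the sign of

\[
p\, u(a) - q\, u(b),
\]

so varying p and q across paired comparisons pins down the ratios u(a)/u(b), and hence the utilities of the objects relative to the common zero point.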
Factor analysis and principal component analysis result in computing a new coordinate system, which is usually rotated to obtain a better interpretation of the results. In the present paper, the idea of rotation to simple structure is extended to two dimensions. While the classical definition of simple structure is aimed at rotating (one-dimensional) factors, the extension to a simple structure for two dimensions is based on the rotation of planes. The resulting planes (principal planes) reveal a better view of the data than planes spanned by factors from classical rotation and hence allow a more reliable interpretation. The usefulness of the method as well as the effectiveness of a proposed algorithm are demonstrated by simulation experiments and an example.
We describe our experience developing and implementing an online experiment to elicit subjective beliefs and economic preferences. The COVID-19 pandemic and the associated closure of our laboratories required us to conduct an online experiment in order to collect beliefs and preferences associated with the pandemic in a timely manner. Since we had not previously conducted a similar multi-wave online experiment, we faced design and implementation considerations that do not arise when running a typical laboratory experiment. By discussing these details more fully, we hope to contribute to the methodology literature on online experiments at a time when many other researchers may be considering conducting an online experiment for the first time. We focus primarily on methodology; a complementary study presents our initial research findings.