Empirical Bayes methods are shown to provide a practical alternative to standard least squares methods in fitting high dimensional models to sparse data. An example concerning prediction bias in educational testing is presented as an illustration.
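To fix ideas, the following is a minimal sketch of empirical Bayes shrinkage of many group means toward a pooled mean, under a normal-normal model with a method-of-moments prior estimate. The abstract does not specify the estimator used in the paper; the function name, simulated data, and parameter choices below are purely illustrative.

# Illustrative empirical Bayes shrinkage for many group means fitted from
# sparse data; the per-group means are the ordinary least squares fits, and
# each is pulled toward the grand mean in proportion to its sampling noise.
import numpy as np

def eb_shrink_means(groups):
    """groups: list of 1-D arrays of observations, one array per group."""
    means = np.array([g.mean() for g in groups])            # least squares fits
    ns = np.array([len(g) for g in groups], dtype=float)
    pooled_var = np.mean([g.var(ddof=1) for g in groups if len(g) > 1])
    sampling_var = pooled_var / ns                           # noise in each mean
    grand = np.average(means, weights=ns)
    # Method-of-moments estimate of the between-group (prior) variance.
    tau2 = max(np.var(means, ddof=1) - sampling_var.mean(), 0.0)
    b = sampling_var / (sampling_var + tau2)                 # shrinkage factors
    return grand + (1.0 - b) * (means - grand)

rng = np.random.default_rng(0)
data = [rng.normal(loc=m, scale=1.0, size=5) for m in rng.normal(0, 0.5, 50)]
print(eb_shrink_means(data)[:5])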
A family of solutions for linear relations among k sets of variables is proposed. It is shown how these solutions apply for k = 2, and how they can be generalized from there to k ≥ 3.
The family of solutions depends on three independent choices: (i) to what extent a solution may be influenced by differences in variances of components within each set; (ii) to what extent the sets may be differentially weighted with respect to their contribution to the solution—including orthogonality constraints; (iii) whether or not individual sets of variables may be replaced by an orthogonal and unit normalized basis.
Solutions are compared with respect to their optimality properties. For each solution the appropriate stationary equations are given. For one example it is shown how the determinantal equation of the stationary equations can be interpreted.
The results of three empirical studies on the sampling fluctuation of centroid factor loadings are reported. The first study is based on data which happened to be available on 8 variables for 700 cases and which were factored to three factors for subsamples. The second study is based on fictitious data for 2500 cases which provided separate analyses on 25 samples for each of three situations: 5 variables, one factor; 5 variables, two factors; and 6 variables, three factors. The third study, based on real data for 9 variables and 7000 cases, involves separate factorization for 35 samples of 200 cases. The three studies agree in showing that the sampling behavior of first centroid factor loadings is much like that of correlation coefficients, whereas the sampling fluctuations for loadings beyond the first are disturbingly large.
It often happens that a theory specifies some variables or states which cannot be identified completely in an experiment. When this happens, there are important questions as to whether the experiment is relevant to certain assumptions of the theory. Some of these questions are taken up in the present article, where a method is developed for describing the implications of a theory for an experiment. The method consists of constructing a second theory with all of its states identifiable in the outcome-space of the experiment. The method can be applied (i.e., an equivalent identifiable theory exists) whenever a theory specifies a probability function on the sample-space of possible outcomes of the experiment. An interesting relationship between lumpability of states and recurrent events plays an important role in the development of the identifiable theory. An identifiable theory of an experiment can be used to investigate relationships among different theories of the experiment. As an example, an identifiable theory of all-or-none learning is developed, and it is shown that a large class of all-or-none theories are equivalent for experiments in which a task is learned to a strict criterion.
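As an illustration of why latent states need not be identifiable, the sketch below simulates the standard all-or-none (one-element) learning model rather than the paper's particular construction: only the correct/error sequence is observable, while the learned/unlearned state and the parameter values c and g used here are assumptions made for the example.

# An item sits in a latent unlearned state U or learned state L. In U, a
# correct response occurs only by guessing (probability g); after each trial
# in U the item moves to L with probability c. Only the responses are
# observable, so distinct latent trajectories can yield the same data.
import numpy as np

def simulate_all_or_none(n_trials, c=0.3, g=0.25, rng=None):
    rng = rng or np.random.default_rng()
    learned = False
    states, responses = [], []
    for _ in range(n_trials):
        states.append("L" if learned else "U")
        correct = True if learned else (rng.random() < g)
        responses.append(int(correct))
        if not learned and rng.random() < c:
            learned = True
    return states, responses

states, responses = simulate_all_or_none(12, rng=np.random.default_rng(1))
print(states)      # latent states, not observable in an experiment
print(responses)   # the observable outcome sequence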
The personnel-classification problems considered in this paper are related to those studied by Brogden (2), Lord (6), and Thorndike (8). Section 1 gives an approach to personnel classification. A basic problem and variations of it are treated in section 2, and the computation of a solution is illustrated in section 3. Two extensions of the basic problem are presented in section 4. Most of the methods indicated for computing solutions are applications of the “simplex” method used in linear programming (see 1, Chs. XXII, XXIII). The capabilities of a high-speed computer in regard to the simplex method are discussed briefly (see section 1).
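The toy example below shows the linear-programming form such classification problems take, solved with scipy's linprog (HiGHS) rather than the simplex computations discussed in the paper; the predicted-performance matrix and job quotas are invented purely for illustration.

# Assign people to jobs so that total predicted performance is maximized,
# each person is fully assigned, and each job's quota is met exactly.
import numpy as np
from scipy.optimize import linprog

score = np.array([[7., 3., 5.],      # predicted performance of person i on job j
                  [2., 8., 4.],
                  [6., 6., 2.],
                  [4., 5., 9.]])
n_people, n_jobs = score.shape
quota = np.array([1, 2, 1])          # required number of assignees per job

c = -score.ravel()                   # negate: linprog minimizes
A_eq, b_eq = [], []
for i in range(n_people):            # each person fully assigned
    row = np.zeros(n_people * n_jobs); row[i * n_jobs:(i + 1) * n_jobs] = 1
    A_eq.append(row); b_eq.append(1.0)
for j in range(n_jobs):              # each job quota met exactly
    row = np.zeros(n_people * n_jobs); row[j::n_jobs] = 1
    A_eq.append(row); b_eq.append(float(quota[j]))

res = linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, 1), method="highs")
print(res.x.reshape(n_people, n_jobs).round(2))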
Electoral management bodies are increasingly being recognised as ‘fourth branch’ institutions that have a role to play in safeguarding electoral democracy against government attempts to undermine the fairness of the electoral process. This article explores the extent to which the Australian Electoral Commission (‘AEC’) fulfils that constitutional function by facilitating and protecting electoral democracy. It demonstrates that independence, impartiality and a supportive legislative framework help the AEC to be effective in performing these roles, but that inadequate powers, lack of budgetary autonomy and answerability to political actors operate as constraints. More generally, the analysis presented shows the value of expanding our understanding of the role of fourth branch institutions so that we take account of their activities in both fostering and safeguarding key democratic values.
A set of definitions leading to an axiom set that makes it possible to distinguish between two or more forms of lexicographic choice is offered. It is shown that, by treating value as quantized and taking only a finite number of levels for each attribute of a multi-attribute choice situation, lexicographic evaluation might be subsumed under more familiar forms of choice model. Special cases and implications are discussed.
This paper is a mathematical supplement to the preceding paper by Professor Godfrey H. Thomson. It gives rigorous proofs of theorems enunciated by him and by Dr. J. Ridley Thompson, and extends them. Its basic theorem is that if a matrix of correlations is to be factorized without the aid of higher factors than s-factors (with n-s zero loadings), then the largest latent root of the matrix must not exceed the sum of the s largest communalities on the diagonal.
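In symbols, the quoted bound may be restated as follows, with notation assumed here: R is the correlation matrix with communalities in the diagonal, \lambda_1(R) its largest latent root, and h_{(1)}^2 \ge \dots \ge h_{(n)}^2 the ordered communalities.

\[
  \lambda_1(R) \;\le\; \sum_{i=1}^{s} h_{(i)}^{2}
\]

That is, a factorization using no factors of greater breadth than s-factors (each with n - s zero loadings) is possible only if the largest latent root does not exceed the sum of the s largest communalities.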
Bi-factor and second-order models based on copulas are proposed for item response data, where the items are sampled from identified subdomains of some larger domain such that there is a homogeneous dependence within each domain. Our general models include the Gaussian bi-factor and second-order models as special cases and can lead to more probability in the joint upper or lower tail compared with the Gaussian bi-factor and second-order models. Details on maximum likelihood estimation of parameters for the bi-factor and second-order copula models are given, as well as model selection and goodness-of-fit techniques. Our general methodology is demonstrated with an extensive simulation study and illustrated for the Toronto Alexithymia Scale. Our studies suggest that there can be a substantial improvement over the Gaussian bi-factor and second-order models both conceptually, as the items can have interpretations of discretized maxima/minima or mixtures of discretized means in comparison with discretized means, and in fit to data.
In this paper, normal/independent distributions, including but not limited to the multivariate t distribution, the multivariate contaminated normal distribution, and the multivariate slash distribution, are used to develop a robust Bayesian approach for analyzing structural equation models with complete or missing data. In the context of a nonlinear structural equation model with fixed covariates, robust Bayesian methods are developed for estimation and model comparison. Results from simulation studies are reported to reveal the characteristics of estimation. The methods are illustrated using a real data set obtained from diabetes patients.
Up to the present, only empirical methods have been available for determining the number of factors to be extracted from a matrix of correlations. The problem has been confused by the implicit attitude that a matrix of intercorrelations between psychological variables has a determinable rank. A table of residuals always contains both error variance and common factor variance. The extraction of successive factors increases the ratio of remaining error variance to remaining common factor variance, and a point is reached beyond which additional factors would contain so much error variance that the common factor variance would be overshadowed. The critical value for this point is determined by probability theory and does not take into account the size of the residuals. Interpretation of the criterion is discussed.
Three methods for estimating reliability are studied within the context of nonparametric item response theory. Two were proposed originally by Mokken (1971) and a third is developed in this paper. Using a Monte Carlo strategy, these three estimation methods are compared with four “classical” lower bounds to reliability. Finally, recommendations are given concerning the use of these estimation methods.
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. This approach consists of iteratively performing (steps of) existing algorithms for ordinary least squares (OLS) fitting of the same model. The approach is based on minimizing a function that majorizes the WLS loss function. The generality of the approach implies that, for every model for which an OLS fitting algorithm is available, the present approach yields a WLS fitting algorithm. In the special case where the WLS weight matrix is binary, the approach reduces to missing data imputation.
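A minimal sketch of the iterative-majorization idea for one concrete model is given below: weighted least squares low-rank approximation, fitted by repeatedly solving the ordinary least squares (rank-r SVD) problem on surrogate data. The choice of a rank-r model, the convergence tolerance, and the variable names are assumptions made here for illustration; the paper treats the approach for general models.

# Minimize sum(W * (X - M)**2) over rank-r matrices M by iterating an
# unweighted (OLS) fit on imputed data; with binary W this is exactly
# missing-data imputation.
import numpy as np

def wls_lowrank(X, W, r, n_iter=200, tol=1e-8):
    W = W / W.max()                        # majorization needs weights in [0, 1]
    M = np.zeros_like(X)
    prev = np.inf
    for _ in range(n_iter):
        Z = W * X + (1.0 - W) * M          # surrogate (imputed) data
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        M = (U[:, :r] * s[:r]) @ Vt[:r]    # OLS rank-r fit to Z
        loss = np.sum(W * (X - M) ** 2)
        if prev - loss < tol:
            break
        prev = loss
    return M

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))
W = rng.integers(0, 2, size=X.shape).astype(float)   # binary weights: missing data
print(wls_lowrank(X, W, r=2).shape)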
Estimation of effect size is of interest in many applied fields such as psychology, sociology, and education. However, there are few nonparametric estimators of effect size in the existing literature, and little is known about the distributional characteristics of these estimators. In this article, two estimators based on sample quantiles are proposed and studied. The first is the estimator suggested by Hedges and Olkin (see page 93 of Hedges & Olkin, 1985) for the situation where a treatment effect is evaluated against a control group (Case A). A modified version of the robust estimator of Hedges and Olkin is also proposed for the situation where two parallel treatments are compared (Case B). Large-sample distributions of both estimators are derived. Their asymptotic relative efficiencies with respect to the normal maximum likelihood estimators are evaluated under several common distributions. The robustness properties of the proposed estimators are discussed in terms of the sample-wise breakdown points proposed by Akritas (1991). Simulation studies are reported in which the performance of the proposed estimators is compared with that of the nonparametric estimators of Kraemer and Andrews (1982). Interval estimation of the effect sizes is also discussed. In an example, interval estimates for the data set in Kraemer and Andrews (1982) are calculated for both Cases A and B.
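The exact quantile-based estimators studied in the article are not reproduced here; the following sketch shows only the general flavor of such estimators, using a difference of medians scaled by a normalized interquartile range of the control group, with all names and constants chosen for illustration.

# Generic quantile-based effect size: median difference divided by a
# robust scale estimate (IQR/1.349 estimates sigma under normality).
import numpy as np

def quantile_effect_size(treatment, control):
    q75, q25 = np.percentile(control, [75, 25])
    scale = (q75 - q25) / 1.349
    return (np.median(treatment) - np.median(control)) / scale

rng = np.random.default_rng(0)
print(quantile_effect_size(rng.normal(0.5, 1, 100), rng.normal(0.0, 1, 100)))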