In this paper, a theoretical index of dimensionality, called the theoretical DETECT index, is proposed to provide a theoretical foundation for the DETECT procedure. The purpose of DETECT is to assess certain aspects of the latent dimensional structure of a test, important to practitioners and researchers alike. Under reasonable modeling restrictions referred to as “approximate simple structure”, the theoretical DETECT index is proven to be maximized at the correct dimensionality-based partition of a test, where the number of item clusters in this partition corresponds to the number of substantively separate dimensions present in the test and “correct” means that each cluster in this partition contains only items that correspond to the same separate dimension. It is argued that the separation into item clusters achieved by DETECT is appropriate from the applied perspective of desiring a partition into clusters that are interpretable as substantively distinct between clusters and substantively homogeneous within cluster. Moreover, the maximum DETECT index value is a measure of the amount of multidimensionality present. The estimation of the theoretical DETECT index is discussed, and a genetic algorithm is developed to execute DETECT effectively. The study of DETECT is facilitated by recasting two factor-analytic concepts in a multidimensional item response theory setting: a dimensionally homogeneous item cluster and an approximate simple structure test.
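To make the maximized quantity concrete: the DETECT index of a partition is commonly written as a signed average of conditional item-pair covariances. The sketch below is illustrative only, assuming the conditional covariances have already been estimated; it is not the paper's genetic-algorithm implementation, which searches over partitions for the maximizing value.

```python
import numpy as np

def detect_index(cond_cov, partition):
    """D(P) = 2/(n(n-1)) * sum_{i<j} delta_ij * C_ij, where C_ij is the
    covariance of items i and j conditional on the latent composite and
    delta_ij is +1 when i and j share a cluster of P, -1 otherwise."""
    labels = np.asarray(partition)
    n = len(labels)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            sign = 1.0 if labels[i] == labels[j] else -1.0
            total += sign * cond_cov[i, j]
    return 2.0 * total / (n * (n - 1))
```

Under approximate simple structure, within-cluster conditional covariances tend to be positive and between-cluster ones negative, which is why the correct partition maximizes this signed sum.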
In many regression applications, users are often faced with difficulties due to nonlinear relationships, heterogeneous subjects, or time series which are best represented by splines. In such applications, two or more regression functions are often necessary to best summarize the underlying structure of the data. Unfortunately, in most cases, it is not known a priori which subset of observations should be approximated with which specific regression function. This paper presents a methodology which simultaneously clusters observations into a preset number of groups and estimates the corresponding regression functions' coefficients, all to optimize a common objective function. We describe the problem and discuss related procedures. A new simulated annealing-based methodology is described as well as program options to accommodate overlapping or nonoverlapping clustering, replications per subject, univariate or multivariate dependent variables, and constraints imposed on cluster membership. Extensive Monte Carlo analyses are reported which investigate the overall performance of the methodology. A consumer psychology application is provided concerning a conjoint analysis investigation of consumer satisfaction determinants. Finally, other applications and extensions of the methodology are discussed.
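The following is a minimal sketch of the general simulated-annealing idea for clusterwise linear regression, with an invented move set and cooling schedule; it is not the authors' program, which additionally supports overlapping clusters, replications per subject, multivariate responses, and membership constraints.

```python
import numpy as np

def total_sse(X, y, labels, K):
    """Sum of within-cluster least-squares residuals."""
    sse = 0.0
    for k in range(K):
        idx = labels == k
        if idx.sum() > X.shape[1]:           # enough points to fit a regression
            beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
            sse += float(((y[idx] - X[idx] @ beta) ** 2).sum())
    return sse

def anneal(X, y, K, T=1.0, cool=0.995, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(K, size=len(y))    # random initial partition
    cur = total_sse(X, y, labels, K)
    for _ in range(steps):
        i = rng.integers(len(y))             # move: reassign one observation
        old = labels[i]
        labels[i] = rng.integers(K)
        new = total_sse(X, y, labels, K)
        # accept improvements always, uphill moves with Boltzmann probability
        if new <= cur or rng.random() < np.exp(-(new - cur) / T):
            cur = new
        else:
            labels[i] = old                  # undo rejected move
        T *= cool                            # geometric cooling
    return labels
```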
Fifty-three tests designed to measure aspects of creative thinking were administered to 410 air cadets and student officers. The scores were intercorrelated and 16 factors were extracted. Orthogonal rotations resulted in 14 identifiable factors, a doublet, and a residual. Nine previously identified factors were: verbal comprehension, numerical facility, perceptual speed, visualization, general reasoning, word fluency, associational fluency, ideational fluency, and a factor combining Thurstone's closure I and II. Five new factors were identified as originality, redefinition, adaptive flexibility, spontaneous flexibility, and sensitivity to problems.
Several articles in the past fifteen years have suggested various models for analyzing dichotomous test or questionnaire items which were constructed to reflect an assumed underlying structure. This paper shows that many models are special cases of latent class analysis. A currently available computer program for latent class analysis allows parameter estimates and goodness-of-fit tests not only for the models suggested by previous authors, but also for many models which they could not test with the more specialized computer programs they developed. Several examples are given of the variety of models which may be generated and tested. In addition, a general framework for conceptualizing all such models is given. This framework should be useful for generating models and for comparing various models.
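To make the connection concrete, the core computation in latent class analysis is an EM-style alternation between posterior class memberships and class-conditional response probabilities. The sketch below handles the unconstrained case; the specialized models the paper subsumes correspond to equality or fixed-value constraints on these parameters. Names and setup are illustrative, not those of the cited computer program.

```python
import numpy as np

def lca_em(X, K=2, iters=200, seed=0):
    """X: (n, m) 0/1 item-response matrix; K: number of latent classes."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(K, 1.0 / K)                 # class proportions
    p = rng.uniform(0.3, 0.7, size=(K, m))   # P(item j = 1 | class k)
    for _ in range(iters):
        # E-step: posterior class membership for each respondent
        like = np.ones((n, K))
        for k in range(K):
            like[:, k] = pi[k] * np.prod(p[k] ** X * (1 - p[k]) ** (1 - X), axis=1)
        post = like / like.sum(axis=1, keepdims=True)
        # M-step: update proportions and conditional probabilities
        pi = post.mean(axis=0)
        p = (post.T @ X) / post.sum(axis=0)[:, None]
    return pi, p
```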
This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited to produce an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example is used to show how the proposed biplots may be interpreted.
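A minimal sketch of one such biplot: take the matrix of standardized regression coefficients B = Rxx^{-1} Rxy and its best rank-two approximation from an SVD, whose row and column markers can then be plotted so that inner products approximate the coefficients. This is an illustrative construction, not the paper's exact route via canonical weights and structure correlations.

```python
import numpy as np

def regression_biplot_coords(Rxx, Rxy):
    """Rank-2 biplot markers for the regression-coefficient matrix."""
    B = np.linalg.solve(Rxx, Rxy)           # standardized regression coefficients
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    G = U[:, :2] * s[:2]                     # predictor (row) markers
    H = Vt[:2].T                             # criterion (column) markers
    return G, H                              # G @ H.T is the best rank-2 fit of B
```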
Empirical Bayes methods are shown to provide a practical alternative to standard least squares methods in fitting high dimensional models to sparse data. An example concerning prediction bias in educational testing is presented as an illustration.
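As a toy illustration of the general idea (not the paper's educational-testing model): empirical Bayes shrinks noisy group-level estimates toward a common mean, with the degree of shrinkage estimated from the data rather than fixed in advance. The setup assumed here is y_k ~ N(theta_k, v_k) with theta_k ~ N(mu, tau^2).

```python
import numpy as np

def eb_shrink(y, v):
    """y: observed group means; v: their known sampling variances."""
    mu = np.average(y, weights=1.0 / v)       # precision-weighted prior mean
    tau2 = max(np.var(y) - v.mean(), 0.0)     # method-of-moments estimate of tau^2
    w = tau2 / (tau2 + v)                     # per-group shrinkage weights
    return w * y + (1 - w) * mu               # posterior-mean estimates
```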
A family of solutions for linear relations among k sets of variables is proposed. It is shown how these solutions apply for k = 2, and how they can be generalized from there to k ≥ 3.
The family of solutions depends on three independent choices: (i) to what extent a solution may be influenced by differences in variances of components within each set; (ii) to what extent the sets may be differentially weighted with respect to their contribution to the solution—including orthogonality constraints; (iii) whether or not individual sets of variables may be replaced by an orthogonal and unit normalized basis.
Solutions are compared with respect to their optimality properties. For each solution the appropriate stationary equations are given. For one example it is shown how the determinantal equation of the stationary equations can be interpreted.
The results of three empirical studies on the sampling fluctuation of centroid factor loadings are reported. The first study is based on data which happened to be available on 8 variables for 700 cases and which were factored to three factors for subsamples. The second study is based on fictitious data for 2500 cases which provided separate analyses on 25 samples for each of three situations: 5 variables, one factor; 5 variables, two factors; and 6 variables, three factors. The third study, based on real data for 9 variables and 7000 cases, involves separate factorization for 35 samples of 200 cases. The three studies agree in showing that the sampling behavior of first centroid factor loadings is much like that of correlation coefficients, whereas the sampling fluctuations for loadings beyond the first are disturbingly large.
It often happens that a theory specifies some variables or states which cannot be identified completely in an experiment. When this happens, there are important questions as to whether the experiment is relevant to certain assumptions of the theory. Some of these questions are taken up in the present article, where a method is developed for describing the implications of a theory for an experiment. The method consists of constructing a second theory with all of its states identifiable in the outcome-space of the experiment. The method can be applied (i.e., an equivalent identifiable theory exists) whenever a theory specifies a probability function on the sample-space of possible outcomes of the experiment. An interesting relationship between lumpability of states and recurrent events plays an important role in the development of the identifiable theory. An identifiable theory of an experiment can be used to investigate relationships among different theories of the experiment. As an example, an identifiable theory of all-or-none learning is developed, and it is shown that a large class of all-or-none theories are equivalent for experiments in which a task is learned to a strict criterion.
The personnel-classification problems considered in this paper are related to those studied by Brogden (2), Lord (6), and Thorndike (8). Section 1 gives an approach to personnel classification. A basic problem and variations of it are treated in section 2; and the computation of a solution is illustrated in section 3. Two extensions of the basic problem are presented in section 4. Most of the methods indicated for computing solutions are applications of the “simplex” method used in linear programming (see 1, Chs. XXII, XXIII). The capabilities of a high speed computer in regard to the simplex method are discussed briefly (see section 1).
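A minimal modern rendering of the basic problem as a linear program, where SciPy's LP solver stands in for the hand simplex computations of the paper; the payoff matrix and quotas below are invented for illustration. Because the constraint matrix of this transportation-type problem is totally unimodular, the LP relaxation yields an integral assignment.

```python
import numpy as np
from scipy.optimize import linprog

payoff = np.array([[7., 3.], [4., 6.], [5., 5.]])  # persons x jobs (hypothetical)
quota = [2, 1]                                     # openings per job
P, J = payoff.shape

c = -payoff.ravel()                                # linprog minimizes, so negate
A_eq = np.zeros((P, P * J))                        # each person assigned exactly once
for i in range(P):
    A_eq[i, i * J:(i + 1) * J] = 1
A_ub = np.zeros((J, P * J))                        # respect each job's quota
for j in range(J):
    A_ub[j, j::J] = 1
res = linprog(c, A_ub=A_ub, b_ub=quota, A_eq=A_eq,
              b_eq=np.ones(P), bounds=(0, 1))
print(res.x.reshape(P, J).round())                 # 0/1 assignment matrix
```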
Electoral management bodies are increasingly being recognised as ‘fourth branch’ institutions that have a role to play in safeguarding electoral democracy against government attempts to undermine the fairness of the electoral process. This article explores the extent to which the Australian Electoral Commission (‘AEC’) fulfils that constitutional function by facilitating and protecting electoral democracy. It demonstrates that independence, impartiality and a supportive legislative framework help the AEC to be effective in performing these roles, but that inadequate powers, lack of budgetary autonomy and answerability to political actors operate as constraints. More generally, the analysis presented shows the value of expanding our understanding of the role of fourth branch institutions so that we take account of their activities in both fostering and safeguarding key democratic values.
A set of definitions leading to an axiom set that makes possible a distinction between two or more forms of lexicographic choice is offered. It is shown that, by treating value as quantized and as taking only a finite number of levels for each attribute of a multi-attribute choice situation, lexicographic evaluation might be subsumed under more familiar forms of choice model. Special cases and implications are discussed.
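As a toy illustration of the quantization point (the example is invented): once each attribute is coarsened to finitely many levels, lexicographic choice is simply tuple comparison in attribute-importance order, which an additive model with sufficiently separated weights can reproduce.

```python
def level(x, cuts):
    """Quantize a continuous value into a finite number of levels."""
    return sum(x >= c for c in cuts)

cuts = [2.0, 4.0]                       # two cutpoints -> three levels
a = (level(3.1, cuts), level(9.9, cuts))  # alternative a: levels (1, 2)
b = (level(3.5, cuts), level(0.1, cuts))  # alternative b: levels (1, 0)
print(max(a, b))                        # Python tuple comparison IS lexicographic
# a and b tie on the first attribute, so the second attribute decides: a wins
```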
This paper is a mathematical supplement to the preceding paper by Professor Godfrey H. Thomson. It gives rigorous proofs of theorems enunciated by him and by Dr. J. Ridley Thompson, and extends them. Its basic theorem is that if a matrix of correlations is to be factorized without the aid of higher factors than s-factors (with n-s zero loadings), then the largest latent root of the matrix must not exceed the sum of the s largest communalities on the diagonal.
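In symbols (the notation here is assumed, not taken from the paper): writing R for the correlation matrix with communalities on its diagonal, the basic theorem reads

```latex
% R: n x n correlation matrix with communalities h_i^2 on the diagonal;
% lambda_1(R): its largest latent root; h_{(1)}^2 >= h_{(2)}^2 >= ...:
% the communalities in descending order. If R can be factorized using
% no factors of complexity greater than s, then
\[
  \lambda_{1}(R) \;\le\; h_{(1)}^{2} + h_{(2)}^{2} + \cdots + h_{(s)}^{2},
\]
% i.e., the largest latent root is bounded by the sum of the s largest
% communalities.
```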