Many screening tests dichotomize a measurement to classify subjects. Typically a cut-off value is chosen in a way that allows identification of an acceptable number of cases relative to a reference procedure, but does not produce too many false positives at the same time. Thus for the same sample many pairs of sensitivities and false positive rates result as the cut-off is varied. The curve of these points is called the receiver operating characteristic (ROC) curve. One goal of diagnostic meta-analysis is to integrate ROC curves and arrive at a summary ROC (SROC) curve. Holling, Böhning, and Böhning (Psychometrika 77:106–126, 2012a) demonstrated that finite semiparametric mixtures can describe the heterogeneity in a sample of Lehmann ROC curves well; this approach leads to clusters of SROC curves of a particular shape. We extend this work with the help of the $t_{\alpha}$ transformation, a flexible family of transformations for proportions. A collection of SROC curves is constructed that approximately contains the Lehmann family but in addition allows the modeling of shapes beyond the Lehmann ROC curves. We introduce two rationales for determining the shape from the data. Using the fact that each curve corresponds to a natural univariate measure of diagnostic accuracy, we show how covariate-adjusted mixtures lead to a meta-regression on SROC curves. Three worked examples illustrate the method.
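As a minimal sketch (not the authors' implementation), a Lehmann ROC curve relates sensitivity to the false positive rate through a single power parameter: sensitivity = FPR^θ with 0 < θ < 1. The SROC families discussed above generalize this basic shape:

```python
import numpy as np

def lehmann_roc(fpr, theta):
    """Lehmann ROC curve: sensitivity = fpr ** theta for 0 < theta < 1.
    Smaller theta means higher diagnostic accuracy (the curve moves
    toward the top-left corner of ROC space)."""
    return np.asarray(fpr, dtype=float) ** theta

# Trace one curve over the full range of false positive rates.
fpr = np.linspace(0.0, 1.0, 101)
sens = lehmann_roc(fpr, theta=0.3)
```

For any θ < 1 the curve passes through (0, 0) and (1, 1) and lies strictly above the chance diagonal in between, which is the characteristic Lehmann shape the mixture clusters share.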
Correlation coefficients derived from a hypothetical simple structure for twenty tests and four factors were “loaded” with chance error components. Centroid analyses and rotations to give least-squares determinations of the hypothetical simple structure were made for several conditions. It is concluded that the experimental situation of inaccurate coefficients and estimated communalities permits accurate determination of primary trait loadings, provided that the rank of the centroid matrix is equal to or greater than that of the underlying primary trait matrix. Of the several criteria for completeness of factorization that were tested, none was wholly satisfactory.
This study presents statistical procedures for monitoring the functioning of items over time. We propose generalized likelihood ratio tests that monitor multiple item parameters and can be implemented with various sampling techniques for continuous or intermittent monitoring. The procedures examine the stability of item parameters across time and signal compromise as soon as a significant parameter shift is identified. Their performance was validated using simulated and real-assessment data. The empirical evaluation suggests that the proposed procedures perform well in identifying parameter drift: they showed satisfactory detection power and gave timely signals while keeping error rates reasonably low, and they outperformed existing methods. These findings suggest that multivariate parametric monitoring can provide an efficient and powerful control tool for maintaining the quality of items. The procedures allow joint monitoring of multiple item parameters and achieve sufficient power through likelihood-ratio tests. Based on the empirical findings, we suggest some practical strategies for performing online item monitoring.
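As an illustrative sketch only (a simplified Bernoulli proxy, not the paper's procedure), a generalized likelihood ratio test for a shift in one item parameter can compare the likelihood under separate success probabilities in a baseline and a current window against the likelihood under a single pooled probability:

```python
import math

def glr_drift(successes_a, n_a, successes_b, n_b):
    """Generalized likelihood ratio statistic for a shift in a Bernoulli
    success probability between a baseline window (a) and a current
    window (b). Under no drift the statistic is approximately chi-square
    with 1 degree of freedom."""
    def ll(k, n, p):
        if p <= 0.0 or p >= 1.0:
            # Degenerate MLE: log-likelihood is 0 when the data match exactly.
            return 0.0 if k in (0, n) else float("-inf")
        return k * math.log(p) + (n - k) * math.log(1.0 - p)
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    return 2.0 * (ll(successes_a, n_a, p_a) + ll(successes_b, n_b, p_b)
                  - ll(successes_a, n_a, p_pool) - ll(successes_b, n_b, p_pool))
```

With identical windows the statistic is zero; a pronounced shift (e.g., 80/100 versus 20/100 correct) pushes it far past the usual 3.84 critical value, triggering a drift signal.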
A scaled difference test statistic $\tilde{T}_{d}$ that can be computed by hand from the standard output of structural equation modeling (SEM) software was proposed in Satorra and Bentler (Psychometrika 66:507–514, 2001). The statistic $\tilde{T}_{d}$ is asymptotically equivalent to the scaled difference test statistic $\bar{T}_{d}$ introduced in Satorra (Innovations in Multivariate Statistical Analysis: A Festschrift for Heinz Neudecker, pp. 233–247, 2000), which requires more involved computations beyond the standard output of SEM software. The test statistic $\tilde{T}_{d}$ has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction.
Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic $\bar{T}_{d}$ that avoids negative chi-square values.
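As a sketch of the hand calculation as it is usually presented (following Satorra–Bentler, 2001; not the improved correction this note develops), the difference scaling correction is formed from the two models' degrees of freedom and scaling corrections:

```python
def scaled_difference(t0, t1, d0, d1, c0, c1):
    """Scaled difference test along the lines of Satorra-Bentler (2001).
    t0, t1: unscaled chi-square statistics of the nested (restricted)
            and comparison models; d0, d1: their degrees of freedom (d0 > d1);
    c0, c1: their scaling corrections (c = T / T_scaled).
    Returns the scaled difference statistic and the difference scaling
    correction cd. cd can come out negative in small samples, which is
    exactly the problem the improved correction addresses."""
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)
    return (t0 - t1) / cd, cd

# Hypothetical output from two nested model fits.
stat, cd = scaled_difference(t0=100.0, t1=80.0, d0=10, d1=8, c0=1.2, c1=1.1)
```

The numbers here are illustrative placeholders; in practice t0, t1, c0, and c1 are read off the fitted-model output.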
In item response theory (IRT), it is often necessary to perform restricted recalibration (RR) of the model: A set of (focal) parameters is estimated holding a set of (nuisance) parameters fixed. Typical applications of RR include expanding an existing item bank, linking multiple test forms, and associating constructs measured by separately calibrated tests. In the current work, we provide full statistical theory for RR of IRT models under the framework of pseudo-maximum likelihood estimation. We describe the standard error calculation for the focal parameters, the assessment of overall goodness-of-fit (GOF) of the model, and the identification of misfitting items. We report a simulation study to evaluate the performance of these methods in the scenario of adding a new item to an existing test. Parameter recovery for the focal parameters as well as Type I error and power of the proposed tests are examined. An empirical example is also included, in which we validate the pediatric fatigue short-form scale in the Patient-Reported Outcome Measurement Information System (PROMIS), compute global and local GOF statistics, and update parameters for the misfitting items.
Certain combinations of languages and scripts have come to take on indexical properties within the world of advertising in northern India. Such properties are regimented by what is on offer—commercial items, government services and information, schooling, and coaching services. An exploration of changing conventions in advertising since the 1990s, a period of accelerated liberalization in India, reveals that there have been especially dramatic changes in education and coaching services. By considering combinations of language and script as partly constitutive of the voice of an institution, this article accounts for the changing possibilities for the articulation of institutional distinctions and the ways institutional voices commoditize aspects of personae during the last twenty-five years or so in northern India.
The oblimax, promax, maxplane, and Harris-Kaiser techniques are compared. For five data sets, of varying reliability and factorial complexity, each having a graphic oblique solution (used as criterion), solutions obtained using the four methods are evaluated on (1) hyperplane-counts, (2) agreement of obtained with graphic within-method primary factor correlations and angular separations, (3) angular separations between obtained and corresponding graphic primary axes. The methods are discussed and ranked (descending order): Harris-Kaiser, promax, oblimax, maxplane. The Harris-Kaiser procedure—independent cluster version for factorially simple data, P'P proportional to Φ, with equamax rotations, for complex—is recommended.
A probabilistic choice model is developed for paired comparisons data on psychophysical stimuli. The model is based on Thurstone's Law of Comparative Judgment Case V and assumes that each stimulus is measured on a small number of physical variables. The utility of a stimulus is related to its values on the physical variables either by means of an additive univariate spline model or by means of a multivariate spline model. In the additive univariate spline model, a separate univariate spline transformation is estimated for each physical dimension and the utility of a stimulus is assumed to be an additive combination of these transformed values. In the multivariate spline model, the utility of a stimulus is assumed to be a general multivariate spline function in the physical variables. The use of B splines for estimating the transformation functions is discussed, and it is shown how B splines can be generalized to the multivariate case by using as basis functions tensor products of the univariate basis functions. A maximum likelihood estimation procedure for the Thurstone Case V model with spline transformations is described and applied for illustrative purposes to various artificial and real data sets. Finally, the model is extended using a latent class approach to the case where there are unreplicated paired comparisons data from a relatively large number of subjects drawn from a heterogeneous population. An EM algorithm for estimating the parameters in this extended model is outlined and illustrated on some real data.
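The tensor-product construction described above can be sketched in a few lines (a minimal illustration, not the authors' estimation code): given the basis evaluations on each physical variable separately, the multivariate basis consists of all pairwise products of columns, taken row by row.

```python
import numpy as np

def tensor_product_basis(B1, B2):
    """Row-wise tensor product of two univariate spline basis matrices.
    B1: (n, k1) evaluations of the basis for physical variable 1;
    B2: (n, k2) evaluations for physical variable 2. Returns an
    (n, k1*k2) design matrix whose columns are all pairwise products
    B1[:, i] * B2[:, j] -- the tensor-product basis for the
    multivariate spline."""
    n = B1.shape[0]
    return np.einsum('ni,nj->nij', B1, B2).reshape(n, -1)

# Toy univariate bases; each row sums to one, as B-spline basis rows do.
B1 = np.array([[0.5, 0.5], [1.0, 0.0]])
B2 = np.array([[0.25, 0.75], [0.0, 1.0]])
T = tensor_product_basis(B1, B2)
```

Because each univariate basis is a partition of unity at every evaluation point, the tensor-product rows also sum to one, which keeps the multivariate spline well behaved.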
This article explores the effect of treaty withdrawal on domestic legislation implementing a treaty in the Australian constitutional context. In R (Miller) v Secretary of State for Exiting the European Union (‘Miller’), the Supreme Court of the United Kingdom held that the executive cannot exercise its prerogative power to withdraw from a treaty where that withdrawal would frustrate or invalidate domestic law. This article contends that treaty withdrawal would be unlikely to have this effect on a law implementing a treaty in the Australian context. The article ultimately draws two conclusions. First, a law implementing a treaty would likely survive treaty withdrawal in most cases due to the law’s enduring nexus with Australia’s foreign relations, enabling its continued characterisation as a law ‘with respect to’ s 51(xxix) of the Constitution. Secondly, in the event that withdrawal does lead to a loss of constitutional support, the law would likely become prospectively invalid from the date of effective withdrawal (an outcome identical to legislative repeal in its effect). The article contends that this outcome would not, however, engage the constraint on executive power so emphatically reasserted in Miller. This is because the law’s invalidity is consistent with the implied will of the legislature and thus reinforces, rather than contravenes, the fundamental principle of parliamentary sovereignty which the constraint on executive power protects.
In the simulation of human behavior on a digital computer, one first attempts to discover the manner in which subjects (Ss) internally represent the environment and the rules that they employ for acting upon this representation. The interaction between the rules and the environmental representation over a period of time constitutes a set of processes. Processes can be expressed as flow charts which, in turn, are stated formally in terms of a computer program. The program serves as a theory which is tested by executing the program on a computer and comparing the machine's performance with S's behavior.
An extension of Wollenberg's redundancy analysis is proposed to derive Y-variates corresponding to the optimal X-variates. These variates are maximally correlated with the given X-variates and, depending upon the standardization chosen, they also have certain properties of orthogonality.
Three IRT diagnostic-classification-modeling (DCM)-based multiple-choice (MC) item design principles are stated that improve diagnostic classification of students from classroom quizzes. Using provably optimal maximum-likelihood-based student classification, example items demonstrate that adherence to these design principles increases attribute (skills and, especially, misconceptions) correct classification rates (CCRs); simple formulas compute the needed item CCRs. With these psychometrically driven item design principles, enough attributes can hopefully be diagnosed accurately by the necessarily short MC-item-based quizzes to be widely useful instructionally. These results should stimulate increased use of well-designed MC-item quizzes that accurately diagnose skills and misconceptions, thereby enhancing classroom learning.
For certain kinds of structure consisting of quantitative dimensions superimposed on a discrete class structure, spatial representations can be viewed as being composed of two subspaces, the first of which reveals the discrete classes as isolated clusters and the second of which contains variation along the quantitative attributes. A numerical method is presented for rotating a multi-dimensional configuration or factor solution so that the first few axes span the space of classes and the remaining axes span the space of quantitative variation. The use of this method is then illustrated in the analysis of some experimental data.
It is commonly held that even where questionnaire response is poor, correlational studies are affected only by loss of degrees of freedom or precision. We show that this supposition is not true. If the decision to respond is correlated with a substantive variable of interest, then regression or analysis of variance methods based upon the questionnaire results may be adversely affected by self-selection bias. Moreover, such bias may arise even where response is 100%. The problem in both cases arises where selection information is passed to the score indirectly via the disturbance or individual effects, rather than entirely via the observable explanatory variables. We suggest tests for self-selection bias and possible ways of handling the resulting problems of inference.
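A small simulation (an illustrative sketch under assumed parameters, not the authors' analysis) makes the mechanism concrete: when the decision to respond depends on the disturbance, the slope estimated from responders alone is biased even though the model is correctly specified.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
u = rng.standard_normal(n)   # disturbance
y = 1.0 * x + u              # true slope is 1

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    x_c, y_c = x - x.mean(), y - y.mean()
    return (x_c @ y_c) / (x_c @ x_c)

# Response decision correlated with the disturbance: only subjects with
# y > 0 return the questionnaire, so E[u | respond, x] varies with x and
# the responders-only regression is contaminated.
respond = y > 0
slope_full = ols_slope(x, y)
slope_selected = ols_slope(x[respond], y[respond])
```

The full-sample slope recovers the true value of 1, while the responders-only slope is substantially attenuated, illustrating the self-selection bias described above.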
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multicollinearity, i.e., high correlations among exogenous variables, and as yet has no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge type of regularization into GSCA in a unified framework, thereby enabling it to handle multicollinearity problems effectively. An alternating regularized least squares algorithm is developed for parameter estimation. A Monte Carlo simulation study is conducted to investigate the performance of the proposed method as compared to its non-regularized counterpart. An application is also presented to demonstrate the empirical usefulness of the proposed method.
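The building block of an alternating regularized least squares algorithm can be sketched as follows (a generic ridge solve under assumed toy data, not the proposed GSCA algorithm itself): each least-squares update is replaced by a shrunken solve that remains stable when predictors are highly correlated.

```python
import numpy as np

def ridge_ls(X, y, lam):
    """Ridge-type regularized least squares: solves
    (X'X + lam * I) beta = X'y. With lam = 0 this is ordinary least
    squares; lam > 0 shrinks the coefficients toward zero and
    stabilizes the solve under multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy data: with lam = 0 the exact least-squares solution is [1, 1].
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 1.0, 2.0])
beta_ols = ridge_ls(X, y, lam=0.0)    # ordinary least squares
beta_ridge = ridge_ls(X, y, lam=1.0)  # shrunken toward zero
```

Cycling such regularized solves over blocks of parameters, holding the other blocks fixed until convergence, is the general shape of an alternating regularized least squares scheme.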