Sijilmassi et al. argue that myths serve to gain coalitional support by detailing shared histories of ancestry and cooperation. They overlook the emotional influences of stories, which include myths of human origin. We suggest that influential myths do not promote cooperation principally by signaling common ancestry, but by prompting human emotions of interdependence and connection.
The implicit revolution seems to have arrived with the declaration that “explicit measures are informed by and (possibly) rendered invalid by unconscious cognition.” What is the view from survey research, which has relied on explicit methodology for over a century, and whose methods have extended to the political domain in ways that have changed the landscape of politics in the United States and beyond? One survey researcher weighs in. The overwhelming evidence points to the continuing power of explicit measures to predict voting and behavior. Whether implicit measures can do the same, especially beyond what explicit measures can do, is far more ambiguous. The analysis further raises doubts, as others have done before, about what exactly implicit measures measure, and in particular questions implicit researchers’ co-opting of the word “attitude” when such measures instead represent associations. The conclusion: Keep your torches at home. There is no revolution.
Chapter 7 introduces the topic of online grooming of children, which is facilitated by text chat because of the anonymity it affords predators. It examines one published example of chat interaction between an identified offender and his young teenage victim, which provides new insights into the interactional behaviours of predators attempting to groom children in the early nonsexual stages of online relationships. The analysis of this single episode demonstrates that online predators may use self-disclosure and personal announcements intended to provoke interest and sympathy in their victims, with the effect that the victim lets down her guard and offers personal self-disclosures of her own. Specifically, initial grooming trajectories may include getting-acquainted behaviours, small talk, troubles announcements, self-disclosures about personal life, expressions of feelings, requests for information about relationships, and discussion of sexual interests. While not evident in the examined chat interaction, the exchange of photographs is also known to be common. The chapter’s findings suggest that it may be possible to recognize online predators and protect children in the early nonsexual stages of grooming, though further conversation analytic research across a variety of contexts and age groups is urgently needed.
This paper proposes an ordinal generalization of the hierarchical classes model originally proposed by De Boeck and Rosenberg (1988). Any hierarchical classes model implies a decomposition of a two-way two-mode binary array M into two component matrices, called bundle matrices, which represent the association relation and the set-theoretical relations among the elements of both modes in M. Whereas the original model restricts the bundle matrices to be binary, the ordinal hierarchical classes model assumes that the bundles are ordinal variables with a prespecified number of values. This generalization results in a classification model with classes ordered along ordinal dimensions. The ordinal hierarchical classes model is shown to subsume Coombs and Kao's (1955) model for nonmetric factor analysis. An algorithm is described to fit the model to a given data set and is subsequently evaluated in an extensive simulation study. An application of the model to student housing data is discussed.
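As a point of reference for the decomposition described above, the following is a minimal sketch of the association rule in the original binary hierarchical classes model, in which a two-way binary array is reconstructed as the Boolean product of its two bundle matrices; the ordinal model generalizes these binary bundles to ordinal ones. The toy matrices and names are illustrative, not taken from the paper.

```python
# Minimal sketch of the binary hierarchical classes association rule:
# an I x J binary array M is reconstructed from an I x R bundle matrix A and
# a J x R bundle matrix B via the Boolean product
#   m_ij = 1  iff  a_ir = 1 and b_jr = 1 for at least one bundle r.
# The matrices below are toy data, not from the paper.
import numpy as np

A = np.array([[1, 0],          # rows (mode 1) x bundles
              [1, 1],
              [0, 1]], dtype=bool)
B = np.array([[1, 0],          # columns (mode 2) x bundles
              [0, 1],
              [1, 1]], dtype=bool)

# Boolean matrix product: logical OR over bundles of the elementwise AND
M_hat = (A[:, None, :] & B[None, :, :]).any(axis=2).astype(int)
print(M_hat)
```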
In the eras of both film-based and digital photography, and to differing degrees and with differing consequences in documentary and art photography, the role of indexicality in establishing the veracity of the image has been paramount in photo theory. There is a need, however, for expanding the ways we think about indexicality in photography. In addition to the familiar concepts of the trace, citation, reference, social diacritics, interactional cues, and artists’ intentions, the indexicality of photography may be better understood through appeal to less direct, or more densely mediated, modalities of indexicality. These include qualia (hypostatically abstracted -nesses), dicentization (the upshifting of icons into indexes of contiguity), and propositionality (the assertion of messages subject to truth claims).
Differential item functioning (DIF) analysis is an important step in establishing the validity of measurements. Most traditional methods for DIF analysis use an item-by-item strategy via anchor items that are assumed DIF-free. If the anchor items are flawed, these methods yield misleading results due to biased scales. In this article, based on the fact that an item's relative change of difficulty difference (RCD) does not depend on the mean ability of the individual groups, a new DIF detection method (RCD-DIF) is proposed by comparing the observed differences against those from simulated data that are known to be DIF-free. The RCD-DIF method consists of a D-QQ (quantile-quantile) plot that permits the identification of internal reference points (similar to anchor items), an RCD-QQ plot that facilitates visual examination of DIF, and an RCD graphical test that synchronizes DIF analysis at the test level with that at the item level via confidence intervals on individual items. The RCD procedure visually reveals the overall pattern of DIF in the test and the size of DIF for each item, and is expected to work properly even when the majority of the items possess DIF and the DIF pattern is unbalanced. Results of two simulation studies indicate that the RCD graphical test has Type I error rates comparable to those of existing methods but with greater power.
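To make the comparison logic concrete, here is a rough sketch of comparing observed, centered difficulty differences against differences from DIF-free simulated data, in the spirit of the D-QQ and RCD-QQ plots; the difficulty proxy, centering step, and simulation design are simplified placeholders rather than the exact RCD statistic and graphical test from the paper.

```python
# Simplified sketch: centered difficulty differences for an "observed" test
# with a few DIF items, lined up against quantiles from a DIF-free simulation.
# This is only an illustration of the idea, not the paper's RCD procedure.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_items = 500, 20

def simulate(dif=0.0):
    b = rng.normal(0, 1, n_items)                          # common item difficulties
    def resp(theta, extra):
        p = 1 / (1 + np.exp(-(theta[:, None] - (b + extra))))
        return (rng.random((len(theta), n_items)) < p).astype(int)
    ref = resp(rng.normal(0.0, 1, n_per_group), 0.0)       # reference group
    foc = resp(rng.normal(-0.3, 1, n_per_group), dif)      # focal group, lower mean ability
    return ref, foc

def centered_difficulty_diff(ref, foc):
    def difficulty(x):
        p = x.mean(axis=0).clip(1e-3, 1 - 1e-3)
        return np.log((1 - p) / p)                         # crude difficulty proxy
    d = difficulty(foc) - difficulty(ref)
    return d - np.median(d)                                # centering removes the group-ability shift

dif_pattern = np.zeros(n_items)
dif_pattern[:3] = 0.8                                      # three items carry DIF
obs = np.sort(centered_difficulty_diff(*simulate(dif_pattern)))
null = np.sort(centered_difficulty_diff(*simulate()))      # DIF-free reference quantiles
for q_null, q_obs in zip(null, obs):
    print(f"{q_null:+.3f}  {q_obs:+.3f}")                  # pairs far off the diagonal flag DIF
```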
A theoretical discussion of the factor pattern of predictor tests and criterion shows that ordinary test selection methods break down under certain circumstances. It is shown that maximal results may not occur if suppressor variables are present among the predictors. Suggested solutions to the problem include: (1) prior item analysis of tests against the criterion, (2) selection of several trial batteries including some with suppressor variables on the basis of a factor analysis of tests and criterion, (3) modification of the usual test selection procedures to include separate solutions based upon each of several starting variables, or (4) the cumbersome and tedious solution of all possible combinations of predictors. The solutions are recommended in the order named above. Although all of the suggested solutions involve added labor and may not be necessary, the test or battery constructor should at least be aware of the problem.
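For readers unfamiliar with suppressor variables, the following toy sketch (not from the paper) shows how a predictor with essentially zero validity can still raise the multiple correlation of a battery by absorbing irrelevant variance in another predictor; the variable names and data-generating setup are purely illustrative.

```python
# Toy demonstration of a classical suppressor variable: x2 is nearly
# uncorrelated with the criterion y but correlates with the contaminated part
# of x1, so adding it to the battery increases multiple R.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
true_skill = rng.normal(size=n)
nuisance = rng.normal(size=n)                 # irrelevant variance, e.g. test-wiseness
x1 = true_skill + nuisance                    # valid predictor, contaminated by nuisance
x2 = nuisance + 0.3 * rng.normal(size=n)      # suppressor: taps mostly the contamination
y = true_skill + rng.normal(size=n)           # criterion

def multiple_r(predictors, y):
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.sqrt(1 - resid.var() / y.var())

print("validity of x2 alone:", round(np.corrcoef(x2, y)[0, 1], 3))   # ~ 0
print("R with x1 only      :", round(multiple_r([x1], y), 3))
print("R with x1 and x2    :", round(multiple_r([x1, x2], y), 3))    # larger
```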
For continuous distributions associated with dichotomous item scores, the proportion of common-factor variance in the test, H², may be expressed as a function of the intercorrelations among items. H² is somewhat larger than coefficient α except when the items have only one common factor and its loadings are restricted in value. The dichotomous item scores themselves are shown not to have a factor structure, precluding direct interpretation of the Kuder-Richardson coefficient, r_K-R, in terms of factorial properties. The value of r_K-R is equal to that of a coefficient of equivalence, H²_Φ, when the mean item variance associated with common factors equals the mean interitem covariance. An empirical study with synthetic test data from populations of varying factorial structure showed that the four parameters mentioned may be adequately estimated from dichotomous data.
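For concreteness, the sketch below computes the Kuder-Richardson coefficient for dichotomous item scores in two algebraically equivalent ways, once from the item variances p(1−p) and once from the mean item variance and mean interitem covariance; the simulated data and function names are illustrative, and the paper's factor-analytic quantities H² and H²_Φ are not reproduced here.

```python
# Kuder-Richardson coefficient for 0/1 item scores, computed two equivalent ways.
# The simulated data are illustrative only.
import numpy as np

def kr20(scores):
    """scores: persons x items array of 0/1 item scores."""
    k = scores.shape[1]
    p = scores.mean(axis=0)
    item_var = (p * (1 - p)).sum()                    # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=0)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def kr20_from_covariances(scores):
    k = scores.shape[1]
    cov = np.cov(scores, rowvar=False, ddof=0)
    v_bar = np.diag(cov).mean()                       # mean item variance
    c_bar = cov[~np.eye(k, dtype=bool)].mean()        # mean interitem covariance
    return k * c_bar / (v_bar + (k - 1) * c_bar)

rng = np.random.default_rng(2)
theta = rng.normal(size=(300, 1))
scores = (rng.random((300, 8)) < 1 / (1 + np.exp(-(theta - rng.normal(size=8))))).astype(int)
print(kr20(scores), kr20_from_covariances(scores))    # identical values
```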
The core of the paper consists of the treatment of two special decompositions for correspondence analysis of two-way ordered contingency tables: the bivariate moment decomposition and the hybrid decomposition, both using orthogonal polynomials rather than the commonly used singular vectors. To this end, we will detail and explain the basic characteristics of a particular set of orthogonal polynomials, called Emerson polynomials. It is shown that such polynomials, when used as bases for the row and/or column spaces, can enhance the interpretations via linear, quadratic and higher-order moments of the ordered categories. To aid such interpretations, we propose a new type of graphical display—the polynomial biplot.
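To illustrate the key ingredient, the sketch below builds orthonormal polynomials on ordered category scores with respect to marginal weights via weighted Gram-Schmidt; the paper itself uses Emerson's recurrence for this construction, so the code is only meant to show the orthogonality property that the moment decompositions rely on, with illustrative scores and weights.

```python
# Orthonormal polynomials on ordered category scores with respect to marginal
# weights, built by weighted Gram-Schmidt on powers of the scores. Scores and
# weights below are illustrative, not from the paper.
import numpy as np

def ortho_polynomials(scores, weights, degree):
    scores = np.asarray(scores, float)
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    polys = []
    for d in range(degree + 1):
        v = scores ** d                                # 1, x, x^2, ...
        for p in polys:
            v = v - (weights * p * v).sum() * p        # remove earlier components
        v = v / np.sqrt((weights * v * v).sum())       # normalise under the weights
        polys.append(v)
    return np.column_stack(polys)   # columns: trivial, linear, quadratic, ... terms

w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
P = ortho_polynomials(scores=[1, 2, 3, 4, 5], weights=w, degree=2)
print(P.T @ np.diag(w) @ P)         # approximately the identity matrix
```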
Four measurement designs are presented for use with correlation coefficients corrected, in one variable, for attenuation due to unreliability—coefficients that we term partially disattenuated correlation coefficients. Asymptotic expressions are derived for the variances and covariances of the estimates accompanying each design. Empirical simulation results that bear on the preceding mathematical developments are then presented. In addition to providing insights into the distributions of the estimates, the empirical results demonstrate satisfactory Type I error control for typical inferential applications. Power is shown to be equal to or greater than that of corresponding product-moment correlations in three of the four designs. Implications for practice are discussed.
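As a minimal illustration of the quantity being studied, the snippet below applies the classical correction for attenuation in one variable only, r_corrected = r_xy / sqrt(r_yy); the four measurement designs and the asymptotic variance expressions of the paper are not reproduced, and the numbers are illustrative.

```python
# Partially disattenuated correlation: correct the observed correlation for
# unreliability in one variable only. Values below are illustrative.
import numpy as np

def partially_disattenuate(r_xy, reliability_y):
    """Correct r_xy for attenuation due to unreliability in y alone."""
    return r_xy / np.sqrt(reliability_y)

print(partially_disattenuate(r_xy=0.42, reliability_y=0.80))   # ~ 0.470
```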
On the occasion of Psychometrika's fiftieth anniversary, the past twenty-five years' developments in mental test theory are reviewed, with special emphasis on the topics receiving attention in the pages of this journal. (Analogous reviews for Psychometrika's first quarter century were given by Gulliksen and Guilford in 1961.) Much of the recent progress in test theory (and in other branches of psychometrics as well) has been made by treating the problems in this field as being essentially ones of statistical inference. It is concluded that (a) research in test theory is in a healthy state and (b) Psychometrika is an important source of information about that research.
Computer-based interactive items have become prevalent in recent educational assessments. In such items, the detailed human–computer interaction process, known as the response process, is recorded in a log file. The recorded response processes provide great opportunities to understand individuals’ problem-solving processes. However, these data are difficult to analyze because they are high-dimensional sequences in a nonstandard format. This paper aims at extracting useful information from response processes. In particular, we consider an exploratory analysis that extracts latent variables from process data through a multidimensional scaling framework. A dissimilarity measure is described to quantify the discrepancy between two response processes. The proposed method is applied to both simulated data and real process data from 14 PSTRE items in PIAAC 2012. A prediction procedure is used to examine the information contained in the extracted latent variables. We find that the extracted latent variables preserve a substantial amount of the information in the process data and have reasonable interpretability. We also show empirically that process data contain more information than classic binary item responses in terms of out-of-sample prediction of many variables.
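The sketch below illustrates the general dissimilarity-then-scaling pipeline with a placeholder dissimilarity (normalized edit distance between action sequences) followed by classical multidimensional scaling; the paper defines its own dissimilarity measure for response processes, so none of the specifics here should be read as the authors' method.

```python
# Placeholder pipeline: a normalised edit distance between action sequences,
# then classical MDS on the resulting dissimilarity matrix to extract a few
# latent features. The sequences and the distance are illustrative only.
import numpy as np

def edit_distance(a, b):
    """Normalised Levenshtein distance between two action sequences."""
    d = np.arange(len(b) + 1, dtype=float)
    for i, x in enumerate(a, 1):
        prev, d[0] = d[0], i
        for j, y in enumerate(b, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (x != y))
    return d[-1] / max(len(a), len(b), 1)

def classical_mds(D, k=2):
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                     # double-centred squared distances
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

processes = [["start", "menu", "sort", "submit"],
             ["start", "sort", "submit"],
             ["start", "menu", "help", "menu", "submit"]]
D = np.array([[edit_distance(p, q) for q in processes] for p in processes])
print(classical_mds(D, k=2))                        # low-dimensional features per process
```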
We address several issues that are raised by Bentler and Tanaka's [1983] discussion of Rubin and Thayer [1982]. Our conclusions are: standard methods do not completely monitor the possible existence of multiple local maxima; summarizing inferential precision by the standard output based on second derivatives of the log likelihood at a maximum can be inappropriate, even if there exists a unique local maximum; EM and LISREL can be viewed as complementary, albeit not entirely adequate, tools for factor analysis.
The partitioning of the squared Euclidean distance between two vectors in M-dimensional space into the sum of squared lengths of vectors in mutually orthogonal subspaces is discussed, and applications are given to specific cluster analysis problems. Examples are presented of how the partitioning idea can be used to help describe and interpret derived clusters, to derive similarity measures for use in cluster analysis, and to design Monte Carlo studies with carefully specified types and magnitudes of differences between the underlying population mean vectors. Most of the example applications presented in this paper involve the clustering of longitudinal data, but the use of the partitioning in cluster analysis need not be limited to this arena.
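A minimal numerical illustration of the partitioning: the squared Euclidean distance between two profiles equals the sum of squared lengths of the projections of their difference vector onto mutually orthogonal subspaces. The particular split below, into a "level" direction and its orthogonal complement, is an illustrative choice, not taken from the paper.

```python
# Partition of a squared Euclidean distance into squared lengths of projections
# onto mutually orthogonal subspaces (here: the constant "level" direction and
# its orthogonal complement). Data are illustrative longitudinal profiles.
import numpy as np

x = np.array([3.0, 4.0, 6.0, 7.0])
y = np.array([1.0, 2.0, 5.0, 5.0])
d = x - y

ones = np.ones_like(d) / np.sqrt(len(d))     # orthonormal basis of the level subspace
level_part = (d @ ones) * ones               # projection onto the level direction
shape_part = d - level_part                  # projection onto the orthogonal complement

total = np.sum(d ** 2)
print(total, np.sum(level_part ** 2) + np.sum(shape_part ** 2))   # equal
```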