Consider an old test X consisting of s sections, and two new tests Y and Z, similar to X, consisting of p and q sections respectively. All subjects are given test X plus two variable sections from either test Y or test Z. Different pairings of variable sections are given to each subsample of subjects. We present a method of estimating the covariance matrix of the combined test (X1, ..., Xs, Y1, ..., Yp, Z1, ..., Zq) and describe an application of these estimation techniques to linear observed-score test equating.
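A minimal sketch of the design (illustrative only, not the paper's estimator): every subject takes all s sections of X, and each subsample adds one pairing of variable sections from Y or Z. Entries of the combined covariance matrix can then be estimated from whichever subjects were observed on both variables, e.g. by an available-case computation:

```python
import numpy as np

def pairwise_covariance(data):
    """Available-case covariance estimate.

    data: (n_subjects, s + p + q) array with NaN for sections a
    subject did not take; each matrix entry is estimated from the
    subjects observed on both variables.
    """
    n, k = data.shape
    cov = np.full((k, k), np.nan)
    for i in range(k):
        for j in range(k):
            mask = ~np.isnan(data[:, i]) & ~np.isnan(data[:, j])
            if mask.sum() > 1:
                xi, xj = data[mask, i], data[mask, j]
                cov[i, j] = np.mean((xi - xi.mean()) * (xj - xj.mean()))
    return cov
```

Pairs of variable sections never administered together remain NaN, which is exactly the gap a model-based estimator such as the one presented must fill.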
Research questions in the human sciences often ask whether and when a process changes across time. In functional MRI studies, for instance, researchers may seek to assess the onset of a shift in brain state. In daily diary studies, the researcher may seek to identify when a person’s psychological process shifts following treatment. The timing and presence of such a change may be meaningful in terms of understanding state changes. Currently, dynamic processes are typically quantified as static networks in which edges indicate temporal relations among nodes, which may be variables reflecting emotions, behaviors, or brain activity. Here we describe three methods for detecting changes in such correlation networks from a data-driven perspective. Networks are quantified using lag-0 pairwise correlation (or covariance) estimates as the representation of the dynamic relations among variables. We present three methods for change point detection: dynamic connectivity regression, a max-type method, and a PCA-based method. The change point detection methods each include different ways to test whether two given correlation network patterns from different segments in time are significantly different. These tests can also be used outside of the change point detection approaches to compare any two given blocks of data. We compare the three change point detection methods, as well as the complementary significance testing approaches, on simulated and empirical functional connectivity fMRI data.
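As a schematic of the kind of test involved (a simplified max-type comparison, not the authors' exact statistics): Fisher-z-transform the lag-0 correlations of two segments, take the maximum absolute difference, and assess significance by permutation; for simplicity this sketch ignores temporal autocorrelation, which real fMRI applications must handle.

```python
import numpy as np

def fisher_z(r):
    # Variance-stabilizing transform for correlations
    return np.arctanh(np.clip(r, -0.9999, 0.9999))

def max_type_stat(seg1, seg2):
    """Max absolute difference of Fisher-z correlations between
    two (time, variables) data segments."""
    iu = np.triu_indices(seg1.shape[1], k=1)
    r1 = np.corrcoef(seg1, rowvar=False)[iu]
    r2 = np.corrcoef(seg2, rowvar=False)[iu]
    return np.max(np.abs(fisher_z(r1) - fisher_z(r2)))

def permutation_test(seg1, seg2, n_perm=1000, seed=0):
    """p-value from permuting time points across the two segments
    (assumes exchangeable observations)."""
    rng = np.random.default_rng(seed)
    obs = max_type_stat(seg1, seg2)
    pooled = np.vstack([seg1, seg2])
    n1 = seg1.shape[0]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += max_type_stat(perm[:n1], perm[n1:]) >= obs
    return (count + 1) / (n_perm + 1)
```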
Suppose a collection of standard tests is given to all subjects in a random sample, but a different new test is given to each group of subjects in nonoverlapping subsamples. A simple method is developed for displaying the information that the data set contains about the correlational structure of the new tests. This is possible to some extent, even though each subject takes only one new test. The method uses plausible values of the partial correlations among the new tests given the standard tests in order to generate plausible simple correlations among the new tests and plausible multiple correlations between composites of the new tests and the standard tests. The real data example included suggests that the method can be useful in practical problems.
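A hedged illustration of the identity this construction exploits (notation mine, not the paper's): writing each standardized new test as its regression on the standard tests plus a residual, the partial correlation given the standard tests equals the residual correlation, so a plausible value $\rho$ implies the simple correlation $r_{12}=r_{\hat{Y}_{1}\hat{Y}_{2}}R_{1}R_{2}+\rho \sqrt{(1-R_{1}^{2})(1-R_{2}^{2})}$, where $R_{1},R_{2}$ are the multiple correlations of the new tests with the standard tests.

```python
import numpy as np

def plausible_simple_correlation(r_hat, R1, R2, rho_partial):
    """Simple correlation between two standardized new tests implied by:
    r_hat       -- correlation of their parts predicted from the standard tests
    R1, R2      -- multiple correlations with the standard tests
    rho_partial -- plausible partial correlation given the standard tests
    """
    return r_hat * R1 * R2 + rho_partial * np.sqrt((1 - R1**2) * (1 - R2**2))

# e.g. well-predicted tests with a modest plausible partial correlation
print(plausible_simple_correlation(r_hat=0.9, R1=0.8, R2=0.7, rho_partial=0.3))
```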
Were nineteenth-century war outcomes the main determinant of state trajectories in Latin America? In this chapter I turn from examining whether, and to what degree, war outcomes affected comparative state capacity levels, and try to determine whether war outcomes were the main factor affecting the relative position of Latin American countries in the regional state capacity ranking. Exploring the conditions that predict the rank ordering of Latin American state capacity c. 1900—which has remained virtually the same ever since—has become a standard approach in the literature. In this chapter I explore this comparative historical puzzle by replicating previously used techniques. I use qualitative comparative analysis to show that accumulated victory and defeat throughout the nineteenth century are almost sufficient conditions for states to sit at the upper and lower ends of the state capacity ranking, respectively. I then use simple correlations to evaluate how war outcomes were related to a broad set of state capacity indicators at the turn of the century. Finally, I discuss case-specific expectations for the longitudinal data that will be explored in the case studies.
This chapter considers the role of neuropsychology in the diagnostic process. It covers who can undertake a neuropsychological assessment, when to undertake an assessment, and some of the assumptions underlying neuropsychological assessment. Basic psychometrics are covered, on the premise that understanding a few basic concepts is sufficient for most practitioners, as more complex ideas are developed from these basics. This includes the normal distribution, different types of average, the standard deviation, and the correlation. Next, the relationship between different types of metric is discussed, focusing on IQ/Index scores, T-scores, scaled scores, and percentiles.
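The metrics mentioned are all linear transforms of the same standard (z) score, so conversion between them is mechanical; a small sketch using the conventional scalings (IQ/Index: mean 100, SD 15; T: 50/10; scaled: 10/3):

```python
from statistics import NormalDist

def from_z(z):
    """Convert a z-score to the common clinical score metrics."""
    return {
        "IQ/Index": 100 + 15 * z,                  # mean 100, SD 15
        "T-score": 50 + 10 * z,                    # mean 50, SD 10
        "scaled": 10 + 3 * z,                      # mean 10, SD 3
        "percentile": 100 * NormalDist().cdf(z),   # assumes normality
    }

print(from_z(1.0))  # one SD above the mean: IQ 115, T 60, scaled 13, ~84th percentile
```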
This is the story of the Princeton Wine Group, whose membership has been relatively constant for almost 40 years. The group has enjoyed 244 blind tastings involving 1,708 different wines. At each tasting, a statistical analysis examined whether participants ranked the quality of the wines similarly and whether the group's preferences were correlated with several variables, including professional wine ratings and the prices of the wines. The article concludes with a discussion of lessons learned from a lifetime of wine tastings.
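The abstract does not name the statistic, but agreement among several tasters' rankings is conventionally measured with Kendall's coefficient of concordance W; a minimal sketch under that assumption (no correction for ties):

```python
import numpy as np

def kendalls_w(ranks):
    """ranks: (judges, wines) array of rank positions 1..n per judge.
    W = 1 means perfect agreement; W = 0 means none."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12 * s / (m**2 * (n**3 - n))

# Three tasters ranking four wines
ranks = np.array([[1, 2, 3, 4],
                  [2, 1, 3, 4],
                  [1, 3, 2, 4]])
print(kendalls_w(ranks))  # ~0.78: substantial but imperfect agreement
```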
Compositional data for 464 clay minerals (2:1 type) were analyzed by statistical techniques. The objective was to understand the similarities and differences between the groups and subgroups and to evaluate statistically the classification of clay minerals in terms of chemical parameters. The statistical properties of the distributions of total layer charge (TLC), K, VIAl, VIMg, octahedral charge (OC) and tetrahedral charge (TC) were initially evaluated. Critical-difference (P = 1%) comparisons of individual characteristics show that all the clay micas (illite, glauconite and celadonite) differ significantly from all the smectites (montmorillonite, beidellite, nontronite and saponite) only in their TLC and K levels; they cannot be distinguished by their VIAl, VIMg, TC or OC values, which reveal no significant differences between several minerals.
Linear discriminant analysis using equal priors was therefore performed to analyze the combined effect of all the chemical parameters. Using six parameters [TLC, K, VIAl, VIMg, TC and OC], eight mineral groups could be derived, corresponding to the three clay micas, the four smectites (mentioned above) and vermiculite. The fit between predicted and experimental values was 88.1%. Discriminant analysis using two parameters (TLC and K) resulted in classification into three broad groups corresponding to the clay micas, smectites and vermiculites (87.7% fit). Further analysis using the remaining four parameters resulted in subgroup-level classification with an 85–95% fit between predicted and experimental results. The three analyses yielded Mahalanobis D2 distances, which quantify chemical similarities and differences between the broad groups, within members of a subgroup, and between the subgroups. The classification functions derived here can be used as an aid for the classification of 2:1 minerals.
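A schematic of such a pipeline, assuming scikit-learn's LDA with uniform priors; the data here are randomly generated stand-ins, with the feature order following the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: (n_samples, 6) compositions [TLC, K, VIAl, VIMg, TC, OC];
# y: mineral labels. Both are hypothetical placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))
y = rng.choice(["illite", "glauconite", "montmorillonite", "saponite"], size=120)

lda = LinearDiscriminantAnalysis(priors=[0.25] * 4)  # equal priors, as in the text
lda.fit(X, y)
fit_pct = 100 * lda.score(X, y)  # % agreement of predicted vs. actual groups
print(f"{fit_pct:.1f}% fit")
```

Here the "fit" is resubstitution accuracy, one plausible reading of the paper's "fit between predicted and experimental values".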
The chapter presents two surveys of low-mass star formation regions (LMSFRs). The first survey uses the IRAM (Institute for Radio Astronomy in the Millimeter Range) 30-metre telescope at Pico Veleta in Spain to identify 16 deeply embedded young stellar objects (YSOs) and emission from eight complex organic molecules (COMs). The second survey uses ALMA (Atacama Large Millimetre Array), directed towards five low-mass candidates (all in the Serpens cluster at distances of ~440 pc), and detected emission from five COM species.
The chapter describes the assumptions and goals of large-n statistical studies and evaluates the extent to which they generate knowledge about international relations.
Credit-score models provide one of the many contexts through which the big data micro-segmentation or ‘personalisation’ phenomenon can be analysed and critiqued. This chapter approaches the issue through the lens of anti-discrimination law, and in particular the concept of indirect discrimination. The argument presented is that, despite its initial promise based on its focus on impact, ‘indirect discrimination’ is after all unlikely to deliver a mechanism to intervene and curb the excesses of the personalised service model. The reason for its failure lies not in its inherent weaknesses but rather in the ‘shortcomings’ (entrenched biases) of empirical reality itself, which any ‘accurate’ (or useful) statistical analysis cannot but reflect. Still, the anti-discrimination context offers insights that are valuable beyond its own disciplinary boundaries. For example, oversight and review based on correlations within outputs, rather than analysis of inputs, is fundamentally at odds with the current trend demanding greater transparency of AI, but may after all be more practical and realistic considering the ‘natural’ opacity of learning algorithms and businesses’ ‘natural’ secrecy. The credit risk score context also provides a low-key yet powerful illustration of the oppressive potential of a world in which individual behaviour from ANY sphere or domain may be used for ANY purpose; where a bank, insurance company, employer, health care provider, or indeed any government authority can tap into our social DNA to pre-judge us, should it be considered appropriate and necessary for their manifold objectives.
The history of attempts to identify a correlational structure across a battery of cognitive tasks in nonhumans is reviewed, specifically with respect to mice. The literature on human cognition has long included the idea that subjects retain something of their rank-ordering across a battery of cognitive tasks, yielding what is characterized as a positive manifold of cross-task correlations. This manifold is often referred to as marking a general intelligence factor (g). The literature on individual differences in nonhumans has until recently evidenced a different conclusion, one that emphasized the evolution of niche-specific adaptive specializations rather than the evolution of general cognitive mechanisms. That conclusion in the nonhuman literature has now been successfully challenged. There is little doubt that nonhuman subjects also reveal a positive manifold that is of a similar magnitude compared to the human data. Problems remain in deciding whether this nonhuman g is the same g as is found using human subjects. The available data are promising but not decisive in suggesting that the two constructs are marking similar processes.
We introduce the concepts of waveform correlations, and the way in which they can be exploited to extract information from seismograms. We show how correlation procedures can be used to determine time and phase delays. We then consider the closely related topic of transfer functions between aspects of the wavefield, and this leads into a discussion of the ways by which seismograms can be compared – a topic of importance in the comparison of observations and simulations. We also consider the nature of receiver functions and the correlation of teleseismic signals at a receiver to yield information on local structure.
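As a minimal illustration of the delay-estimation idea (not the book's algorithms), a time delay between two traces can be read off the peak of their cross-correlation:

```python
import numpy as np
from scipy import signal

def time_delay(x, y, dt):
    """Delay of trace y relative to trace x, in seconds, from the
    peak of the cross-correlation; dt is the sampling interval."""
    xc = signal.correlate(y, x, mode="full")
    lags = signal.correlation_lags(len(y), len(x), mode="full")
    return lags[np.argmax(xc)] * dt

# Synthetic check: y is x delayed by 50 samples
dt = 0.01
x = np.random.default_rng(1).normal(size=1000)
y = np.roll(x, 50)
print(time_delay(x, y, dt))  # ~0.5 s
```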
Exploiting Seismic Waveforms introduces a range of recent developments in seismology, including the application of correlation techniques, the understanding of multi-scale heterogeneity, and the extraction of structure and source information by seismic waveform inversion. It provides a full treatment of correlation methods for seismic noise and event signals, and develops inverse methods for both sources and structure. Higher-frequency components of seismograms are frequently neglected, or removed by filtering, but they contain information about seismic structure on scales that cannot be revealed by seismic tomography. Sufficient computational resources are now available for waveform inversion for 3-D structure to be a practical procedure, and this book describes suitable algorithms and examples reflecting current best practice. Intended for students and researchers in seismology, this book provides a physical understanding of seismic waveforms and of the ways in which different aspects of the seismic wavefield are revealed by how seismic data are handled.
This chapter explains why atom jumps with a vacancy mechanism are not random, even if the vacancy itself moves by random walk. In an alloy with chemical interactions strong enough to cause a phase transformation, the vacancy frequently resides at energetically favorable locations, so any assumption of random walk can be seriously in error. When materials with different diffusivities are brought into contact, their interface is displaced with time because the fluxes of atoms across the interface are not equal in both directions. Even the meaning of the interface, or at least its position, requires new concepts. An applied field can bias the diffusion process towards a particular direction, and such a bias can also be created by chemical interactions between atoms. When thermal atom diffusion occurs in parallel with atom jumps forced without thermal activation, a steady state can be calculated, but it is not a state of thermodynamic equilibrium. Finally, the venerable statistical mechanics model of diffusion by Vineyard is described.
In this chapter we focus on the stress-energy bitensor and its symmetrized product, with two goals: (1) to present the point-separation regularization scheme, and (2) to use it to calculate the noise kernel, the correlation function of the stress-energy bitensor, and to explore its properties. In the first part we introduce the necessary properties and geometric tools for analyzing bitensors, geometric objects that have support at two separate spacetime points. The second part presents the point-separation method for regularizing the ultraviolet divergences of the stress-energy tensor for quantum fields in a general curved spacetime. In the third part we derive a formal expression for the noise kernel in terms of the higher-order covariant derivatives of the Green functions taken at two separate points. One simple yet important fact we show is that for a massless conformal field the trace of the noise kernel identically vanishes. In the fourth part we calculate the noise kernel for a conformal field in de Sitter space, both in the conformal Bunch–Davies vacuum and in the static Gibbons–Hawking vacuum. These results are useful for treating the backreaction and fluctuation effects of quantum fields.
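For orientation, the noise kernel referred to here is conventionally defined in the stochastic gravity literature (the chapter's notation may differ) as the symmetrized two-point function of the stress-energy fluctuations:

$$N_{abc'd'}(x,y)=\tfrac{1}{2}\bigl\langle \bigl\{\hat{T}_{ab}(x)-\langle \hat{T}_{ab}(x)\rangle ,\;\hat{T}_{c'd'}(y)-\langle \hat{T}_{c'd'}(y)\rangle \bigr\}\bigr\rangle .$$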
Zeta-function regularization is arguably the most elegant of the four major regularization methods used for quantum fields in curved spacetime, linked to the heat kernel and spectral theorems in mathematics. The only drawback is that it can only be applied to Riemannian spaces (also called Euclidean spaces), whose metrics have a ++++ signature, where the invariant operator is of the elliptic type, as opposed to the hyperbolic type in pseudo-Riemannian spaces (also called Lorentzian spaces) with a −+++ signature. Besides, the space needs to have sufficiently large symmetry that the spectrum of the invariant operator can be calculated explicitly in analytic form. In the first part we define the zeta function, showing how to calculate it in several representative spacetimes and how the zeta-function regularization scheme works. We relate it to the heat kernel and derive the effective Lagrangian from it via the Schwinger proper-time formalism. In the second part we show how to obtain the correlation function of the stress-energy bitensor, also known as the noise kernel, from the second metric variation of the effective action. The noise kernel plays as central a role in stochastic gravity as the expectation value of the stress-energy tensor does in semiclassical gravity.
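In outline, using standard definitions rather than this chapter's specific conventions: for an elliptic operator $\Delta$ with eigenvalues $\lambda_{n}$,

$$\zeta (s)=\sum_{n}\lambda_{n}^{-s}=\frac{1}{\Gamma (s)}\int_{0}^{\infty }dt\,t^{s-1}\,\operatorname{Tr}e^{-t\Delta },\qquad \ln \det \Delta =-\zeta '(0),$$

so the one-loop effective action $\tfrac{1}{2}\ln \det \Delta =-\tfrac{1}{2}\zeta '(0)$ becomes finite once $\zeta$ is analytically continued to $s=0$ (up to a term fixing the renormalization scale), with the middle integral supplying the stated link to the heat kernel.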
Subcutaneous fat thickness and fatty acid composition (FAC) play an important role in the seasoning loss and organoleptic characteristics of seasoned hams. The dry-cured ham industry prefers meats with low contents of polyunsaturated fatty acids (PUFA), because these negatively affect fat firmness and ham quality, whereas consumers want higher contents of those fatty acids (FA) for their positive effect on human health. A population of 950 Italian Large White pigs from the Italian National Sib Test Selection Programme was investigated with the aim of estimating heritabilities and genetic and phenotypic correlations of backfat FAC, Semimembranosus muscle intramuscular fat (IMF) content and other carcass traits. The pigs were reared under controlled environmental conditions at the same central testing station and were slaughtered on reaching 150 kg live weight. Backfat samples were collected to analyze FAC by gas chromatography. Carcass traits showed heritabilities from 0.087 for estimated carcass lean percentage to 0.361 for hot carcass weight. Heritability values of the FA classes were low to moderate, ranging from 0.245 for n-3 PUFA to 0.264 for monounsaturated FA (MUFA). Polyunsaturated fatty acids showed significant genetic correlations with loin thickness (0.128), backfat thickness (−0.124 for backfat measured by Fat-O-Meat’er and −0.175 for backfat measured by calibre) and IMF (−0.102). As expected, C18:2(n-6) shows similar genetic correlations with the same traits (0.211 with loin thickness, −0.206 with backfat measured by Fat-O-Meat’er, −0.291 with backfat measured by calibre and −0.171 with IMF). Monounsaturated FA, except with backfat measured by calibre (0.068; P<0.01), do not show genetic correlations with carcass characteristics, whereas a negative genetic correlation was found between MUFA and saturated FA (SFA; −0.339; P<0.001). These results suggest that the MUFA/SFA ratio could be increased without interfering with carcass traits. The level of genetic correlations between FA and carcass traits should be taken into account in developing selection schemes intended to modify carcass composition and/or backfat FAC.
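For reference, the heritabilities and genetic correlations reported above have the standard quantitative-genetics definitions

$$h^{2}=\frac{\sigma_{A}^{2}}{\sigma_{P}^{2}},\qquad r_{g}=\frac{\operatorname{Cov}_{A}(x,y)}{\sigma_{A}(x)\,\sigma_{A}(y)},$$

where $\sigma_{A}^{2}$ is the additive genetic variance, $\sigma_{P}^{2}$ the total phenotypic variance, and $\operatorname{Cov}_{A}(x,y)$ the additive genetic covariance between traits $x$ and $y$.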
where $f_{1},\ldots ,f_{m}$ are bounded ‘pretentious’ multiplicative functions, under certain natural hypotheses. We then deduce several desirable consequences. First, we characterize all multiplicative functions $f:\mathbb{N}\rightarrow \{-1,+1\}$ with bounded partial sums. This answers a question of Erdős from 1957 in the form conjectured by Tao. Second, we show that if the average of the first divided difference of a multiplicative function is zero, then either $f(n)=n^{s}$ for $\operatorname{Re}(s)<1$ or $|f(n)|$ is small on average. This settles an old conjecture of Kátai. Third, we apply our theorem to count the number of representations of $n=a+b$, where $a$, $b$ belong to some multiplicative subsets of $\mathbb{N}$. This gives a new ‘circle method-free’ proof of a result of Brüdern.
This paper seeks to establish good practice in setting inputs for operational risk models for banks, insurers and other financial service firms. It reviews Basel, Solvency II and other regulatory requirements as well as publicly available literature on operational risk modelling. It recommends a combination of historic loss data and scenario analysis for modelling of individual risks, setting out issues with these data, and outlining good practice for loss data collection and scenario analysis. It recommends the use of expert judgement for setting correlations, and addresses information requirements for risk mitigation allowances and capital allocation, before briefly covering Bayesian network methods for modelling operational risks.
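One common way to combine individually modelled risks under expert-judgement correlations (a sketch of a standard aggregation technique, not necessarily the paper's recommendation) is a Gaussian copula over the marginal loss distributions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000
rho = 0.3  # expert-judgement correlation between the two risks

# Correlated standard normals -> uniforms with Gaussian dependence
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)
u = stats.norm.cdf(z)

# Map uniforms through each risk's marginal annual-loss distribution
# (lognormal parameters here are purely illustrative)
loss1 = stats.lognorm.ppf(u[:, 0], s=1.0, scale=np.exp(12))
loss2 = stats.lognorm.ppf(u[:, 1], s=1.5, scale=np.exp(11))

total = loss1 + loss2
print(np.quantile(total, 0.999))  # 99.9% VaR of the aggregate loss
```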
The difficulties and costs of measuring individual feed intake in dairy cattle are the primary factors limiting the genetic study of feed intake and utilisation, and hence their potential industry-wide application. However, indirect selection based on heritable, easily measurable and genetically correlated traits, such as conformation traits, may be an alternative approach to improving feed efficiency. The aim of this study was to estimate genetic and phenotypic correlations of feed intake, production, and feed efficiency traits (particularly residual feed intake; RFI) with routinely recorded conformation traits. A total of 496 repeated records from 260 Holstein dairy cows in different lactations (260, 159 and 77 from first, second and third lactation, respectively) were considered in this study. Individual daily feed intake and monthly BW and body condition scores of these animals were recorded from 5 to 305 days in milk within each lactation from June 2007 to July 2013. Milk yield and composition data of all animals within each lactation were retrieved, and the first-lactation conformation traits for primiparous animals were extracted from databases. Individual RFI over 301 days was estimated by linear regression of total 301-day actual energy intake on total 301-day estimates of metabolic BW, milk production energy requirement, and empty BW change. Pairwise bivariate animal models were used to estimate genetic and phenotypic parameters among the studied traits. Estimated heritabilities of total intake and production traits ranged from 0.27±0.07 for lactation actual energy intake to 0.45±0.08 for average body condition score over the 301 days of the lactation period. RFI showed a moderate heritability estimate (0.20±0.03) and non-significant phenotypic and genetic correlations with lactation 3.5% fat-corrected milk and average BW over lactation. Among the conformation traits, dairy strength, stature, rear attachment width, chest width and pin width had significant (P<0.05) moderate to strong genetic correlations with RFI. Combinations of these conformation traits could be used as RFI indicators in dairy genetic improvement programmes to increase the accuracy of genetic evaluation of the feed intake and utilisation traits included in the index.
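A minimal sketch of the RFI computation the abstract describes (regressing total actual energy intake on metabolic BW, milk production energy requirement, and empty BW change, the residual being RFI), using ordinary least squares on hypothetical records:

```python
import numpy as np

def residual_feed_intake(energy_intake, metabolic_bw, milk_energy, ebw_change):
    """RFI = residual from OLS of actual energy intake on the energy sinks.
    All arguments are 1-D arrays with one entry per lactation record;
    negative RFI means more efficient than expected."""
    X = np.column_stack([np.ones_like(energy_intake),
                         metabolic_bw, milk_energy, ebw_change])
    beta, *_ = np.linalg.lstsq(X, energy_intake, rcond=None)
    return energy_intake - X @ beta

# Hypothetical records (units arbitrary, for illustration only)
rng = np.random.default_rng(7)
mbw = rng.normal(130, 10, 200)    # metabolic BW, ~BW**0.75
milk = rng.normal(95, 15, 200)    # milk production energy requirement
dbw = rng.normal(0, 5, 200)       # empty BW change
intake = 0.4 * mbw + 0.6 * milk + 0.3 * dbw + rng.normal(0, 4, 200)
rfi = residual_feed_intake(intake, mbw, milk, dbw)
```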