In Chapter 7, the author introduces both content analysis and basic statistical analysis to help evaluate the effectiveness of assessments. The focus of the chapter is on guidelines for creating and evaluating reading and listening inputs and selected response item types, particularly multiple-choice items that accompany these inputs. The author guides readers through detailed evaluations of reading passages and accompanying multiple-choice items that need major revisions. The author discusses generative artificial intelligence as an aid for drafting inputs and creating items and includes an appendix which guides readers through the use of ChatGPT for this purpose. The author also introduces test-level statistics, including minimum, maximum, range, mean, variance, standard deviation, skewness, and kurtosis. The author shows how to calculate these statistics for an actual grammar tense test and includes an appendix with detailed guidelines for conducting these analyses using Excel software.
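For readers who want a concrete sense of these test-level statistics, here is a minimal Python sketch (the chapter itself demonstrates the calculations in Excel); the scores and the population-moment formulas used are illustrative assumptions, not the book's own worked example.

```python
# Test-level statistics on a small set of invented test scores.
import math

scores = [12, 15, 18, 18, 20, 22, 23, 25, 27, 30]  # hypothetical scores

n = len(scores)
mean = sum(scores) / n
variance = sum((x - mean) ** 2 for x in scores) / n          # population form
sd = math.sqrt(variance)
skewness = sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)
kurtosis = sum((x - mean) ** 4 for x in scores) / (n * sd ** 4) - 3  # excess

print(min(scores), max(scores), max(scores) - min(scores))   # min, max, range
print(round(mean, 2), round(variance, 2), round(sd, 2))
print(round(skewness, 2), round(kurtosis, 2))
```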
This chapter considers the role of neuropsychology in the diagnostic process. It covers who can undertake a neuropsychological assessment, when to undertake an assessment, and some of the assumptions underlying neuropsychological assessment. Basic psychometrics are covered, on the premise that understanding a few basic concepts is sufficient for most practitioners, as more complex ideas are developed from these basics. This includes the normal distribution, different types of average, the standard deviation, and the correlation. Next, the relationship between different types of metrics is discussed, focusing on IQ/Index scores, T-scores, scaled scores, and percentiles.
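To make the relationship between these metrics concrete, here is a small Python sketch using the conventional scales (IQ/Index: mean 100, SD 15; T-score: mean 50, SD 10; scaled score: mean 10, SD 3); the raw score and norms below are hypothetical.

```python
# Converting a z-score between the standard score metrics discussed above.
from statistics import NormalDist

z = (55 - 50) / 10                  # z-score for a hypothetical raw score
iq = 100 + 15 * z                   # IQ/Index score
t = 50 + 10 * z                     # T-score
scaled = 10 + 3 * z                 # scaled score
percentile = NormalDist().cdf(z) * 100

print(f"z={z:.2f}  IQ={iq:.0f}  T={t:.0f}  scaled={scaled:.1f}  percentile={percentile:.0f}")
```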
Central tendency describes the typical value of a variable. Measures of central tendency by level of measurement are covered, including the mean, median, and mode. Appropriate use of each measure by level of measurement is the central theme of the chapter. The chapter shows how to find these measures of central tendency by hand and in the R Commander, with detailed instructions and steps. Skewed distributions and outliers are also covered, as is the relationship between the mean and median in these cases.
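As a rough illustration (in Python rather than the R Commander the chapter uses), the sketch below computes the three measures and shows how an outlier pulls the mean away from the median; the data are invented.

```python
# Mean, median, and mode, plus the effect of skew on the mean.
from statistics import mean, median, mode

symmetric = [2, 3, 3, 4, 5, 5, 6]
skewed = [2, 3, 3, 4, 5, 5, 60]      # one high outlier

print(mean(symmetric), median(symmetric), mode(symmetric))  # 4, 4, 3 (first of tied modes)
print(mean(skewed), median(skewed))  # mean is dragged toward the outlier; median is not
```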
Chapter 3 covers measures of location, spread and skewness and includes the following specific topics, among others: mode, median, mean, weighted mean, range, interquartile range, variance, standard deviation, and skewness.
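Two of the items on this list, the weighted mean and the interquartile range, are illustrated in the short Python sketch below; the values, weights, and quantile method are assumptions for illustration only.

```python
# Weighted mean and interquartile range on invented data.
from statistics import quantiles

values = [4.0, 7.0, 9.0]
weights = [1, 2, 1]                  # hypothetical weights
wmean = sum(w * x for w, x in zip(weights, values)) / sum(weights)
print(wmean)                         # (4 + 2*7 + 9) / 4 = 6.75

data = [1, 3, 5, 7, 9, 11, 13, 15]
q1, q2, q3 = quantiles(data, n=4)    # quartiles (default exclusive method)
print(q3 - q1)                       # interquartile range
```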
A review of basic probability theory – probability density, expectation, mean, variance/covariance, median, median absolute deviation, quantiles, skewness/kurtosis and correlation – is first given. Exploratory data analysis methods (histograms, quantile-quantile plots and boxplots) are then introduced. Finally, topics including Mahalanobis distance, Bayes theorem, classification, clustering and information theory are covered.
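Of the topics listed, the Mahalanobis distance is perhaps the least familiar; here is a brief NumPy sketch with invented data.

```python
# Mahalanobis distance of a query point from the centre of a small sample.
import numpy as np

X = np.array([[2.0, 1.0], [4.0, 3.0], [6.0, 5.0], [8.0, 4.0], [5.0, 2.0]])
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)        # sample covariance matrix

x = np.array([7.0, 2.0])             # query point
d = np.sqrt((x - mu) @ np.linalg.inv(cov) @ (x - mu))
print(d)                             # distance in covariance-scaled units
```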
This chapter reviews statistics and data-analysis tools. Starting from basic statistical concepts such as mean, variance, and the Gaussian distribution, we introduce the principal tools required for data analysis. We discuss both Bayesian and frequentist statistical approaches, with emphasis on the former. This leads us to describe how to calculate the goodness of fit of data to theory, and how to constrain the parameters of a model. Finally, we introduce and explain, both intuitively and mathematically, two important statistical tools: Markov chain Monte Carlo (MCMC) and the Fisher information matrix.
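As a flavour of the first of these tools, here is a bare-bones Metropolis sampler in Python (a sketch, not the book's code), drawing from a one-dimensional standard normal target.

```python
# Minimal Metropolis MCMC: sample an unnormalised N(0, 1) "posterior".
import math
import random

random.seed(0)

def log_post(x):                     # log of the unnormalised target density
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(10_000):
    prop = x + random.gauss(0.0, 1.0)          # symmetric Gaussian proposal
    if math.log(random.random()) < log_post(prop) - log_post(x):
        x = prop                               # accept the proposal
    chain.append(x)

print(sum(chain) / len(chain))       # should be near the true mean, 0
```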
This chapter reviews some essential concepts of probability and statistics, including: line plots, histograms, scatter plots, mean, median, quantiles, variance, random variables, probability density functions, expectation of a random variable, covariance and correlation, independence, the normal distribution (also known as the Gaussian distribution), and the chi-square distribution. These concepts provide the foundation for the statistical methods discussed in the rest of this book.
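Here is a brief Python sketch (it assumes Python 3.10+ for statistics.covariance and statistics.correlation) illustrating two of the listed concepts on simulated data: covariance and correlation of dependent variables, and the chi-square distribution as a sum of squared standard normals.

```python
# Covariance/correlation, and a simulated chi-square distribution.
import random
from statistics import covariance, correlation

random.seed(1)
x = [random.gauss(0, 1) for _ in range(1000)]
y = [xi + random.gauss(0, 0.5) for xi in x]    # y depends on x
print(covariance(x, y), correlation(x, y))

k = 3                                          # degrees of freedom
chi2 = [sum(random.gauss(0, 1) ** 2 for _ in range(k)) for _ in range(5000)]
print(sum(chi2) / len(chi2))                   # mean of chi-square(k) is k
```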
This chapter covers the analysis of static systems under probabilistic input uncertainty. The first part of the chapter is devoted to analyzing linear and nonlinear static systems when the first and second moments of the input vector are known, and it provides techniques for characterizing the first and second moments of the state vector. For the linear case, the techniques provide the exact moment characterization, whereas for the nonlinear case, the characterization, which is based on a linearization of the system model, is approximate. The second part of the chapter provides techniques for the analysis of both linear and nonlinear static systems when the pdf of the input vector is known. The techniques included provide exact characterizations of the state pdf for both linear and nonlinear systems. In both cases, the inversion of the input-to-state mapping is required: in the linear case this involves computing the inverse of a matrix, whereas in the nonlinear case it involves obtaining an analytical expression for the inverse of the input-to-state mapping. The chapter concludes by utilizing the techniques developed to study the power flow problem under active power injection uncertainty.
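The exact linear-case result can be stated in a few lines: if x = Au + b, then E[x] = A E[u] + b and Cov[x] = A Cov[u] Aᵀ. The NumPy sketch below illustrates this with an invented system matrix and input moments.

```python
# Exact propagation of first and second moments through a linear static system.
import numpy as np

A = np.array([[1.0, 0.5], [0.2, 2.0]])
b = np.array([1.0, -1.0])
mu_u = np.array([3.0, 4.0])                    # first moment of the input
Sigma_u = np.array([[0.5, 0.1], [0.1, 0.3]])   # input covariance

mu_x = A @ mu_u + b                            # exact mean of the state
Sigma_x = A @ Sigma_u @ A.T                    # exact covariance of the state
print(mu_x, Sigma_x, sep="\n")
```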
We consider a class of phase-type distributions (PH-distributions), to be called the MMPP class of PH-distributions, and find bounds on their mean and squared coefficient of variation (SCV). As an application, we show that the SCV of the event-stationary inter-event time for Markov modulated Poisson processes (MMPPs) is greater than or equal to unity, which answers an open problem for MMPPs. The results are useful for selecting proper PH-distributions and counting processes in stochastic modeling.
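As a small numeric illustration of the SCV (not the chapter's proof), the Python sketch below computes the SCV of an invented two-phase hyperexponential mixture, which exceeds unity, consistent with the bound stated above; a plain exponential has SCV exactly 1.

```python
# SCV = variance / mean^2 for a two-phase hyperexponential mixture:
# Exp(lam1) with probability p, Exp(lam2) with probability 1 - p.
p, lam1, lam2 = 0.3, 1.0, 5.0        # invented mixture parameters

mean = p / lam1 + (1 - p) / lam2
second = 2 * p / lam1**2 + 2 * (1 - p) / lam2**2   # E[X^2] of the mixture
scv = (second - mean**2) / mean**2
print(scv)                            # about 2.39, greater than 1
```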
This chapter explores the reception of classical ethical philosophy in the fourth-century Cappadocian Father, Gregory of Nazianzus, by focusing on the first of his five Theological Orations (Or. 27). An Athenian-trained rhetorician who became the most widely studied and imitated author in Byzantium, Gregory weaves together various strands from ancient ethical discourse in order to set out the moral and cultural prerequisites for performing theology. Gregory’s construction of the ideal theologian reflects late-antique discussions about the proper exegesis of texts, the moral character expected of teachers and students, and the policing of discourse. Finally, Gregory distinguishes the appropriate performance of theology from theology performed simpliciter through a set of qualifications that reflect a recognisably Aristotelian framework, one that can be traced back to the Nicomachean Ethics.
We prove four theorems characterizing the unweighted power means among all unweighted means. We then build a tool for converting characterization theorems for unweighted means into characterization theorems for weighted means. Using this tool, we deduce four theorems characterizing the weighted power means among all weighted means. The main new feature of the theorems proved in this chapter is that they do not assume continuity.
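For reference, the weighted power mean in question is M_p(w, x) = (Σ wᵢ xᵢ^p)^(1/p) for p ≠ 0, with the weighted geometric mean as the p → 0 limit; the short Python sketch below evaluates it for a few values of p on invented data.

```python
# Weighted power means, including the geometric-mean limit at p = 0.
import math

def power_mean(p, w, x):
    if p == 0:                        # limiting case: weighted geometric mean
        return math.exp(sum(wi * math.log(xi) for wi, xi in zip(w, x)))
    return sum(wi * xi ** p for wi, xi in zip(w, x)) ** (1 / p)

w = [0.2, 0.3, 0.5]                   # weights summing to 1
x = [1.0, 4.0, 9.0]
for p in (-1, 0, 1, 2):               # harmonic, geometric, arithmetic, quadratic
    print(p, power_mean(p, w, x))
```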
We give an overview of the whole book. We explain the problem of measuring diversity, summarizing the mathematical concepts with which it connects (including entropy and measures of size such as cardinality, volume and Euler characteristic). We indicate some of the branches of mathematics that will be involved (information theory, geometry, probability theory, abstract algebra) and the techniques that will be used (functional equations and a little category theory).
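As a one-line taste of the entropy-diversity connection, the exponential of Shannon entropy can be read as an effective number of species; the community proportions in this Python sketch are invented.

```python
# Exponential of Shannon entropy as an effective species count.
import math

p = [0.7, 0.1, 0.1, 0.05, 0.05]       # relative abundances, summing to 1
H = -sum(pi * math.log(pi) for pi in p)
print(math.exp(H))                     # about 2.7, between 1 and 5
```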
The global biodiversity crisis is one of humanity's most urgent problems, but even quantifying biological diversity is a difficult mathematical and conceptual challenge. This book brings new mathematical rigour to the ongoing debate. It was born of research in category theory, is given strength by information theory, and is fed by the ancient field of functional equations. It applies the power of the axiomatic method to a biological problem of pressing concern, but it also presents new theorems that stand up as mathematics in their own right, independently of any application. The question 'what is diversity?' has surprising mathematical depth, and this book covers a wide breadth of mathematics, from functional equations to geometric measure theory, from probability theory to number theory. Despite this range, the mathematical prerequisites are few: the main narrative thread of this book requires no more than an undergraduate course in analysis.
This chapter discusses two types of descriptive statistics: models of central tendency and models of variability. Models of central tendency describe the location of the middle of the distribution, and models of variability describe the degree to which scores are spread out from one another. Four models of central tendency are covered in this chapter. Listed in ascending order of the complexity of their calculations, these are the mode, median, mean, and trimmed mean. Four principal models of variability are also discussed: the range, interquartile range, standard deviation, and variance. For the latter two statistics, students are shown three possible formulas (sample standard deviation and variance; population standard deviation and variance; and population standard deviation and variance estimated from sample data), along with an explanation of when it is appropriate to use each formula. No statistical model of central tendency or variability tells you everything you may need to know about your data. Only by using multiple models in conjunction with each other can you have a thorough understanding of your data.
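The distinction between the variance formulas comes down to the divisor: N for a population, N - 1 when estimating from a sample. A short Python sketch with invented data (the trim level is likewise an assumption):

```python
# Population vs. sample variance/SD, plus a simple trimmed mean.
import statistics

data = [4, 8, 15, 16, 23, 42]

print(statistics.pvariance(data), statistics.pstdev(data))   # divide by N
print(statistics.variance(data), statistics.stdev(data))     # divide by N - 1

trim = sorted(data)[1:-1]             # one value trimmed from each tail
print(statistics.mean(trim))          # a simple trimmed mean
```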
This chapter focuses on Proclus’ use of a theological notion of harmony, which is designed to reveal the essence, intelligible relations, and causality of the soul by taking its harmonic structure as a starting point. The fact that the soul is made of specific means and proportions paves the way to the claim that the soul’s essence consists of a logos. This represents neither just an exegetical remark related to Plato’s divisio animae nor the mere use of an image: Proclus regards Plato’s account of the soul’s harmonic structure as a specific key to access theology. By analysing the harmonic component within Proclus’ iconic theology, a clear analysis of both the “theological” implications of Proclus’ study of the harmonic structure of the Platonic world-soul and of the metaphysical-theological function of the ambivalent notion of logos emerges.
We introduce the most commonly encountered data types and their properties. We describe the process of data sampling, focusing on the distinction between the sampled statistical population and the collected sample, stressing the need for a carefully designed sampling strategy. We introduce the sample statistics that form the core of data analysis, characterising both the position of values (arithmetic mean, median and others) and the spread of values (e.g. variance). The visualisation of individual variables by histograms and box-and-whiskers plots is introduced and later demonstrated with R code. We also briefly discuss the concept and properties of distribution and probability density functions, addressing discrete and continuous variables separately.
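The two plots described can be sketched as follows (the book itself demonstrates them with R code; this is a rough Python/matplotlib equivalent on simulated data):

```python
# Histogram and box-and-whiskers plot of a simulated sample.
import random
import matplotlib.pyplot as plt

random.seed(2)
sample = [random.gauss(10, 2) for _ in range(200)]

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.hist(sample, bins=15)             # histogram of the variable
ax2.boxplot(sample)                   # box-and-whiskers plot
plt.show()
```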
Pervez Ghauri, University of Birmingham; Kjell Grønhaug, Norwegian School of Economics and Business Administration, Bergen-Sandviken; Roger Strange, University of Sussex
The appropriate method of data analysis depends upon a variety of factors that have been specified in the research question and as part of the research design. One key issue is whether the data are qualitative or quantitative, and this depends upon the underlying research approach. If the research approach is deductive, then most of the data are likely to be expressed as numbers and the key issue will be selecting the appropriate statistical techniques for describing and analysing the data. In this chapter, we will concentrate on techniques for describing quantitative data and for providing simple preliminary analyses.
Errors in data are a part of life for experimenters in science and engineering. This chapter considers the types of error, including random and systematic errors, that can occur during an experiment, and the methods by which uncertainties arising from such errors can be combined. Many worked examples are included in this chapter, as well as exercises for the student to complete.
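One standard method for combining independent random errors is addition in quadrature; the short Python sketch below applies it to an invented product measurement q = x·y, for which the fractional uncertainties combine as (δq/q)² = (δx/x)² + (δy/y)².

```python
# Combining independent uncertainties in quadrature for q = x * y.
import math

x, dx = 9.8, 0.2                      # measured value and its uncertainty
y, dy = 1.5, 0.1

q = x * y
dq = q * math.sqrt((dx / x) ** 2 + (dy / y) ** 2)
print(f"q = {q:.2f} +/- {dq:.2f}")
```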