A paired composition is a response (upon a dependent variable) to the ordered pair $\langle j, k\rangle$ of stimuli, treatments, etc. The present paper develops an alternative analysis for the paired compositions layout previously treated by Bechtel's [1967] scaling model. The alternative model relaxes the previous one by including row and column scales that provide an expression of bias for each pair of objects. The parameter estimation and hypothesis testing procedures for this model are illustrated by means of a small group analysis, which represents a new approach to pairwise sociometrics and personality assessment.
The well-known Rasch model is generalized to a multicomponent model, so that observations of component events are not needed to apply the model. It is shown that the generalized model retains the specific objectivity property of the Rasch model. For a restricted variant of the model, maximum likelihood estimates of its parameters and a statistical test of the model are given. The results of an application to a mathematics test involving six components are described.
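For context, the dichotomous Rasch model being generalized here can be written, for person $v$ and item $i$ with ability $\theta_v$ and difficulty $\beta_i$, as
$$P(X_{vi}=1\mid\theta_v,\beta_i)=\frac{\exp(\theta_v-\beta_i)}{1+\exp(\theta_v-\beta_i)};$$
the multicomponent generalization, whose exact form the abstract does not reproduce, lets an item response depend on several latent component events.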
The G-DINA (generalized deterministic inputs, noisy “and” gate) model is a generalization of the DINA model with more relaxed assumptions. In its saturated form, the G-DINA model is equivalent to other general models for cognitive diagnosis based on alternative link functions. When appropriate constraints are applied, several commonly used cognitive diagnosis models (CDMs) can be shown to be special cases of the general models. In addition to model formulation, the G-DINA model as a general CDM framework includes a component for item-by-item model estimation based on design and weight matrices, and a component for item-by-item model comparison based on the Wald test. The paper illustrates the estimation and application of the G-DINA model as a framework using real and simulated data. It concludes by discussing several potential implications of and relevant issues concerning the proposed framework.
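For reference, the saturated identity-link form of the G-DINA model for an item $j$ measuring $K_j^{*}$ attributes can be written as
$$P\bigl(X_j=1\mid\boldsymbol{\alpha}_{lj}^{*}\bigr)=\delta_{j0}+\sum_{k=1}^{K_j^{*}}\delta_{jk}\alpha_{lk}+\sum_{k'=k+1}^{K_j^{*}}\sum_{k=1}^{K_j^{*}-1}\delta_{jkk'}\alpha_{lk}\alpha_{lk'}+\cdots+\delta_{j12\cdots K_j^{*}}\prod_{k=1}^{K_j^{*}}\alpha_{lk},$$
where $\delta_{j0}$ is the baseline probability and the remaining $\delta$ terms are main and interaction effects of the mastered attributes $\alpha_{lk}$; constraining these $\delta$ terms yields the reduced CDMs mentioned above.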
A general model is presented for homogeneous, dichotomous items when the answer key is not known a priori. The model is structurally related to the two-class latent structure model with the roles of respondents and items interchanged. For very small sets of respondents, iterative maximum likelihood estimates of the parameters can be obtained by existing methods. For other situations, new estimation methods are developed and assessed with Monte Carlo data. The answer key can be accurately reconstructed with relatively small sets of respondents. The model is useful when a researcher wants to study objectively the knowledge possessed by members of a culturally coherent group of which the researcher is not a member.
A method is presented to provide estimates of parameters of specified nonlinear equations from ordinal data generated from a crossed design. The analytic method, NOPE, is an iterative method in which monotone regression and the Gauss–Newton method of least squares are applied alternately until a measure of stress is minimized. Examples of solutions from artificial data are presented, together with examples of applications of the method to experimental results.
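The abstract does not spell out the NOPE iteration itself; the sketch below illustrates the general alternating scheme it describes (monotone regression, then a Gauss–Newton-type least-squares step, repeated while a normalized stress is tracked) on a toy two-parameter power model over a crossed design. The model, data and names are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.optimize import least_squares
from sklearn.isotonic import IsotonicRegression

# Crossed design: factor levels a x b, ordinal (rank-order) responses
# generated from a latent two-parameter power model (illustrative).
a, b = np.arange(1, 6, dtype=float), np.arange(1, 5, dtype=float)
A, B = np.meshgrid(a, b, indexing="ij")
ranks = np.argsort(np.argsort((A * B**0.5).ravel())).astype(float)

def model(theta):
    p, q = theta
    return (A**p * B**q).ravel()

theta = np.array([1.0, 1.0])
for _ in range(30):
    pred = model(theta)
    # Monotone regression: best fit to the predictions that is monotone in the ranks.
    target = IsotonicRegression().fit_transform(ranks, pred)
    # Gauss-Newton-type least-squares step refits the nonlinear model.
    theta = least_squares(lambda t: model(t) - target, theta).x
    stress = np.sqrt(((model(theta) - target) ** 2).sum() / (model(theta) ** 2).sum())

print("estimated exponents:", theta.round(3), "stress:", round(float(stress), 4))
```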
In this paper it is demonstrated how statistical inference from multistage test designs can be made based on the conditional likelihood. Special attention is given to parameter estimation, as well as the evaluation of model fit. Two reasons are provided why the fit of simple measurement models is expected to be better in adaptive designs, compared to linear designs: more parameters are available for the same number of observations; and undesirable response behavior, like slipping and guessing, might be avoided owing to a better match between item difficulty and examinee proficiency. The results are illustrated with simulated data, as well as with real data.
Taking a step-by-step approach to modelling neurons and neural circuitry, this textbook teaches students how to use computational techniques to understand the nervous system at all levels, using case studies throughout to illustrate fundamental principles. Starting with a simple model of a neuron, the authors gradually introduce neuronal morphology, synapses, ion channels and intracellular signalling. This fully updated new edition contains additional examples and case studies on specific modelling techniques, suggestions on different ways to use this book, and new chapters covering plasticity, modelling extracellular influences on brain circuits, modelling experimental measurement processes, and choosing appropriate model structures and their parameters. The online resources offer exercises and simulation code that recreate many of the book's figures, allowing students to practice as they learn. Requiring an elementary background in neuroscience and high-school mathematics, this is an ideal resource for a course on computational neuroscience.
Modelling a neural system involves selecting the mathematical form of the model's components, such as neurons, synapses and ion channels, and assigning values to the model's parameters. These choices may be guided by the known biology, by fitting a suitable function to data, or by computational simplicity. Only a few parameter values may be available from existing experimental measurements or computational models; it will then be necessary to estimate parameters from experimental data or through optimisation of model output. Here we outline the many mathematical techniques available. We discuss how to specify suitable criteria against which a model can be optimised. For many models, ranges of parameter values may provide equally good outcomes against performance criteria, so it is important to establish the sensitivity of the model to particular parameter values. Exploring the parameter space can lead to valuable insights into how particular model components contribute to particular patterns of neuronal activity.
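A minimal sketch of this workflow, assuming a passive single-compartment model whose step response is $V(t)=V_{\text{rest}}+RI(1-e^{-t/\tau})$; the data are synthetic and all names and values are illustrative, not from the chapter.

```python
import numpy as np
from scipy.optimize import curve_fit

# Passive response to a current step: V(t) = V_rest + R*I*(1 - exp(-t/tau)).
def step_response(t, R, tau, V_rest=-70.0, I=0.1):
    return V_rest + R * I * (1.0 - np.exp(-t / tau))

t = np.linspace(0.0, 100.0, 200)                                   # ms
rng = np.random.default_rng(1)
data = step_response(t, 100.0, 20.0) + rng.normal(0, 0.3, t.size)  # synthetic trace

# Least-squares estimates of R (MOhm) and tau (ms) from the trace.
(R_hat, tau_hat), _ = curve_fit(step_response, t, data, p0=[50.0, 10.0])
print(f"R = {R_hat:.1f} MOhm, tau = {tau_hat:.1f} ms")

# Parameter-space exploration: map the error surface around the optimum to
# see how sensitive the fit is to each parameter (flat directions mean
# several parameter combinations perform almost equally well).
Rs, taus = np.meshgrid(np.linspace(80, 120, 41), np.linspace(15, 25, 41))
sse = np.array([((step_response(t, R, tau) - data) ** 2).sum()
                for R, tau in zip(Rs.ravel(), taus.ravel())]).reshape(Rs.shape)
```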
Chapter 2 discusses methods of estimation for the parameters of the GLM, with a strong emphasis on ordinary least squares (OLS) estimation. OLS estimation minimizes the squared difference between observed and estimated values of the dependent variable, in units of this variable. A total of nine optimization criteria are discussed. The OLS solution is explained in detail.
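As a concrete sketch of the OLS criterion for the GLM $y=X\beta+\varepsilon$: the estimate $\hat{\beta}=(X^{\mathsf T}X)^{-1}X^{\mathsf T}y$ minimizes the residual sum of squares. The data below are simulated and the names illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # intercept + 2 predictors
beta = np.array([1.0, 2.0, -0.5])                           # "true" coefficients
y = X @ beta + rng.normal(scale=0.5, size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations
sse = ((y - X @ beta_hat) ** 2).sum()         # the quantity OLS minimizes
print("estimates:", beta_hat.round(3), "SSE:", round(float(sse), 2))
```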
In this work, a new adaptive digital predistorter (DPD) is proposed to linearize radio frequency power amplifiers (PAs). The DPD structure is composed of two sub-models: a Feedback–Wiener sub-model, describing the main inverse nonlinearities of the PA, combined with a second sub-model based on a memory polynomial (MP) model. The advantage of this structure is that only the MP model is identified in real time, to compensate for deviations from the initial behavior and thus further improve the linearization. The identification architecture combines offline measurement and online parameter estimation with a small number of coefficients in the MP sub-model to track changes in the PA characteristics. The proposed structure is used to linearize a class AB 75 W PA, designed by the Telerad company for aeronautical communications in the Ultra High Frequency (UHF) / Very High Frequency (VHF) bands. The obtained results, in terms of identification of the optimal DPD and the performance of the digital processing, show a good trade-off between linearization performance and computational complexity.
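The abstract does not reproduce the MP sub-model; a standard memory polynomial with nonlinearity order $K$ and memory depth $Q$ is
$$y(n)=\sum_{k=1}^{K}\sum_{q=0}^{Q}a_{kq}\,x(n-q)\,\bigl|x(n-q)\bigr|^{k-1},$$
which is linear in the coefficients $a_{kq}$, so the online identification described above reduces to a (recursive) least-squares problem.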
This chapter defines the COM–Poisson distribution in greater detail, discussing its associated attributes and the computing tools available for analysis. It first details how the COM–Poisson distribution was derived, then describes the probability distribution and introduces computing functions available in R that can be used to determine various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. The discussion continues with reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and details the various ways to approximate the COM–Poisson normalizing function.
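A minimal sketch of the quantities discussed here, using the pmf $P(X=x)=\lambda^{x}/\bigl((x!)^{\nu}Z(\lambda,\nu)\bigr)$ with the normalizing constant $Z(\lambda,\nu)=\sum_{j\ge 0}\lambda^{j}/(j!)^{\nu}$ truncated numerically; this is an illustrative stand-in for the R tools the chapter describes, not their implementation.

```python
import math

def com_poisson_pmf(x, lam, nu, terms=200):
    """P(X = x) for the COM-Poisson, with Z truncated after `terms` terms."""
    # Work in log space to avoid overflow in the factorial terms.
    log_t = [j * math.log(lam) - nu * math.lgamma(j + 1) for j in range(terms)]
    m = max(log_t)
    log_Z = m + math.log(sum(math.exp(t - m) for t in log_t))  # log-sum-exp
    return math.exp(x * math.log(lam) - nu * math.lgamma(x + 1) - log_Z)

# nu = 1 recovers the ordinary Poisson; nu < 1 is overdispersed and
# nu > 1 underdispersed relative to the Poisson.
print(com_poisson_pmf(2, lam=3.0, nu=1.0))  # ~= 3^2 e^{-3} / 2! = 0.224
```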
There is a growing interest in studying individual differences in choices that involve trading off reward amount and delay to delivery, because such choices have been linked to involvement in risky behaviors, such as substance abuse. The most ubiquitous proposal in psychology is to model these choices assuming delayed rewards lose value following a hyperbolic function, which has one free parameter, named the discounting rate. Consequently, a fundamental issue is the estimation of this parameter. The traditional approach estimates each individual's discounting rate separately, which discards individual differences during modeling and ignores the statistical structure of the population. The present work adopted a different approach to parameter estimation: each individual's discounting rate is estimated considering the information provided by all subjects, using state-of-the-art Bayesian inference techniques. Our goal was to evaluate whether individual discounting rates come from one or more subpopulations, using Mazur's (1987) hyperbolic function. Twelve hundred eighty-four subjects answered the Intertemporal Choice Task developed by Kirby, Petry and Bickel (1999). The modeling techniques employed permitted the identification of subjects who produced random, careless responses; these subjects were discarded from further analysis. Results showed that a one-mixture hierarchical distribution that uses the information provided by all subjects suffices to model individual differences in delay discounting, suggesting psychological variability resides along a continuum rather than in discrete clusters. This different approach to parameter estimation has the potential to contribute to the understanding and prediction of decision making in various real-world situations where immediacy is constantly in conflict with magnitude.
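For reference, Mazur's (1987) hyperbolic function gives the subjective value $V$ of a reward of amount $A$ delayed by $D$ as
$$V=\frac{A}{1+kD},$$
with the discounting rate $k$ as the single free parameter. In the hierarchical approach adopted here, the individual rates $k_i$ are modelled as draws from a common population distribution, for instance $\log k_i \sim \mathcal{N}(\mu,\sigma^{2})$; this particular parameterization is an illustrative assumption, not necessarily the authors'.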
In this article we consider the estimation of the log-normalization constant associated with a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where $t$ is the time horizon, $N$ is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on $n$, not on $N$ or $t$. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
Aerodynamic modelling is a challenging task, generally carried out using the results of computational fluid dynamics software and wind-tunnel analysis performed on either a scaled model or the prototype. To improve the confidence of the estimates, conventional parameter estimation methods such as the equation error method (EEM) and the output error method (OEM) are often applied to extract an aircraft's stability and control derivatives from its flight test data. The quality of the estimates is degraded by the measurement and process noise present in the flight test data. With advances in machine learning algorithms, data-driven methods have received more attention for modelling a system from input-output measurements and for identifying the system/model parameters. This article investigates the longitudinal stability and control derivatives of aerodynamic models using an integrated optimisation algorithm based on a recurrent neural network. Flight test data from the Hansa-3 and HFB 320 aircraft were used as case studies to assess the efficacy of the parameter estimation algorithm, and the confidence of the estimates was demonstrated in terms of standard deviations. Finally, the simulated variables obtained using the estimates demonstrate qualitative estimation in the presence of noise.
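A minimal sketch of the equation error method named above: once the aerodynamic coefficient has been reconstructed from measurements, the derivatives enter linearly and can be estimated by least squares, with standard deviations from the estimate covariance. The variable names and synthetic data are illustrative, not from the flight tests cited.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
alpha = rng.normal(0.05, 0.02, n)      # angle of attack, rad
q_norm = rng.normal(0.0, 0.01, n)      # normalized pitch rate, q*cbar/(2V)
delta_e = rng.normal(-0.02, 0.03, n)   # elevator deflection, rad

# "Measured" pitching-moment coefficient with noise added.
Cm0, Cma, Cmq, Cmde = 0.02, -0.8, -15.0, -1.2
Cm = Cm0 + Cma*alpha + Cmq*q_norm + Cmde*delta_e + rng.normal(0, 0.002, n)

# EEM reduces to linear least squares in the derivatives.
X = np.column_stack([np.ones(n), alpha, q_norm, delta_e])
theta, *_ = np.linalg.lstsq(X, Cm, rcond=None)
cov = np.var(Cm - X @ theta) * np.linalg.inv(X.T @ X)
print("derivatives:", theta.round(3))
print("standard deviations:", np.sqrt(np.diag(cov)).round(5))
```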
As discussed in Chapter 1, corpus representativeness depends on two sets of considerations: domain considerations and distribution considerations. Domain considerations focus on describing the arena of language use, and operationally specifying a set of texts that could potentially be included in the corpus. The linguistic research goal, which involves both a linguistic feature and a discourse domain of interest, forms the foundation of corpus representativeness. Representativeness cannot be designed for or evaluated outside of the context of a specific linguistic research goal. Linguistic parameter estimation is the use of corpus-based data to approximate quantitative information about linguistic distributions in the domain. Domain considerations focus on what should be included in a corpus, based on qualitative characteristics of the domain. Distribution considerations focus on how many texts should be included in a corpus, relative to the variation of the linguistic features of interest. Corpus representativeness is not a dichotomy (representative or not representative), but rather is a continuous construct. A corpus may be representative to a certain extent, in particular ways, and for particular purposes.
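As a small illustration of linguistic parameter estimation in this sense, corpus counts can be used to estimate a feature's rate in the domain, normalized per million words, with a rough Poisson-based interval. All counts below are invented.

```python
import math

# (feature count, word count) per text; illustrative numbers only.
texts = [(12, 2_100), (7, 1_850), (31, 4_400)]
count = sum(c for c, _ in texts)
words = sum(w for _, w in texts)

rate_pmw = 1e6 * count / words                # rate per million words
se_pmw = 1e6 * math.sqrt(count) / words       # Poisson-approximation SE
print(f"{rate_pmw:.0f} per million words (+/- {1.96 * se_pmw:.0f})")
```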
The kappa distribution has been applied to study the frequency of hydrological events. This chapter discusses the kappa distribution and its parameter estimation using the methods of entropy, maximum likelihood, and moments.
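The abstract leaves the form unstated; one common four-parameter kappa, with location $\xi$, scale $\alpha$ and shape parameters $\kappa$ and $h$, has distribution and quantile functions
$$F(x)=\left\{1-h\left[1-\frac{\kappa(x-\xi)}{\alpha}\right]^{1/\kappa}\right\}^{1/h},\qquad x(F)=\xi+\frac{\alpha}{\kappa}\left[1-\left(\frac{1-F^{h}}{h}\right)^{\kappa}\right],$$
and it contains the generalized logistic, generalized extreme value and generalized Pareto distributions as special cases in $h$.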
This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework in which material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to capture the inherent coupon-to-coupon variability of the material, as well as uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model; however, a vanilla computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, reducing the dimension of the inference problem to the hyperparameters alone: the parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline, prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. Our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested on two examples: the first is a compression test of a simple spring model using synthetic data; the second is a more complex example using real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
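In symbols, one standard way to write the hierarchy described above, for experiments $i=1,\dots,n$ with data $y_i$, per-coupon parameters $\theta_i$ and hyperparameters $\phi$ (the article's own notation may differ), is
$$y_i \mid \theta_i \sim p(y_i \mid \theta_i), \qquad \theta_i \mid \phi \sim p(\theta \mid \phi), \qquad \phi \sim p(\phi),$$
and the marginalization step targets the hyperparameter posterior through
$$p(\phi \mid y_{1:n}) \propto p(\phi)\prod_{i=1}^{n}\int p(y_i \mid \theta_i)\, p(\theta_i \mid \phi)\, d\theta_i,$$
with each integral replaced by the emulator-plus-Bayesian-quadrature approximation of the likelihood.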
For robot manipulators, there are two types of disturbances: model parametric uncertainty, and unmodelled effects such as joint friction forces and external disturbances. Unmodelled joint frictions and external disturbances reduce performance in terms of positioning accuracy and repeatability. To compensate for the unmodelled effects, the design of a new controller is considered. First, the modelled and unmodelled parameters are included in a dynamic model. Then, based on the dynamic model, a new Lyapunov function is developed. After that, new nonlinear joint friction and external disturbance estimation laws are derived as an analytic solution from the Lyapunov function, so the stability of the closed-loop system is guaranteed. Better values of the adaptive dynamic compensators can be extracted by fuzzy rules according to the tracking error. Bounds on, or prior knowledge of, the friction and external disturbances are not required for the design of the controller. The controller compensates for all possible model parameter uncertainties, unknown joint frictions and external disturbances.
This paper proposes a procedure to improve the accuracy of a light aircraft 6-DOF simulation model by implementing model tuning and aerodynamic database correction using flight test data. In this study, full-scale flight testing of a two-seater aircraft was performed in specific longitudinal manoeuvres for model enhancement and simulation validation purposes. The baseline simulation model database is constructed using multi-fidelity analysis methods such as wind tunnel (W/T) tests, computational fluid dynamics (CFD) and empirical calculation. The enhancement process starts with identifying the longitudinal equations of motion for sensitivity analysis, where the effect of crucial parameters is analysed and then adjusted using the model tuning technique. Next, the classical maximum likelihood (ML) estimation method is applied to calculate aerodynamic derivatives from flight test data; these parameters are used to correct the initial aerodynamic table. A simulation validation process is introduced to evaluate the accuracy of the enhanced 6-DOF simulation model. The presented results demonstrate that the applied enhancement procedure improved the simulation accuracy in longitudinal motion. The discrepancy between the simulation and flight test responses showed significant improvement, satisfying the regulatory tolerance.
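The abstract does not reproduce the ML criterion; in the standard output-error formulation, with measured responses $z(t_k)$, model outputs $\hat{y}(t_k)$ and residual covariance $R$, the derivative estimates $\hat\Theta$ minimize (up to constants)
$$J(\Theta)=\frac{1}{2}\sum_{k=1}^{N}\bigl[z(t_k)-\hat{y}(t_k)\bigr]^{\mathsf T}R^{-1}\bigl[z(t_k)-\hat{y}(t_k)\bigr]+\frac{N}{2}\ln\det R.$$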
Flight delays may be decreased in a predictable way if the Weibull wind speed parameters of a runway, which are an important aspect of safety during the take-off and landing phases of aircraft, can be determined. One aim of this work is to determine the wind profile of Hasan Polatkan Airport (HPA) as a case study. Numerical methods for Weibull parameter determination perform better when average wind speed estimation is the main objective. In this paper, a novel objective function that minimises the root-mean-square error of the cumulative distribution function is proposed, based on the genetic algorithm and particle swarm optimisation. The results are compared with well-known numerical methods, such as maximum likelihood estimation, the empirical method, the graphical method and the equivalent energy method, as well as the available objective function. Various statistical tests from the literature are applied, such as $R^2$, the Root-Mean-Square Error (RMSE) and $\chi^2$. In addition, the Mean Absolute Error (MAE) and the total elapsed time of the algorithms are compared. According to the results of the statistical tests, the proposed methods outperform the others, achieving scores as high as 0.9789 and 0.9996 for the $R^2$ test, as low as 0.0058 and 0.0057 for the RMSE test, 0.0036 and 0.0045 for the MAE test, and $3.53\times10^{-5}$ and $3.50\times10^{-5}$ for the $\chi^2$ test. In addition, the determination of the wind speed characteristics at HPA shows that the low wind speeds and regimes throughout the year offer safer take-off and landing schedules for the target aircraft. The principal aim of this paper is to help establish the correct orientation of new runways at HPA and maximise the capacity of the airport by minimising flight delays, which are a significant impediment to air traffic flow.
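A minimal sketch of the objective described above: minimize the RMSE between the empirical CDF of wind-speed data and the Weibull CDF $F(v)=1-\exp\{-(v/c)^{k}\}$. The data here are synthetic, and scipy's differential evolution stands in for the GA/PSO used in the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import weibull_min

# Synthetic wind speeds (m/s) and their empirical CDF.
v = np.sort(weibull_min.rvs(2.0, scale=6.0, size=500, random_state=0))
ecdf = (np.arange(1, v.size + 1) - 0.5) / v.size

def rmse_cdf(params):
    k, c = params
    return np.sqrt(np.mean((1.0 - np.exp(-(v / c) ** k) - ecdf) ** 2))

# Evolutionary search over shape k and scale c, minimizing the CDF RMSE.
res = differential_evolution(rmse_cdf, bounds=[(0.5, 10.0), (0.5, 20.0)], seed=1)
k_hat, c_hat = res.x
print(f"k = {k_hat:.3f}, c = {c_hat:.3f} m/s, RMSE = {res.fun:.4f}")
```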