Taking a step-by-step approach to modelling neurons and neural circuitry, this textbook teaches students how to use computational techniques to understand the nervous system at all levels, using case studies throughout to illustrate fundamental principles. Starting with a simple model of a neuron, the authors gradually introduce neuronal morphology, synapses, ion channels and intracellular signalling. This fully updated new edition contains additional examples and case studies on specific modelling techniques, suggestions on different ways to use this book, and new chapters covering plasticity, modelling extracellular influences on brain circuits, modelling experimental measurement processes, and choosing appropriate model structures and their parameters. The online resources offer exercises and simulation code that recreate many of the book's figures, allowing students to practice as they learn. Requiring an elementary background in neuroscience and high-school mathematics, this is an ideal resource for a course on computational neuroscience.
Modelling a neural system involves selecting the mathematical form of the model’s components, such as neurons, synapses and ion channels, and assigning values to the model’s parameters. These choices may be guided by the known biology, by fitting a suitable function to data, or by computational simplicity. Only a few parameter values may be available from existing experimental measurements or computational models; it will then be necessary to estimate parameters from experimental data or through optimisation of model output. Here we outline the many mathematical techniques available. We discuss how to specify suitable criteria against which a model can be optimised. For many models, ranges of parameter values may provide equally good outcomes against performance criteria. Exploring the parameter space can lead to valuable insights into how particular model components contribute to particular patterns of neuronal activity, and it is important to establish the sensitivity of the model to particular parameter values.
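Purely as an illustration of the "fitting a suitable function to data" step, the following sketch is not taken from the chapter: the decay model, parameter values and noise level are all invented. It estimates a passive membrane time constant by nonlinear least squares.

```python
# A minimal sketch, assuming a toy passive-membrane model: estimate the
# membrane time constant tau by fitting V(t) = v0 * exp(-t / tau) to a
# noisy synthetic voltage trace.
import numpy as np
from scipy.optimize import curve_fit

def voltage_decay(t, v0, tau):
    """Passive decay of membrane potential towards rest (mV, ms)."""
    return v0 * np.exp(-t / tau)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 200)                              # ms
v = voltage_decay(t, 10.0, 20.0) + rng.normal(0.0, 0.2, t.size)

(v0_hat, tau_hat), cov = curve_fit(voltage_decay, t, v, p0=(5.0, 10.0))
print(f"tau = {tau_hat:.1f} ms (sd {np.sqrt(cov[1, 1]):.2f})")
```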
Chapter 2 discusses methods of estimation for the parameters of the GLM, with a strong emphasis on ordinary least squares (OLS) estimation. OLS estimation minimizes the squared difference between observed and estimated values of the dependent variable, in units of this variable. A total of nine optimization criteria are discussed. The OLS solution is explained in detail.
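For readers who want to see the criterion in action, here is a hedged sketch of OLS on simulated data (the design matrix and coefficients are invented, not the book's example); it minimizes the squared difference between observed and fitted values of the dependent variable.

```python
# A minimal OLS sketch on synthetic data: beta_hat minimizes
# ||y - X @ beta||^2, solved here with a stable least-squares routine.
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 regressors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(0.0, 0.3, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS estimates:", beta_hat.round(3))                   # close to beta_true
```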
In this work, a new adaptive digital predistorter (DPD) is proposed to linearize radio frequency power amplifiers (PAs). The DPD structure is composed of two sub-models: a feedback–Wiener sub-model, describing the main inverse nonlinearities of the PA, combined with a second sub-model based on a memory polynomial (MP) model. The advantage of this structure is that only the MP model is identified in real time to compensate for deviations from the initial behavior and thus further improve the linearization. The identification architecture combines offline measurement and online parameter estimation, with a small number of coefficients in the MP sub-model to track changes in the PA characteristics. The proposed structure is used to linearize a class AB 75 W PA, designed by Telerad for aeronautical communications in the Ultra High Frequency (UHF) / Very High Frequency (VHF) bands. The results obtained, in terms of identification of the optimal DPD and the performance of the digital processing, show a good trade-off between linearization performance and computational complexity.
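As an illustration of the MP part only (the feedback–Wiener sub-model and the Telerad measurements are beyond a sketch), the hedged example below builds the standard memory polynomial basis and identifies its coefficients by least squares on a toy PA model of my own; in an indirect-learning DPD the same regression is run with the roles of input and output swapped.

```python
# A minimal sketch of memory polynomial (MP) identification on a toy PA:
# y(n) = sum_{k,m} a_{k,m} x(n-m) |x(n-m)|^{k-1}, coefficients via LS.
import numpy as np

def mp_basis(x, K, M):
    """Regression matrix of the MP model for complex baseband input x."""
    n, cols = len(x), []
    for m in range(M):                                   # memory depth
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:n - m]])
        for k in range(1, K + 1):                        # nonlinearity order
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
y = x + 0.1 * x * np.abs(x) ** 2                 # toy memoryless cubic PA

a, *_ = np.linalg.lstsq(mp_basis(x, K=3, M=2), y, rcond=None)
print("identified MP coefficients:", a.round(3))
```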
This chapter defines the COM–Poisson distribution in greater detail, discussing its associated attributes and the computing tools available for analysis. The chapter first details how the COM–Poisson distribution was derived, then describes the probability distribution and introduces computing functions available in R that can be used to determine various probabilistic quantities of interest, including the normalizing constant, probability and cumulative distribution functions, random number generation, mean, and variance. The chapter then outlines the distributional and statistical properties associated with this model, and discusses parameter estimation and statistical inference for the COM–Poisson model. Various processes for generating random data are then discussed, along with the associated R computing tools. Further discussion provides reparametrizations of the density function that serve as alternative forms for statistical analyses and model development, considers the COM–Poisson as a weighted Poisson distribution, and describes the various ways to approximate the COM–Poisson normalizing function.
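The chapter's computing tools are in R; as a language-neutral illustration of what the normalizing constant involves, here is a hedged Python sketch (truncation rule and examples are my own) of the COM–Poisson pmf $P(X = x) = \lambda^x / (x!)^\nu / Z(\lambda, \nu)$, with $Z$ summed until its terms vanish.

```python
# A minimal sketch: COM-Poisson pmf with the normalizing constant
# Z(lam, nu) = sum_j lam^j / (j!)^nu truncated once terms are negligible
# (adequate for moderate lam; large lam needs more care).
import math

def com_poisson_pmf(x, lam, nu, tol=1e-12, max_terms=200):
    term = lambda j: lam ** j / math.factorial(j) ** nu
    Z, j = 0.0, 0
    while j < max_terms:
        t = term(j)
        Z += t
        if t < tol and j > lam:        # terms decay beyond the mode
            break
        j += 1
    return term(x) / Z

print(com_poisson_pmf(2, lam=3.0, nu=1.0))   # nu = 1 recovers Poisson(3)
print(com_poisson_pmf(2, lam=3.0, nu=1.5))   # nu > 1: underdispersion
```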
There is growing interest in studying individual differences in choices that involve trading off reward amount and delay to delivery, because such choices have been linked to involvement in risky behaviors such as substance abuse. The most common approach in psychology models these choices by assuming that delayed rewards lose value following a hyperbolic function, which has one free parameter, the discounting rate. A fundamental issue is therefore the estimation of this parameter. The traditional approach estimates each individual's discounting rate separately, which discards individual differences during modeling and ignores the statistical structure of the population. The present work adopted a different approach to parameter estimation: each individual's discounting rate is estimated using the information provided by all subjects, via state-of-the-art Bayesian inference techniques. Our goal was to evaluate whether individual discounting rates come from one or more subpopulations, using Mazur's (1987) hyperbolic function. A total of 1,284 subjects answered the Intertemporal Choice Task developed by Kirby, Petry and Bickel (1999). The modeling techniques employed permitted the identification of subjects who produced random, careless responses; these subjects were discarded from further analysis. Results showed that a one-mixture hierarchical distribution that uses the information provided by all subjects suffices to model individual differences in delay discounting, suggesting that psychological variability resides along a continuum rather than in discrete clusters. This approach to parameter estimation has the potential to contribute to the understanding and prediction of decision making in various real-world situations where immediacy is constantly in conflict with magnitude.
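To make the model concrete, here is a hedged sketch (toy choices, a softmax choice rule of my own choosing, and a per-individual grid search rather than the article's hierarchical Bayesian machinery) of estimating the discounting rate $k$ in Mazur's hyperbolic function $V = A / (1 + kD)$.

```python
# A minimal sketch: Mazur's hyperbolic discounting V = A / (1 + k * D)
# and a grid-search maximum-likelihood estimate of one subject's k.
import numpy as np

def loglik(k, amount_now, amount_later, delay, chose_later, beta=1.0):
    """Softmax (logistic) choice rule over discounted-value differences."""
    dv = amount_later / (1.0 + k * delay) - amount_now
    p_later = 1.0 / (1.0 + np.exp(-beta * dv))
    p = np.where(chose_later, p_later, 1.0 - p_later)
    return np.log(np.clip(p, 1e-12, 1.0)).sum()

# Toy data: $20 now versus $50 after D days.
delay = np.array([1, 7, 30, 90, 180])
chose_later = np.array([True, True, True, False, False])
ks = np.logspace(-3, 0, 200)
ll = [loglik(k, 20.0, 50.0, delay, chose_later) for k in ks]
print(f"k_hat = {ks[np.argmax(ll)]:.4f}")
```

In the hierarchical version the article describes, the individual $k$ values would in turn be modelled as draws from one or more population-level distributions rather than estimated in isolation.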
In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where $t$ is the time horizon, $N$ is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on $n$, not on $N$ or $t$. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
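The continuous-time analysis is beyond a code sketch, but the following hedged discrete-time analogue (a scalar linear-Gaussian model of my own construction, not the paper's Kalman–Bucy diffusions) shows how an ensemble Kalman filter produces an empirical log-normalization-constant estimate from its innovation statistics, with $N$ playing the role of the ensemble size.

```python
# A minimal discrete-time sketch: stochastic ensemble Kalman filter on a
# scalar AR(1) model, accumulating a log-normalization (log-likelihood)
# estimate from Gaussian innovation moments of the ensemble.
import numpy as np

rng = np.random.default_rng(3)
a, sig_x, sig_y, T, N = 0.9, 0.5, 1.0, 100, 200

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + sig_x * rng.normal()       # hidden state
y = x + sig_y * rng.normal(size=T)                   # observations

ens, log_z = rng.normal(size=N), 0.0
for t in range(T):
    ens = a * ens + sig_x * rng.normal(size=N)       # forecast ensemble
    m, v = ens.mean(), ens.var()
    s = v + sig_y ** 2                               # innovation variance
    log_z += -0.5 * (np.log(2 * np.pi * s) + (y[t] - m) ** 2 / s)
    # analysis step with perturbed observations
    ens += (v / s) * (y[t] + sig_y * rng.normal(size=N) - ens)
print(f"log-normalization estimate: {log_z:.1f}")
```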
Aerodynamic modelling is a challenging task, generally carried out using results from computational fluid dynamics software and wind tunnel analysis performed on either a scaled model or the prototype. To improve the confidence of the estimates, conventional parameter estimation methods such as the equation error method (EEM) and the output error method (OEM) are often applied to extract an aircraft's stability and control derivatives from its flight test data. The quality of the estimates is degraded by the measurement and process noise present in the flight test data. With advances in machine learning algorithms, data-driven methods have received more attention for modelling a system from input-output measurements and for identifying system/model parameters. This research article investigates the longitudinal stability and control derivatives of aerodynamic models using an integrated optimisation algorithm based on a recurrent neural network. Flight test data from the Hansa-3 and HFB 320 aircraft were used as case studies to assess the efficacy of the parameter estimation algorithm, and the confidence of the estimates is demonstrated in terms of standard deviations. Finally, the variables simulated using the estimates demonstrate qualitatively accurate estimation in the presence of noise.
As discussed in Chapter 1, corpus representativeness depends on two sets of considerations: domain considerations and distribution considerations. Domain considerations focus on describing the arena of language use, and operationally specifying a set of texts that could potentially be included in the corpus. The linguistic research goal, which involves both a linguistic feature and a discourse domain of interest, forms the foundation of corpus representativeness. Representativeness cannot be designed for or evaluated outside of the context of a specific linguistic research goal. Linguistic parameter estimation is the use of corpus-based data to approximate quantitative information about linguistic distributions in the domain. Domain considerations focus on what should be included in a corpus, based on qualitative characteristics of the domain. Distribution considerations focus on how many texts should be included in a corpus, relative to the variation of the linguistic features of interest. Corpus representativeness is not a dichotomy (representative or not representative), but rather is a continuous construct. A corpus may be representative to a certain extent, in particular ways, and for particular purposes.
The kappa distribution has been applied to study the frequency of hydrological events. This chapter discusses the kappa distribution and its parameter estimation using the methods of entropy, maximum likelihood, and moments.
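As one hedged illustration of the maximum likelihood route (synthetic data; the entropy- and moment-based estimators the chapter covers are not shown), SciPy ships a four-parameter kappa distribution that can be fitted directly:

```python
# A minimal sketch: maximum likelihood fit of the four-parameter kappa
# distribution (shape parameters h and k) using scipy.stats.kappa4.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = stats.kappa4.rvs(0.5, 0.2, loc=10.0, scale=2.0,
                        size=500, random_state=rng)

h_hat, k_hat, loc_hat, scale_hat = stats.kappa4.fit(data)
print(f"h={h_hat:.2f}  k={k_hat:.2f}  loc={loc_hat:.2f}  scale={scale_hat:.2f}")
```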
This article recasts the traditional challenge of calibrating a material constitutive model into a hierarchical probabilistic framework. We consider a Bayesian framework in which material parameters are assigned distributions, which are then updated given experimental data. Importantly, in a true engineering setting, we are not interested in inferring the parameters for a single experiment, but rather in inferring the model parameters over the population of possible experimental samples. In doing so, we seek to capture both the inherent coupon-to-coupon variability of the material and uncertainties around the repeatability of the test. We address this problem using a hierarchical Bayesian model. However, a naive computational approach is prohibitively expensive. Our strategy marginalizes over each individual experiment, reducing the dimension of the inference problem to only the hyperparameters: those parameters describing the population statistics of the material model. This marginalization step requires us to derive an approximate likelihood, for which we exploit an emulator (built offline, prior to sampling) and Bayesian quadrature, allowing us to capture the uncertainty in this numerical approximation. Our approach renders hierarchical Bayesian calibration of material models computationally feasible. The approach is tested on two examples: the first is a compression test of a simple spring model using synthetic data; the second, a more complex example, uses real experimental data to fit a stochastic elastoplastic model for 3D-printed steel.
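To show the marginalization idea in miniature, here is a hedged sketch (a toy spring-stiffness population with conjugate Gaussian structure, nothing like the paper's emulator-plus-Bayesian-quadrature likelihood): each coupon's parameter is integrated out analytically, leaving a two-dimensional posterior over the hyperparameters that a plain Metropolis sampler can explore.

```python
# A minimal hierarchical-Bayes sketch: coupon stiffness k_i ~ N(mu, tau^2),
# one noisy measurement per coupon; marginalizing k_i gives
# y_i ~ N(mu, tau^2 + noise^2), so we sample only the hyperparameters.
import numpy as np

rng = np.random.default_rng(5)
noise = 2.0
k_i = rng.normal(100.0, 5.0, size=20)            # coupon-to-coupon variability
y = k_i + rng.normal(0.0, noise, size=20)        # repeatability error

def log_post(mu, tau):                           # flat priors assumed
    if tau <= 0.0:
        return -np.inf
    var = tau ** 2 + noise ** 2
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

theta, lp, samples = np.array([90.0, 1.0]), -np.inf, []
for _ in range(20000):                           # random-walk Metropolis
    prop = theta + rng.normal(0.0, [0.5, 0.3])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
post = np.array(samples[5000:])                  # discard burn-in
print("posterior mean (mu, tau):", post.mean(axis=0).round(2))
```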
For robot manipulators, there are two types of disturbances: model parametric uncertainty, and unmodelled effects such as joint friction forces and external disturbances. Unmodelled joint frictions and external disturbances reduce performance in terms of positioning accuracy and repeatability. To compensate for these unmodelled effects, the design of a new controller is considered. First, the modelled and unmodelled terms are included in a dynamic model. Then, based on this dynamic model, a new Lyapunov function is developed. From the Lyapunov function, new nonlinear joint friction and external disturbance estimation laws are derived as an analytic solution, so the stability of the closed-loop system is guaranteed. Improved values of the adaptive dynamic compensators can be extracted by fuzzy rules according to the tracking error. Neither bounds on, nor prior knowledge of, the friction and external disturbances are required for the design of the controller. The controller compensates for all possible model parameter uncertainties, unknown joint frictions and external disturbances.
This paper proposes a procedure to improve the accuracy of a light aircraft 6 DOF simulation model by implementing model tuning and aerodynamic database correction using flight test data. In this study, full-scale flight testing of a 2-seater aircraft was performed in specific longitudinal manoeuvres for model enhancement and simulation validation purposes. The baseline simulation model database is constructed using multi-fidelity analysis methods such as wind tunnel (W/T) testing, computational fluid dynamics (CFD) and empirical calculation. The enhancement process starts with identifying the longitudinal equations of motion for sensitivity analysis, where the effect of crucial parameters is analysed and then adjusted using the model tuning technique. Next, the classical Maximum Likelihood (ML) estimation method is applied to calculate aerodynamic derivatives from flight test data; these parameters are then used to correct the initial aerodynamic table. A simulation validation process is introduced to evaluate the accuracy of the enhanced 6 DOF simulation model. The presented results demonstrate that the applied enhancement procedure has improved the simulation accuracy in longitudinal motion. The discrepancy between the simulation and flight test response showed significant improvement, satisfying the regulatory tolerance.
Flight delays may be decreased in a predictable way if the Weibull wind speed parameters of a runway, which are an important aspect of safety during the take-off and landing phases of aircraft, can be determined. One aim of this work is to determine the wind profile of Hasan Polatkan Airport (HPA) as a case study. Numerical methods for Weibull parameter determination perform better when the average wind speed estimation is the main objective. In this paper, a novel objective function that minimises the root-mean-square error of the cumulative distribution function is proposed, based on the genetic algorithm and particle swarm optimisation. The results are compared with well-known numerical methods, such as maximum-likelihood estimation, the empirical method, the graphical method and the equivalent energy method, as well as the available objective function. Various statistical tests from the literature are applied, such as $R^2$, Root-Mean-Square Error (RMSE) and $\chi^2$. In addition, the Mean Absolute Error (MAE) and the total elapsed time of the algorithms are compared. According to the results of the statistical tests, the proposed methods outperform the others, achieving scores as high as 0.9789 and 0.9996 for the $R^2$ test, as low as 0.0058 and 0.0057 for the RMSE test, 0.0036 and 0.0045 for the MAE test and $3.53 \times 10^{-5}$ and $3.50 \times 10^{-5}$ for the $\chi^2$ test. In addition, the determination of the wind speed characteristics at HPA shows that the low wind speeds and regimes throughout the year offer safer take-off and landing schedules for target aircraft. The principal aim of this paper is to help establish the correct orientation of new runways at HPA and maximise the capacity of the airport by minimising flight delays, which represent a significant impediment to air traffic flow.
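A hedged sketch of the CDF-based objective follows (synthetic wind speeds; SciPy's differential evolution stands in for the paper's genetic algorithm and particle swarm optimisers):

```python
# A minimal sketch: fit Weibull shape k and scale c by minimising the RMSE
# between the empirical CDF and the Weibull CDF 1 - exp(-(v/c)^k).
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
wind = 6.0 * rng.weibull(2.0, size=1000)           # synthetic speeds, m/s

v = np.sort(wind)
ecdf = np.arange(1, v.size + 1) / (v.size + 1)     # plotting-position ECDF

def rmse(params):
    k, c = params
    return np.sqrt(np.mean((1.0 - np.exp(-(v / c) ** k) - ecdf) ** 2))

res = differential_evolution(rmse, bounds=[(0.5, 10.0), (0.5, 20.0)], seed=7)
k_hat, c_hat = res.x
print(f"k = {k_hat:.3f}, c = {c_hat:.3f} m/s, RMSE = {res.fun:.5f}")
```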
The prediction of post-diapause emergence is the first step towards a comprehensive decision support system that can contribute to a considerable reduction in pesticide use by forecasting a precise spraying date. The cumulative field emergence can be described as a function of the cumulative development rate. We investigated the impact of seven constant temperatures and five light regimes on post-diapause development in laboratory experiments. Development rate depended significantly on temperature but not on photoperiod. We therefore fit nonlinear thermal performance curves, an improvement over the linear models used previously, to describe the development rate as a function of temperature. The four-parameter Brière function was the most suitable and was subsequently applied to temperature data from 36 previous pea fields, where pea moth emergence was measured with pheromone traps in Northern Hesse (Germany). To describe the variation in development times between individuals, we fit five nonlinear distribution models to the cumulative development rate as a function of cumulative field emergence. The three-parameter Gompertz model was selected as the best-fitting model. We validated the model performance with an independent field data set. The model correctly predicted the first moth in the trap and the peak emergence in 81.82% of cases, with an average deviation of only 2.00 and 2.09 days, respectively.
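For the curve-fitting step, here is a hedged sketch (invented rates and parameter values; one common four-parameter Brière form is assumed, which may differ from the authors' exact parameterisation) of fitting development rate as a function of temperature:

```python
# A minimal sketch: fit a four-parameter Briere thermal performance curve
# r(T) = a * T * (T - T0) * (Tmax - T)^(1/m) by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def briere(T, a, T0, Tmax, m):
    r = a * T * (T - T0) * np.clip(Tmax - T, 0.0, None) ** (1.0 / m)
    return np.where((T > T0) & (T < Tmax), r, 0.0)   # zero outside limits

rng = np.random.default_rng(8)
temps = np.array([8.0, 12.0, 16.0, 20.0, 24.0, 28.0, 31.0])  # constant regimes
rates = briere(temps, 2e-5, 6.0, 33.0, 2.0) + rng.normal(0.0, 2e-4, temps.size)

popt, _ = curve_fit(briere, temps, rates, p0=(1e-5, 5.0, 34.0, 2.0),
                    maxfev=20000)
print("a, T0, Tmax, m =", np.round(popt, 5))
```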
This chapter is devoted to parameter estimation. We first discuss the physical dependence of CMB anisotropies on cosmological parameters. After a section on CMB data, we treat in some detail statistical methods for CMB data analysis. We discuss especially the Fisher matrix and explain Markov chain Monte Carlo methods. We also address degeneracies: combinations of cosmological parameters on which CMB anisotropies and polarization depend only weakly. Because of these degeneracies, cosmological parameter estimation also makes use of other, non-CMB observations, especially those related to the large-scale matter distribution. We summarize these and other cosmological observations in two separate sections.
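As a toy illustration of the Fisher matrix (a two-parameter linear model of my own invention, not a CMB spectrum), the sketch below forecasts parameter uncertainties and exhibits a degeneracy as a strong forecast correlation:

```python
# A minimal Fisher-matrix sketch: F_ij = sum_k (dmu_k/dtheta_i)
# (dmu_k/dtheta_j) / sigma_k^2; its inverse forecasts the covariance.
import numpy as np

def model(theta, x):
    return theta[0] + theta[1] * x             # toy observable

x = np.linspace(0.9, 1.1, 50)                  # narrow lever arm -> degeneracy
sigma, theta0, eps = 0.1, np.array([1.0, 2.0]), 1e-6

grads = np.array([(model(theta0 + eps * np.eye(2)[i], x)
                   - model(theta0 - eps * np.eye(2)[i], x)) / (2 * eps)
                  for i in range(2)])          # numerical parameter derivatives

F = (grads / sigma) @ (grads / sigma).T        # Fisher matrix
cov = np.linalg.inv(F)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print("forecast sigmas:", np.sqrt(np.diag(cov)).round(3), "corr:", corr.round(4))
```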
This chapter is a succinct introduction to basic probabilistic methods for pattern recognition and machine learning. One focus is to clearly present the exact meanings of different terms, including a taxonomy of probabilistic methods. We give a basic introduction to maximum likelihood and maximum a posteriori estimation, and a very brief example to showcase the concept of Bayesian estimation. For the nonparametric world, we start from the drawbacks of parametric methods, gradually analyze the properties preferred in a nonparametric method, and finally reach kernel density estimation, a typical nonparametric method.
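A hedged sketch of that endpoint, kernel density estimation, is given below (synthetic bimodal data; the bandwidth is picked by hand rather than by any of the selection rules a textbook would discuss):

```python
# A minimal KDE sketch: p_hat(x) = (1/(n h)) sum_i K((x - x_i)/h) with a
# Gaussian kernel K; a single parametric Gaussian would miss the two modes.
import numpy as np

rng = np.random.default_rng(10)
data = np.concatenate([rng.normal(-2.0, 0.5, 150), rng.normal(1.5, 1.0, 150)])

def kde(x, samples, h):
    z = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (samples.size * h * np.sqrt(2 * np.pi))

grid = np.linspace(-5.0, 5.0, 201)
density = kde(grid, data, h=0.3)
print("integral ~ 1:", (density * (grid[1] - grid[0])).sum().round(3))
```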
The stability and control derivatives are essential parameters in the flight operation of aircraft, and their determination is a routine task using classical parameter estimation methods based on maximum likelihood and least-squares principles. At high angles of attack, unsteady aerodynamics may make it difficult to determine the aerodynamic model structure, so data-driven methods based on artificial neural networks can be an alternative for building models that characterise the behaviour of the system from the measured motion and control variables. This research paper investigates the feasibility of using a recurrent neural model based on an extreme learning machine network to model the aircraft dynamics, in a restricted sense, for identification of the aerodynamic parameters. The recurrent extreme learning machine network is combined with the Gauss–Newton method to optimise the unknowns of the postulated aerodynamic model. The efficacy of the proposed estimation algorithm is studied using real flight data from a quasi-steady stall manoeuvre, and the estimates are validated against parameters estimated using the maximum likelihood method. The standard deviations of the estimates demonstrate the effectiveness of the proposed algorithm. Finally, the quantities regenerated using the estimates show good agreement with their corresponding measured values, confirming that a qualitative estimation can be obtained using the proposed algorithm.
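The Gauss–Newton step itself is simple to show; the hedged sketch below applies it to an invented scalar response model rather than a stall-manoeuvre aerodynamic model:

```python
# A minimal Gauss-Newton sketch: iterate
# theta <- theta + (J^T J)^{-1} J^T r, with residuals r = y - f(theta).
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0.0, 5.0, 100)

def f(theta):
    a, b = theta
    return a * np.exp(-b * t)                 # toy postulated model

y = f((2.0, 0.7)) + rng.normal(0.0, 0.05, t.size)

theta = np.array([1.0, 0.3])                  # initial guess matters; real
for _ in range(20):                           # implementations damp the step
    a, b = theta
    r = y - f(theta)
    J = np.column_stack([np.exp(-b * t),              # df/da
                         -a * t * np.exp(-b * t)])    # df/db
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)
print("estimates:", theta.round(3))
```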
This paper studies the parameter estimation for Ornstein–Uhlenbeck stochastic volatility models driven by Lévy processes. We propose computationally efficient estimators based on the method of moments that are robust to model misspecification. We develop an analytical framework that enables closed-form representation of model parameters in terms of the moments and autocorrelations of observed underlying processes. Under moderate assumptions, which are typically much weaker than those for likelihood methods, we prove large-sample behaviors for our proposed estimators, including strong consistency and asymptotic normality. Our estimators obtain the canonical square-root convergence rate and are shown through numerical experiments to outperform likelihood-based methods.
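A hedged, drastically simplified sketch of the moment-matching idea follows (a plain Gaussian OU process, not the Lévy-driven volatility models of the paper): the stationary variance and lag-one autocorrelation have closed forms, so inverting them yields the estimators.

```python
# A minimal method-of-moments sketch for dX = -lam * X dt + sig * dW:
# Var(X) = sig^2 / (2 lam), Corr(X_t, X_{t+dt}) = exp(-lam * dt).
import numpy as np

rng = np.random.default_rng(12)
lam, sig, dt, n = 2.0, 1.0, 0.01, 100_000

phi = np.exp(-lam * dt)                            # exact discretization
sd = sig * np.sqrt((1.0 - phi ** 2) / (2.0 * lam))
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + sd * rng.normal()

rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]            # sample lag-1 autocorrelation
lam_hat = -np.log(rho1) / dt
sig_hat = np.sqrt(2.0 * lam_hat * x.var())
print(f"lam_hat = {lam_hat:.2f}, sig_hat = {sig_hat:.3f}")
```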