Here we round off a book on the biophysical foundations and computational modeling of electric and magnetic signals in the brain. We summarize key insights from such modeling and clear up some common misconceptions about extracellular potentials. We address the main limitations of the standard modeling framework used to compute extracellular potentials, discussing the uncertainty in model parameters and the framework's neglect of ephaptic interactions between active neurons. We identify what we believe are key areas of future application and give an outlook on future modeling challenges.
We introduce a variant of Shepp’s classical urn problem in which the optimal stopper does not know whether sampling from the urn is done with or without replacement. By considering the problem’s continuous-time analog, we provide bounds on the value function and, in the case of a balanced urn (with an equal number of each ball type), find an explicit solution. Surprisingly, the optimal strategy for the balanced urn is the same as in the classical urn problem. However, the expected value upon stopping is lower because of the additional uncertainty present.
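As a point of reference for this variant, the value of the classical (fully informed, without-replacement) urn can be computed with a short dynamic program. The sketch below is a minimal Python illustration of that classical recursion only, not of the paper's partially informed setting; the ball counts used in the demo are arbitrary.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(p: int, m: int) -> float:
    """Expected payoff of the classical Shepp urn with p (+1)-balls and m (-1)-balls,
    drawn without replacement, when the player may stop at any time."""
    if p + m == 0:
        return 0.0
    continue_value = 0.0
    if p > 0:
        continue_value += p / (p + m) * (1.0 + value(p - 1, m))
    if m > 0:
        continue_value += m / (p + m) * (-1.0 + value(p, m - 1))
    # Stopping now is worth 0 from the current state onward.
    return max(0.0, continue_value)

if __name__ == "__main__":
    # Balanced urns (n balls of each type): the classical value is strictly positive.
    for n in (5, 10, 50):
        print(f"n = {n:2d}: value = {value(n, n):.4f}")
```

The optimal rule implied by this recursion is to continue exactly when the continuation value is positive; the abstract's result is that the same rule remains optimal for the balanced urn even under the extra sampling-scheme uncertainty, at a lower expected value.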
In this chapter, we consider qualitative and quantitative aspects of risk related to the development, implementation, and uses of quantitative models in enterprise risk management (ERM). First, we discuss the different ways that model risk arises, including defective models, inappropriate applications, and inadequate or inappropriate interpretation of the results. We consider the lifecycle of a model – from development, through regular updating and revision, to the decommissioning stage. We review quantitative approaches to measuring model and parameter uncertainty, based on a Bayesian framework. Finally, we discuss some aspects of model governance, and some potential methods for mitigating model risk.
Pricing ultra-long-dated pension liabilities under market-consistent valuation is challenged by the scarcity of long-term market instruments that match or exceed the terms of the liabilities. We develop a robust self-financing hedging strategy that adopts a min–max expected shortfall hedging criterion to replicate the long-dated liabilities for agents who fear parameter misspecification. We introduce a backward robust least squares Monte Carlo method to solve this dynamic robust optimization problem. We find that both the naive and the robust optimal portfolios depend on the hedging horizon and the current funding ratio. The robust policy suggests taking more risk when the current funding ratio is low. The yield curve constructed from the robust dynamic hedging portfolio is always lower than the naive one but is higher than the model-based yield curve in a low-rate environment.
The response of glaciers to climate change has major implications for sea-level change and water resources around the globe. Large-scale glacier evolution models are used to project glacier runoff and mass loss, but they are constrained by limited observations, which leave the models over-parameterized. Recent systematic geodetic mass-balance observations provide an opportunity to improve the calibration of glacier evolution models. In this study, we develop a calibration scheme for a glacier evolution model using a Bayesian inverse model and geodetic mass-balance observations, which enables us to quantify model parameter uncertainty. The Bayesian model is applied to each glacier in High Mountain Asia using Markov chain Monte Carlo methods. After 10,000 steps, the chains generate a sufficient number of independent samples to estimate the properties of the model parameters from the joint posterior distribution. The spatial distribution of the calibrated parameters shows a clear orographic effect, indicating that the resolution of the climate data is too coarse to resolve temperature and precipitation at high altitudes. Given that the glacier evolution model is over-parameterized, particular attention is given to identifiability and to the need for future work to integrate additional observations in order to better constrain the plausible sets of model parameters.
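To illustrate the kind of calibration described here, the following minimal sketch runs a random-walk Metropolis chain of 10,000 steps on a toy mass-balance model. The forward model, the parameter names (a precipitation factor k_p and a temperature bias dT), the priors, and the observation values are all illustrative placeholders, not the study's glacier model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy forward model: glacier-wide mass balance (m w.e. / yr)
# as a linear function of a precipitation factor k_p and a temperature bias dT.
def mass_balance(k_p, dT):
    return 0.8 * (k_p - 1.0) - 0.6 * dT - 0.3

obs, obs_sigma = -0.45, 0.15            # geodetic observation and its 1-sigma error
prior_mu = np.array([1.0, 0.0])         # priors: k_p ~ N(1, 0.5^2), dT ~ N(0, 1^2)
prior_sigma = np.array([0.5, 1.0])

def log_post(theta):
    k_p, dT = theta
    log_prior = -0.5 * np.sum(((theta - prior_mu) / prior_sigma) ** 2)
    log_like = -0.5 * ((mass_balance(k_p, dT) - obs) / obs_sigma) ** 2
    return log_prior + log_like

# Random-walk Metropolis: 10,000 steps, mirroring the chain length quoted above.
theta, lp = prior_mu.copy(), log_post(prior_mu)
samples = []
for _ in range(10_000):
    proposal = theta + rng.normal(scale=[0.1, 0.2])
    lp_new = log_post(proposal)
    if np.log(rng.uniform()) < lp_new - lp:
        theta, lp = proposal, lp_new
    samples.append(theta.copy())

samples = np.array(samples[2000:])      # discard burn-in
print("posterior mean (k_p, dT):", samples.mean(axis=0).round(3))
print("posterior std  (k_p, dT):", samples.std(axis=0).round(3))
```

The posterior standard deviations are what "quantifying model parameter uncertainty" refers to: they measure how tightly the single geodetic observation constrains each calibration parameter.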
In this article, we study parameter uncertainty and its actuarial implications in the context of economic scenario generators. To account for this additional source of uncertainty in a consistent manner, we cast Wilkie’s four-factor framework into a Bayesian model. The posterior distribution of the model parameters is estimated using Markov chain Monte Carlo methods and is used to perform Bayesian predictions of the future values of the inflation rate, the dividend yield, the dividend index return, and the long-term interest rate. For US data, parameter uncertainty has a significant impact on the dispersion of the four economic variables of Wilkie’s framework. The impact of such parameter uncertainty is then assessed for a portfolio of annuities: the right tail of the loss distribution is significantly heavier when parameters are assumed random and when this uncertainty is estimated in a consistent manner. The risk measures on the loss variable computed with parameter uncertainty are at least 12% larger than their deterministic counterparts.
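A minimal sketch of why parameter uncertainty widens predictive dispersion, using a single AR(1) inflation-style series as a stand-in for the four-factor framework and a normal approximation to the posterior of the autoregressive coefficient instead of a full MCMC run; the data and all numerical values are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for an annual inflation series (the Wilkie inflation
# component is an AR(1) on the force of inflation); values are illustrative.
T, mu_true, a_true, sd_true = 60, 0.04, 0.6, 0.01
y = np.empty(T)
y[0] = mu_true
for t in range(1, T):
    y[t] = mu_true + a_true * (y[t - 1] - mu_true) + rng.normal(scale=sd_true)

# Point estimates and an approximate (normal) posterior for the AR coefficient.
x, z = y[:-1], y[1:]
mu_hat = y.mean()
a_hat = np.sum((x - mu_hat) * (z - mu_hat)) / np.sum((x - mu_hat) ** 2)
resid = (z - mu_hat) - a_hat * (x - mu_hat)
sd_hat = resid.std(ddof=1)
a_se = sd_hat / np.sqrt(np.sum((x - mu_hat) ** 2))

def simulate(horizon, n_paths, random_params):
    """Simulate 10-year-ahead values, optionally redrawing the AR coefficient
    from its approximate posterior before each path."""
    finals = np.empty(n_paths)
    for i in range(n_paths):
        a = rng.normal(a_hat, a_se) if random_params else a_hat
        v = y[-1]
        for _ in range(horizon):
            v = mu_hat + a * (v - mu_hat) + rng.normal(scale=sd_hat)
        finals[i] = v
    return finals

fixed = simulate(10, 20_000, random_params=False)
bayes = simulate(10, 20_000, random_params=True)
print(f"10-year dispersion, fixed parameters     : {fixed.std():.4f}")
print(f"10-year dispersion, parameter uncertainty: {bayes.std():.4f}")
```

The second dispersion is typically larger, which is the single-factor analogue of the effect reported for the four Wilkie variables and the annuity loss tail.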
Model and parameter uncertainties are common whenever a parametric model is selected to value a derivative instrument. Combining the Monte Carlo method with the Smolyak interpolation algorithm, we propose an accurate and efficient numerical procedure to quantify the uncertainty embedded in complex derivatives. Beyond requiring the value function to be sufficiently smooth with respect to the model parameters, there are no requirements on the payoff or the candidate models. Numerical tests quantify the uncertainty of Bermudan put options and down-and-out put options under the Heston model, with each model parameter specified within an interval.
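The sketch below illustrates the underlying idea in one dimension: price an option by Monte Carlo only at a few Chebyshev nodes of a parameter interval, interpolate the value function, and read off the range of prices over the interval. It uses a European put under Black–Scholes and a one-dimensional Chebyshev fit as a simplified stand-in for the Bermudan/barrier options, the Heston model, and the Smolyak sparse grid of the paper.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)

# Monte Carlo price of a European put under Black-Scholes for a given volatility.
def mc_put(sigma, s0=100.0, k=100.0, r=0.02, T=1.0, n=200_000):
    z = rng.standard_normal(n)
    st = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(k - st, 0.0).mean()

# Parameter uncertainty: the volatility is known only to lie in an interval.
lo, hi = 0.15, 0.35
deg = 6
# Chebyshev nodes in [lo, hi]; the (expensive) pricer is called only at these nodes.
nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos(
    np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
values = np.array([mc_put(s) for s in nodes])

# Fit a Chebyshev interpolant on the rescaled interval and evaluate it densely.
coef = C.chebfit(2 * (nodes - lo) / (hi - lo) - 1, values, deg)
grid = np.linspace(lo, hi, 201)
interp = C.chebval(2 * (grid - lo) / (hi - lo) - 1, coef)

print(f"price range over the volatility interval: "
      f"[{interp.min():.3f}, {interp.max():.3f}]")
```

Smoothness of the value function in the parameter is what makes the interpolation accurate with very few pricing calls; the reported interval is the quantified parameter uncertainty in the price.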
This paper defines the ‘Case Deleted’ Deviance, a new objective function for evaluating Generalised Linear Models, and applies it to a number of practical examples in the pricing of general insurance. The paper details practical approximations that enable efficient calculation of the objective, and derives modifications to the standard Generalised Linear Modelling algorithm that allow scaled parameters to be derived from this measure, reducing potential overfitting to historical data. These scaled parameters improve the predictiveness of the model when it is applied to previously unseen data points, most likely future business written. The potential for overfitting has increased because of the number of factors now used, particularly in the pricing of personal lines business, and the advent of price comparison sites, which has increased the penalties of mis-estimation. New material in this paper has been included in UK patent application No. 1020091.3.
We propose a new method for two-dimensional mortality modelling. Our approach smoothes the data set in the dimensions of cohort and age using Bayesian smoothing splines. The method allows the data set to be imbalanced, since more recent cohorts have fewer observations. We suggest an initial model for observed death rates, and an improved model which deals with the numbers of deaths directly. Unobserved death rates are estimated by smoothing the data with a suitable prior distribution. To assess the fit and plausibility of our models we perform model checks by introducing appropriate test quantities. We show that our final model fulfils nearly all requirements set for a good mortality model.
Actuaries are often faced with the task of estimating tails of loss distributions from just a few observations. Thus estimates of tail probabilities (reinsurance prices) and percentiles (solvency capital requirements) are typically subject to substantial parameter uncertainty. We study the bias and MSE of estimators of tail probabilities and percentiles, with a focus on 1-parameter exponential families. Using asymptotic arguments, it is shown that tail estimates are subject to significant positive bias. Moreover, the use of bootstrap predictive distributions, which has been proposed in the actuarial literature as a way of addressing parameter uncertainty, is seen to double the estimation bias. A bias-corrected estimator is therefore proposed. It is then shown that the MSEs of the MLE, the parametric bootstrap, and the bias-corrected estimators differ only at order O(n⁻²), which gives decision-makers some flexibility as to which estimator to use. The accuracy of the asymptotic methods, even for small samples, is demonstrated exactly for the exponential and related distributions, while other 1-parameter distributions are considered in a simulation study. We argue that the presence of positive bias may be desirable in solvency capital calculations, though not necessarily in pricing problems.
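A small simulation in the spirit of this analysis, for the exponential case: it compares the bias of the plug-in (MLE) tail-probability estimate with that of a parametric-bootstrap predictive estimate. The sample size, threshold, and replication counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

n, lam, x0 = 20, 1.0, 5.0            # small sample, Exp(1), tail threshold
true_tail = np.exp(-lam * x0)        # true P(X > x0)

n_rep, n_boot = 5_000, 500
mle_est, boot_est = np.empty(n_rep), np.empty(n_rep)

for r in range(n_rep):
    x = rng.exponential(scale=1.0 / lam, size=n)
    lam_hat = 1.0 / x.mean()                         # MLE of the rate
    mle_est[r] = np.exp(-lam_hat * x0)               # plug-in tail estimate
    # Parametric bootstrap predictive: average the fitted tail probability
    # over the bootstrap distribution of the estimator.
    lam_star = 1.0 / rng.exponential(scale=1.0 / lam_hat, size=(n_boot, n)).mean(axis=1)
    boot_est[r] = np.exp(-lam_star * x0).mean()

print(f"true tail probability  : {true_tail:.5f}")
print(f"bias of MLE plug-in    : {mle_est.mean() - true_tail:+.5f}")
print(f"bias of bootstrap pred.: {boot_est.mean() - true_tail:+.5f}")
```

Both estimators come out positively biased in this setup, with the bootstrap predictive estimate the more biased of the two, consistent with the asymptotic argument summarized above.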
This paper introduces a new framework for modelling the joint development over time of mortality rates in a pair of related populations, with the primary aim of producing consistent mortality forecasts for the two populations. This aim is achieved by combining a number of recent and novel developments in stochastic mortality modelling, which additionally provide a number of side benefits and insights for stochastic mortality modelling. By way of example, we first propose an Age-Period-Cohort model which incorporates a mean-reverting stochastic spread that allows for different trends in mortality improvement rates in the short run but parallel improvements in the long run. Second, we fit the model using a Bayesian framework that allows us to combine estimation of the unobservable state variables and of the parameters of the stochastic processes driving them into a single procedure. Key benefits of this include damping of the impact of Poisson variation in death counts, full allowance for parameter uncertainty, and the flexibility to deal with missing data. The framework is designed for a large population coupled with a small sub-population and is applied to the England & Wales national and Continuous Mortality Investigation assured lives male populations. We compare and contrast results based on the two-population approach with single-population results.
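A stylized simulation of the structural idea (not the fitted model): a common period index shared by both populations plus a mean-reverting AR(1) spread, so that improvement rates can differ in the short run while remaining parallel in the long run. All parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

T = 50
common_drift, common_sd = -0.02, 0.01          # common log-mortality improvement
spread_mean, phi, spread_sd = 0.1, 0.8, 0.02   # mean-reverting spread parameters

k = np.zeros(T)        # common period index (random walk with drift)
s = np.zeros(T)        # spread between the two populations (AR(1))
s[0] = spread_mean
for t in range(1, T):
    k[t] = k[t - 1] + common_drift + rng.normal(scale=common_sd)
    s[t] = spread_mean + phi * (s[t - 1] - spread_mean) + rng.normal(scale=spread_sd)

log_m_large = k              # large population period effect
log_m_small = k + s          # sub-population = large population + spread

print(f"spread at t=1, t=10, t={T-1}: {s[1]:.3f}, {s[10]:.3f}, {s[-1]:.3f}")
print(f"average improvement over the last 10 years: "
      f"large {np.diff(log_m_large)[-10:].mean():+.4f}, "
      f"small {np.diff(log_m_small)[-10:].mean():+.4f}")
```

Because the spread is mean-reverting rather than a random walk, the two simulated populations drift apart only temporarily, which is the mechanism behind the "parallel improvements in the long run" property.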
In this paper we review the Wilkie asset model for a variety of UK economic indices, including the Retail Prices Index (both without and with an ARCH model), the wages index, share dividend yields, share dividends and share prices, long-term bond yields, short-term bond yields, and index-linked bond yields, in each case updating the parameters to June 2009. We discuss how the model has performed from 1994 to 2009 and estimate the values of the parameters and their confidence intervals over various sub-periods to study their stability. Our analysis shows that the residuals of many of the series are much fatter-tailed than a normal distribution would imply. We also observe that, besides the stochastic uncertainty built into the model by the random innovations, there is parameter uncertainty arising from the estimated values of the parameters.
This paper examines optimal monetary policy under uncertainty about fundamental parameters of a dynamic stochastic general-equilibrium model. In contrast to previous studies, the model's microfoundations imply that this parameter uncertainty generates uncertainty not only about the transmission of monetary policy but also about the transmission of shocks and about the social welfare loss function. In the presence of such uncertainty, the paper finds conditions under which optimal discretionary policy responds to shocks more aggressively than in the absence of the uncertainty. These conditions depend crucially on the persistence of shocks and the magnitude of policy multipliers. Obtaining the conditions requires taking proper account of uncertainty about the transmission of shocks and about the welfare loss function.
Empirical evidence suggests that the instrument rule describing the interest rate–setting behavior of the Federal Reserve is nonlinear. This paper shows that optimal monetary policy under parameter uncertainty can motivate this pattern. If the central bank is uncertain about the slope of the Phillips curve and follows a min–max strategy to formulate policy, the interest rate reacts more strongly to inflation when inflation is further away from target. The reason is that the worst case the central bank takes into account is endogenous and depends on the inflation rate and the output gap. As inflation increases, the worst-case perception of the Phillips curve slope becomes larger, thus requiring a stronger interest rate adjustment. Empirical evidence supports this form of nonlinearity for post-1982 U.S. data.
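The following sketch shows the bare mechanics of such a min–max computation on a stylized one-period placeholder model: for each candidate rate, the worst-case Phillips-curve slope is found at a boundary of the slope interval, and the robust rate minimizes that worst-case loss. The model equations, loss weights, and slope interval are invented for illustration and are not intended to reproduce the paper's quantitative results.

```python
import numpy as np

# Stylized one-period setup (illustrative only): the policymaker sets the rate i,
# the output gap becomes y' = y - a*i, and inflation becomes pi' = pi + kappa*y',
# with the Phillips-curve slope kappa known only to lie in [K_LO, K_HI].
A, LAM = 1.0, 0.5
K_LO, K_HI = 0.1, 0.5

def worst_case_loss(i, pi, y):
    """Maximum of the one-period loss over the slope interval (attained at a boundary)."""
    y_next = y - A * i
    losses = [(pi + k * y_next) ** 2 + LAM * y_next ** 2 for k in (K_LO, K_HI)]
    k_worst = (K_LO, K_HI)[int(np.argmax(losses))]
    return max(losses), k_worst

def robust_rate(pi, y, grid=np.linspace(-5.0, 5.0, 2001)):
    """Min-max policy: the rate that minimizes the worst-case loss over the grid."""
    wc = [worst_case_loss(i, pi, y)[0] for i in grid]
    i_star = grid[int(np.argmin(wc))]
    return i_star, worst_case_loss(i_star, pi, y)[1]

for pi in (0.5, 1.0, 2.0, 4.0):
    i_star, k_worst = robust_rate(pi, y=0.0)
    print(f"inflation {pi:.1f}: robust rate {i_star:+.2f}, worst-case slope {k_worst}")
```

The point of the exercise is the structure of the problem: the worst-case slope is not fixed in advance but is selected endogenously given the state and the candidate rate, which is the channel the abstract identifies as the source of the nonlinear reaction.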
This paper presents a study of the application of adaptive and robust control methods to a cooperative manipulation system developed for handling an object with three-dimensional revolute-jointed manipulators. The adaptive control algorithm incorporates a parameter adaptation law that provides guaranteed stability for uncertain systems. In designing the robust control structure, contact and friction constraints for grasp and bearing conditions, structural flexibility, and similar factors such as unmodeled dynamics are treated as uncertainties that determine the admissible values of the control parameters. The novelty of the present paper lies in defining new control inputs using parametric uncertainties and the Lyapunov-based theory of guaranteed stability of uncertain systems for handling objects in a spatial workspace.
This paper proposes a general method based on a property of zero-sum two-player games to derive robust optimal monetary policy rules—the best rules among those that yield an acceptable performance in a specified range of models—when the true model is unknown and model uncertainty is viewed as uncertainty about parameters of the structural model. The method is applied to characterize robust optimal Taylor rules in a simple forward-looking macroeconomic model that can be derived from first principles. Although it is commonly believed that monetary policy should be less responsive when there is parameter uncertainty, we show that robust optimal Taylor rules prescribe in general a stronger response of the interest rate to fluctuations in inflation and the output gap than is the case in the absence of uncertainty. Thus model uncertainty does not necessarily justify a relatively small response of actual monetary policy.
This paper examines monetary policy in a two-equation macroeconomic model when the policymaker recognizes that the model is an approximation and is uncertain about the quality of that approximation. It is argued that the minimax approach of robust control provides a general and tractable alternative to the conventional Bayesian decision theoretic approach. Robust control techniques are used to construct robust monetary policies. In most (but not all) cases, these robust policies are more aggressive than the optimal policies absent model uncertainty. The specific robust policies depend strongly on the formulation of model uncertainty used, and we make some suggestions about which formulation is most relevant for monetary policy applications.
For the purpose of Value-at-Risk (VaR) analysis, a model for the return distribution is important because it describes the potential behavior of a financial security in the future. What matters primarily is the behavior in the tail of the distribution, since VaR analysis deals with extreme market situations. We analyze extensions of the normal distribution that allow for fatter tails and for time-varying volatility. Equally important to the choice of distribution are the associated parameter values. We argue that parameter uncertainty leads to uncertainty in the reported VaR estimates, and that there is a trade-off between more complex tail behavior and this uncertainty. The "best estimate" VaR should be adjusted to take account of the uncertainty in the VaR. Finally, we consider the VaR forecast for a portfolio of securities. We propose a method that treats the modeling in a univariate, rather than a multivariate, framework. This choice allows us to reduce parameter uncertainty and to model the relevant variable directly.
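A minimal sketch of one way to combine a fatter-tailed return model with parameter uncertainty: fit a Student-t distribution to (here, simulated) returns, report the 99% VaR, and bootstrap the fit to obtain an interval around the best-estimate VaR. The data, confidence levels, and bootstrap size are illustrative choices, not the paper's specification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Illustrative daily returns: a Student-t sample standing in for real data.
returns = 0.0005 + 0.01 * rng.standard_t(df=5, size=750)

alpha = 0.01                                   # 99% VaR

def var_t(x):
    """Fit a Student-t to the returns and report the 99% VaR as a positive loss."""
    df, loc, scale = stats.t.fit(x)
    return -stats.t.ppf(alpha, df, loc=loc, scale=scale)

best_estimate = var_t(returns)

# Parameter uncertainty via a nonparametric bootstrap of the fitted VaR.
boot = np.array([var_t(rng.choice(returns, size=returns.size, replace=True))
                 for _ in range(200)])

print(f"best-estimate 99% VaR : {best_estimate:.4f}")
print(f"bootstrap 90% interval: [{np.percentile(boot, 5):.4f}, "
      f"{np.percentile(boot, 95):.4f}]")
```

The width of the bootstrap interval is one concrete way to quantify the trade-off described above: a more flexible tail model can fit extreme observations better, but its extra parameters widen the uncertainty band around the reported VaR.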