The set of basic topics then continues with a major application domain of our theory: linear least-squares estimation (LLSE) of the state of an evolving system (aka Kalman filtering), which turns out to be an immediate application of the outer–inner factorization theory developed in Chapter 8. To complete this discussion, we also show how the theory extends naturally to cover the smoothing case (which is often considered “difficult”).
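For orientation, the estimation problem referred to here can be stated in a standard form (our notation, not necessarily the book's): given the linear state-space model
\[
x_{k+1} = A_k x_k + B_k u_k, \qquad y_k = C_k x_k + v_k,
\]
with zero-mean white disturbances \(u_k\) and \(v_k\), the filtering problem asks for the linear least-squares estimate \(\hat{x}_{k\mid k}\) of the state \(x_k\) from the observations \(y_0,\dots,y_k\), while the smoothing problem asks for \(\hat{x}_{k\mid N}\) from the complete record \(y_0,\dots,y_N\); under Gaussian assumptions these estimates coincide with the conditional means.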
Deep neural networks have become an important tool for use in actuarial tasks, not only due to the significant gains in accuracy provided by these techniques compared to traditional methods, but also due to the close connection of these models to the generalized linear models (GLMs) currently used in industry. Although constraining GLM parameters relating to insurance risk factors to be smooth or exhibit monotonicity is trivial, methods to incorporate such constraints into deep neural networks have not yet been developed. This is a barrier to the adoption of neural networks in insurance practice, since actuaries often impose these constraints for commercial or statistical reasons. In this work, we present a novel method for enforcing constraints within deep neural network models, and we show how these models can be trained. Moreover, we provide example applications using real-world datasets. We call our proposed method ICEnet to emphasize the close link of our proposal to the individual conditional expectation model interpretability technique.
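The following is only a rough sketch of the kind of output-based penalty the abstract describes, not the authors' ICEnet implementation: predictions are generated along a grid of values of one risk factor (an individual-conditional-expectation-style profile), and violations of monotonicity and smoothness of that profile are penalised in the training loss. The architecture, feature index, grid and penalty weights below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative only: a small feed-forward network plus ICE-style penalties.
net = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))

def ice_penalties(model, x, col, grid, lam_mono=1.0, lam_smooth=1.0):
    """Monotonicity and roughness penalties on the ICE profile of one feature.

    For each observation, feature `col` is swept over `grid` while the other
    features are held fixed; the penalties act on the resulting profile of
    model outputs.
    """
    profiles = []
    for g in grid:
        x_g = x.clone()
        x_g[:, col] = g
        profiles.append(model(x_g))
    p = torch.cat(profiles, dim=1)                  # (n_obs, len(grid))
    d1 = p[:, 1:] - p[:, :-1]                       # first differences
    d2 = d1[:, 1:] - d1[:, :-1]                     # second differences
    mono = torch.clamp(-d1, min=0.0).pow(2).mean()  # penalise decreases
    rough = d2.pow(2).mean()                        # penalise wiggliness
    return lam_mono * mono + lam_smooth * rough

# One training step with a placeholder squared-error data loss:
x, y = torch.randn(256, 5), torch.randn(256, 1)
grid = torch.linspace(-2.0, 2.0, steps=11)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(net(x), y) + ice_penalties(net, x, col=0, grid=grid)
opt.zero_grad()
loss.backward()
opt.step()
```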
All statistical models have assumptions, and violation of these assumptions can affect the reliability of any conclusions we draw. Before we fit any statistical model, we need to explore the data to be sure we fit a valid model. Are relationships assumed to be a straight line really linear? Does the response variable follow the assumed distribution? Are variances consistent? We outline several graphical techniques for exploring data and introduce the analysis of model residuals as a powerful tool. If assumptions are violated, we consider two solutions: transforming variables to satisfy assumptions, and using models that assume different distributions more consistent with the raw data and residuals. The exploratory stage can be extensive, but it is essential. At this pre-analysis stage, we also consider what to do about missing observations.
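As a toy illustration of the residual check described here (the data and the assumed straight-line model are made up):

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate a response that is actually curved, then fit a straight line.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x**2 + rng.normal(scale=2.0, size=100)

slope, intercept = np.polyfit(x, y, deg=1)   # straight-line fit
fitted = intercept + slope * x
residuals = y - fitted

# A curved band in this plot signals that the linearity assumption is
# violated, pointing towards a transformation or a different model.
plt.scatter(fitted, residuals)
plt.axhline(0, linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.show()
```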
In this chapter we extend our discussion of the previous chapter to model dynamical systems with continuous state-spaces. We present statistical formulations to model and analyze trajectories that evolve in a continuous state space and whose outputs are corrupted by noise. In particular, we place special emphasis on linear Gaussian state-space models and, within this context, present Kalman filtering theory. The theory presented herein lends itself to the tracking algorithms explored in the chapter and in an end-of-chapter project.
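A minimal sketch of the filtering recursion for a linear Gaussian state-space model, in generic textbook notation rather than the chapter's:

```python
import numpy as np

def kalman_step(m, P, y, A, Q, H, R):
    """One predict/update step of the Kalman filter.

    m, P : filtered mean and covariance of the state at time k-1
    y    : observation at time k
    A, Q : state transition matrix and process-noise covariance
    H, R : observation matrix and observation-noise covariance
    """
    # Predict
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(len(m)) - K @ H) @ P_pred
    return m_new, P_new

# Example: tracking a scalar random walk observed in noise
A, Q = np.eye(1), np.eye(1) * 0.1
H, R = np.eye(1), np.eye(1) * 0.5
m, P = np.zeros(1), np.eye(1)
for y in [0.9, 1.1, 1.4]:
    m, P = kalman_step(m, P, np.array([y]), A, Q, H, R)
```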
In this chapter we introduce and apply hidden Markov models to model and analyze dynamical data. Hidden Markov models are one of the simplest dynamical models valid for systems evolving in a discrete state-space at discrete time points. We first describe the evaluation of the likelihood relevant to hidden Markov models and introduce the concept of filtering. We then describe how to obtain maximum likelihood estimators using expectation maximization. Next, we broaden our discussion to the Bayesian paradigm and introduce the Bayesian hidden Markov model. In this context, we describe the forward filtering backward sampling algorithm and Monte Carlo methods for sampling from hidden Markov model posteriors. As hidden Markov models are flexible modeling tools, we present a number of variants including the sticky hidden Markov model, the factorial hidden Markov model, and the infinite hidden Markov model. Finally, we conclude with a case study in fluorescence spectroscopy where we show how the basic filtering theory presented earlier may be extended to evaluate the likelihood of a second-order hidden Markov model.
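A minimal sketch of the likelihood evaluation and filtering (forward) pass mentioned here, for a generic discrete hidden Markov model; the toy numbers at the end are arbitrary:

```python
import numpy as np

def hmm_forward(log_pi, log_A, log_B):
    """Forward (filtering) pass for a discrete hidden Markov model.

    log_pi : (K,)   log initial state probabilities
    log_A  : (K, K) log transition probabilities, A[i, j] = p(s_t = j | s_{t-1} = i)
    log_B  : (T, K) log observation likelihoods p(y_t | s_t = k)
    Returns the log-likelihood of the observations and the filtered
    distributions p(s_t | y_{1:t}).
    """
    T, K = log_B.shape
    log_alpha = np.zeros((T, K))
    log_alpha[0] = log_pi + log_B[0]
    for t in range(1, T):
        # log-sum-exp over the previous state, for numerical stability
        prev = log_alpha[t - 1][:, None] + log_A
        log_alpha[t] = log_B[t] + np.logaddexp.reduce(prev, axis=0)
    log_lik = np.logaddexp.reduce(log_alpha[-1])
    filtered = np.exp(log_alpha - np.logaddexp.reduce(log_alpha, axis=1, keepdims=True))
    return log_lik, filtered

# Toy two-state example with three observations
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
log_B = np.log([[0.7, 0.2], [0.6, 0.3], [0.1, 0.9]])
ll, filt = hmm_forward(log_pi, log_A, log_B)
```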
Modelling and forecasting mortality is a topic of crucial importance to actuaries and demographers. However, forecasts from the majority of mortality projection models are continuations of past trends seen in the data. As such, these models are unable to account for external opinions or expert judgement. In this work, we present a method for incorporating deterministic opinions into the smoothing and forecasting of mortality rates using constraints. Not only does our approach yield a smooth transition from the past into the future, but the shapes of the resulting forecasts are also governed by a combination of the opinion inputs and the speed of improvements observed in the data. In addition, our approach makes it possible to quantify the uncertainty around the projected mortality trends conditional on the opinion inputs, and this allows us to highlight some of the pitfalls of deterministic projection methods.
Using novel microdata, we explore lifecycle consumption in Sub-Saharan Africa. We find that households' ability to smooth consumption over the lifecycle is large, particularly in rural areas. Consumption in old age is sustained by shifting to self-farmed staple food, as opposed to traditional savings mechanisms or food gifts. This smoothing strategy entails two important costs. The first is a loss of human capital, as children seem to be diverted away from school and into producing self-farmed food. The second is that a diet largely concentrated in staple food (e.g., maize in Malawi) in old age results in a loss of nutritional quality for households headed by the elderly.
In this paper we present results on the concentration properties of the smoothing and filtering distributions of some partially observed chaotic dynamical systems. We show that, rather surprisingly, for the geometric model of the Lorenz equations, as well as some other chaotic dynamical systems, the smoothing and filtering distributions do not concentrate around the true position of the signal as the number of observations tends to ∞. Instead, under various assumptions on the observation noise, we show that the expected value of the diameter of the support of the smoothing and filtering distributions remains lower bounded by a constant multiplied by the standard deviation of the noise, independently of the number of observations. Conversely, under rather general conditions, the diameter of the support of the smoothing and filtering distributions is upper bounded by a constant multiplied by the standard deviation of the noise. We also consider applications to the three-dimensional Lorenz 63 model and to the Lorenz 96 model of arbitrarily large dimension.
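Schematically, and suppressing the precise assumptions, the two bounds say that
\[
c_1\,\sigma \;\le\; \mathbb{E}\bigl[\operatorname{diam}\,\operatorname{supp}\,\pi_n\bigr]
\quad\text{and}\quad
\operatorname{diam}\,\operatorname{supp}\,\pi_n \;\le\; c_2\,\sigma \qquad \text{for all } n,
\]
where \(\pi_n\) denotes the filtering (or smoothing) distribution after \(n\) observations, \(\sigma\) is the standard deviation of the observation noise, and the constants \(c_1, c_2\) do not depend on \(n\).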
Recently, there has been increasing interest from life insurers in assessing the mortality risks of their portfolios. The new European prudential regulation, Solvency II, emphasized the need to use mortality and life tables that best capture and reflect the experienced mortality, and thus policyholders' actual risk profiles, in order to quantify the underlying risk adequately. Therefore, building a mortality table based on the experience of the portfolio is highly recommended and, for this purpose, various approaches have been introduced into the actuarial literature. Although such approaches succeed in capturing the main features, it remains difficult to assess mortality when the underlying portfolio lacks sufficient exposure. In this paper, we propose graduating the mortality curve using an adaptive procedure based on the local likelihood, which can model mortality patterns even in the presence of complex structures and avoids relying on expert opinions. However, such a technique fails to offer a consistent yet regular structure for portfolios with limited deaths. Although the technique borrows information from adjacent ages, this is sometimes not sufficient to produce a robust life table. In the presence of such a bias, we propose adjusting the corresponding curve, at the age level, using a credibility approach, which amounts to revising the mortality curve assumption as new observations arrive. We derive the updating procedure and, using real datasets, investigate the benefits of using it instead of a sole graduation. Moreover, we look at the divergences in the mortality forecasts generated by classic credibility approaches, including the Hardy–Panjer approach, the Poisson–Gamma model and the Makeham framework, on portfolios originating from various French insurance companies.
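The age-level adjustment has the familiar credibility-weighted form (a schematic statement only, not the paper's exact formulation):
\[
\hat{q}_x^{\mathrm{cred}} \;=\; Z_x\,\hat{q}_x^{\mathrm{obs}} + (1 - Z_x)\,\hat{q}_x^{\mathrm{grad}}, \qquad 0 \le Z_x \le 1,
\]
where \(\hat{q}_x^{\mathrm{grad}}\) is the graduated (local-likelihood) rate at age \(x\), \(\hat{q}_x^{\mathrm{obs}}\) is the rate suggested by the newly observed portfolio experience, and the credibility factor \(Z_x\) increases with the exposure at that age.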
We consider a modification to the Poisson common factor model and utilise a generalised linear model (GLM) framework that incorporates a smoothing process and a set of linear constraints. We extend the standard GLM structure to adopt Lagrange methods and P-splines such that smoothing and constraints are applied simultaneously as the parameters are estimated. Our results on Australian, Canadian and Norwegian data show that this modification results in an improvement in mortality projection, producing more accurate forecasts in out-of-sample testing. At the same time, the projected male-to-female ratio of death rates at each age converges to a constant and the residuals of the models are sufficiently random, indicating that the use of smoothing does not adversely affect the fit of the model. Further, the irregular patterns in the estimates of the age-specific parameters are moderated as a result of smoothing, and this model can be used to produce more regular projected life tables for pricing purposes.
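To make the role of the smoothing penalty concrete, here is the identity-basis (Whittaker-style) special case of a penalised fit with a second-order difference penalty; the paper's actual model couples such P-spline penalties with a Poisson GLM and Lagrange constraints, and the data below are synthetic.

```python
import numpy as np

def whittaker_smooth(y, lam):
    """Penalised least squares with a second-order difference penalty."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# Synthetic log death rates by age, roughly Gompertz-shaped plus noise
ages = np.arange(100)
log_mx = np.log(0.0005) + 0.09 * ages + np.random.default_rng(0).normal(0, 0.1, 100)
smooth = whittaker_smooth(log_mx, lam=100.0)     # lam controls smoothness
```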
Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of agents' perceived law of motion. Obtaining appropriate initial estimates can become prohibitive within the usual data availability restrictions of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial estimates consistent with the statistical properties of the learning algorithm. Our method is generically formulated to cover different specifications of the learning mechanism, such as the least-squares and stochastic gradient algorithms. Using simulations, we show that our method is able to speed up the convergence of initial estimates in exchange for a higher computational cost.
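For concreteness, the kind of recursive algorithm meant here is, in its least-squares form, the familiar decreasing-gain recursion sketched below; the initial values phi and R are exactly what the proposed routine is designed to supply (the gain choice and toy model are assumptions, not the paper's):

```python
import numpy as np

def rls_update(phi, R, x, y, gain):
    """One recursive least-squares learning step.

    phi  : current coefficient estimates (the perceived law of motion)
    R    : current second-moment matrix of the regressors
    x, y : regressor vector and realised outcome at time t
    gain : gain sequence value, e.g. a decreasing gain for least squares
    """
    R_new = R + gain * (np.outer(x, x) - R)
    phi_new = phi + gain * np.linalg.solve(R_new, x * (y - x @ phi))
    return phi_new, R_new

# Toy usage: learning a two-coefficient perceived law of motion
rng = np.random.default_rng(0)
phi, R = np.zeros(2), np.eye(2)
for t in range(1, 200):
    x = np.array([1.0, rng.normal()])
    y = 0.5 + 0.8 * x[1] + rng.normal(scale=0.1)
    phi, R = rls_update(phi, R, x, y, gain=1.0 / (t + 1))
```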
We consider the time behaviour associated with the sequential Monte Carlo estimate of the backward interpretation of Feynman-Kac formulae. This is of particular interest in the context of performing smoothing for hidden Markov models. We prove a central limit theorem under weaker assumptions than those adopted in the literature. We then show that the associated asymptotic variance expression for additive functionals grows at most linearly in time under hypotheses that are weaker than those currently found in the literature. The assumptions are verified for some hidden Markov models.
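For orientation, the backward decomposition underlying these estimates can be written, in hidden Markov model notation and suppressing regularity assumptions, as
\[
p(x_{0:n} \mid y_{0:n}) \;=\; p(x_n \mid y_{0:n}) \prod_{k=0}^{n-1} p(x_k \mid x_{k+1}, y_{0:k}),
\qquad
p(x_k \mid x_{k+1}, y_{0:k}) \;\propto\; f(x_{k+1} \mid x_k)\, p(x_k \mid y_{0:k}),
\]
where \(f\) is the state transition density; the sequential Monte Carlo estimate replaces each filtering distribution \(p(x_k \mid y_{0:k})\) by its particle approximation, and additive functionals of \(x_{0:n}\) are then estimated under this backward representation.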
This study sought to determine whether monthly revisions of U.S. Department of Agriculture current-year corn and soybean yield forecasts are correlated and whether this correlation is associated with crop size. An ex-ante measure of crop size based on percent deviation of the current estimate from out-of-sample trend is used in efficiency tests based on the Nordhaus framework for fixed-event forecasts. Results show that available information about crop size is generally efficiently incorporated in these forecasts. Thus, although this pattern may appear obvious to market analysts in hindsight, it is largely based on new information and hence difficult to anticipate.
Existing methods for estimating individual dairy cow energy balance typically either need information on feed intake, that is, the traditional input–output method, or frequent measurements of BW and body condition score (BCS), that is, the body reserve changes method (EBbody). The EBbody method holds the advantage of not requiring measurements of feed intake, which are difficult to obtain in practice. The present study aimed first to investigate whether the EBbody method can be simplified by basing EBbody on BW measurements alone, that is, removing the need for BCS measurements, and second to adapt the EBbody method for real-time use, thus turning it into a true on-farm tool. Data came from 77 cows (primiparous or multiparous, Danish Holstein, Red or Jersey) that took part in an experiment subjecting them to a planned change in concentrate intake during milking. BW was measured automatically during each milking, smoothed in real time using asymmetric double-exponential weighting and corrected for the weight of milk produced, gutfill and the growing conceptus. BCS, assessed visually at 2-week intervals, was also smoothed. EBbody was calculated both from BW changes alone and in conjunction with BCS changes. A comparison of the increase in empty body weight (EBW) estimated from EBbody with EBW measured over the first 240 days in milk (DIM) for the mature cows showed that EBbody was robust to changes in the BCS coefficients, allowing functions for standard body protein change relative to DIM to be developed for breeds and parities. These standard body protein change functions allow EBbody to be estimated from frequent BW measurements alone, that is, in the absence of BCS measurements. Differences in EBbody levels before and after changes in concentrate intake were calculated to test the real-time functionality of the EBbody method. Results showed that significant EBbody increases could be detected 10 days after a 0.2 kg/day increase in concentrate intake. In conclusion, a real-time method for deriving EBbody from frequent BW measures either alone or in conjunction with BCS measures has been developed. This extends the applicability of the EBbody method, because real-time measures can be used for decision support and early intervention.
A modified method for polynomial smoothing and the calculation of derivatives of equally spaced step scan powder diffraction data is presented. The algorithm takes the angular dependence of the full width at half maximum (FWHM) of diffraction peaks into account, is very effective, and easy to code.
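The abstract gives no algorithmic detail, so the following is only a generic sketch of local polynomial smoothing with a window width tied to an angle-dependent FWHM, not the authors' modified method; `fwhm_of_angle` is an assumed user-supplied function (e.g. from a Caglioti-type fit).

```python
import numpy as np

def local_poly_smooth(two_theta, counts, fwhm_of_angle, points_per_fwhm=3, deg=2):
    """Local polynomial smoothing of an equally spaced powder pattern.

    The fitting window at each point is chosen in proportion to the
    angle-dependent peak FWHM, and the local polynomial also yields a
    smoothed first derivative.
    """
    step = two_theta[1] - two_theta[0]
    smoothed = np.empty_like(counts, dtype=float)
    deriv = np.empty_like(counts, dtype=float)
    for i, t in enumerate(two_theta):
        half = max(deg, int(points_per_fwhm * fwhm_of_angle(t) / (2 * step)))
        lo, hi = max(0, i - half), min(len(counts), i + half + 1)
        c = np.polyfit(two_theta[lo:hi] - t, counts[lo:hi], deg)  # local fit
        smoothed[i] = c[-1]   # polynomial value at the centre point
        deriv[i] = c[-2]      # first derivative at the centre point
    return smoothed, deriv

# Example: synthetic pattern on a 0.02 degree step, FWHM growing with angle
two_theta = np.arange(10.0, 90.0, 0.02)
counts = 100 + 50 * np.exp(-((two_theta - 30.0) / 0.1) ** 2)
smoothed, deriv = local_poly_smooth(two_theta, counts,
                                    fwhm_of_angle=lambda t: 0.05 + 0.002 * t)
```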
We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution – a new paradigm in adaptivity.
Several important classes of liability are sensitive to the direction of future mortality trends, and this paper presents some recent developments in fitting smooth models to historical mortality-experience data. We demonstrate the impact these models have on mortality projections, and the resulting impact which these projections have on financial products. We base our work around the Lee-Carter family of models. We find that different model fits, while using the same data and staying within the Lee-Carter family, can change the direction of the mortality projections. The main focus of the paper is to demonstrate the impact of these projections on various financial calculations, and we provide a number of ways of quantifying, both graphically and numerically, the model risk in such calculations. We conclude that the impact of our modelling assumptions is financially material. In short, there is a need for awareness of model risk when assessing longevity-related liabilities, especially for annuities and pensions.
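For reference, the core structure of the Lee-Carter family referred to here is
\[
\log m_{x,t} \;=\; \alpha_x + \beta_x\,\kappa_t + \varepsilon_{x,t},
\]
where \(m_{x,t}\) is the central death rate at age \(x\) in year \(t\), \(\alpha_x\) is the average age profile of log mortality, \(\kappa_t\) is the period index that is projected forward (typically as a random walk with drift), and \(\beta_x\) measures the age-specific sensitivity to \(\kappa_t\); the model fits discussed in the paper differ in how this structure is fitted and smoothed.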
The forecasting of the future mortality of the very old presents additional challenges since data quality can be poor at such ages. We consider a two-factor model for stochastic mortality, proposed by Cairns, Blake and Dowd, which is particularly well suited to forecasting at very high ages. We consider an extension to their model which improves fit and also allows forecasting at these high ages. We illustrate our methods with data from the Continuous Mortality Investigation.
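For reference, the two-factor model referred to here is usually written as
\[
\operatorname{logit} q(t,x) \;=\; \kappa_t^{(1)} + \kappa_t^{(2)}\,(x - \bar{x}),
\]
where \(q(t,x)\) is the probability that an individual aged \(x\) at time \(t\) dies within one year, \(\bar{x}\) is the mean age over the fitting range, and the period indices \((\kappa_t^{(1)}, \kappa_t^{(2)})\) are projected as a bivariate random walk with drift; the extension considered in the paper builds on this specification.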
In the light of recent judgments by the courts, there are areas where the interpretation of Policyholders' Reasonable Expectations (PRE) by actuaries may need to be reassessed. Furthermore, the discussion paper on the exercise of discretion expected from the Financial Services Authority (FSA), as part of its review of with-profits business, is likely to raise wider issues.
The time is therefore right for actuaries to have the opportunity to debate how PRE should be interpreted in the future. This paper is presented as a catalyst to enable that debate to happen, and the authors have set out their own views on some of the key issues.
The paper discusses certain areas where the interpretation of PRE adopted by Appointed Actuaries in the past may no longer be consistent with recent court judgments. Following that discussion, the actuarial profession should attempt to establish a revised interpretation of PRE, in order to provide greater assistance to Appointed Actuaries currently advising on with-profits business.
Mortality data are often classified by age at death and year of death. This classification results in a heterogeneous risk set and this can cause problems for the estimation and forecasting of mortality. In the modelling of such data, we replace the classical assumption that the numbers of claims follow the Poisson distribution with the weaker assumption that the numbers of claims have a variance proportional to the mean. The constant of proportionality is known as the dispersion parameter and it enables us to allow for heterogeneity; in the case of insurance data the dispersion parameter also allows for the presence of duplicates in a portfolio. We use both the quasi-likelihood and the extended quasi-likelihood to estimate models for the smoothing and forecasting of mortality tables jointly with smooth estimates of the dispersion parameters. We present three main applications of our method: first, we show how taking account of dispersion reduces the volatility of a forecast of a mortality table; second, we smooth mortality data by amounts, i.e., when deaths are amounts claimed and the exposed-to-risk are sums assured; third, we present a joint model for mortality by lives and by amounts with the property that forecasts by lives and by amounts are consistent. Our methods are illustrated with data from the Continuous Mortality Investigation.
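The key variance assumption can be written (schematically, in our notation) as
\[
\mathbb{E}[D_{x,t}] = \mu_{x,t}, \qquad \operatorname{Var}(D_{x,t}) = \phi_{x,t}\,\mu_{x,t},
\]
where \(D_{x,t}\) is the observed number (or amount) of claims at age \(x\) in year \(t\) and \(\phi_{x,t}\) is the dispersion parameter; the Poisson assumption corresponds to \(\phi_{x,t} \equiv 1\), while \(\phi_{x,t} > 1\) allows for heterogeneity and, in insurance data, for duplicates.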