1. Introduction
Technological advances, such as medical innovations, have delivered significant improvements in survival and quality of life. Many diseases and epidemics that were fatal throughout human history have recently become curable. A notable example is malaria: Dr. Tu Youyou developed drugs based on artemisinin derived from traditional Chinese herbal medicine, which have led to the survival and improved health of millions of people. In 2015, she was awarded the Nobel Prize in Physiology or Medicine for her discoveries concerning a novel therapy against malaria. In addition, the National Cancer Institute recently found sudden reductions in mortality rates for prostate cancer, which are likely due to effective treatments, screening methods for early diagnosis, and public health programs [Reference Edwards, Brown, Wingo, Howe, Ward, Ries, Schrag, Jamison, Jemal, Wu, Friedman, Harlan, Warren, Anderson and Pickle18]. Moreover, although the coronavirus (COVID-19) pandemic is still ongoing, deaths caused by COVID-19 may gradually disappear in the long run with more effective government interventions, vaccine development, and further enhancement of public health systems.
Similarly, the Advanced Driver Assistance Systems (ADAS) in highly automated vehicles may reduce or even ultimately eliminate accidents by perceiving dangerous situations. The development of highly automated vehicles and the ADAS is ongoing, although it remains a challenge. Highly automated vehicles will minimize the number of accidents, ultimately leading to far fewer claims for insurance companies. In addition, the annual fatality rate per 100 million vehicle miles traveled in the U.S., provided by the National Highway Traffic Safety Administration (NHTSA) [33], has declined gradually from the 1920s to the 2010s, as plotted in Figure 1. The average rate was 18.52 in the 1920s but only 1.13 in the 2010s, a substantial long-run decline. Of course, other positive factors, such as improved infrastructure and safety regulations, may all contribute to this decline. Human longevity has also increased significantly in recent decades, and we can expect to live longer and healthier lives on average in the long run owing to artificial intelligence (AI) advances in health care. AI applications in the automotive industry will likewise change the risk landscape of the insurance industry: they will help avoid accidents and save lives, and consequently insurers will face fewer claims.
In the actuarial literature, there is a plethora of papers aimed at modeling improvements in life expectancy due to systematic factors, notably the Lee–Carter model [Reference Lee and Carter31] and its extensions, which project mortality based on time series. Besides, alternative continuous-time stochastic models for mortality can be found in, for example, [Reference Biffis5,Reference Dahl12,Reference Jang and Ramli26,Reference Jang and Ramli27,Reference Luciano and Vigna32,Reference Schrager36]. The aim of this paper is to develop a specific continuous-time model based on doubly stochastic Poisson processes, or Cox processes [Reference Cox9]. It could serve as a model component for the long-term survivor function capturing greater longevity within a general competing-risks framework, similarly to the survival-analysis literature on a particular failure type of interest, see for example [Reference Fine and Gray19,Reference Gray22,Reference Kalbfleisch and Prentice29, Sect. 8]. More precisely, it is a Cox process with piecewise-constant decreasing intensity: the underlying intensity process is piecewise-constant and jumps downward with random sizes to lower (but still positive) levels at random times, as illustrated in Figure 2. The occurrence time of each innovation is inherently uncertain, so the jump times in the underlying intensity process are random in our model, in contrast to a simpler inhomogeneous Poisson process with a piecewise-constant decreasing intensity function. More generally, our model may be applicable to modeling many other types of events that are gradually disappearing. Recently, alternative intensity functions for Cox processes have been specified; see, for example, [Reference Albrecher, Araujo-Acuna and Beirlant1,Reference Badescu, Lin and Tang2,Reference Dassios, Jang and Zhao14,Reference Jang, Dassios and Zhao25,Reference Selch and Scherer37], just to name a few.
We first obtain the Laplace transform of the ultimate intensity integral in analytic form. For insurance modeling based on a Cox process in general, the integral of the claim intensity process is crucial, as it is linked to the probability generating function (PGF) of the claim numbers. More broadly, the integral of an underlying stochastic process has played an important role in many applications in both finance and insurance. For example, the integral of the interest rate process is used to price zero-coupon bonds [Reference Cox, Ingersoll and Ross11,Reference Heath, Jarrow and Morton23,Reference Jang24,Reference Vasicek38], and the integral of the stock price process is used for pricing Asian-type options [Reference Bayraktar and Xing4,Reference Cai and Kou8,Reference Dassios and Nagaradjasarma15,Reference Dufresne17,Reference Park, Jang and Jang34,Reference Rogers and Shi35]. The integral of the default intensity process is required to obtain the default probability in reduced-form credit risk models [Reference Duffie and Singleton16,Reference Lando30]. The integrated hazard rate is required to derive the survivor function [Reference Dassios and Jang13,Reference Jang and Ramli26] in life insurance. In non-life insurance, Jarrow [Reference Jarrow28] proposed pricing formulas for catastrophe bonds, where the integral of the intensity process is used to derive the probability of no catastrophic event. Under some additional assumptions, other key distributional properties, such as the PGF of the point process, the associated moments and cumulants, and the probability of no more claims beyond a given time point, are derived in analytic form; these are all important for model applications. We then apply our results to calculate the survival probability in life insurance and the reinsurance premium in non-life insurance.
This paper is structured as follows. In Section 2, we define our model of the Cox process with piecewise-constant decreasing intensity and analyze its theoretical distributional properties. In Section 3, we apply our results to calculate the survival probability for life insurance and the stop-loss reinsurance premium for non-life insurance, respectively. Section 4 concludes.
2. A Cox process with piecewise-constant decreasing intensity
In this section, we explain how to construct a Cox process with a piecewise-constant decreasing intensity for modeling gradually disappearing events (e.g., insurance claims of a certain type) in general, and then obtain its key distributional properties.
Let us start with a brief review of the Cox process [Reference Cox9] in general. If $\lambda _{t}$ is the intensity process of a Cox point process $N_t$, it is well known that the PGF of $N_{t}$ is given by
$$\mathrm{E}\left[ \theta ^{N_{t}}\right] =\mathrm{E}\left[ e^{-\left( 1-\theta \right) \int_{0}^{t}\lambda _{s}\,ds}\right],$$
which suggests that the problem of finding the distribution of the point process $N_{t}$ is equivalent to the problem of finding the distribution of the integral of the intensity $\lambda _t$. By convention, we denote the intensity integral process by
$$\Lambda _{t}:=\int_{0}^{t}\lambda _{s}\,ds.$$
In the large family of Cox processes, we can consider various candidates of non-negative stochastic processes for $\lambda _t$, which provide us with great flexibility for modeling event arrivals in practice. For more details about Cox processes, see for example [Reference Basu and Dassios3,Reference Dassios and Jang13], and the books by [Reference Brémaud6,Reference Brémaud7,Reference Cox and Isham10,Reference Grandell20,Reference Grandell21].
To construct a Cox process with piecewise-constant decreasing intensity, we introduce a positive continuous-time stochastic process $X_t$ as a state process for the intensity process $\lambda _t$. That is, $\lambda _t$ is a deterministic function of the state $X_t$. We denote this function by $h(\cdot )$, that is, $\lambda _t=h(X_t)$, and further assume that $h(u)$ is strictly increasing in $u$ on the positive real line. For example, $h(u)$ could be a nonlinear function, such as
$$h(u)=u^{1+c},\quad c>0,$$
where the constant $c$ can be considered as a measure of the nonlinear amplification effect of states on the intensity, for example, the impact of vaccine advances on the fatality rate of COVID-19. In addition, we assume that the state process $X_t$ is piecewise-constant and stochastically decreasing. More precisely, $X_t$ is constant until a random downward jump (i.e., a random drop) potentially occurs at time point $T_i$ (for any given $i=1,2,\ldots$): it jumps to a new (lower) level $Y_i$ if $Y_i< X_{T_{i_{-}}}$, or it simply stays at $X_{T_{i_{-}}}$ if $Y_i>X_{T_{i_{-}}}$, where the jump sizes $\{Y_i\}_{i=1,2,\ldots }$ are independent and identically distributed (i.i.d.) with cumulative distribution function (CDF) $G(y)$ and density function $g(y),y \in \mathbb {R}^{+}$. The interarrival times of the jumps at $\{T_i\}_{i=1,2,\ldots }$ are also i.i.d. with density function $p(t)$, and they are independent of the jump sizes $\{Y_i\}_{i=1,2,\ldots}$. Through the functional transformation $h(\cdot )$, the resulting intensity process $\lambda _t=h(X_t)$ is therefore still piecewise-constant and decreasing, as visualized in Figure 2. Note that the arrival times $\{T_i\}_{i=1,2,\ldots }$ refer to the jump times of the intensity process $\lambda _t$ rather than the event times generated by the point process $N_t$. The evolution of technological developments and breakthroughs, such as AI algorithms developed for health care and the automotive industry, could be modeled in aggregate by this state process $X_t$ as a proxy. The impacts of these technological advances on the health care and automotive industries are measured by the nonlinear function $h(u)$. The intensity process $\lambda _t$ remains constant until a breakthrough occurs, at which point it drops to a new level as the result of this breakthrough.
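As an illustration, the state process and the induced intensity can be simulated directly from this description. The minimal Python sketch below (with a hypothetical helper name `simulate_state_path`) assumes uniform jump sizes on $[0,1]$, exponential interarrival times, and $h(u)=u^{1+c}$, the specification used in the corollaries later in this section.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_state_path(x0, nu, horizon, rng):
    """Simulate the piecewise-constant decreasing state process X_t.

    Jump attempts arrive with exponential(nu) interarrival times; each
    proposes an i.i.d. uniform level Y_i that is accepted only if it lies
    below the current level, so X_t = min(x0, Y_1, ..., Y_m) after m attempts.
    """
    times, levels = [0.0], [x0]
    t, x = 0.0, x0
    while True:
        t += rng.exponential(1.0 / nu)   # next jump-attempt time T_i
        if t >= horizon:
            break
        y = rng.uniform()                # proposed new level Y_i
        if y < x:                        # downward jump accepted
            x = y
            times.append(t)
            levels.append(x)
    return np.array(times), np.array(levels)

# Intensity via the nonlinear transformation h(u) = u^(1+c), c > 0
c = 1.2
times, levels = simulate_state_path(x0=0.9, nu=1.0, horizon=50.0, rng=rng)
intensity = levels ** (1.0 + c)
```

Plotting `intensity` as a step function against `times` reproduces the qualitative shape of Figure 2: a positive, piecewise-constant path that only ever steps downward.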
2.1. Laplace transform of ultimate intensity integral
For convenience, the Laplace transform of the intensity integral at the ultimate time $t\rightarrow \infty$ conditional on $X_0=x>0$ is denoted by
Proposition 2.1. The Laplace transform of the ultimate intensity integral $\Lambda _{\infty }$ conditional on $X_0=x>0$ satisfies
where
Proof. By construction, the interarrival times of the jumps at $\{T_i\}_{i=1,2,\ldots }$ are independent of the jump sizes $\{Y_i\}_{i=1,2,\ldots }$; therefore, $T_1$ and $X_{T_1}$ are independent, and we have
Note that, given the initial state level $X_0=x>0$ and the realization of the first jump size $Y_1=y$, the state process at the first-jump time, $X_{T_1}$, stays at the same level as $X_{T_1^{-}}$ if $y>X_{T_1^{-}}$, or moves down to a new level $y$ if $y< X_{T_1^{-}}$, so we have
Corollary 2.1. If jump sizes follow a uniform distribution on $[0,1]$, then, the Laplace transform of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Setting $g(y) =1$ in (2.3) under the condition $X_0=x \in (0,1)$, the result follows immediately.
Corollary 2.2. If jump sizes follow a uniform distribution on $[0,1]$ and interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p(t) =\nu \,e^{-\nu t}$, then the Laplace transform of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Set $p(t) =\nu e^{-\nu t}$ in (2.4), then we have
or
Differentiating both sides with respect to $x$, we have
or
Given the initial condition $\phi (0)=1$, we can solve this ODE by
and the result follows.
By further specifying the functional form of $h$, we can obtain the exact distribution of $\Lambda _{\infty }$.
Corollary 2.3. If jump sizes follow a uniform distribution on $[0,1]$, interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu \,e^{-\nu t}$ and $h(u)=u^{1+c},\ c> 0$, then the Laplace transform of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
which implies
Proof. Set $h(u)=u^{1+c},\ c>0$, then we have $h^{\prime }(u)= (c+1)u^{c}$. Hence, from (2.5), we have
which is the Laplace transform of a gamma distribution with the shape parameter ${(c+1)}/{c}$ and rate parameter ${\nu }/{x^{c}}$.
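The gamma law of Corollary 2.3 can be checked by Monte Carlo: simulate $\Lambda _{\infty }=\sum _m h(X_{T_m})(T_{m+1}-T_m)$ directly from the construction, truncating after a large number of jump attempts (the tail is negligible since $X_t$ decreases to zero quickly under uniform proposals), and compare the sample mean with the gamma mean $\frac{(c+1)/c}{\nu /x^{c}}$. A sketch, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_lambda_integral(x0, nu, c, rng, n_jumps=200):
    """One (truncated) draw of Lambda_inf = sum_m h(X_{T_m}) (T_{m+1} - T_m),
    with h(u) = u^(1+c), uniform jump-size proposals, and exponential(nu)
    interarrival times between jump attempts."""
    x, total = x0, 0.0
    for _ in range(n_jumps):
        total += x ** (1.0 + c) * rng.exponential(1.0 / nu)  # level * holding time
        y = rng.uniform()          # proposed new level
        if y < x:                  # accepted downward jump
            x = y
    return total

x, nu, c = 0.9, 1.0, 1.2
draws = np.array([sample_lambda_integral(x, nu, c, rng) for _ in range(10000)])
shape, rate = (c + 1.0) / c, nu / x ** c   # gamma parameters from Corollary 2.3
print(draws.mean(), shape / rate)          # Monte Carlo mean vs. gamma mean
```

The two printed values agree up to Monte Carlo error, consistent with the claimed gamma distribution.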
2.2. Probability generating function of event numbers
Let us find the expression for the PGF of $N_\infty$.
Corollary 2.4. If jump sizes follow a uniform distribution on $[0,1]$ and interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu e^{-\nu t}$, then the PGF of $N_{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Based on the relationship between the PGF of $N_{\infty }$ and the Laplace transform of $\Lambda _{\infty }$ as given in (2.1), we set $v=1-\theta$ in (2.5) and hence obtain (2.8) immediately.
Corollary 2.5. If jump sizes follow a uniform distribution on $[0,1]$, interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu e^{-\nu t}$ and $h(u)=u^{1+c}, c>0$, then the PGF of $N_{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
which implies
Proof. Set $v =1-\theta$ in (2.8), then we have
which is the PGF of a negative binomial distribution with parameters ${(c+1)}/{c}$ and ${x^{c}}/{(\nu +x^{c})}$.
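The gamma–Poisson mixture underlying Corollary 2.5 can also be verified numerically: draw $\Lambda _{\infty }$ from the gamma distribution of Corollary 2.3, then draw $N_{\infty }\mid \Lambda _{\infty }\sim \mathrm{Poisson}(\Lambda _{\infty })$, and compare the empirical moments with the mixed-Poisson identities $\mathrm{E}[N_{\infty }]=\mathrm{E}[\Lambda _{\infty }]$ and $\mathrm{Var}[N_{\infty }]=\mathrm{E}[\Lambda _{\infty }]+\mathrm{Var}[\Lambda _{\infty }]$. A sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
x, nu, c = 0.9, 1.0, 1.2
shape, rate = (c + 1.0) / c, nu / x ** c   # gamma parameters of Lambda_inf

# Mixed-Poisson sampling: Lambda_inf ~ Gamma(shape, rate) by Corollary 2.3,
# then N_inf | Lambda_inf ~ Poisson(Lambda_inf), i.e. a negative binomial.
lam = rng.gamma(shape, 1.0 / rate, size=200000)
n = rng.poisson(lam)

# Moments implied by the gamma mixing distribution:
# E[N] = E[Lambda],  Var[N] = E[Lambda] + Var[Lambda]
mean_th = shape / rate
var_th = mean_th + shape / rate ** 2
print(n.mean(), mean_th)
print(n.var(), var_th)
```

Both empirical moments match the theoretical values up to Monte Carlo error, as expected for the negative binomial law.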
2.3. Cumulants and moments of intensity integral
Let us find the expressions for the cumulants and the first two moments of $\Lambda _{\infty }$.
Theorem 2.1. The $m$th cumulant of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Taking the logarithm of the Laplace transform (2.5), we obtain the cumulant generating function
Hence, we have
and the cumulants (2.11) for any $m=1,2,\ldots$.
The expression for any moment of $\Lambda _{\infty }$ given $X_0=x$ can easily be obtained using its cumulants (2.11). For example, we provide the mean and variance as below.
Corollary 2.6. The mean and variance of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ are, respectively, given by
By specifying the form of $h(u),$ we may obtain the mean and variance explicitly.
Corollary 2.7. If $h(u)=u^{1+c},\ c>0$, then the mean and variance of $\Lambda _{\infty }$ conditional on $X_0=x \in (0,1)$ are respectively given by
2.4. Moments of event numbers
Let us find the expressions for the first two moments of $N_{\infty }$.
Corollary 2.8. The mean and variance of $N_{\infty }$ conditional on $X_0=x \in (0,1)$ are respectively given by
Proof. Since
and
the variance of $N_{\infty }$ can be obtained by
Corollary 2.9. If $h(u)=u^{1+c}, c>0$, then the mean and variance of $N_{\infty }$ conditional on $X_0=x \in (0,1)$ are respectively given by
For example, by setting the initial state $x=0.9$ and varying the parameter $c$ for amplification effect, the conditional means and variances of $N_\infty$ for $\nu =0.5,1$ are plotted in Figure 3.
2.5. Probability of last event
As the intensity process is decreasing and eventually approaches zero, the resulting points (i.e., events) gradually disappear, and an interesting problem is the probability of no more events (e.g., insurance claims of a certain type) beyond a given time point $t$, that is,
where $T^{*}$ is the time point at which the last event occurs. In order to find (2.14), let us first derive the CDF of $X_t$ in Lemma 2.1.
Lemma 2.1. If interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu \,e^{-\nu t}$, then the CDF of $X_{t}$ conditional on $X_0=x>0$ is given by
Proof. By construction, $X_{t}$ is piecewise-constant and decreasing, which can be expressed by
where $M_{t}$ is a homogeneous Poisson process of constant rate $\nu$. Given $M_{t}=m$, the CDF of the minimum of $m$ i.i.d. random variables $\{Y_i\}_{i=1,\ldots,m}$ with the CDF $G(\varsigma )$ is $\left [1-\left (1-G(\varsigma )\right )^{m} \right ]$, that is,
Then, we have
where
Note that,
and the result (2.15) follows.
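Lemma 2.1 is easy to check by simulation, since $X_{t}=\min (x,Y_1,\ldots ,Y_{M_t})$ with $M_t\sim \mathrm{Poisson}(\nu t)$. The sketch below assumes uniform jump sizes, so that $G(\varsigma )=\varsigma$ and the proof above gives the closed form $\Pr \{X_t\le \varsigma \}=1-e^{-\nu tG(\varsigma )}$ for $\varsigma <x$ (illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)
x, nu, t = 0.9, 2.0, 3.0

# Simulate X_t = min(x, Y_1, ..., Y_{M_t}) with M_t ~ Poisson(nu * t)
# and uniform jump-size proposals (so G(s) = s on [0, 1]).
m = rng.poisson(nu * t, size=50000)
samples = np.array([min([x] + list(rng.uniform(size=k))) for k in m])

# Closed form implied by the proof: Pr(X_t <= s) = 1 - exp(-nu * t * G(s)), s < x
for s in (0.1, 0.3, 0.6):
    print(s, (samples <= s).mean(), 1.0 - np.exp(-nu * t * s))
```

Note that the simulation automatically captures the point mass at the initial level $x$ (the case where no proposal falls below $x$).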
Based on Lemma 2.1, let us now derive the CDF of $T^{*}$ in Theorem 2.2.
Theorem 2.2. If jump sizes follow a uniform distribution on $[0,1]$ and interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu \,e^{-\nu t}$, then the CDF of $T^{*}$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Set $v =1$ in (2.5), then we have
Hence,
Based on the CDF (2.15) for $X_t$, we have the density of $X_t$ as
and then,
With a uniform distribution on the interval $[0,1]$ for the jump sizes, (2.16) follows immediately.
Corollary 2.10. If jump sizes follow a uniform distribution on $[0,1]$, interarrival times of jumps follow an exponential distribution with parameter $\nu >0$, that is, $p\left ( t\right ) =\nu \,e^{-\nu t}$ and $h(u)=u^{1+c},\ c>0$, then the CDF of $T^{*}$ conditional on $X_0=x \in (0,1)$ is given by
Proof. Based on Theorem 2.2 and the additional assumption of $h(u)=u^{1+c},\ c>0$, we have
We can calculate the probability of no events (e.g., insurance claims of a certain type) beyond time $t$ as given in (2.17) via numerical integration. For example, by setting the parameters $\nu =2,\ c=1.2$ with the initial state $x=0.9$, the associated probabilities are reported in Table 1 and plotted in Figure 4. As time $t$ grows, the intensity level becomes smaller and smaller. The intensity integral then increases only marginally; events may still be generated, but eventually a time point is reached beyond which no further event occurs. Hence, the larger the time $t$, the higher the probability of no more events beyond time $t$.
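As an alternative to numerical integration, this probability can be estimated by Monte Carlo along the lines of the proof of Theorem 2.2: simulate $X_t$, and use the fact that, given $X_t=z$, the remaining integral $\Lambda _{\infty }-\Lambda _{t}$ has the gamma law of Corollary 2.3 restarted from $z$ (by the memoryless property of the exponential interarrival times), so the conditional probability of no further events is its Laplace transform at $1$. A sketch with the parameters of Table 1 ($\nu =2$, $c=1.2$, $x=0.9$), using a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(3)
x, nu, c = 0.9, 2.0, 1.2   # parameters used for Table 1 / Figure 4

def prob_no_more_events(t, n_paths=50000):
    """Monte Carlo estimate of Pr(T* <= t), the probability of no events beyond t.

    Simulate X_t = min(x, Y_1, ..., Y_{M_t}) with M_t ~ Poisson(nu * t); given
    X_t = z, the conditional probability of no further events is the gamma
    Laplace transform at 1, i.e. (1 + z^c / nu)^(-(c+1)/c).
    """
    m = rng.poisson(nu * t, size=n_paths)
    xt = np.array([min([x] + list(rng.uniform(size=k))) for k in m])
    return np.mean((1.0 + xt ** c / nu) ** (-(c + 1.0) / c))

for t in (1.0, 5.0, 20.0):
    print(t, prob_no_more_events(t))
```

The estimates increase toward $1$ in $t$, matching the monotonicity discussed above.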
3. Applications in insurance
In this section, we discuss some potential applications of our model as a component to calculate the survival probability in life insurance and reinsurance premium in non-life insurance, respectively.
3.1. Life insurance
Medical innovation has enabled us to live longer, healthier, and more prosperous lives. Additionally, due to artificial intelligence advances in health care, a much greater longevity would be possible in the long term as $t \rightarrow \infty$, or even within a couple of decades. Hence, we may apply our results to calculate survival probabilities, which are key inputs for life insurance. In general, by setting $v =1$ in (2.3) and solving the equation, we can obtain the ultimate survival probability (or survivor function) due to a particular event type of interest as
where $\tau ^{*}$ is the first jump-arrival time in the point process $N_t$, that is,
which models the first event-arrival time of a particular type. More specifically, if the jump sizes follow a uniform distribution on $[0,1]$, the interarrival times of jumps follow an exponential distribution with parameter $\nu$, that is, $p\left ( t\right ) =\nu \,e^{-\nu t}$, and $h(u)=u^{1+c},\ c>0$, then the survival probability $\Pr \left \{ \tau ^{*}_x = \infty \right \}$ can be obtained immediately by setting $v=1$ in (2.6). For example, by setting the initial state $x=0.9$ and varying the amplification parameter $c$, the survival probabilities for $\nu =0.5,1$ are plotted in Figure 5, and the associated detailed numerical output is reported in Table 2.
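Under this specification, the survival probability is available in closed form: evaluating the gamma Laplace transform of Corollary 2.3 (shape $(c+1)/c$, rate $\nu /x^{c}$) at $v=1$ gives $\Pr \{ \tau ^{*}_x = \infty \} = (1+x^{c}/\nu )^{-(c+1)/c}$. The sketch below computes it over an illustrative grid of $c$ (the exact grid of Table 2 is not reproduced here):

```python
def ultimate_survival_prob(x, nu, c):
    """Pr(tau*_x = inf) = E[exp(-Lambda_inf)]: the gamma Laplace transform
    of Corollary 2.3 (shape (c+1)/c, rate nu/x^c) evaluated at v = 1."""
    return (1.0 + x ** c / nu) ** (-(c + 1.0) / c)

# Survival probabilities for the initial state x = 0.9, varying c and nu
x = 0.9
for nu in (0.5, 1.0):
    for c in (0.5, 1.0, 2.0):
        print(nu, c, round(ultimate_survival_prob(x, nu, c), 4))
```

As one would expect, the survival probability increases both in the amplification parameter $c$ (for $x<1$) and in $\nu$, since a larger $\nu$ makes intensity drops more frequent.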
Note that $\tau ^{*}_x$ is a defective (improper) random variable with a point mass at time $t=\infty$. The survival probability (3.1) corresponds to a particular type of gradually disappearing events (e.g., deaths caused by traffic accidents or deaths caused by COVID-19) rather than to overall events. Essentially, it is a marginal probability function, also well known as the subdistribution for a particular failure type of interest in the survival-analysis literature, see for example [Reference Fine and Gray19,Reference Gray22,Reference Kalbfleisch and Prentice29, Sect. 8]. The Cox process with piecewise-constant decreasing intensity introduced in Section 2 provides a key model component for modeling these types of gradually disappearing events, rather than all events, within a general competing-risks framework. Of course, no life is immortal in reality, and we can simply add another model component acting as the force for all remaining risks (e.g., the force of mortality increases with age, calendar year, or many other factors). This can be done by introducing another random time $\tau ^{*}_{b}$ acting as a competing risk that will eventually produce a death event at time
where $\tau ^{*}_{b}$ may be assumed to follow a simple Poisson process or, more generally, another Cox process with time-varying intensity $b(t)> 0$, where $b(t)$ could depend on years, ages, or many other factors. Therefore, the overall intensity is
where $b(t)$ is the baseline intensity for $\tau ^{*}$, and $\lambda _t$ is the type-specific intensity. Trivially, the ultimate overall survival probability (due to overall events) is zero, that is, $\Pr \left \{ \tau ^{*} = \infty \right \} = 0$. Therefore, we are mainly interested here in the nontrivial ultimate (marginal) survival probability (3.1) associated with events of a particular type, $\tau ^{*}_x$, driven by technological advances in the very long run. In fact, this research motivation is similar to the classical problem of the ultimate ruin probability, which is extensively studied in actuarial mathematics. In this paper, we also deal with an ultimate probability, but within a different context that considers the changing risk landscape of the insurance industry.
3.2. Non-life insurance
The Advanced Driver Assistance Systems (ADAS) in highly automated vehicles may reduce or even eventually eliminate accidents by perceiving dangerous situations. Nearly perfect automated vehicles could be deployed as $t\rightarrow \infty$, or even within a couple of decades, minimizing the number of accidents and leading to fewer and fewer loss claims. Hence, we apply our results to a stop-loss reinsurance contract for a portfolio purely concentrated on the motor insurance business in the long run. Standard non-life insurance contracts are typically underwritten in the short term. However, they are often automatically renewed, and it is worth studying the long-term risk landscape in insurance, similarly to the classical ruin problem in risk theory.
Let $\{Z_{i}\}_{i=1,2,\ldots }$ be the claim amounts, which are assumed to be i.i.d. with CDF $H(z), z>0$. The total loss in excess of the retention limit $K>0$ as $t\rightarrow \infty$ is given by
where $C_{\infty }=\sum _{i=1}^{N_{\infty }}Z_{i}$, and $N_{\infty }$ is the ultimate number of claims. Therefore, the stop-loss reinsurance premium is given by
which is similar to a perpetual call option in finance.
Corollary 3.1. If jump sizes follow a uniform distribution on $[0,1]$, the interarrival times of jumps follow an exponential distribution with parameter $\nu$, that is, $p(t) =\nu \,e^{-\nu t}$ and $h(u)=u^{1+c},\ c>0$, and the claim sizes follow an Erlang distribution $\textsf {Erlang}(\varphi,\beta )$, then the expected ultimate total loss conditional on $X_0=x \in (0,1)$ is given by
and the stop-loss reinsurance premium with the retention limit $K$ is given by
where $\gamma (\cdot,\cdot )$ is the lower incomplete gamma function.
Proof. We assume that the claim sizes follow an Erlang distribution denoted by $\textsf {Erlang}(\varphi,\beta )$, that is,
where $\beta$ is the rate parameter and $\varphi$ is the shape parameter. Note that,
so, $C_{\infty }$ follows an Erlang-mixture distribution with the density function
Then, we have
and
since
and
we have the probability mass function (PMF) of $N_{\infty }$ conditional on $X_0=x$ explicitly as
For example, by setting parameters $c=1.2,\varphi =1,\beta =1$ with the initial state $x=0.9$, we can calculate the expected total loss and stop-loss reinsurance premiums for $\nu =0.5,1$, which are plotted in Figures 6 and 7, respectively. The associated detailed numerical output is reported in Table 3.
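The closed-form premium of Corollary 3.1 can be cross-checked by straightforward Monte Carlo: draw $N_{\infty }$ via the gamma–Poisson mixture of Section 2, sum $N_{\infty }$ i.i.d. Erlang claims, and average the excess over $K$. A sketch with the parameter values used above ($c=1.2$, $\varphi =1$, $\beta =1$, $x=0.9$), using a hypothetical helper name:

```python
import numpy as np

rng = np.random.default_rng(4)

def stop_loss_premium(K, x=0.9, nu=1.0, c=1.2, phi=1, beta=1.0, n_paths=200000):
    """Monte Carlo estimate of the premium E[(C_inf - K)^+], where C_inf is
    the sum of N_inf i.i.d. Erlang(phi, beta) claims and N_inf follows the
    gamma-Poisson (negative binomial) mixture of Section 2."""
    shape, rate = (c + 1.0) / c, nu / x ** c
    lam = rng.gamma(shape, 1.0 / rate, size=n_paths)   # Lambda_inf draws
    n = rng.poisson(lam)                               # N_inf | Lambda_inf
    total = np.zeros(n_paths)
    pos = n > 0
    # A sum of n Erlang(phi, beta) claims is Gamma(n * phi, rate beta)
    total[pos] = rng.gamma(n[pos] * phi, 1.0 / beta)
    return np.mean(np.maximum(total - K, 0.0))

for K in (0.0, 1.0, 2.0):
    print(K, stop_loss_premium(K))
```

At $K=0$ the premium reduces to the expected ultimate total loss $\mathrm{E}[N_{\infty }]\,\varphi /\beta$, which provides a simple sanity check; the premium then decreases in the retention limit $K$, as it should.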
4. Conclusion
A Cox process with piecewise-constant decreasing intensity has been introduced in this paper. The intensity process is a stochastic piecewise-constant decreasing function of time: upon the arrival of a breakthrough event, it may, or may not, be reduced to a new lower level. Our focus is on the long-term properties. We derive the ultimate distributional properties, such as the Laplace transform of the intensity integral process, the probability generating function of the point process, the associated moments and cumulants, and the probability of no more claims beyond a given time point. Using our model as a component within a general competing-risks framework, these results may be applied to calculate the survival probability for life insurance and the reinsurance premium for non-life insurance in the very long run. The model may also be applicable in many other areas for modeling gradually disappearing events, such as corporate defaults in credit risk modeling, trade arrivals in market microstructure, dividend payments by a stock, employment of a certain job type (e.g., typists) in the labor market, and release of particles, as long as the underlying intensity process of the associated event arrivals is piecewise-constant and decreasing. As with the classical ruin problem in insurance, the distributional properties and associated applications over a given finite-time horizon are much more difficult to study analytically; these are proposed for further research.
Acknowledgments
The authors would like to thank the reviewers for very helpful and constructive comments and suggestions. The corresponding author Hongbiao Zhao would like to acknowledge the financial support from the National Natural Science Foundation of China (#71401147) and the research fund provided by the Innovative Research Team of Shanghai University of Finance and Economics (#2020110930) and Shanghai Institute of International Finance and Economics.