
Change of measure in a Heston–Hawkes stochastic volatility model

Published online by Cambridge University Press:  24 July 2024

David R. Baños*
Affiliation:
University of Oslo
Salvador Ortiz-Latorre*
Affiliation:
University of Oslo
Oriol Zamora Font*
Affiliation:
University of Oslo
*Postal address: Department of Mathematics, University of Oslo, P.O. Box 1053 Blindern, N-0316 Oslo, Norway.

Abstract

We consider the stochastic volatility model obtained by adding a compound Hawkes process to the volatility of the well-known Heston model. A Hawkes process is a self-exciting counting process with many applications in mathematical finance, insurance, epidemiology, seismology, and other fields. We prove a general result on the existence of a family of equivalent (local) martingale measures. We apply this result to a particular example where the sizes of the jumps are exponentially distributed. Finally, a practical application to efficient computation of exposures is discussed.

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Valuation of assets and financial derivatives constitutes one of the core subjects of modern financial mathematics. There have been several approaches to asset pricing, all of which can be classified into two larger groups: equilibrium pricing and rational pricing. The latter gives rise to the commonly used methodology of pricing financial instruments by ruling out arbitrage opportunities. As is well known, the absence of arbitrage is closely related to the existence of a probability measure, the so-called risk-neutral measure, here denoted by $\mathbb{Q}$ . Prices of derivatives are expectations under this measure. This connection is known as the fundamental theorem of asset pricing [Reference Delbaen and Schachermayer18].

In contrast, markets evolve in time, and they do so under the so-called real-world measure, here denoted by $\mathbb{P}$ . While we can attempt to model market movements under $\mathbb{P}$ , we also need the dynamics under $\mathbb{Q}$ for pricing purposes. If we are only interested in pricing, then modelling under $\mathbb{Q}$ and calibrating is possible by matching market prices to theoretical ones. However, for asset liability management, investors often need to assess their positions under $\mathbb{P}$ . A common risk management practice is to compute the risk exposure as a way to set economic and regulatory capital levels. Stein [Reference Stein51] shows that exposures computed under the risk-neutral measure are essentially arbitrary. They depend on the choice of numéraire and can be manipulated by choosing a different numéraire. Even when one chooses commonly used numéraires, these exposures can differ by a factor of two or more. Furthermore, a crucial feature when assessing risk exposures is their distribution. While models under $\mathbb{Q}$ may have well-known tractable distributional properties, this need not be the case upon passage to the $\mathbb{P}$ -world and vice versa. For instance, the Vasicek model for interest rates is invariant under a restricted family of measure changes. This is a desired property of the Vasicek model, but it does not need to hold for other models. Another example in which the passage from $\mathbb{P}$ to $\mathbb{Q}$ is relevant is that of certain commodity markets. In such markets the modelling of the convenience yield is of considerable importance because of risks related to the storage of goods. For instance, see [Reference Callegaro, Mazzoran and Sgarra11, Reference Filimonov, Bicchetti, Maystre and Sornette25, Reference Gonzato and Sgarra28] for modelling commodity markets using Hawkes processes. In particular, [Reference Gonzato and Sgarra28] employs a modelling framework with jumps in the volatility term. As well as in the study of commodity markets, the change of measure is also important for the term structure of interest rates. For example, [Reference Bernis, Garcin, Scotti and Sgarra8] includes a marked Hawkes process in the original Heath–Jarrow–Morton setup and investigates the pricing of vanilla fixed-income derivatives.

It is common practice in the literature to assume that such a measure change exists and set up a model under the risk-neutral measure. Nonetheless, such an assumption is not innocuous, and nonsensical results can occur if it is not satisfied; see e.g. the discussion in [Reference Wong and Heyde52] and the references therein. For instance, [Reference Bibby and Sørensen9, Reference Rydberg48] show examples of models for which no equivalent local martingale measure exists. See also [Reference Rydberg47] for conditions to check the existence of equivalent martingale measures under Markovian models without jumps.

From the modelling perspective, we adopt a stochastic volatility model with jumps where the volatility process is itself not Markovian. The non-Markovianity can be justified by the clustering of volatility; see e.g. [Reference Cont16]. It is true that when we enlarge the state space to include the information provided by the intensity process, the model becomes Markovian. However, the underlying intensity process cannot, in most cases, be observed directly from data, and filtering techniques are needed to estimate it; see e.g. [Reference Ceci and Gerardi12]. Some recent works modelling clustering of volatility using Markov models in higher dimensions are [Reference Bates5, Reference Bernis, Brignone, Scotti and Sgarra7, Reference Cont and Kokholm17, Reference Grasselli29, Reference Jaber, Illand and Li32, Reference Jiao, Ma, Scotti and Zhou37, Reference Pacati, Pompa and Renò43, Reference Recchioni, Iori, Tedeschi and Ouellette46, Reference Rømer49].

The literature on non-Markovian stochastic volatility models is vast. Here we mention some works on fractional volatility models, which appear when the fractional Brownian motion is used as a driving noise for the volatility. The main characteristics of such noise are the possibility of modelling short- and long-range dependence due to the fractional decay of its auto-correlation function and the ability to generate trajectories which are Hölder continuous of index different from $1/2$ , corresponding to the case of Brownian motion. Fractional models with long-range dependence were first studied in [Reference Comte and Renault15]. Then [Reference Alòs, León and Vives2] introduced Malliavin calculus to study the asymptotics of the implied volatility in general stochastic volatility models, including fractional models with long- and short-term dependence. The popularity of rough models (short-term dependence models) started with the findings of [Reference Gatheral, Jaisson and Rosenbaum26]. Their name stems from the roughness of the underlying noise in the stochastic evolution of volatility. Several studies in this direction have been carried out; see e.g. [Reference El Euch and Rosenbaum22, Reference El Euch and Rosenbaum23, Reference Gatheral, Jaisson and Rosenbaum26, Reference Jaber, Illand and Li31], to name a few. None of the aforementioned works discusses change of measure. In fact, it is not clear whether the volatility process in the rough Heston model (see e.g. [Reference El Euch and Rosenbaum22, Reference El Euch and Rosenbaum23]) is strictly positive, which is a necessary condition for a proper change of measure to be possible.

The rough Heston model extends the classical Heston model to a model with fractional noise. Moreover, it is proven in [Reference El Euch, Fukasawa and Rosenbaum21, Reference Jaisson and Rosenbaum35, Reference Jaisson and Rosenbaum36] that the rough Heston model can be obtained as a limit of diffusion processes where the noise is a Hawkes process. The latter justifies our choice of model with a self-exciting Hawkes factor.

The inclusion of jumps in the volatility may not be so clear at first glance. Often, researchers include jumps in the stock process in order to capture sudden changes of asset returns; see for instance [Reference Andersen, Benzoni and Lund3, Reference Chernov and Ghysels14, Reference Eraker, Johannes and Polson24]. Classical stochastic volatility models without jumps, such as the Heston model, have the property that returns conditional on volatility are normally distributed. This property fails to explain many features of asset price behaviour. By adding jumps in the returns, one can model sudden changes such as economic crashes, as well as having more freedom in the modelling of distributions of asset returns. Nevertheless, jumps in returns are transient; in other words, a jump in returns today has no impact on the future distribution of returns [Reference Eraker, Johannes and Polson24]. Hence, Markov models for jumps in the asset dynamics, such as Poisson processes, are often used. On the other hand, volatility is highly persistent. If the dynamics is driven by a Brownian motion, then volatility can only increase gradually by a sequence of small normally distributed increments. Self-exciting jumps in the volatility provide a rapidly moving persistent factor. This justifies our use of the Hawkes component in the volatility.

Several works support the presence of jumps in the diffusive volatility. The papers [Reference Bates6, Reference Duffie, Pan and Singleton20, Reference Pan44] provide evidence for the presence of positive jumps in volatility. For example, [Reference Pan44, Section 5.4] contains an empirical study on the higher moments of the volatility process, seeking evidence of jumps in volatility as first conjectured by [Reference Bates6]. The findings of [Reference Pan44] indicate the possibility of jumps (with positive mean jump size) in the stochastic volatility process, or at least fatter-tailed innovations in the volatility process. This justifies the choice of positive self-exciting jumps in our volatility, which we model by compounding a Hawkes process with independent jump sizes. The positivity of jumps also allows us to keep volatility from hitting zero with probability one, but on the other hand, the self-exciting property of the Hawkes process may cause the process to explode. For this reason, we assume a stability property to prevent explosion; see [Reference Jaisson34, Section 3.1.1].

In this work, we look at a Heston-type stochastic volatility model with correlated Brownian motions and add a jump part to the volatility process, namely, a compound Hawkes process with an exponential kernel in the intensity process. In this setting our model can be embedded into a Markovian framework by adding the intensity component, as in e.g. [Reference Duffie, Filipović and Schachermayer19, Reference Duffie, Pan and Singleton20]. In [Reference Duffie, Filipović and Schachermayer19] the authors provide a thorough discussion on the general topic of affine processes and a characterization of regular affine processes. We show that a passage from $\mathbb{P}$ to $\mathbb{Q}$ and vice versa is possible for a rich enough family of probability measures. It is worth noting that our model has three non-tradable noises and one tradable asset, being thus incomplete. This gives rise to non-unique choices of measure change. The proof of existence of equivalent martingale measures for the classical Heston model is conducted in [Reference Wong and Heyde52].

The paper is organized as follows. In Section 2 we present our stochastic volatility model with correlated Brownian noises and a compound Hawkes component in the volatility. We also present some useful results on the existence and positivity of the volatility process. In Section 3 we prove the main results of this paper. First we study the integrability of the exponential of the integrated variance, and then we prove the existence of (local) martingale measures in our model. In Section 4 we discuss a practical application of these results to the efficient computation of risk exposures. Finally, in the appendix, we prove some technical lemmas on the existence of solutions of ordinary differential equations (ODEs) that appear in the proofs of the main results.

2. Stochastic volatility model

Let $T\in \mathbb{R}$ , $T>0$ be a fixed time horizon. On a complete probability space $(\Omega,\mathcal{A},\mathbb{P})$ , we consider a two-dimensional standard Brownian motion $(B,W)=\left\{\left(B_t,W_t\right), t\in[0, T]\right\}$ and its minimally augmented filtration $\mathcal{F}^{(B,W)}=\big\{\mathcal{F}^{(B,W)}_t, t\in[0, T]\big\}$ . On $(\Omega,\mathcal{A},\mathbb{P})$ , we also consider a Hawkes process $N=\{N_t, t\in[0, T]\}$ with stochastic intensity given by

\begin{align*} \lambda_t=\lambda_0+\alpha\int_0^t e^{-\beta(t-s)}dN_s,\end{align*}

or, equivalently,

\begin{align*} d\lambda_t=-\beta(\lambda_t-\lambda_0)dt+\alpha dN_t,\end{align*}

where $\lambda_0>0$ is the initial intensity, $\beta>0$ is the speed of mean reversion, and $\alpha \in (0,\beta)$ is the self-exciting factor. Note that the stability condition

\begin{align*} \alpha\int_0^\infty e^{-\beta s}ds=\frac{\alpha}{\beta}<1\end{align*}

holds. See [Reference Bacry, Mastromatteo and Muzy4, Section 2] and [Reference Jaisson34, Section 3.1.1] for the definition of N. We then consider a sequence of independent and identically distributed (i.i.d.), strictly positive, and integrable random variables $\{J_i\}_{i\geq 1}$ and the compound Hawkes process $L=\{L_t, t\in[0, T]\}$ given by

\begin{align*} L_t=\sum_{i=1}^{N_t}J_i.\end{align*}

We assume that (B, W), N and $\{J_i\}_{i\geq 1}$ are independent of each other. We write $\mathcal{F}^L=\left\{\mathcal{F}^L_t, t\in[0, T]\right\}$ for the minimally augmented filtration generated by L and

\begin{align*} \mathcal{F}=\big\{\mathcal{F}_t=\mathcal{F}_t^{(B,W)}\vee\mathcal{F}_t^L, t\in[0, T]\big\}\end{align*}

for the joint filtration. We assume that $\mathcal{A}=\mathcal{F}_T$ , and we work with $\mathcal{F}$ . Since (B, W) and L are independent processes, (B, W) is also a two-dimensional $(\mathcal{F},\mathbb{P})$ -Brownian motion.

Finally, with all these ingredients, we introduce our stochastic volatility model. We assume that the interest rate is deterministic and constant, equal to r, but a non-constant interest rate can easily be fitted into this framework. The stock price $S=\{S_t, t\in[0, T]\}$ and its variance $v=\{v_t, t\in[0, T]\}$ are given by

(2.1) \begin{align} \frac{dS_t}{S_t} & =\mu_tdt+\sqrt{v_t}\!\left(\sqrt{1-\rho^2} dB_t+\rho dW_t\right), \notag \\ dv_t & =-\kappa\!\left(v_t-\bar{v}\right)dt+\sigma\sqrt{v_t}dW_t+\eta dL_t,\end{align}

where $S_0>0$ is the initial price of the stock, $\mu\,:\, [0, T] \rightarrow \mathbb{R}$ is a measurable and bounded function, $\rho\in({-}1,1)$ is the correlation factor, $v_0>0$ is the initial value of the variance, $\kappa>0$ is the variance’s mean reversion speed, $\bar{v}>0$ is the long-term variance, $\sigma>0$ is the volatility of the variance, and $\eta>0$ is a scaling factor. We assume that the Feller condition $2\kappa\bar{v}\geq\sigma^2$ is satisfied; see [Reference Alfonsi1, Proposition 1.2.15].

Note that our stochastic volatility model is the well-known Heston model but with a compound Hawkes process added in the variance process. The procedure for proving strong existence and pathwise uniqueness of the stochastic differential equation (SDE) in (2.1) is essentially given in [Reference Protter45, Chapter V.10, Theorem 57]. Informally, between jump times, the SDE is just a standard Cox–Ingersoll–Ross (CIR) model with an initial condition that depends on the value of the compound Hawkes after the jump has occurred. Since strong existence and pathwise uniqueness are proven for the SDE that defines the CIR model (see [Reference Alfonsi1, Theorem 1.2.1]), one can properly interlace the solutions of the continuous-path SDE and the jumps as is done in [Reference Protter45, Chapter V.10, Theorem 57] and prove strong existence and pathwise uniqueness for our SDE in (2.1). In particular, this implies that v is a positive process.

Proposition 2.1. Equation (2.1) has a pathwise unique strong solution.

Since the sizes of the jumps of the compound Hawkes process are strictly positive, one expects that our variance is greater than or equal to the Heston variance. We prove this property following the proof of [Reference Karatzas and Shreve38, Chapter 5.2.C, Proposition 2.18]. The procedure is the same as there, but in our case some extra computations will appear because of the jump contribution of the compound Hawkes process. Recall that the variance of the Heston model is strictly positive, because the Feller condition is assumed to hold; see again [Reference Alfonsi1, Proposition 1.2.15]. In particular, we will prove that the process v is also strictly positive.

Proposition 2.2. Let $\widetilde{v}=\{\widetilde{v}_t, t\in[0, T]\}$ be the pathwise unique strong solution of

(2.2) \begin{align} \widetilde{v}_t=v_0-\kappa\int_0^t\!\left(\widetilde{v}_s-\bar{v}\right)ds+\sigma\int_0^t\sqrt{\widetilde{v}_s}dW_s.\end{align}

Then

\begin{align*} \mathbb{P}\!\left(\left\{\omega\in\Omega \,:\, \widetilde{v}_t(\omega)\leq v_t(\omega) \ \forall t\in[0, T]\right\}\right)=1,\end{align*}

where v is the pathwise unique strong solution of (2.1).

Proof. See [Reference Alfonsi1, Theorem 1.2.1] for a reference on the solution of the SDE in (2.2). There exists a strictly decreasing sequence $\{a_n\}_{n=0}^\infty\subset(0,1]$ with $a_0=1$ , $\lim_{n\rightarrow\infty}a_n=0$ , and

\begin{align*} \int_{a_n}^{a_{n-1}}\frac{du}{u}=n,\end{align*}

for every $n\geq1$ . Precisely, $a_n=\exp\!\left({-}\frac{n(n+1)}{2}\right)$ .

For each $n\geq1$ , there exists a continuous function $\rho_n$ on $\mathbb{R}$ with support in $(a_n,a_{n-1})$ such that

(2.3) \begin{align} 0\leq \rho_n(x)\leq\frac{2}{nx}\end{align}

holds for every $x>0$ and $\int_{a_n}^{a_{n-1}}\rho_n(x)dx=1$ .

Then the function

(2.4) \begin{align} \psi_n(x)=\int_0^{|x|}\int_0^y\rho_n(u)dudy\end{align}

is even and twice continuously differentiable, with $|\psi^{\prime}_n(x)|\leq1$ and $\lim_{n\rightarrow\infty}\psi_n(x)=|x|$ for $x\in\mathbb{R}$ . Furthermore, $\{\psi_n\}_{n=1}^\infty$ is non-decreasing.

Next, define the non-decreasing function $\varphi_n(x)=\psi_n(x)\mathbb{1}_{(0,\infty)}(x)$ . Note that

\begin{align*} \lim_{n\rightarrow\infty}\varphi_n(x)=\max\{x,0\}\,=\!:\,x^+. \end{align*}

Define $I_t\,:\!=\,\widetilde{v}_t-v_t$ for $t\in[0, T]$ . Applying Itô’s formula, we get

\begin{align*} \varphi_n(I_t) = \ &\varphi_n(0)+\int_0^t\varphi^{\prime}_n(I_{s-})dI_s +\frac{1}{2}\int_0^t\varphi^{\prime\prime}_n(I_{s-})d[I]_s^{\text{c}} \\ & +\sum_{0<s\leq t}\left[\varphi_n(I_s)-\varphi_n(I_{s-})-\varphi^{\prime}_n(I_{s-})\Delta I_s\right].\end{align*}

Note that

\begin{align*} dI_s&=-\kappa(\widetilde{v}_s-v_s)ds+\sigma\!\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)dW_s-\eta dL_s, \\ d[I]_s&=\sigma^2\!\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2ds+\eta^2d[L]_s.\end{align*}

The latter implies that $d[I]_s^{\text{c}}=\sigma^2\!\left(\sqrt{\vphantom{\sum}\,\widetilde{v}_s}-\sqrt{v_s}\right)^2ds$ and $\Delta I_s=-\Delta v_s=-\eta \Delta L_s.$ Therefore,

\begin{align*} \varphi_n(I_t)= \ &-\kappa\int_0^t\varphi^{\prime}_n(I_s)I_sds+\sigma\int_0^t\varphi^{\prime}_n(I_{s-})\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)dW_s \\ &-\eta\int_0^t\varphi^{\prime}_n(I_{s-})dL_s+\frac{\sigma^2}{2}\int_0^t\varphi^{\prime\prime}_n(I_s)\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2ds \\ &+\sum_{0<s\leq t}\left[\varphi_n(I_s)-\varphi_n(I_{s-})\right]+\eta\sum_{0<s\leq t}\varphi^{\prime}_n(I_{s-})\Delta L_s.\end{align*}

Note that $\eta\int_0^t\varphi^{\prime}_n(I_{s-})dL_s=\eta\sum_{0<s\leq t}\varphi^{\prime}_n(I_{s-})\Delta L_s.$ Since $I_s-I_{s-}=-\eta\Delta L_s\leq0$ and $\varphi_n$ is a non-decreasing function, we have $\sum_{0<s\leq t}\left[\varphi_n(I_s)-\varphi_n(I_{s-})\right] \leq0.$

By (2.3) and (2.4), and using that $|\sqrt{x}-\sqrt{y}|\leq\sqrt{|x-y|}$ for $x,y\geq0$ , we obtain

\begin{align*} \varphi^{\prime\prime}_n(I_s)\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2\leq \frac{2}{n}\frac{\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2}{\widetilde{v}_s-v_s}\leq\frac{2}{n}.\end{align*}

Since $|\varphi^{\prime}_n(x)|\leq 1$ , we can conclude that

(2.5) \begin{align} \varphi_n(I_t)\leq \ &\kappa\int_0^tI_s^+ds+\sigma\int_0^t\varphi^{\prime}_n(I_{s-})\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)dW_s +\frac{\sigma^2 t}{n}.\end{align}

In order to use the zero mean property of the Itô integral,

(2.6) \begin{align} \mathbb{E}\!\left[\int_0^t\varphi^{\prime}_n(I_{s-})\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)dW_s\right]=0,\end{align}

we need to check that

(2.7) \begin{align} \mathbb{E}\!\left[\int_0^t\varphi^{\prime}_n(I_s)^2\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2ds\right]<\infty.\end{align}

Now,

(2.8) \begin{align} \mathbb{E}\!\left[\int_0^t\varphi^{\prime}_n(I_s)^2\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2ds\right]&= \mathbb{E}\!\left[\int_0^t\varphi^{\prime}_n(I_s)^2\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2\mathbb{1}_{\{I_s>0\}}ds\right] \notag\\ & \leq \mathbb{E}\!\left[\int_0^t\left(\sqrt{\widetilde{v}_s}-\sqrt{v_s}\right)^2\mathbb{1}_{\{I_s>0\}}ds\right] \notag\\ & \leq \mathbb{E}\!\left[\int_0^t\widetilde{v}_sds\right] \notag\\ & = \int_0^t\mathbb{E}\!\left[\widetilde{v}_s\right]ds.\end{align}

In [Reference Glasserman and Kim27, Section 2, Equation (2.3)], we see that for $t\in (0,T]$ ,

\begin{align*} \widetilde{v}_t \sim \frac{e^{-\kappa t}v_0}{k(t)}\chi_\delta^{\prime 2}\left(k(t)\right) \ \ \text{with} \ \ k(t)\,:\!=\,\frac{4\kappa v_0 e^{-\kappa t}}{\sigma^2\big(1-e^{-\kappa t}\big)} \ \ \text{and} \ \ \delta\,:\!=\,\frac{4\kappa\bar{v}}{\sigma^2},\end{align*}

where $\chi_\delta^{\prime 2}(k(t))$ denotes a non-central chi-squared random variable with $\delta$ degrees of freedom and non-centrality parameter k(t). Then

\begin{align*}\mathbb{E}\!\left[\widetilde{v}_t\right]=\frac{\sigma^2\big(1-e^{-\kappa t}\big)}{4\kappa}\left(\delta+k(t)\right)=v_0e^{-\kappa t}+\bar{v}\big(1-e^{-\kappa t}\big).\end{align*}

Since $t\mapsto\mathbb{E}\!\left[\widetilde{v}_t\right]$ is a continuous function on [0, T], the integral in (2.8) is finite, which implies that the integral in (2.7) is finite and (2.6) holds. Taking expectations in (2.5), we have

\begin{align*} \mathbb{E}\!\left[\varphi_n(I_t)\right]\leq\kappa\int_0^t\mathbb{E}\!\left[I_s^+\right]ds+\frac{\sigma^2 t}{n}.\end{align*}

Sending n to infinity yields $\mathbb{E}\!\left[I_t^+\right]\leq\kappa\int_0^t\mathbb{E}\!\left[I_s^+\right]ds.$ One can check that

(2.9) \begin{align} \int_0^t\left|\mathbb{E}\!\left[I_s^+\right]\right|ds<\infty.\end{align}

In fact,

$$\int_0^t\left|\mathbb{E}\!\left[I_s^+\right]\right|ds = \int_0^t\mathbb{E}\!\left[I_s^+\right]ds = \int_0^t\mathbb{E}\!\left[I_s\mathbb{1}_{\{I_s>0\}}\right]ds \leq \int_0^t\mathbb{E}\!\left[\widetilde{v}_s\right]ds<\infty,$$

where the last integral is finite, as we have seen before.

Applying a version of Gronwall’s inequality where only the condition (2.9) is required, we get $\mathbb{E}[I_t^+]=0$ for all $t\in[0, T]$ . This means that

\begin{align*} \mathbb{P}\!\left(\{\omega\in\Omega\,:\, \widetilde{v}_t(\omega)\leq v_t(\omega)\}\right)=1 \hspace{1cm} \forall t\in[0, T].\end{align*}

Since the sample paths of $\widetilde{v}$ are continuous and the sample paths of v are càdlàg, we get that

\begin{align*} \mathbb{P}\!\left(\left\{\omega\in\Omega \,:\, \widetilde{v}_t(\omega)\leq v_t(\omega) \ \forall t\in[0, T]\right\}\right)=1.\end{align*}

Corollary 2.1. The variance $v=\{v_t, t\in[0, T]\}$ is a strictly positive process.

Proof. Recall that we have assumed that the Feller condition $2\kappa \bar{v}\geq\sigma^2$ is satisfied. Therefore the process $\widetilde{v}$ defined by (2.2) is strictly positive; see [Reference Alfonsi1, Proposition 1.2.15]. Finally, by Proposition 2.2 we have that

\begin{align*} \mathbb{P}\!\left(\left\{\omega\in\Omega \,:\, \widetilde{v}_t(\omega)\leq v_t(\omega) \ \forall t\in[0, T]\right\}\right)=1.\end{align*}

We conclude that the process v is also strictly positive.
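Proposition 2.2 and Corollary 2.1 can be illustrated numerically. The sketch below uses a simple Euler scheme on a fixed grid, with the Hawkes jumps approximated by Bernoulli draws with probability $\lambda_t\Delta t$ per step and exponentially distributed jump sizes; both the coarse discretization and the parameter values are illustrative assumptions. It simulates the Heston variance $\widetilde{v}$ and the Heston–Hawkes variance v with the same Brownian increments and checks the ordering $\widetilde{v}_t\leq v_t$ along the grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not from the paper); Feller condition 2*kappa*vbar >= sigma^2 holds
kappa, vbar, sigma, eta = 3.0, 0.04, 0.3, 0.1
lambda0, alpha, beta = 1.0, 0.5, 2.0
v0, T, n = 0.04, 1.0, 2000
dt = T / n

v_tilde, v, lam = v0, v0, lambda0        # Heston variance, Heston-Hawkes variance, intensity
ordering_holds = True
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))    # common Brownian increment for both variances
    dN = rng.uniform() < lam * dt        # coarse Bernoulli approximation of the Hawkes jump
    dL = rng.exponential(1.0) if dN else 0.0   # exponentially distributed jump size (assumption)
    v_tilde += -kappa * (v_tilde - vbar) * dt + sigma * np.sqrt(max(v_tilde, 0.0)) * dW
    v       += -kappa * (v - vbar) * dt + sigma * np.sqrt(max(v, 0.0)) * dW + eta * dL
    lam     += -beta * (lam - lambda0) * dt + alpha * dN
    ordering_holds &= (v_tilde <= v + 1e-12)

print("v_tilde_T =", round(v_tilde, 5), " v_T =", round(v, 5),
      " ordering v_tilde <= v held on the grid:", ordering_holds)
```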

3. Risk-neutral probability measures

To prove the existence of a family of risk-neutral probability measures, we follow the classical approach, employing Girsanov’s theorem in connection with Novikov’s condition. Thus, we need to study the integrability of the exponential of the integrated variance, that is, what values $c>0$ , if any, satisfy

\begin{align*} \mathbb{E}\!\left[\exp\!\left(c\int_0^Tv_udu\right)\right]<\infty.\end{align*}

By Corollary 2.1, v is strictly positive and the expectation above is finite for $c\leq0$ , but for our applications it is essential that c can be strictly positive. Looking at the proof of [Reference Wong and Heyde52, Lemma 3.1], one can see that for $c\leq \frac{\kappa^2}{2\sigma^2}$ the following holds:

(3.1) \begin{align} \mathbb{E}\!\left[\exp\!\left(c\int_0^T\widetilde{v}_udu\right)\right]<\infty,\end{align}

where $\widetilde{v}$ is the standard Heston volatility given by

\begin{align*} \widetilde{v}_t=v_0-\kappa\int_0^t\left(\widetilde{v}_s-\bar{v}\right)ds+\sigma\int_0^t\sqrt{\widetilde{v}_s}dW_s.\end{align*}

The procedure for proving that (3.1) holds is to show that

(3.2) \begin{align}\mathbb{E}\!\left[\exp\!\left(c\int_0^T\widetilde{v}_udu\right)\right]\leq\exp\!\left({-}(\kappa \bar{v})\Phi(0)-v_0\psi(0)\right)<\infty,\end{align}

where $\Phi$ and $\psi$ satisfy the following generalized Riccati equations:

(3.3) \begin{align} \psi^{\prime}(t) &=\frac{\sigma^2}{2}\psi^2(t)+\kappa\psi(t)+c, \\ -\Phi^{\prime}(t) & =\psi(t), \notag\\ \psi(T) &=\Phi(T)=0. \notag\end{align}

Because of the jump contribution of the compound Hawkes process, this procedure is more delicate in our model. However, we can obtain a bound similar to that of (3.2), with an additional function that will be the solution of an ODE involving the moment generating function of $J_1$ and the Hawkes parameters $\alpha$ and $\beta$ .

We start by making an assumption on the moment generating function of $J_1$ . We write $M_{J}$ for the moment generating function of $J_1$ , that is, $M_{J}(t)=\mathbb{E}\!\left[\exp\!\left(tJ_1\right)\right]$ . Since $J_1$ is strictly positive, $M_J$ is well defined at least on the interval $({-}\infty,0]$ .

Assumption 3.1. There exists $\epsilon_J>0$ such that $M_J$ is well defined on $({-}\infty,\epsilon_J)$ , and it is the maximal domain in the sense that

\begin{align*} \lim_{t\rightarrow\epsilon_J^-}M_J(t)=\infty.\end{align*}

Since $\epsilon_J>0$ , all positive moments of $J_1$ are finite.

Note that this assumption holds for the gamma distribution, the chi-squared distribution, the uniform distribution, and others.

We begin by studying the functions that will appear in a bound like (3.2) for our variance. To find the ODEs that define those functions, one can consider the process $M=\{M(t), t\in[0, T]\}$ defined by

\begin{align*} M(t)=\exp\!\left(F(t)+G(t)v_t+H(t)\lambda_t+c\int_0^tv_udu\right),\end{align*}

for some unknown functions $F,G,H\colon[0, T]\to\mathbb{R}$ satisfying $F(T)=G(T)=H(T)=0$ . Note that

\begin{align*} M(T)=\exp\!\left(c\int_0^Tv_udu\right).\end{align*}

Hence, $\mathbb{E}[M(T)]$ is exactly the expectation that we want to study. Now, if there exist functions F, G, and H such that M is a local martingale, since it is non-negative, it will be a supermartingale and then

\begin{align*} \mathbb{E}\!\left[\exp\!\left(c\int_0^Tv_udu\right)\right]=\mathbb{E}[M(T)]\leq M(0)=\exp\!\left(F(0)+G(0)v_0+H(0)\lambda_0\right),\end{align*}

where the last expression will be finite as long as F, G, and H exist and are well defined on [0, T].

The generalized Riccati equations for F, G, and H are obtained by applying Itô’s formula to M and equating the drift terms to 0. This is done formally in Proposition 3.1. In the next lemma we study the existence of solutions of the generalized Riccati equations for our model. Note that the ODE for G is slightly different from the ODE for $\psi$ in (3.3); this depends on whether one considers the expectation in (3.1) with the parameter c or $-c$ and on how one defines the process M. The proof of the next lemma is deferred to the appendix, because it is rather technical and based on ODE theory.

Remark 3.1. It is worth pointing out that our model is based on a three-dimensional affine process according to [Reference Duffie, Filipović and Schachermayer19, Reference Duffie, Pan and Singleton20, Reference Keller-Ressel and Mayerhofer40]. However, the results in Lemma 3.1 and Proposition 3.1 cannot be deduced from the general setup of those papers. For instance, the existence of the exponential moment proved in Proposition 3.1 is actually an assumption in [Reference Duffie, Pan and Singleton20, Proposition 1], which is required to prove the (exponential) affine transform formula. Similarly, in [Reference Duffie, Filipović and Schachermayer19, Theorem 2.16(ii)] the authors prove that the exponential moment exists under the assumption that the solutions of some generalized Riccati equations admit an analytic extension. Nevertheless, they do not give conditions to guarantee such an extension of the solutions. That is our contribution in Lemma 3.1 and Proposition 3.1, where we find the generalized Riccati equations for our particular model and study the existence of solutions to them. An analogous situation occurs with [Reference Keller-Ressel and Mayerhofer40, Theorem 2.14(b)].

It is also important to note that Lemma 3.1 is, essentially, based on qualitative ODE theory to study the maximal lifetime of solutions to generalized Riccati ODEs. Similar arguments are used in [Reference Keller-Ressel39] to characterize moment explosions in some affine stochastic volatility models. Even though Proposition 3.1 can be related to moment explosions, the results in [Reference Keller-Ressel39] cannot be straightforwardly applied to prove Proposition 3.1. The reason is that in the setting of [Reference Keller-Ressel39] it is assumed that the variance process is a Markov process, while in our case we need to enlarge it with the intensity of the Hawkes process to obtain a Markov process. Therefore, an additional differential equation appears in our work that needs to be studied. In conclusion, even though Lemma 3.1 involves ODE theory arguments similar to those of [Reference Keller-Ressel39], a detailed investigation is required in our setting.

Lemma 3.1. For $c\leq\frac{\kappa^2}{2\sigma^2}$ , define $D(c)\,:\!=\,\sqrt{\kappa^2-2\sigma^2c}$ ,

$$\Lambda(c)\,:\!=\,\frac{2\eta c\!\left(e^{D(c)T}-1\right)}{D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)T}},$$

and

\begin{align*} c_l\,:\!=\,\sup\left\{c\leq\frac{\kappa^2}{2\sigma^2}\,:\, \Lambda(c)< \epsilon_J \hspace{0.3cm}\text{and}\hspace{0.3cm} M_J\!\left(\Lambda(c)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right) \right\}.\end{align*}

Then $0<c_l\leq\frac{\kappa^2}{2\sigma^2}$ , and for $c< c_l$ , the following hold:

  (i) The ODE

    \begin{align*} G^{\prime}(t) & =-\frac{1}{2}\sigma^2G^2(t)+\kappa G(t)-c, \\ G(T) & = 0 \notag\end{align*}
    has a unique solution in the interval [0, T]. The solution is strictly decreasing and is given by
    \begin{align*} G(t)=\frac{2c\!\left(e^{D(c)(T-t)}-1\right)}{D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)(T-t)}}.\end{align*}
  (ii) The function $t\mapsto M_J(\eta G(t))$ is well defined for $t\in[0, T]$.

  (iii) Define $U\,:\!=\,\sup_{t\in [0, T]}M_J(\eta G(t))$. Then $U=M_J(\eta G(0))$ and

    \begin{align*} 1<U\leq \frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).\end{align*}
  (iv) The ODE

    \begin{align*} H^{\prime}(t) & =\beta H(t)-M_J\!\left(\eta G(t)\right)\exp\!\left(\alpha H(t)\right) +1, \\ H(T) & = 0 \notag\end{align*}
    has a unique solution in [0, T].

Proof. See Lemma A.1 in the appendix.
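Although $c_l$ has no explicit expression in general (see the discussion below), it can be approximated numerically from its definition. The following sketch assumes exponentially distributed jump sizes, so that $M_J(x)=\lambda/(\lambda-x)$ with $\epsilon_J=\lambda$, and uses illustrative parameter values; it evaluates $\Lambda(c)$ on a grid of admissible c and takes the largest value satisfying both conditions.

```python
import numpy as np

# Illustrative parameters (not from the paper). Assume J_1 ~ Exponential(lam_J),
# so that M_J(x) = lam_J / (lam_J - x) for x < eps_J = lam_J.
kappa, sigma, eta, T = 3.0, 0.3, 1.0, 1.0
alpha, beta = 0.5, 2.0
lam_J = 10.0
eps_J = lam_J
M_J = lambda x: lam_J / (lam_J - x)

def Lambda(c):
    D = np.sqrt(kappa**2 - 2.0 * sigma**2 * c)
    return 2.0 * eta * c * (np.exp(D * T) - 1.0) / (D - kappa + (D + kappa) * np.exp(D * T))

bound = (beta / alpha) * np.exp(alpha / beta - 1.0)   # right-hand side of the M_J condition
c_max = kappa**2 / (2.0 * sigma**2)

# Grid approximation of c_l = sup{c <= c_max : Lambda(c) < eps_J and M_J(Lambda(c)) <= bound}
grid = np.linspace(1e-8, c_max * (1.0 - 1e-9), 200_000)
vals = Lambda(grid)
ok = (vals < eps_J) & (M_J(np.minimum(vals, eps_J - 1e-9)) <= bound)
c_l_approx = grid[ok].max()
print(f"kappa^2 / (2 sigma^2) = {c_max:.2f}, approximate c_l = {c_l_approx:.4f}")
```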

Observation 3.1. One could also assume that the domain of $M_J$ is big enough so that the function $t\mapsto M_J(\eta G(t))$ is well defined on the interval [0, T] for any value of $c\leq\frac{\kappa^2}{2\sigma^2}$ . Then one can prove the existence of H by applying the Picard–Lindelöf theorem (see [Reference Hartman30, Chapter II, Theorem 1.1]), which yields the following conditions:

\begin{align*} c\leq\frac{\kappa^2}{2\sigma^2} \ \ \text{and} \ \ \beta+f(U)\alpha\leq \frac{1}{T},\end{align*}

where f(U) is some function of U. Therefore, there would be more admissible values of c, but $\alpha$ and $\beta$ would have to satisfy an inequality involving T, which is quite restrictive; for the sake of generality, we prefer to avoid this.

Note that, a priori, there is no explicit expression for $c_l$ in the previous lemma, because it is not possible to solve the inequalities

\begin{align*} \Lambda(c)< \epsilon_J \hspace{0.5cm}\text{ and }\hspace{0.5cm} M_J\!\left(\Lambda(c)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).\end{align*}

Moreover, $c_l$ depends on $M_J$ , $\eta$ , $\kappa$ , $\sigma$ , T, $\epsilon_J$ , $\alpha$ , and $\beta$ . However, one can get an explicit value for $c_l$ that is suboptimal but does not depend on T. We obtain an expression for that suboptimal value and give some examples.

Corollary 3.1. Define $c_s$ by

\begin{align*} c_s=\min\!\left\{\frac{\kappa\epsilon_J}{2\eta},\frac{\kappa }{2\eta}M_J^{-1}\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Then $0<c_s<c_l$ .

Proof. See Lemma A.1 in the appendix.

Note that we can apply Lemma 3.1 for any $c<c_s$ , because $c_s<c_l$ . We now give some examples of the value $c_s$ .

Example 3.1.

  (i) If $J_1\sim\text{Exponential}(\lambda)$, then

    \begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda}{2\eta}\left(1-\frac{\alpha}{\beta}\exp\!\left(1-\frac{\alpha}{\beta}\right)\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
  (ii) If $J_1\sim\text{Gamma}(\mu,\lambda)$ with $\mu,\lambda>0$ as the shape and the rate, respectively, then

    \begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda }{2\eta}\left(1-\frac{1}{\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)^{1/\mu}}\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
  (iii) If $J_1=j>0$, then

    \begin{align*} c_s=\min\!\left\{\frac{\kappa}{2\eta j}\left(\ln\left(\frac{\beta}{\alpha}\right)+\frac{\alpha}{\beta}-1\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Proof. See Example A.1 in the appendix.
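As a concrete illustration of case (i), take the purely illustrative values $\kappa=3$, $\sigma=0.3$, $\eta=1$, $\lambda=10$, $\alpha=0.5$, and $\beta=2$. Then

\begin{align*} \frac{\kappa\lambda}{2\eta}\left(1-\frac{\alpha}{\beta}\exp\!\left(1-\frac{\alpha}{\beta}\right)\right)=15\left(1-0.25\,e^{0.75}\right)\approx 7.06, \qquad \frac{\kappa^2}{2\sigma^2}=50,\end{align*}

so $c_s\approx 7.06$ in this case.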

We have studied the conditions under which the generalized Riccati equations have a well-defined solution on the interval [0, T]. We now proceed to study the integrability of the exponential of the integrated variance, following the method outlined at the beginning of this section.

Proposition 3.1. Let $c<c_l$ ; then

\begin{align*} \mathbb{E}\!\left[\exp\!\left(c\int_0^Tv_udu\right)\right]<\infty.\end{align*}

Proof. By Corollary 2.1, v is strictly positive and the expectation is finite for $c\leq0$ . We focus on the case when $0<c<c_l$ . We first define the function $f\colon[0, T]\times\mathbb{R}^3\to\mathbb{R}$ by

\begin{align*} f(t,x,y,z)=\exp\!\left(F(t)+G(t)x+H(t)y+cz\right),\end{align*}

where G and H are the solutions of the generalized Riccati equations given in Lemma 3.1, that is,

(3.4) \begin{align} G^{\prime}(t) & =-\frac{1}{2}\sigma^2G^2(t)+\kappa G(t)-c, \\ G(T) & = 0 \notag\end{align}

and

(3.5) \begin{align} H^{\prime}(t) & =\beta H(t)-M_J\!\left(\eta G(t)\right)\exp\!\left(\alpha H(t)\right) +1, \\ H(T) & = 0, \notag\end{align}

and F is given by

(3.6) \begin{align} F^{\prime}(t) & =-\kappa\bar{v}G(t)-\beta\lambda_0H(t), \\ F(T) & =0\notag.\end{align}

Note that with the assumption we have made on c, the functions F, G, and H are well defined on [0, T].

We also define the integrated variance $V_t\,:\!=\,\int_0^tv_udu$ . We let $Y_t\,:\!=\,(t,v_t,\lambda_t,V_t)$ and define the process $M=\{M(t), t\in[0, T]\}$ by $M(t)=f(t,v_t,\lambda_t,V_t)=f(Y_t)$ . Applying Itô’s formula to the process M, we get

\begin{align*} M(t)= \ &M(0)+\int_0^t\partial_tf(Y_{s-})ds+\int_0^t\partial_xf(Y_{s-})dv_s+\int_0^t\partial_y f(Y_{s-})d\lambda_s \\ &+\int_0^t\partial_zf(Y_{s-})dV_s+\frac{1}{2}\int_0^t\partial^2_{xx}f(Y_{s-})d[v]_s^{\text{c}} \\ &+\sum_{0<s\leq t}\left[f(Y_s)-f(Y_{s-})-\partial_xf(Y_{s-})\Delta v_s-\partial_y f(Y_{s-})\Delta\lambda_s\right].\end{align*}

We have used that

\begin{align*} [\lambda]_t & =\alpha^2[N]_t=\alpha^2N_t \implies [\lambda]_t^{\text{c}} =0, \\ [V]_t & = 0, \\ [v,\lambda]_t & =\alpha\eta[L,N]_t=\alpha\eta L_t\implies [v,\lambda]_t^{\text{c}} =0, \\ [v,V]_t & =[\lambda,V]_t =0.\end{align*}

Moreover,

\begin{align*} [v]_t & =\int_0^t\sigma^2v_sds+\eta^2[L]_t \implies [v]_t^{\text{c}}=\int_0^t\sigma^2v_sds, \\ dv_t & =-\kappa(v_t-\bar{v})dt+\sigma\sqrt{v_t}dW_t+\eta dL_t, \\ d\lambda_t & =-\beta(\lambda_t-\lambda_0)dt+\alpha dN_t.\end{align*}

Hence,

\begin{align*} M(t)-M(0) = &\int_0^t\Big[\partial_tf(Y_{s})-\kappa\partial_xf(Y_{s})(v_s-\bar{v})-\beta\partial_y f(Y_{s})(\lambda_s-\lambda_0) \\ &+\partial_zf(Y_{s})v_s+\frac{1}{2}\partial_{xx}^2f(Y_{s})\sigma^2v_s\Big]ds\\ &+\int_0^t\partial_xf(Y_{s-})\sigma\sqrt{v_s}dW_s+\int_0^t\partial_xf(Y_{s-})\eta dL_s+\int_0^t\partial_y f(Y_{s-})\alpha dN_s \\ &+\sum_{0<s\leq t}\left[f(Y_s)-f(Y_{s-})-\partial_xf(Y_{s-})\Delta v_s-\partial_y f(Y_{s-})\Delta \lambda_s\right].\end{align*}

Since $\Delta v_s=\eta\Delta L_s$ and $\Delta \lambda_s=\alpha\Delta N_s$ , we have

\begin{align*} \int_0^t\partial_xf(Y_{s-})\eta dL_s & = \sum_{0<s\leq t}\partial_xf(Y_{s-})\Delta v_s, \\ \int_0^t\partial_y f(Y_{s-})\alpha dN_s & = \sum_{0<s\leq t}\partial_y f(Y_{s-})\Delta \lambda_s,\end{align*}

and we get

\begin{align*} M(t)-M(0) = &\int_0^t\Big[\partial_tf(Y_{s})-\kappa\partial_xf(Y_{s})(v_s-\bar{v})-\beta\partial_y f(Y_{s})(\lambda_s-\lambda_0) \\ &+\partial_zf(Y_{s})v_s+\frac{1}{2}\partial_{xx}^2f(Y_{s})\sigma^2v_s\Big]ds\\ &+\int_0^t\partial_xf(Y_{s-})\sigma\sqrt{v_s}dW_s +\sum_{0<s\leq t}\left[f(Y_s)-f(Y_{s-})\right].\end{align*}

Next, we can write

\begin{align*} \sum_{0<s\leq t}\left[f(Y_s)-f(Y_{s-})\right] & =\sum_{0<s\leq t}\left[f(s,v_{s-}+\Delta v_s, \lambda_{s-}+\Delta \lambda_s,V_s)-f(Y_{s-})\right] \\ & = \sum_{0<s\leq t}\left[f(s,v_{s-}+\eta \Delta L_s, \lambda_{s-}+\alpha\Delta N_s,V_s)-f(Y_{s-})\right] \\ & = \sum_{0<s\leq t} g(s,\Delta L_s,\Delta N_s),\end{align*}

where

\begin{align*} g(s,u_1,u_2)\,:\!=\,f(s,v_{s-}+\eta u_1, \lambda_{s-}+\alpha u_2,V_s)-f(Y_{s-}).\end{align*}

We now define $U_s=(L_s,N_s)$, and for $t\in[0, T]$, $A\in\mathcal{B}\big(\mathbb{R}^2\setminus\{(0,0)\}\big)$, we let

\begin{align*} N^U(t,A)=\#\{0<s\leq t, \Delta U_s\in A\}.\end{align*}

We add and subtract the compensator of the counting measure $N^U$ to split the expression into a local martingale plus a predictable process of finite variation. Note that the compensator of the Hawkes process is given by $\Lambda^N_t=\int_0^t\lambda_udu$ ; see [Reference Bacry, Mastromatteo and Muzy4, Theorem 3]. One can check that the compensator of the compound Hawkes process is given by $\Lambda^L_t=\mathbb{E}[J_1]\int_0^t\lambda_udu$ . Thus,

\begin{align*} \sum_{0<s\leq t}\left[f(Y_s)-f(Y_{s-})\right] & = \int_0^t\int_{(0,\infty)^2}g(s,u_1,u_2)N^U(ds,du) \\ & = \int_0^t\int_{(0,\infty)^2} g(s,u_1,u_2)\left(N^U(ds,du)-\lambda_sP_{J_1}(du_1)\delta_1(du_2)ds\right) \\ &+\int_0^t\int_{(0,\infty)^2} g(s,u_1,u_2)\lambda_sP_{J_1}(du_1)\delta_1(du_2)ds.\end{align*}

Note that

\begin{align*} \int_0^t\int_{(0,\infty)^2} g(s,u_1,u_2)\lambda_sP_{J_1}(du_1)\delta_1(du_2)ds = \int_0^t\int_{(0,\infty)} g(s,u_1,1)\lambda_sP_{J_1}(du_1)ds.\end{align*}

We conclude that

\begin{align*} M(t)-M(0) = &\int_0^t\Big[\partial_tf(Y_{s})-\kappa\partial_xf(Y_{s})(v_s-\bar{v})-\beta\partial_y f(Y_{s})(\lambda_s-\lambda_0) \\ &+\partial_zf(Y_{s})v_s+\frac{1}{2}\partial_{xx}^2f(Y_{s})\sigma^2v_s+\int_{(0,\infty)} g(s,u_1,1)\lambda_sP_{J_1}(du_1)\Big]ds \\ &+\int_0^t\partial_xf(Y_{s-})\sigma\sqrt{v_s}dW_s \\ &+\int_0^t\int_{(0,\infty)} g(s,u_1,1)\left(N^U(ds,du)-\lambda_sP_{J_1}(du_1)ds\right).\end{align*}

Recall that $f(t,x,y,z)=\exp\!\left(F(t)+G(t)x+H(t)y+cz\right)$ ; thus,

\begin{align*} \partial_tf(t,x,y,z) & =\left(F^{\prime}(t)+G^{\prime}(t)x+H^{\prime}(t)y\right)f(t,x,y,z), \\ \partial_xf(t,x,y,z) & =G(t)f(t,x,y,z), \\ \partial_y f(t,x,y,z) & =H(t)f(t,x,y,z), \\ \partial_zf(t,x,y,z)& =cf(t,x,y,z).\end{align*}

Furthermore,

\begin{align*} g(s,u_1,1) & =f(s,v_{s-}+\eta u_1,\lambda_{s-}+\alpha,V_s)-f(s,v_{s-},\lambda_{s-},V_s) \\ & = f(s,v_{s-},\lambda_{s-},V_s)\left[\exp\!\left(\eta u_1G(s)\right)\exp\!\left(\alpha H(s)\right)-1\right].\end{align*}

Therefore,

\begin{align*} \int_{(0,\infty)} g(s,u_1,1)\lambda_sP_{J_1}(du_1) = f(Y_{s-})\lambda_s\!\left[M_J\!\left(\eta G(s)\right)\exp\!\left(\alpha H(s)\right)-1\right].\end{align*}

Using the specific form of the derivatives and the previous result, we see that the drift part of $M(t)-M(0)$ vanishes. That is, we have the following:

  • Coefficient multiplying v in the drift:

    \begin{align*} f(Y_{t})\left[G^{\prime}(t)-\kappa G(t)+\frac{1}{2}G(t)^2\sigma^2+c\right]=0,\end{align*}
    where we have substituted Equation (3.4).
  • Coefficient multiplying $\lambda$ in the drift:

    \begin{align*} f(Y_{t})\left[H^{\prime}(t)-\beta H(t)+M_J\!\left(\eta G(t)\right)\exp\!\left( \alpha H(t)\right)-1\right] =0 ,\end{align*}
    where we have substituted Equation (3.5).
  • Free coefficient in the drift:

    \begin{align*} f(Y_{t})\left[F^{\prime}(t)+\kappa\bar{v}G(t)+\beta\lambda_0H(t)\right]=0,\end{align*}
    where we have substituted Equation (3.6).

Therefore, the process M can be written as

\begin{align*} M(t) = & \ M(0) +\int_0^t\partial_xf(Y_{s-})\sigma\sqrt{v_s}dW_s \\ &+\int_0^t\int_{(0,\infty)} g(s,u_1,1)\left(N^U(ds,du)-\lambda_sP_{J_1}(du_1)ds\right).\end{align*}

We conclude that M is a local martingale. Moreover, since M is non-negative, it is a supermartingale. Then

\begin{align*} \mathbb{E}[M(T)] &=\mathbb{E}\!\left[\exp\!\left(F(T)+G(T)v_T+H(T)\lambda_T+cV_T\right)\right] \\ & = \mathbb{E}\!\left[\exp\!\left(c\int_0^Tv_udu\right)\right] \\ & \leq M(0) =\exp\!\left(F(0)+G(0)v_0+H(0)\lambda_0\right) <\infty.\end{align*}

Note that $F(0),G(0),H(0)<\infty$ , because the functions F, G, and H are well defined on the entire interval [0, T].
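In practice, the bound $M(0)=\exp\!\left(F(0)+G(0)v_0+H(0)\lambda_0\right)$ can be evaluated by integrating the generalized Riccati equations (3.4)–(3.6) backwards from the terminal conditions at T. The sketch below does this with a standard ODE solver, again assuming exponentially distributed jump sizes and illustrative parameter values (c is chosen below the suboptimal threshold $c_s$ of Corollary 3.1 for these values, so that $c<c_l$); it also compares G(0) with the closed form of Lemma 3.1.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper); J_1 ~ Exponential(lam_J)
kappa, vbar, sigma, eta = 3.0, 0.04, 0.3, 1.0
lambda0, alpha, beta = 1.0, 0.5, 2.0
v0, T, lam_J = 0.04, 1.0, 10.0
c = 5.0                                    # chosen below c_s (hence below c_l) for these parameters

M_J = lambda x: lam_J / (lam_J - x)        # moment generating function of J_1

def riccati(t, y):
    G, H, F = y
    dG = -0.5 * sigma**2 * G**2 + kappa * G - c               # Equation (3.4)
    dH = beta * H - M_J(eta * G) * np.exp(alpha * H) + 1.0    # Equation (3.5)
    dF = -kappa * vbar * G - beta * lambda0 * H               # Equation (3.6)
    return [dG, dH, dF]

# Integrate backwards from the terminal conditions G(T) = H(T) = F(T) = 0
sol = solve_ivp(riccati, t_span=(T, 0.0), y0=[0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
G0, H0, F0 = sol.y[:, -1]
bound = np.exp(F0 + G0 * v0 + H0 * lambda0)

# Sanity check against the closed form for G in Lemma 3.1
D = np.sqrt(kappa**2 - 2.0 * sigma**2 * c)
G0_closed = 2.0 * c * (np.exp(D * T) - 1.0) / (D - kappa + (D + kappa) * np.exp(D * T))

print(f"G(0) = {G0:.6f} (closed form {G0_closed:.6f}), H(0) = {H0:.6f}, F(0) = {F0:.6f}")
print(f"Upper bound for E[exp(c * integral of v)]: {bound:.6f}")
```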

We write $\widehat{S}=\{\widehat{S}_t, t\in[0, T]\}$ for the discounted stock price; that is, $\widehat{S}_t=e^{-rt}S_t$ . We prove the existence of a family of equivalent local martingale measures $\mathbb{Q}(a)$ parametrized by a number a. Namely, $\mathbb{Q}(a)\sim\mathbb{P}$ is such that $\widehat{S}$ is an $(\mathcal{F},\mathbb{Q}(a))$ -local martingale.

Theorem 3.1. Let $a\in\mathbb{R}$ and define $\theta_t^{(a)}\,:\!=\,\frac{1}{\sqrt{1-\rho^2}}\left(\frac{\mu_t-r}{\sqrt{v_t}}-a\rho\sqrt{v_t}\right)$ ,

\begin{align*} Y_t^{(a)} &\,:\!=\,\exp\!\left({-}\int_0^t\theta_s^{(a)}dB_s-\frac{1}{2}\int_0^t\big(\theta_s^{(a)}\big)^2ds\right), \\ Z_t^{(a)} &\,:\!=\,\exp\!\left({-}a\int_0^t\sqrt{v_s}dW_s-\frac{1}{2}a^2\int_0^tv_sds\right),\end{align*}

and $X_t^{(a)}\,:\!=\,Y_t^{(a)}Z_t^{(a)}$ . The set

(3.7) \begin{align} \mathcal{E}\,:\!=\,\left\{\mathbb{Q}(a) \hspace{0.2cm} given\ by \hspace{0.2cm} \frac{d\mathbb{Q}(a)}{d\mathbb{P}}=X_T^{(a)} \hspace{0.2cm} with \hspace{0.2cm} |a|<\sqrt{2c_l}\right\}\end{align}

is a set of equivalent local martingale measures.

Proof. Define the process $\big(B^{\mathbb{Q}(a)},W^{\mathbb{Q}(a)}\big)=\big\{\big(B_t^{\mathbb{Q}(a)},W_t^{\mathbb{Q}(a)}\big), t\in[0, T]\big\}$ by

(3.8) \begin{align} dB_t^{\mathbb{Q}(a)}&=dB_t+\theta_t^{(a)}dt, \notag\\ dW_t^{\mathbb{Q}(a)}&=dW_t+a\sqrt{v_t}dt.\end{align}

The dynamics of the stock is now given by

\begin{align*} \frac{dS_t}{S_t}=\left[\mu_t-\sqrt{v_t}\left(\sqrt{1-\rho^2} \theta_t^{(a)}+a\rho \sqrt{v_t}\right)\right]dt+\sqrt{v_t}\left(\sqrt{1-\rho^2} dB_t^{\mathbb{Q}(a)}+\rho dW_t^{\mathbb{Q}(a)}\right).\end{align*}

Note that

\begin{align*} \mu_t-\sqrt{v_t}\left(\sqrt{1-\rho^2} \theta_t^{(a)}+a\rho \sqrt{v_t}\right)=r,\end{align*}

which is a necessary condition for $\mathbb{Q}(a)$ to be an equivalent local martingale measure. The choice of the market price of risk processes $\theta^{(a)}$ and $a\sqrt{v_t}$ is the same as in [Reference Wong and Heyde52, Equations (3.4) and (3.7)]. This choice preserves the standard Heston variance dynamics after the change of measure.
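For completeness, the identity above follows by substituting the definition of $\theta_t^{(a)}$:

\begin{align*} \sqrt{v_t}\left(\sqrt{1-\rho^2}\,\theta_t^{(a)}+a\rho\sqrt{v_t}\right)=\sqrt{v_t}\left(\frac{\mu_t-r}{\sqrt{v_t}}-a\rho\sqrt{v_t}\right)+a\rho v_t=\mu_t-r.\end{align*}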

To apply Girsanov’s theorem we need to check that the process $X^{(a)}$ is an $(\mathcal{F},\mathbb{P})$ -martingale. Since $X^{(a)}$ is a positive $(\mathcal{F},\mathbb{P})$ -local martingale with $X^{(a)}_0=1$ , it is an $(\mathcal{F},\mathbb{P})$ -supermartingale and it is an $(\mathcal{F},\mathbb{P})$ -martingale if and only if

\begin{align*} \mathbb{E}\!\left[X^{(a)}_T\right]=1.\end{align*}

Using the fact that $Z^{(a)}_T$ is $\mathcal{F}_T^{W}\vee\mathcal{F}_T^L$ -measurable, we have

(3.9) \begin{align}\mathbb{E}\!\left[X^{(a)}_T\right]&=\mathbb{E}\!\left[Y^{(a)}_TZ^{(a)}_T\right] \notag\\&=\mathbb{E}\!\left[\mathbb{E}\!\left[Y^{(a)}_TZ^{(a)}_T|\mathcal{F}_T^{W}\vee\mathcal{F}_T^L\right]\right] \notag\\&=\mathbb{E}\!\left[Z^{(a)}_T\mathbb{E}\!\left[Y^{(a)}_T|\mathcal{F}_T^{W}\vee\mathcal{F}_T^L\right]\right].\end{align}

By Corollary 2.1, the variance process v is strictly positive. This implies that $\int_0^T\big(\theta_s^{(a)}\big)^2ds<\infty$ , $\mathbb{P}$ -almost surely. Since $\theta^{(a)}$ is $\big\{\mathcal{F}_t^{W}\vee\mathcal{F}_t^L\big\}_{t\in[0, T]}$ -adapted,

\begin{align*} Y^{(a)}_T|\mathcal{F}_T^{W}\vee\mathcal{F}_T^L\sim\text{Lognormal}\!\left({-}\frac{1}{2}\int_0^T\big(\theta_s^{(a)}\big)^2ds,\int_0^T\big(\theta_s^{(a)}\big)^2ds\right),\end{align*}

and we obtain that $\mathbb{E}\big[Y^{(a)}_T|\mathcal{F}_T^{W}\vee\mathcal{F}_T^L\big]=1.$ Therefore, substituting this in the last expression in (3.9), we obtain $\mathbb{E}\big[X^{(a)}_T\big]=\mathbb{E}\big[Z^{(a)}_T\big]$ and we only need to check that $\mathbb{E}\big[Z^{(a)}_T\big]=1$. Since $|a|<\sqrt{2c_l}$, we have $\frac{1}{2}a^2< c_l$, so we can apply Proposition 3.1 and get that

(3.10) \begin{align} \mathbb{E}\!\left[\exp\!\left(\frac{1}{2}a^2\int_0^Tv_udu\right)\right]<\infty.\end{align}

Hence, Novikov’s condition is satisfied, $Z^{(a)}$ is an $(\mathcal{F},\mathbb{P})$ -martingale, and we conclude that $\mathbb{E}\big[X^{(a)}_T\big]=\mathbb{E}\big[Z^{(a)}_T\big]=1.$

Then $X^{(a)}$ is an $(\mathcal{F},\mathbb{P})$ -martingale, $\mathbb{Q}(a)\sim\mathbb{P}$ is an equivalent probability measure defined by

\begin{align*} \frac{d\mathbb{Q}(a)}{d\mathbb{P}}&=X^{(a)}_T, \\ dX^{(a)}_t&=X^{(a)}_t\!\left[-\theta_t^{(a)}dB_t-a\sqrt{v_t}dW_t\right],\end{align*}

and $\big(B^{\mathbb{Q}(a)},W^{\mathbb{Q}(a)}\big)$ as defined in (3.8) is a two-dimensional standard $(\mathcal{F},\mathbb{Q}(a))$ -Brownian motion. The dynamics of the stock under $\mathbb{Q}(a)$ is given by

\begin{align*} \frac{dS_t}{S_t}=rdt+\sqrt{v_t}\!\left(\sqrt{1-\rho^2} dB_t^{\mathbb{Q}(a)}+\rho dW_t^{\mathbb{Q}(a)}\right).\end{align*}

This implies that the discounted stock $\widehat{S}$ is an $(\mathcal{F},\mathbb{Q}(a))$ -local martingale. Therefore, the set $\mathcal{E}$ defined in (3.7) is a set of equivalent local martingale measures.

Observation 3.2. As pointed out at the beginning of this section, since $\frac{1}{2}a^2$ is multiplying the integrated variance in (3.10), it is important that $c_l$ is strictly positive in Lemma 3.1.

Observation 3.3. The dynamics of the variance under $\mathbb{Q}(a)\in\mathcal{E}$ is given by

(3.11) \begin{align} dv_t & =-\kappa\!\left(v_t-\bar{v}\right)dt+\sigma\sqrt{v_t}\left(dW_t^{\mathbb{Q}(a)}-a\sqrt{v_t}dt\right)+\eta dL_t \notag\\ & =-\left(\kappa\!\left(v_t-\bar{v}\right)+a\sigma v_t\right)dt+\sigma\sqrt{v_t}dW_t^{\mathbb{Q}(a)}+\eta dL_t \notag\\ & =-\kappa^{(a)}\left(v_t-\bar{v}^{(a)}\right)dt+\sigma\sqrt{v_t}dW_t^{\mathbb{Q}(a)}+\eta dL_t,\end{align}

where $\kappa^{(a)}=\kappa+a\sigma$ and $\bar{v}^{(a)}=\frac{\kappa\bar{v}}{\kappa+a\sigma}$.

So far, we have proven that there exists a set of equivalent local martingale measures. However, we need to study when those measures are actually equivalent martingale measures. We prove that under the condition $\rho^2<c_l$ , there exists a subset of $\mathcal{E}$ of equivalent martingale measures. The condition $\rho^2<c_l$ may look quite restrictive. However, an inequality involving the correlation factor $\rho$ also appears in the proof of the existence of equivalent martingale measures in the standard Heston model (see [Reference Wong and Heyde52, Theorem 3.6]).

Theorem 3.2. If $\rho^2<c_l$ , the set

(3.12) \begin{align}\mathcal{E}_m\,:\!=\,\left\{\mathbb{Q}(a)\in\mathcal{E}\,:\, |a|<\min\!\left\{\frac{\sqrt{2c_l}}{2},\sqrt{c_l-\rho^2}\right\}\right\}\end{align}

is a set of equivalent martingale measures.

Proof. Let $\mathbb{Q}(a)\in\mathcal{E}_m\subset\mathcal{E}$ . By Theorem 3.1, $\widehat{S}$ is an $(\mathcal{F},\mathbb{Q}(a))$ -local martingale with the following dynamics:

\begin{align*} \frac{d\widehat{S}_t}{\widehat{S}_t}=\sqrt{v_t}\left(\sqrt{1-\rho^2}dB_t^{\mathbb{Q}(a)}+\rho dW_t^{\mathbb{Q}(a)}\right).\end{align*}

Hence,

\begin{align*} \widehat{S}_t=S_0\exp\!\left(\sqrt{1-\rho^2}\int_0^t\sqrt{v_s}dB_s^{\mathbb{Q}(a)}+\rho\int_0^t\sqrt{v_s}dW_s^{\mathbb{Q}(a)}-\frac{1}{2}\int_0^tv_sds\right).\end{align*}

Since $\widehat{S}$ is a positive $\left(\mathcal{F},\mathbb{Q}(a)\right)$ -local martingale with $\widehat{S}_0=S_0$ , it is an $\left(\mathcal{F},\mathbb{Q}(a)\right)$ -supermartingale and it is an $\left(\mathcal{F},\mathbb{Q}(a)\right)$ -martingale if and only if $\mathbb{E}^{\mathbb{Q}(a)}\big[\widehat{S}_T\big]=S_0.$

As in the proof of Theorem 3.1, we define the processes

\begin{align*} Y_t^{\mathbb{Q}(a)}&:\!=\,\exp\!\left(\sqrt{1-\rho^2}\int_0^t\sqrt{v_s}dB_s^{\mathbb{Q}(a)}-\frac{1-\rho^2}{2}\int_0^tv_sds\right), \\ Z_t^{\mathbb{Q}(a)}&:\!=\,\exp\!\left(\rho\int_0^t\sqrt{v_s}dW_s^{\mathbb{Q}(a)}-\frac{\rho^2}{2}\int_0^tv_sds\right). \end{align*}

Thus, $\widehat{S}_t=S_0Y_t^{\mathbb{Q}(a)} Z_t^{\mathbb{Q}(a)}$ . Using that $Z_T^{\mathbb{Q}(a)}$ is $\mathcal{F}_T^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_T^L$ -measurable, we have

(3.13) \begin{align} \mathbb{E}^{\mathbb{Q}(a)}\big[\widehat{S}_T\big] & =S_0\mathbb{E}^{\mathbb{Q}(a)}\left[Y_T^{\mathbb{Q}(a)} Z_T^{\mathbb{Q}(a)}\right]=S_0\mathbb{E}^ {\mathbb{Q}(a)}\left[\mathbb{E}^{\mathbb{Q}(a)}\left[Y_T^{\mathbb{Q}(a)} Z_T^{\mathbb{Q}(a)}|\mathcal{F}_T^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_T^L\right]\right] \notag \\ & = S_0\mathbb{E}^{\mathbb{Q}(a)}\left[Z_T^{\mathbb{Q}(a)}\mathbb{E}^{\mathbb{Q}(a)}\left[Y_T^{\mathbb{Q}(a)}|\mathcal{F}_T^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_T^L\right]\right].\end{align}

Since $\int_0^Tv_udu<\infty$ , ${\mathbb{Q}(a)}$ -almost surely, and v is $\big\{\mathcal{F}_t^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_t^L\big\}_{t\in[0, T]}$ -adapted,

\begin{align*} Y_T^{\mathbb{Q}(a)}|\mathcal{F}_T^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_T^L\sim\text{Lognormal}\left({-}\frac{1-\rho^2}{2}\int_0^Tv_sds,(1-\rho^2)\int_0^Tv_sds\right),\end{align*}

and we have that $\mathbb{E}^{\mathbb{Q}(a)}\big[Y_T^{\mathbb{Q}(a)}|\mathcal{F}_T^{W^{\mathbb{Q}(a)}}\vee\mathcal{F}_T^L\big]=1.$ Substituting this in the last expression in (3.13), we obtain that $\mathbb{E}^{\mathbb{Q}(a)}\big[\widehat{S}_T\big]=S_0\mathbb{E}^{\mathbb{Q}(a)}\big[Z_T^{\mathbb{Q}(a)}\big],$ and hence we only need to check $\mathbb{E}^{\mathbb{Q}(a)}\big[Z_T^{\mathbb{Q}(a)}\big]=1$ . We will prove that Novikov’s condition holds, that is,

(3.14) \begin{align} \mathbb{E}^{\mathbb{Q}(a)}\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)\right]<\infty.\end{align}

Unlike in the proof of [Reference Wong and Heyde52, Theorem 3.6], here we cannot directly apply Proposition 3.1 with the volatility parameters $\kappa^{(a)}$ and $\bar{v}^{(a)}$ given in (3.11) to prove that the expectation above is finite. One reason is that under ${\mathbb{Q}(a)}$ the Brownian motion $W^{\mathbb{Q}(a)}$ and the compound Hawkes process L are no longer independent, because

\begin{align*} dW_t^{\mathbb{Q}(a)}=dW_t+a\sqrt{v_t}dt;\end{align*}

therefore, the dynamics of v is different under ${\mathbb{Q}(a)}$ . In order to check (3.14) we note that

\begin{align*} \mathbb{E}^{\mathbb{Q}(a)}\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)\right]=\mathbb{E}\!\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)\frac{d\mathbb{Q}(a)}{d\mathbb{P}}\right],\end{align*}

where

\begin{align*} \frac{d\mathbb{Q}(a)}{d\mathbb{P}}=Y_T^{(a)}Z_T^{(a)},\end{align*}

and $Y_T^{(a)}$ and $Z_T^{(a)}$ are given in the statement of Theorem 3.1.

Repeating the same argument as in Theorem 3.1, we can write

\begin{align*} \mathbb{E}^{\mathbb{Q}(a)}\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)\right] &=\mathbb{E}\!\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)Y_T^{(a)}Z_T^{(a)}\right] \\ & = \mathbb{E}\!\left[\mathbb{E}\!\left[ \exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)Y_T^{(a)}Z_T^{(a)}\Big|\mathcal{F}_T^W\vee\mathcal{F}_T^L\right]\right] \\ & = \mathbb{E}\!\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)Z_T^{(a)}\mathbb{E}\!\left[Y_T^{(a)}|\mathcal{F}_T^W\vee\mathcal{F}_T^L\right]\right] \\ & = \mathbb{E}\!\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)Z_T^{(a)}\right] \\ & = \mathbb{E}\!\left[\exp\!\left({-}a\int_0^T\sqrt{v_s}dW_s-\frac{1}{2}\big(a^2-\rho^2\big)\int_0^Tv_sds\right)\right].\end{align*}

Then, adding and subtracting $a^2\int_0^Tv_sds$ in the exponential and applying the Cauchy–Schwarz inequality, we obtain

(3.15) \begin{align} &\mathbb{E}\!\left[\exp\!\left({-}a\int_0^T\sqrt{v_s}dW_s-\frac{1}{2}\big(a^2-\rho^2\big)\int_0^Tv_sds\right)\right] \notag \\ &=\mathbb{E}\!\left[\exp\!\left({-}a\int_0^T\sqrt{v_s}dW_s-a^2\int_0^Tv_sds\right)\exp\!\left(\frac{1}{2}\big(a^2+\rho^2\big)\int_0^Tv_sds\right)\right] \notag\\ &\leq \mathbb{E}\!\left[\exp\!\left({-}2a\int_0^T\sqrt{v_s}dW_s-2a^2\int_0^Tv_sds\right)\right]^{\frac{1}{2}}\mathbb{E}\!\left[\exp\!\left(\big(a^2+\rho^2\big)\int_0^Tv_sds\right)\right]^{\frac{1}{2}}.\end{align}

Note that the first factor in (3.15) is the expectation of a Doléans–Dade exponential. Since $|a|<\frac{\sqrt{2c_l}}{2}$ (recall the choice of a in (3.12)), we have $2a^2<c_l$, so by Proposition 3.1 Novikov's condition holds:

\begin{align*}\mathbb{E}\!\left[\exp\!\left(2a^2\int_0^Tv_sds\right)\right]<\infty.\end{align*}

Therefore, we have that

\begin{align*} \mathbb{E}\!\left[\exp\!\left({-}2a\int_0^T\sqrt{v_s}dW_s-2a^2\int_0^Tv_sds\right)\right]=1.\end{align*}

For the second term in (3.15), we again apply Proposition 3.1. We need $ a^2+\rho^2<c_l,$ which is true because $|a|<\sqrt{c_l-\rho^2}$ . Thus

\begin{align*}\mathbb{E}\!\left[\exp\!\left(\big(a^2+\rho^2\big)\int_0^Tv_sds\right)\right]<\infty\end{align*}

and

\begin{align*} \mathbb{E}^{\mathbb{Q}(a)}\left[\exp\!\left(\frac{\rho^2}{2}\int_0^Tv_udu\right)\right]<\infty.\end{align*}

We conclude that $\mathbb{E}^{\mathbb{Q}(a)}\big[Z_T^{\mathbb{Q}(a)}\big]=1$ , $\mathbb{E}^{\mathbb{Q}(a)}\big[\widehat{S}_T\big]=S_0$ , and $\widehat{S}$ is an $(\mathcal{F},{\mathbb{Q}(a)})$ -martingale. Therefore, $\mathcal{E}_m$ is a set of equivalent martingale measures.

4. Application: efficient computation of risk exposures

A common risk management practice is the computation of exposures, such as potential future exposures, expected exposures, and expected positive exposures, among others. The objective of such computations is to estimate the capital that the firm needs to hold in order to manage its risks and comply with economic regulations. Therefore, computing exposures correctly and efficiently is a fundamental risk management procedure that firms need to deal with.

Stein [Reference Stein51] shows that exposures computed under the risk-neutral measure are essentially arbitrary; they must be calculated under the real-world measure. It is proven in [Reference Stein51] that, under the Black–Scholes model, exposures can differ by a factor of two or more across commonly used numéraires and their corresponding risk-neutral measures.

On the other hand, efficient computation of such exposures is relevant for firms to optimize the time and effort used in those calculations. As explained in [Reference Stein50, Reference Stein51], computing exposures under the real-world measure for a portfolio of derivatives can be computationally expensive. The reason is that to compute exposures on derivative portfolios, one simulates risk factors to the horizon date under the real-world measure, and then the portfolio is repriced under a risk-neutral measure. Essentially, this requires performing a Monte Carlo within a Monte Carlo, which can be extremely time-consuming. Moreover, such exposures are usually computed for a variety of horizon times, making the computations even more costly. This is one of the reasons motivating firms (wrongly) to compute exposures under the risk-neutral measure.

However, if a change of measure exists and the Radon–Nikodym derivative of that change is known, such computations can be done in a far more efficient way. Following the explanation in [Reference Stein51], let V be a portfolio of derivatives and consider a risk exposure of the type

(4.1) \begin{align} R=\mathbb{E}\!\left[Y(V_T)\right]\end{align}

for some function Y. Now, assuming that $\rho^2<c_l$ , let $\mathbb{Q}(a)\in\mathcal{E}_m$ be a risk-neutral measure, and let $\frac{d\mathbb{Q}(a)}{d\mathbb{P}}=X_T^{(a)}$ be as given in Theorem 3.1. Then R can be computed in the following way:

(4.2) \begin{align} R=\mathbb{E}\!\left[Y(V_T)\right]=\mathbb{E}^{\mathbb{Q}(a)}\left[Y(V_T)\frac{d\mathbb{P}}{d\mathbb{Q}(a)}\right]=\mathbb{E}^{\mathbb{Q}(a)}\left[Y(V_T)\frac{1}{X_T^{(a)}}\right].\end{align}

The exposure R can often be calculated much more efficiently using the expression on the right-hand side of (4.2) than using (4.1), because the calculations can be done entirely under $\mathbb{Q}(a)$. Monte Carlo techniques such as least-squares Monte Carlo [Reference Longstaff and Schwartz42], stochastic mesh [Reference Broadie and Glasserman10], or stochastic grid [Reference Jain and Oosterlee33] can be used to compute $V_T$, and the same scenarios can be used to obtain the expression on the right-hand side of (4.2) [Reference Cesari13].
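To make the reweighting in (4.2) concrete, the following is a minimal Python sketch of the estimator. The routines `simulate_under_Q` and `payoff_Y` are hypothetical placeholders (they are not defined in this paper) for, respectively, a scenario generator under $\mathbb{Q}(a)$ that also records the realized Radon–Nikodym derivative $X_T^{(a)}$ along each path, and the exposure function Y.

```python
import numpy as np

def exposure_by_reweighting(simulate_under_Q, payoff_Y, n_paths, seed=0):
    """Estimate R = E[Y(V_T)] under P using only Q(a)-scenarios, as in (4.2).

    `simulate_under_Q(rng)` is a hypothetical routine returning, for one scenario
    drawn under Q(a), the portfolio value V_T and the realized Radon-Nikodym
    derivative X_T = dQ(a)/dP along that scenario.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        V_T, X_T = simulate_under_Q(rng)
        total += payoff_Y(V_T) / X_T  # weight each Q(a)-scenario by dP/dQ(a) = 1/X_T
    return total / n_paths
```

In practice, $V_T$ on each scenario would itself be obtained from one of the regression-based pricers cited above; the point of the sketch is only that a single set of $\mathbb{Q}(a)$-scenarios is reused and reweighted by $1/X_T^{(a)}$.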

To summarize, an important practical application of our results to risk management is the correct and efficient computation of exposures on derivative portfolios, which is possible thanks to the existence of risk-neutral measures and an explicit expression for the Radon–Nikodym derivative.

4.1. Numerical example

We give a numerical simulation in which we quantify the error obtained if, under our stochastic volatility model, the exposures are wrongly computed under a risk-neutral measure. The objective is to illustrate the importance of computing exposures under the real-world measure. Assume that $\rho^2<c_l$ . Let $\mathbb{Q}(a)\in\mathcal{E}_m$ , and let V be a portfolio consisting of a single call option with strike K and maturity time T, that is,

\begin{align*} V_t=e^{-r(T-t)}\mathbb{E}^{\mathbb{Q}(a)}\left[\max\{S_T-K,0\}|\mathcal{F}_t\right], \hspace{1cm} t\in[0, T].\end{align*}

Following [Reference Stein51], the expected exposure at time $t\in[0, T]$ is defined by $EE(V,t)\,:\!=\,\mathbb{E}\!\left[\max\{V_t,0\}\right]$ . For the sake of simplicity, we compute the expected exposure at maturity time T. Then $EE(V,T)=\mathbb{E}\!\left[\max\{S_T-K,0\}\right]$ , and we define the (wrong) expected exposure under $\mathbb{Q}(a)$ by $EE^{(a)}(V,T)=\mathbb{E}^{\mathbb{Q}(a)}\left[\max\{S_T-K,0\}\right]$ .

We employ an Euler–Maruyama scheme to approximate the processes $\lambda$, v, and S, and then run a Monte Carlo simulation to compute EE(V, T) and $EE^{(a)}(V,T)$ for different strikes and values of a. For simplicity, we assume that the drift $\mu$ of S is constant and that $J_1\sim\text{Exponential}(k)$. The parameters used for the simulation are given in Table 1; they are based on those given in [Reference Kokholm and Stisen41, Table IV].

Table 1. Parameters used for the Monte Carlo simulation.
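Purely as an illustration of the scheme just described, the following Python sketch simulates a discretized version of the model and estimates EE(V, T). The precise dynamics written in the code (CIR variance plus a compound Hawkes term $\eta\,dL_t$, a Hawkes intensity with exponential kernel, and exponentially distributed jump sizes) and every numerical value are assumptions of this sketch; they are placeholders and do not reproduce Table 1 or the figures below.

```python
import numpy as np

# A minimal Euler-Maruyama / Monte Carlo sketch of EE(V, T) = E[max(S_T - K, 0)].
# The dynamics and all numerical values below are assumptions of this illustration.
rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 250, 50_000
mu, S0, K = 0.05, 1.0, 1.4                       # real-world drift, spot, strike
kappa, vbar, sigma, v0 = 2.0, 0.04, 0.3, 0.04    # Heston part of the variance
rho = -0.5                                       # price/variance correlation
eta = 0.1                                        # scale of the compound Hawkes term
alpha, beta, lambda0 = 1.0, 2.0, 1.0             # Hawkes self-excitation, decay, baseline
k_exp = 10.0                                     # jump sizes J_i ~ Exponential(k_exp)

dt = T / n_steps
S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
lam = np.full(n_paths, lambda0)

for _ in range(n_steps):
    dB = rng.standard_normal(n_paths) * np.sqrt(dt)
    dW = rng.standard_normal(n_paths) * np.sqrt(dt)
    # at most one Hawkes jump per step, occurring with probability lam*dt
    jump = rng.random(n_paths) < lam * dt
    J = np.where(jump, rng.exponential(1.0 / k_exp, n_paths), 0.0)

    vp = np.maximum(v, 0.0)                      # full-truncation Euler for the CIR part
    S = S * (1.0 + mu * dt + np.sqrt(vp) * (rho * dB + np.sqrt(1.0 - rho**2) * dW))
    v = v + kappa * (vbar - vp) * dt + sigma * np.sqrt(vp) * dB + eta * J
    lam = lam + beta * (lambda0 - lam) * dt + alpha * jump

print(f"EE(V, T) ~ {np.mean(np.maximum(S - K, 0.0)):.4f}")
```

The (wrong) exposure $EE^{(a)}(V,T)$ would be obtained from the same scheme with the $\mathbb{Q}(a)$-dynamics of $(\lambda, v, S)$ in place of the real-world ones.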

One can check that the stability condition for the Hawkes process, the Feller condition, and the condition $\rho^2<c_l$ are satisfied. In Figure 1 we compare EE(V, T) and $EE^{(0)}(V,T)$ for different strikes, and in Figure 2 we make the same comparison for EE(V, T) and $EE^{(1)}(V,T)$ . In Table 2 we give the average and maximum relative errors between the exposures computed under risk-neutral measures and the correct exposures across the different strikes. Note that the relative errors given in Table 2 are of significant size for risk management purposes. Finally, in Figure 3 we compute $EE^{(a)}(V,T)$ for different values of a and for a fixed strike value $K=1.4$ . We also plot the correct value and the relative error.

Figure 1. Expected exposure computed under $\mathbb{P}$ and under $\mathbb{Q}(0)$ as functions of the strike for a call option.

Table 2. Average and maximum relative errors between exposures computed under risk-neutral measures and exposures computed under the real-world measure.

Figure 2. Expected exposure computed under $\mathbb{P}$ and under $\mathbb{Q}(1)$ as functions of the strike for a call option.

Figure 3. Expected exposure computed under $\mathbb{Q}(a)$ for different values of a and a fixed strike value $K=1.4$ .

It is worth mentioning that it would be of interest to compute EE(V, t) for $t\in[0,T)$ and other types of exposures under our stochastic volatility model for more complex derivative portfolios. Nevertheless, the goal is to show that even with a simple portfolio and exposure one can obtain significantly different exposure values if they are wrongly computed under the risk-neutral measure.

Appendix A. Technical lemmas

We now give the proofs that were postponed in Section 3.

Lemma A.1. For $c\leq\frac{\kappa^2}{2\sigma^2}$ , define $D(c)\,:\!=\,\sqrt{\kappa^2-2\sigma^2c}$ ,

$$\Lambda(c)\,:\!=\,\frac{2\eta c\!\left(e^{D(c)T}-1\right)}{D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)T}},$$

and

\begin{align*} c_l\,:\!=\,\sup\left\{c\leq\frac{\kappa^2}{2\sigma^2}\,:\, \Lambda(c)< \epsilon_J \hspace{0.3cm} \text{and} \hspace{0.3cm} M_J\left(\Lambda(c)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right) \right\}.\end{align*}

Then $0<c_l\leq\frac{\kappa^2}{2\sigma^2}$ and for $c< c_l$ , the following hold:

  (i) The ODE

    (A.1) \begin{align} G^{\prime}(t) & =-\frac{1}{2}\sigma^2G^2(t)+\kappa G(t)-c, \\ G(T) & = 0 \notag\end{align}
    has a unique solution in the interval [0, T]. The solution is strictly decreasing and is given by
    \begin{align*} G(t)=\frac{2c\!\left(e^{D(c)(T-t)}-1\right)}{D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)(T-t)}}.\end{align*}
  (ii) The function $t\mapsto M_J(\eta G(t))$ is well defined for $t\in[0, T]$.

  (iii) Define $U\,:\!=\,\sup_{t\in [0, T]}M_J(\eta G(t))$. Then $U=M_J(\eta G(0))$ and

    \begin{align*} 1<U\leq \frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).\end{align*}
  (iv) The ODE

    (A.2) \begin{align} H^{\prime}(t) & =\beta H(t)-M_J\left(\eta G(t)\right)\exp\!\left(\alpha H(t)\right) +1, \\ H(T) & = 0 \notag\end{align}
    has a unique solution in [0, T].

Proof. We first check that $c_l>0$ . Since

(A.3) \begin{align} \lim_{c\rightarrow0^+}\Lambda(c)=0,\end{align}

there exist positive values of c satisfying the inequality $\Lambda(c)<\epsilon_J$ . Using that $\alpha<\beta$ , one can check that $\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)>1$ . Since the limit in (A.3) holds, $M_J(0)=1$ , and $M_J$ is a continuous function, there exist positive values of c satisfying the inequality $ M_J\left(\Lambda(c)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).$ Therefore, $c_l>0$ . From now on, let $c< c_l$ .

(i) To find the solution we can transform Equation (A.1) to a second-order linear equation with constant coefficients and then apply the standard method to solve it using the fact that $c<\frac{\kappa^2}{2\sigma^2}$ . To see that G is strictly decreasing, one can check that

\begin{align*} G^{\prime}(t)=\frac{-4cD(c)^2e^{D(c)(T-t)}}{\left(D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)(T-t)}\right)^2}<0.\end{align*}
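As a quick numerical sanity check of part (i) (not needed for the proof), one can integrate (A.1) backwards in time with a standard ODE solver and compare with the closed-form expression above; the parameter values in the following Python sketch are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (assumptions, not taken from the paper)
kappa, sigma, T = 2.0, 0.3, 1.0
c = 0.5 * kappa**2 / (2 * sigma**2)              # some c < kappa^2 / (2*sigma^2)
D = np.sqrt(kappa**2 - 2 * sigma**2 * c)

def G_closed(t):
    """Closed-form solution of (A.1) stated in Lemma A.1(i)."""
    e = np.exp(D * (T - t))
    return 2 * c * (e - 1) / (D - kappa + (D + kappa) * e)

# Time reversal: with g(s) := G(T - s), (A.1) becomes
#   g'(s) = 0.5*sigma^2*g(s)^2 - kappa*g(s) + c,   g(0) = 0.
sol = solve_ivp(lambda s, g: 0.5 * sigma**2 * g**2 - kappa * g + c,
                (0.0, T), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)

t_grid = np.linspace(0.0, T, 11)
numerical = sol.sol(T - t_grid)[0]               # numerical G(t) = g(T - t)
print(np.max(np.abs(numerical - G_closed(t_grid))))  # should be of order 1e-9 or smaller
```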

(ii) Since G is strictly decreasing and $\eta>0$ ,

\begin{align*} \sup_{t\in[0, T]}\eta G(t)=\eta G(0)=\frac{2\eta c\!\left(e^{D(c)T}-1\right)}{D(c)-\kappa+(D(c)+\kappa)e^{D(c)T}}=\Lambda(c).\end{align*}

By the definition of $c_l$ we have $\Lambda(c)<\epsilon_J$ . Then $\eta G(t)<\epsilon_J$ for $t\in[0, T]$ and $M_J(\eta G(t))$ is well defined for $t\in[0, T]$ .

(iii) Since G is strictly decreasing and $M_J$ is strictly increasing, we have $M_J(\eta G(0))\geq M_J(\eta G(t))$ for all $t\in[0, T]$ . Therefore, $U=M_J(\eta G(0)).$ Since $\eta G(0)>\eta G(T)=0$ , we have that

\begin{align*} U=M_J(\eta G(0))>M_J(\eta G(T))=M_J(0)=1.\end{align*}

Moreover, by the definition of $c_l$ we have

\begin{align*} U&=M_J(\eta G(0)) \\ &=M_J\left(\frac{2\eta c(e^{D(c)T}-1)}{D(c)-\kappa+(D(c)+\kappa)e^{D(c)T}}\right) \\ &=M_J(\Lambda(c)) \\ &\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).\end{align*}

(iv) Let us make the change of variables $h(t)\,:\!=\,H(T-t)$ . Then the ODE in (A.2) is transformed to

(A.4) \begin{align} h^{\prime}(t) & = f(t,h(t))= M_J\left(\eta G(T-t)\right)\exp\!\left(\alpha h(t)\right)-\beta h(t)-1, \\ h(0) & = 0 \notag\end{align}

where $f(t,x)\,:\!=\,M_J(\eta G(T-t))\exp\!\left(\alpha x\right)-\beta x-1$ . Note that for $t\in[0, T]$ and $x\in\mathbb{R}$ ,

(A.5) \begin{align} f_m(x)\,:\!=\,-\beta x-1 \leq f(t,x)\leq U\exp\!\left(\alpha x\right)-\beta x-1\,=\!:\,f_M(x).\end{align}

First, we focus on the ODE

(A.6) \begin{align} h^{\prime}_M(t) & = f_M(h_M(t))= U\exp\!\left(\alpha h_M(t)\right)-\beta h_M(t)-1, \\ h_M(0) & = 0. \notag\end{align}

Since $f_M$ is continuously differentiable in $\mathbb{R}$ , it is Lipschitz continuous on bounded intervals and there exists a unique local solution for every initial condition; see [Reference Hartman30, Chapter II, Theorem 1.1]. We want $f_M(0)>0$ and the existence of $x_p>0$ such that $f_M(x_p)\leq0$ , which would imply the existence of a stable equilibrium point in the interval $(0,x_p)$ . Therefore, the solution of (A.6) would be well defined on $[0,\infty)$ and we would have $h_M(t)<x_p$ for all $t\in[0,\infty)$ .

Note that $f_M(0)=U-1>0$ , as seen in the previous part.

The minimum of $f_M$ is achieved at $x_{\text{min}}=\frac{1}{\alpha}\ln\!\left(\frac{\beta}{\alpha U}\right)$ . Note that $x_{\text{min}}>0$ if and only if $U<\frac{\beta}{\alpha}$ . Then

\begin{align*} f_M(x_{\text{min}})=\frac{\beta}{\alpha}\left(1-\ln\!\left(\frac{\beta}{\alpha U}\right)\right)-1 \leq 0\end{align*}

if and only if $U\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right),$ which is guaranteed by (iii). Moreover, note that $\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)<\frac{\beta}{\alpha}$, and hence $x_{\text{min}}>0$.

We conclude that the point we were searching for is $x_p=x_{\text{min}}$. This guarantees that $h_M$ is well defined on $[0,\infty)$ and $h_M(t)<x_p$ for all $t\in[0, T]$.

Now we focus on the ODE

\begin{align*} h^{\prime}_m (t) & = f_m(h_m(t))= -\beta h_m(t)-1, \\ h_m(0) & = 0. \notag\end{align*}

The solution is given by $ h_m(t)=\frac{e^{-\beta t}-1}{\beta}.$ The function $h_m$ is well defined on $[0,\infty)$ and $h_m(t)\geq\frac{-1}{\beta}$ for all $t\in[0,\infty)$ .

Consider again the ODE in (A.4), and recall that

\begin{align*} f(t,x)=M_J\left(\eta G(T-t)\right)\exp\!\left(\alpha x\right)-\beta x-1.\end{align*}

Define the open interval $V\,:\!=\,\big({-}\frac{1}{\beta}-1,x_p+1\big)$. Note that $f\colon[0, T] \times V\to\mathbb{R}$ is continuous and Lipschitz in x, uniformly in t, because the exponential function is Lipschitz on bounded intervals. Indeed, for $(t,x_1),(t,x_2)\in[0, T]\times V$ we have

\begin{align*} |f(t,x_1)-f(t,x_2)|&\leq |M_J(\eta G(T-t))||\exp\!\left(\alpha x_1\right)-\exp\!\left(\alpha x_2\right)|+\beta|x_1-x_2| \\ & \leq U|\exp\!\left(\alpha x_1\right)-\exp\!\left(\alpha x_2\right)|+\beta|x_1-x_2| \\ &\leq K_1|x_1-x_2|,\end{align*}

for some constant $K_1>0$ . Then, by the Picard–Lindelöf theorem (see [Reference Hartman30, Chapter II, Theorem 1.1]), there is a unique solution $h\colon I\to V$ for some interval $I\subset[0, T]$ . Moreover, by [Reference Hartman30, Chapter II, Theorem 3.1], only two cases are possible:

  (1) $I=[0, T]$. In this case, there is nothing more to prove.

  (2) $I=[0,\epsilon)$ with $\epsilon\leq T$ and

    (A.7) \begin{align} \lim_{t\rightarrow\epsilon^-}h(t)\in\!\left\{-\frac{1}{\beta}-1,x_p+1\right\}.\end{align}

That is, h approaches the boundary of V as t approaches $\epsilon$ . In this case, as a consequence of (A.5), we will prove that $h_m(t)\leq h(t)\leq h_M(t),$ for all $t\in[0,\epsilon)$ .

First we prove that $h_m(t)\leq h(t)$ for all $t\in[0,\epsilon)$ . Define the function $g(t)\,:\!=\,h_m(t)-h(t)$ for $t\in[0,\epsilon)$ . Note that $g(0)=0$ and we want to prove that $g(t)\leq 0$ for all $t\in[0,\epsilon)$ . Assume there exists $s\in(0,\epsilon)$ such that $g(s)>0$ . Since g is continuous and $g(0)=0$ , there exists $r\in[0,s)$ with $g(r)=0$ and $g(t)>0$ for $t\in(r,s]$ . Now, for $t\in[r,s]$ we have

\begin{align*} g^{\prime}(t)&=h^{\prime}_m (t)-h^{\prime}(t) \\ &=f_m(h_m(t))-f(t,h(t)) \\ &\leq f(t,h_m(t))-f(t,h(t)) \\ &= M_J(\eta G(T-t))\left(\exp\!\left(\alpha h_m(t)\right)-\exp\!\left(\alpha h(t)\right)\right)-\beta\!\left(h_m(t)-h(t)\right) \\ &\leq U\!\left(\exp\!\left(\alpha h_m(t)\right)-\exp\!\left(\alpha h(t)\right)\right)-\beta\!\left(h_m(t)-h(t)\right) \\ &\leq K_2 |h_m(t)-h(t)| = K_2 |g(t)| = K_2 g(t), \end{align*}

for some constant $K_2>0$ , where in the last inequality we have used the fact that since $h_m$ and h are continuous, they are bounded in [r, s], and we can use the Lipschitz property because the exponential function is Lipschitz on bounded intervals. Applying Gronwall’s inequality, we have that $g(s)\leq g(r)e^{K_2(s-r)}=0,$ which is a contradiction. We conclude that $h_m(t)\leq h(t)$ for all $t\in[0,\epsilon)$ . A similar argument can be employed to prove that $h(t)\leq h_M(t)$ for all $t\in[0,\epsilon)$ .

For $t\in[0,\epsilon)$ we have

\begin{align*} h_m(t)\leq h(t)\leq h_M(t) \implies \frac{-1}{\beta}\leq h(t)\leq x_p \implies \frac{-1}{\beta}\leq\lim_{t\rightarrow\epsilon^-}h(t)\leq x_p.\end{align*}

This contradicts (A.7). We conclude that the only possible situation is that h is well defined on [0, T].
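The comparison argument above can also be illustrated numerically: for exponentially distributed jumps, one can solve (A.4) and (A.6) with a standard ODE solver and verify the sandwich $h_m\leq h\leq h_M$ on [0, T]. The following Python sketch does this; all parameter values are illustrative assumptions, chosen so that $c<c_l$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, chosen so that c < c_l); J_1 ~ Exponential(lam_J)
kappa, sigma, eta, alpha, beta, T = 2.0, 0.3, 0.1, 1.0, 2.0, 1.0
lam_J, c = 10.0, 0.1

D = np.sqrt(kappa**2 - 2 * sigma**2 * c)
G = lambda t: 2 * c * (np.exp(D * (T - t)) - 1) / (D - kappa + (D + kappa) * np.exp(D * (T - t)))
M_J = lambda x: lam_J / (lam_J - x)          # moment generating function of J_1
U = M_J(eta * G(0.0))

f   = lambda t, h: M_J(eta * G(T - t)) * np.exp(alpha * h) - beta * h - 1   # right-hand side of (A.4)
f_M = lambda t, h: U * np.exp(alpha * h) - beta * h - 1                     # right-hand side of (A.6)

ts = np.linspace(0.0, T, 201)
h   = solve_ivp(f,   (0.0, T), [0.0], t_eval=ts, rtol=1e-9).y[0]
h_M = solve_ivp(f_M, (0.0, T), [0.0], t_eval=ts, rtol=1e-9).y[0]
h_m = (np.exp(-beta * ts) - 1) / beta        # explicit solution of h_m' = -beta*h_m - 1

print(np.all(h_m <= h + 1e-10), np.all(h <= h_M + 1e-10))  # expected output: True True
```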

Corollary A.1. Define $c_s$ by

\begin{align*} c_s\,:\!=\,\min\!\left\{\frac{\kappa\epsilon_J}{2\eta},\frac{\kappa }{2\eta}M_J^{-1}\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Then $0<c_s<c_l$ .

Proof. We first check that $c_s>0$ . The function $M_J\colon({-}\infty,\epsilon_J)\to(0,\infty)$ is well defined and strictly increasing. Therefore, $M_J^{-1}\colon(0,\infty)\to({-}\infty,\epsilon_J)$ is also a well-defined function, and it is strictly increasing.

Since $\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)>1$ and $M_J(0)=1$ , we have $M_J^{-1}\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)>0,$ and we can conclude that $c_s>0$ .

Recall that $0<c_l\leq\frac{\kappa^2}{2\sigma^2}$ , where

\begin{align*} c_l=\sup\!\left\{c\leq\frac{\kappa^2}{2\sigma^2}\,:\, \Lambda(c)< \epsilon_J \hspace{0.3cm}\text{and}\hspace{0.3cm} M_J\left(\Lambda(c)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right) \right\},\end{align*}

with $D(c)=\sqrt{\kappa^2-2\sigma^2c}$ and $\Lambda(c)=\frac{2\eta c\!\left(e^{D(c)T}-1\right)}{D(c)-\kappa+\left(D(c)+\kappa\right)e^{D(c)T}}$ .

To prove that $c_s<c_l$ we check that $\Lambda(c_s)<\epsilon_J$ and $M_J\left(\Lambda(c_s)\right)\leq\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)$ . The following inequality holds:

(A.8) \begin{align} \Lambda(c)=\frac{2\eta c\!\left(e^{\sqrt{\kappa^2-2\sigma^2c}T}-1\right)}{\sqrt{\kappa^2-2\sigma^2c}-\kappa+\left(\sqrt{\kappa^2-2\sigma^2c}+\kappa\right)e^{\sqrt{\kappa^2-2\sigma^2c}T}}< \frac{2\eta c}{\kappa}.\end{align}

In fact,

\begin{align*}\Lambda(c)=\frac{2\eta c\!\left(e^{\sqrt{\kappa^2-2\sigma^2c}T}-1\right)}{\sqrt{\kappa^2-2\sigma^2c}-\kappa+\left(\sqrt{\kappa^2-2\sigma^2c}+\kappa\right)e^{\sqrt{\kappa^2-2\sigma^2c}T}}< \frac{2\eta c}{\kappa} \\\iff \kappa\!\left(e^{\sqrt{\kappa^2-2\sigma^2c}T}-1\right)<\sqrt{\kappa^2-2\sigma^2c}-\kappa+\left(\sqrt{\kappa^2-2\sigma^2c}+\kappa\right)e^{\sqrt{\kappa^2-2\sigma^2c}T} \\ \iff 0< \sqrt{\kappa^2-2\sigma^2c}\left(1+e^{\sqrt{\kappa^2-2\sigma^2c}T}\right).\end{align*}

Now, by the definition of $c_s$ we have $ \Lambda(c_s)<\frac{2\eta c_s}{\kappa}\leq\epsilon_J,$ and

\begin{align*} M_J\left(\Lambda(c_s)\right)<M_J\left(\frac{2\eta c_s}{\kappa}\right)\leq M_J\left(M_J^{-1}\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)\right)=\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right).\end{align*}

We conclude that $c_s<c_l$ . Note that the inequality in $c_s<c_l$ is strict because the inequality in (A.8) is strict.

Example A.1. (i) If $J_1\sim\text{Exponential}(\lambda)$, then

\begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda}{2\eta}\left(1-\frac{\alpha}{\beta}\exp\!\left(1-\frac{\alpha}{\beta}\right)\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
  (ii) If $J_1\sim\text{Gamma}(\mu,\lambda)$ with $\mu,\lambda>0$ as the shape and the rate, respectively, then

    \begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda }{2\eta}\left(1-\frac{1}{\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)^{1/\mu}}\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
  (iii) If $J_1=j>0$, then

    \begin{align*} c_s=\min\!\left\{\frac{\kappa}{2\eta j}\left(\ln\!\left(\frac{\beta}{\alpha}\right)+\frac{\alpha}{\beta}-1\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Proof. (i) The moment generating function is given by $M_J(t)=\frac{\lambda}{\lambda-t}$ , $t<\lambda.$ Hence, with the notation of Assumption 3.1, $\epsilon_J=\lambda$ , and the inverse of $M_J$ is given by $M_J^{-1}(t)=\lambda\!\left(1-\frac{1}{t}\right)$ , $t>0$ . Then, applying Corollary A.1, we have the following expression for $c_s$ :

\begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda}{2\eta},\frac{\kappa\lambda }{2\eta}\left(1-\frac{1}{\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)}\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Note that since $\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)>1$ , we have

\begin{align*} 0<1-\frac{1}{\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)}<1.\end{align*}

We conclude that

\begin{align*} c_s&=\min\!\left\{\frac{\kappa\lambda}{2\eta},\frac{\kappa\lambda }{2\eta}\left(1-\frac{1}{\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)}\right),\frac{\kappa^2}{2\sigma^2}\right\} \\ & =\min\!\left\{\frac{\kappa\lambda}{2\eta}\left(1-\frac{\alpha}{\beta}\exp\!\left(1-\frac{\alpha}{\beta}\right)\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

(ii) The moment generating function is given by $M_J(t)=\left(1-\frac{t}{\lambda}\right)^{-\mu}$ , $t<\lambda.$ Thus, $\epsilon_J=\lambda$ , and the inverse of $M_J$ is given by $M_J^{-1}(t)=\lambda\!\left(1-t^{-1/\mu}\right)$ , $t>0.$ Then, applying Corollary A.1, we have the following expression for $c_s$ :

\begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda}{2\eta},\frac{\kappa\lambda }{2\eta}\left(1-\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)^{-1/\mu}\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}

Note that since $\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)>1$ , we have

\begin{align*} 0<1-\frac{1}{\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)^{1/\mu}}<1.\end{align*}

We conclude that

\begin{align*} c_s=\min\!\left\{\frac{\kappa\lambda }{2\eta}\left(1-\left(\frac{\beta}{\alpha}\exp\!\left(\frac{\alpha}{\beta}-1\right)\right)^{-1/\mu}\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
(iii) The moment generating function is given by $M_J(t)=e^{tj}$, $t\in\mathbb{R}$. Thus, $\epsilon_J=\infty$, and the inverse of $M_J$ is given by $M_J^{-1}(t)=\ln(t)/j$, $t>0$. Then, applying Corollary A.1, we conclude that

    \begin{align*} c_s=\min\!\left\{\frac{\kappa}{2\eta j}\left(\ln\!\left(\frac{\beta}{\alpha}\right)+\frac{\alpha}{\beta}-1\right),\frac{\kappa^2}{2\sigma^2}\right\}.\end{align*}
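For reference, the closed-form expressions for $c_s$ in Example A.1 are straightforward to implement; the following Python helpers do so. The parameter names follow the paper's notation, and the numerical values in the usage line are illustrative assumptions only.

```python
import numpy as np

# Closed-form c_s from Example A.1 for the three jump-size distributions.
def c_s_exponential(kappa, sigma, eta, alpha, beta, lam):
    # case (i): J_1 ~ Exponential(lam)
    return min(kappa * lam / (2 * eta) * (1 - (alpha / beta) * np.exp(1 - alpha / beta)),
               kappa**2 / (2 * sigma**2))

def c_s_gamma(kappa, sigma, eta, alpha, beta, mu, lam):
    # case (ii): J_1 ~ Gamma(mu, lam) with shape mu and rate lam
    r = (beta / alpha) * np.exp(alpha / beta - 1)
    return min(kappa * lam / (2 * eta) * (1 - r ** (-1.0 / mu)),
               kappa**2 / (2 * sigma**2))

def c_s_constant(kappa, sigma, eta, alpha, beta, j):
    # case (iii): J_1 = j > 0 constant
    return min(kappa / (2 * eta * j) * (np.log(beta / alpha) + alpha / beta - 1),
               kappa**2 / (2 * sigma**2))

print(c_s_exponential(kappa=2.0, sigma=0.3, eta=0.1, alpha=1.0, beta=2.0, lam=10.0))
```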

Acknowledgements

The authors would like to thank an anonymous referee for raising important points about both the theoretical and practical parts of the model, which helped improve the paper.

Funding information

The authors would like to acknowledge financial support from the Research Council of Norway under the SCROLLER project, project number 299897.

Competing interests

No competing interests arose during the preparation or publication of this article.

References

Alfonsi, A. (2015). Affine Diffusions and Related Processes: Simulation, Theory and Applications. Springer, Cham.
Alòs, E., León, J. A. and Vives, J. (2007). On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance Stoch. 11, 571–589.
Andersen, T. G., Benzoni, L. and Lund, J. (2002). An empirical investigation of continuous-time equity return models. J. Finance 57, 1239–1284.
Bacry, E., Mastromatteo, I. and Muzy, J.-F. (2015). Hawkes processes in finance. Market Microstruct. Liq. 1, article no. 1550005.
Bates, D. S. (1996). Jumps and stochastic volatility: exchange rate processes implicit in Deutschemark options. Rev. Financial Studies 9, 69–107.
Bates, D. S. (2000). Post-'87 crash fears in the S&P 500 futures option market. J. Econometrics 94, 181–238.
Bernis, G., Brignone, R., Scotti, S. and Sgarra, C. (2021). A gamma Ornstein–Uhlenbeck model driven by a Hawkes process. Math. Financial Econom. 15, 747–773.
Bernis, G., Garcin, M., Scotti, S. and Sgarra, C. (2023). Interest rates term structure models driven by Hawkes processes. SIAM J. Financial Math. 14, 1062–1079.
Bibby, B. and Sørensen, M. (1996). A hyperbolic diffusion model for stock prices. Finance Stoch. 1, 25–41.
Broadie, M. and Glasserman, P. (2004). A stochastic mesh method for pricing high-dimensional American options. J. Comput. Finance 7, 35–72.
Callegaro, G., Mazzoran, A. and Sgarra, C. (2022). A self-exciting modeling framework for forward prices in power markets. Appl. Stoch. Models Business Industry 38, 27–48.
Ceci, C. and Gerardi, A. (2006). A model for high frequency data under partial information: a filtering approach. Internat. J. Theoret. Appl. Finance 9, 555–576.
Cesari, G. et al. (2009). Modelling, Pricing, and Hedging Counterparty Credit Exposure. Springer, Berlin.
Chernov, M. and Ghysels, E. (2000). A study towards a unified approach to the joint estimation of objective and risk neutral measures for the purpose of options valuation. J. Financial Econom. 56, 407–458.
Comte, F. and Renault, E. (1998). Long memory in continuous-time stochastic volatility models. Math. Finance 8, 291–323.
Cont, R. (2007). Volatility clustering in financial markets: empirical facts and agent-based models. In Long Memory in Economics, Springer, Berlin, pp. 289–309.
Cont, R. and Kokholm, T. (2013). A consistent pricing model for index options and volatility derivatives. Math. Finance 23, 248–274.
Delbaen, F. and Schachermayer, W. (1994). A general version of the fundamental theorem of asset pricing. Math. Ann. 300, 463–520.
Duffie, D., Filipović, D. and Schachermayer, W. (2003). Affine processes and applications in finance. Ann. Appl. Prob. 13, 984–1053.
Duffie, D., Pan, J. and Singleton, K. (2000). Transform analysis and asset pricing for affine jump-diffusions. Econometrica 68, 1343–1376.
El Euch, O., Fukasawa, M. and Rosenbaum, M. (2018). The microstructural foundations of leverage effect and rough volatility. Finance Stoch. 22, 241–280.
El Euch, O. and Rosenbaum, M. (2018). Perfect hedging in rough Heston models. Ann. Appl. Prob. 28, 3813–3856.
El Euch, O. and Rosenbaum, M. (2019). The characteristic function of rough Heston models. Math. Finance 29, 3–38.
Eraker, B., Johannes, M. and Polson, N. (2003). The impact of jumps in volatility and returns. J. Finance 58, 1269–1300.
Filimonov, V., Bicchetti, D., Maystre, N. and Sornette, D. (2014). Quantification of the high level of endogeneity and of structural regime shifts in commodity markets. J. Internat. Money Finance 42, 174–192.
Gatheral, J., Jaisson, T. and Rosenbaum, M. (2018). Volatility is rough. Quant. Finance 18, 933–949.
Glasserman, P. and Kim, K.-K. (2011). Gamma expansion of the Heston stochastic volatility model. Finance Stoch. 15, 267–296.
Gonzato, L. and Sgarra, C. (2021). Self-exciting jumps in the oil market: Bayesian estimation and dynamic hedging. Energy Econom. 99, article no. 105279.
Grasselli, M. (2017). The 4/2 stochastic volatility model: a unified approach for the Heston and the 3/2 model. Math. Finance 27, 1013–1034.
Hartman, P. (1982). Ordinary Differential Equations. Birkhäuser, Boston.
Jaber, E. A., Illand, C. and Li, S. (2022). Joint SPX-VIX calibration with Gaussian polynomial volatility models: deep pricing with quantization hints. Preprint. Available at https://arxiv.org/abs/2212.08297.
Jaber, E. A., Illand, C. and Li, S. (2023). The quintic Ornstein–Uhlenbeck volatility model that jointly calibrates SPX VIX smiles. Preprint. Available at https://arxiv.org/abs/2212.10917.
Jain, S. and Oosterlee, C. W. (2012). Pricing high-dimensional Bermudan options using the stochastic grid method. Internat. J. Comput. Math. 89, 1186–1211.
Jaisson, T. (2015). Market impact as anticipation of the order flow imbalance. Quant. Finance 15, 1123–1135.
Jaisson, T. and Rosenbaum, M. (2015). Limit theorems for nearly unstable Hawkes processes. Ann. Appl. Prob. 25, 600–631.
Jaisson, T. and Rosenbaum, M. (2016). Rough fractional diffusions as scaling limits of nearly unstable heavy tailed Hawkes processes. Ann. Appl. Prob. 26, 2860–2882.
Jiao, Y., Ma, C., Scotti, S. and Zhou, C. (2021). The alpha-Heston stochastic volatility model. Math. Finance 31, 943–978.
Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus. Springer, New York.
Keller-Ressel, M. (2011). Moment explosions and long-term behavior of affine stochastic volatility models. Math. Finance 21, 73–98.
Keller-Ressel, M. and Mayerhofer, E. (2015). Exponential moments of affine processes. Ann. Appl. Prob. 25, 714–752.
Kokholm, T. and Stisen, M. (2015). Joint pricing of VIX and SPX options with stochastic volatility and jump models. J. Risk Finance 16, 27–48.
Longstaff, F. A. and Schwartz, E. S. (2001). Valuing American options by simulation: a simple least-squares approach. Rev. Financial Studies 14, 113–147.
Pacati, C., Pompa, G. and Renò, R. (2018). Smiling twice: the Heston++ model. J. Banking Finance 96, 185–206.
Pan, J. (2002). The jump-risk premia implicit in options: evidence from an integrated time-series study. J. Financial Econom. 63, 3–50.
Protter, P. E. (2005). Stochastic Integration and Differential Equations. Springer, Berlin.
Recchioni, M. C., Iori, G., Tedeschi, G. and Ouellette, M. S. (2021). The complete Gaussian kernel in the multi-factor Heston model: option pricing and implied volatility applications. Europ. J. Operat. Res. 293, 336–360.
Rydberg, T. H. (1997). A note on the existence of unique equivalent martingale measures in a Markovian setting. Finance Stoch. 1, 251–257.
Rydberg, T. H. (1999). Generalized hyperbolic diffusion processes with applications in finance. Math. Finance 9, 183–201.
Rømer, S. E. (2022). Empirical analysis of rough and classical stochastic volatility models to the SPX and VIX markets. Quant. Finance 22, 1805–1838.
Stein, H. J. (2013). Joining risks and rewards. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2368905.
Stein, H. J. (2016). Fixing risk neutral risk measures. Internat. J. Theoret. Appl. Finance 19, article no. 1650021.
Wong, B. and Heyde, C. C. (2006). On changes of measure in stochastic volatility models. J. Appl. Math. Stoch. Anal. 2006, article no. 18130.