
De Finetti’s control problem with a concave bound on the control rate

Published online by Cambridge University Press:  25 January 2024

Félix Locas*
Affiliation:
Université du Québec à Montréal
Jean-François Renaud*
Affiliation:
Université du Québec à Montréal
*Postal address: Département de mathématiques, Université du Québec à Montréal (UQAM), 201 av. Président-Kennedy, Montréal (Québec) H2X 3Y7, Canada.

Abstract

We consider De Finetti’s control problem for absolutely continuous strategies with control rates bounded by a concave function and prove that a generalized mean-reverting strategy is optimal in a Brownian model. In order to solve this problem, we need to deal with a nonlinear Ornstein–Uhlenbeck process. Despite the level of generality of the bound imposed on the rate, an explicit expression for the value function is obtained up to the evaluation of two functions. This optimal control problem has, as special cases, those solved in Jeanblanc-Picqué and Shiryaev (1995) and Renaud and Simard (2021) when the control rate is bounded by a constant and a linear function, respectively.

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Nowadays, De Finetti’s control problem refers to a family of stochastic optimal control problems concerned with the maximization of withdrawals made from a stochastic system. While it has interpretations in models of population dynamics and natural resources, it originates from the field of insurance mathematics. Usually, the performance function is the expected time-discounted value of all withdrawals made up to a first-passage stopping time.

In the original insurance context, it is interpreted as follows: find the optimal way to pay out dividends, taken from the insurance surplus process, until ruin is declared. In this case, the performance function is the expectation of the total amount of discounted dividend payments made up to the time of ruin. The difficulty therefore consists in finding the optimal balance between paying out dividends as much (and as early) as possible and avoiding ruin, so as to maintain those payments in the long run. Similar interpretations can be made for population dynamics (harvesting) and for natural resource extraction; see, e.g., [Reference Alvarez and Shepp3, Reference Ekström and Lindensjö10]. In any case, the overall objective is the identification of the optimal way to withdraw from the system (the optimal strategy) and the derivation of an analytical expression for the optimal value function.

In this paper, we consider absolutely continuous strategies. Except for very recent contributions (see [Reference Albrecher, Azcue and Muler1, Reference Angoshtari, Bayraktar and Young4, Reference Renaud and Simard15]), most of the literature has considered constant bounds on the control rates (see, e.g., [Reference Asmussen and Taksar5, Reference Choulli, Taksar and Zhou8, Reference Jeanblanc-Picqué and Shiryaev11]). Following the direction originally taken by [Reference Alvarez and Shepp3] and later by [Reference Renaud and Simard15], we apply a much more general bound on control rates, namely a concave function of the current level of the system.

1.1. Model and problem formulation

Let $(\Omega, \mathcal F, (\mathcal F_t)_{t \geq 0}, \mathbb{P})$ be a filtered probability space. Fix $\mu, \sigma > 0$ . The state process $X = \{X_t, t \geq 0\}$ is given by

(1.1) \begin{equation} X_t = x + \mu t + \sigma W_t,\end{equation}

where $x \in \mathbb{R}$ and $W = \{W_t, t \geq 0\}$ is an $(\mathcal F_t)$ -adapted standard Brownian motion. For example, the process X can be interpreted as the (uncontrolled) density of a population or the (uncontrolled) surplus process of an insurance company. As alluded to above, we are interested in absolutely continuous control processes. More precisely, a control strategy $\pi$ is characterized by a nonnegative and adapted control rate process $l^\pi = \{l^\pi_t, t \geq 0\}$ yielding the cumulative control process $L_t^\pi = \int_0^t l_s^\pi\,\textrm{d}s$ . Note that $L^\pi = \{L^\pi_t, t \geq 0\}$ is nondecreasing and such that $L^\pi_0=0$ . The corresponding controlled process $U^\pi = \{U^\pi_t, t \geq 0\}$ is then given by $U_t^\pi = X_t - L_t^\pi$ .

In the abovementioned applications (e.g., harvesting and dividend payments), it makes sense to allow for higher rates when the underlying state process is far from its critical level and to allow for a higher (relative) increase in this rate as early as possible. The situation is reminiscent of utility functions in economics. In this direction, let us fix an increasing and concave function $F \colon \mathbb{R} \to \mathbb{R}$ such that $F(0) \geq 0$ . For technical reasons, we assume F is a continuously differentiable Lipschitz function. Finally, a strategy $\pi$ is said to be admissible if its control rate is also such that

(1.2) \begin{equation} 0 \leq l_t^\pi \leq F(U_t^\pi) \end{equation}

for all $0 \leq t \leq \tau_0^\pi$ , where $\tau_0^\pi = \inf\{t > 0 \colon U_t^\pi < 0\}$ is the termination time. The termination level 0 is chosen only for simplicity. Let $\Pi^F$ be the set of all admissible strategies.

From now on, we write $\mathbb{P}_x$ for the probability measure associated with the starting point x and $\mathbb{E}_x$ for the expectation with respect to the measure $\mathbb{P}_x$ . When $x=0$ , we write $\mathbb{P}$ and $\mathbb{E}$ .

Fix a time-preference parameter $q > 0$ and then define the value of a strategy $\pi \in \Pi^F$ by $V_\pi(x) = \mathbb{E}_x\big[\int_0^{\tau_0^\pi}\textrm{e}^{-qt}l_t^\pi\,\textrm{d}t\big]$ , $x \geq 0$ . Note that in our model $V_\pi(0)=0$ for all $\pi \in \Pi^F$ .

We want to find an optimal strategy, that is, a strategy $\pi^\ast \in \Pi^F$ such that, for all $x \in \mathbb{R}$ and for all $\pi \in \Pi^F$ , we have $V_{\pi^*}(x) \geq V_\pi(x)$ . We also want to compute the optimal value function given by $V(x) = \sup_{\pi \in \Pi^F} V_\pi(x)$ , $x \geq 0 $ .

Remark 1.1. The optimal value function and an optimal strategy (if it exists) should be independent of the behavior of F when $x < 0$ . The Lipschitz assumption is a sufficient condition for the existence and uniqueness of a (strong) solution to the stochastic differential equation (SDE) defined in (2.3); see, e.g., [Reference Øksendal14]. In fact, any increasing differentiable concave function $F \colon (0, +\infty) \to \mathbb{R}$ such that $F(0) \geq 0$ and $F'(0+) < \infty$ satisfies the conditions of our model: we can extend F to a Lipschitz continuous function on $\mathbb{R}$ by setting $F(x) = F'(0+)x + F(0)$ for all $x \leq 0$ .
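For instance (an illustrative example of ours, not one taken from the paper), $F(x) = \log(1+x)$ on $(0, +\infty)$ is increasing, concave, and continuously differentiable, with $F(0) = 0$ and $F'(0+) = 1 < \infty$ ; extending it by $F(x) = x$ for $x \leq 0$ yields an increasing, concave, Lipschitz continuous function on all of $\mathbb{R}$ , as described above.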

1.2. More related literature

De Finetti’s optimal control problems are adaptations and interpretations of Bruno de Finetti’s seminal work [Reference de Finetti9]. As they have been extensively studied in a variety of models, especially over the last 25 years, it is nearly impossible to provide a proper literature review on this topic in an introduction. See the review paper [Reference Albrecher and Thonhauser2] for more details.

Now, let us focus on the literature closer to our model and our problem. In [Reference Asmussen and Taksar5, Reference Jeanblanc-Picqué and Shiryaev11], the problem for absolutely continuous strategies in a Brownian model is tackled under the following assumption on the control rate:

(1.3) \begin{equation}0 \leq l_t^\pi \leq R ,\end{equation}

where $R>0$ is a given constant; see also [Reference Kyprianou, Loeffen and Pérez13] for the same problem in a Lévy model. For this problem, a threshold strategy is optimal: above the optimal barrier level the maximal control rate R is applied, otherwise the system is left uncontrolled. In other words, a threshold strategy is characterized by a barrier level and the corresponding controlled process is a Brownian motion with a two-valued drift. Very recently, in [Reference Renaud and Simard15], the assumption in (1.3) was replaced by the following:

(1.4) \begin{equation} 0 \leq l_t^\pi \leq K U^\pi_t ,\end{equation}

where $K>0$ is a given constant. In other words, the control rate is now bounded by a linear transformation of the current state. Note that this idea had already been considered in a biological context (but under a different model) in [Reference Alvarez and Shepp3]. In [Reference Renaud and Simard15], it is proved that a mean-reverting strategy is optimal: above the optimal barrier level the maximal control rate $KU^\pi_t$ is applied, otherwise the system is left uncontrolled. Again, such a control strategy is characterized by a barrier level, but now the controlled process is a refracted diffusion process, switching between a Brownian motion with drift and an Ornstein–Uhlenbeck process.

To the best of our knowledge, the concept of a mean-reverting strategy first appeared in [Reference Avanzi and Wong6]. It was argued that, in an insurance context, this type of strategy has desirable properties for shareholders. Then, in [Reference Renaud and Simard15], the name mean-reverting strategies was used for a larger family of control strategies; in fact, the mean-reverting strategy in [Reference Avanzi and Wong6] is one member (the barrier level is equal to zero) of this sub-family of strategies. As mentioned above, one mean-reverting strategy is optimal in the case $F(x)=Kx$ . Following those lines, we will use the name generalized mean-reverting strategy for similar bang-bang strategies corresponding to the general case of an increasing and concave function F.

1.3. Main results and outline of the paper

Obviously, the bounds imposed on the control rates in these last two problems, as given in (1.3) for the constant case and (1.4) for the linear case, are special cases of the one considered in our problem and presented in (1.2). A solution to our general problem is given in Theorem 4.1. As we will see, an optimal control strategy is provided by a generalized mean-reverting strategy, which is also characterized by a barrier level b. More specifically, for such a strategy, the optimal controlled process is given by the following diffusion process:

\[ \textrm{d}U_t^b = \big(\mu - F(U_t^b){\mathbf 1}_{\{U_t^b > b\}}\big)\textrm{d}t + \sigma \textrm{d}W_t .\]

We use a guess-and-verify approach: first, we compute the value of any generalized mean-reverting strategy (Proposition 3.1) using a Markovian decomposition and a perturbation approach; second, we find the optimal barrier level (Proposition 4.1); finally, using a verification lemma (Lemma 2.2), we prove that this best mean-reverting strategy is in fact an optimal strategy for our problem. Despite the level of generality (of F), our solution for this control problem is explicit up to the evaluation of two special functions strongly depending on F: $\varphi_F$ , the solution to an ordinary differential equation, and $I_F$ , an expectation functional. While $\varphi_F$ is one of the two well-known fundamental solutions associated with a diffusion process (also the solution to a first-passage problem), the definition of $I_F$ , see (2.4), is an intricate expectation for which many properties can be obtained (see Proposition 2.1).

The rest of the paper is organized as follows. In Section 2, we provide important preliminary results on functions at the core of our main result, and we state a verification lemma for the maximization problem. In Section 3, we introduce the family of generalized mean-reverting strategies and compute their value functions. In Section 4, we state and prove the main result, which is a solution to the control problem, before looking at specific examples. A proof of the verification lemma is provided in Appendix A.

2. Preliminary results

In this section, we derive preliminary results that are crucial to solving the control problem.

First, let $G \colon [0, \infty) \to \mathbb{R}$ be an increasing and continuously differentiable function. Consider the homogeneous ordinary differential equation (ODE)

(2.1) \begin{equation}\Gamma_G(f) \;:\!=\; \frac{\sigma^2}{2} f''(x) + (\mu - G(x))f'(x) - qf(x) = 0 , \qquad x > 0,\end{equation}

and denote by $\psi_G$ (resp. $\varphi_G$ ) a positive increasing (resp. decreasing) solution to (2.1), with $\psi_G(0) = \varphi_G(+\infty) = 0$ . Under the additional conditions that $\varphi_G(0) = 1$ and $\psi'_{\!\!G}(0) = 1$ , it is known that $\psi_G$ and $\varphi_G$ are uniquely determined. As $\psi_G$ and $\varphi_G$ are solutions of a second-order ordinary differential equation, and since G is a continuously differentiable function, $\psi_G,\varphi_G \in C^3[0, \infty)$ .

We will write $\Gamma$ , $\psi$ , and $\varphi$ when $G \equiv 0$ . In particular, it is easy to verify that, for $x \geq 0$ ,

(2.2) \begin{equation} \psi(x) = \frac{\sigma^2}{\sqrt{\mu^2 + 2q \sigma^2}\,}\textrm{e}^{-(\mu/\sigma^2)x} \sinh\big((x/\sigma^2)\sqrt{\mu^2 + 2q \sigma^2}\big).\end{equation}

In what follows, we will not manipulate this explicit expression for $\psi$ . Instead, we will use its analytical properties (see Lemma 2.1).

Remark 2.1. The expression for $\psi$ is proportional to the expression for $W^{(q)}$ , the q-scale function of the Brownian motion with drift X, used in [Reference Renaud and Simard15]. For more details, see [Reference Kyprianou12].
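As a quick sanity check (a minimal numerical sketch of our own; the parameter values for $\mu$ , $\sigma$ , and q are illustrative and not taken from the paper), the expression in (2.2) can be evaluated directly and its ODE residual $\frac{\sigma^2}{2}\psi'' + \mu \psi' - q\psi$ verified by finite differences:

import numpy as np

# Illustrative parameters (not from the paper).
mu, sigma, q = 1.0, 0.4, 0.05
root = np.sqrt(mu**2 + 2.0 * q * sigma**2)

def psi(x):
    # Equation (2.2): the increasing solution of (sigma^2/2) f'' + mu f' - q f = 0
    # with psi(0) = 0 and psi'(0) = 1.
    return sigma**2 / root * np.exp(-mu * x / sigma**2) * np.sinh(x * root / sigma**2)

# Central finite differences for psi' and psi'' on a grid.
x, h = np.linspace(0.1, 5.0, 200), 1e-5
d1 = (psi(x + h) - psi(x - h)) / (2 * h)
d2 = (psi(x + h) - 2 * psi(x) + psi(x - h)) / h**2
print(np.max(np.abs(0.5 * sigma**2 * d2 + mu * d1 - q * psi(x))))  # numerically close to zero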

The next lemma gives analytical properties for the functions $\psi$ and $\varphi_G$ . See [Reference Ekström and Lindensjö10] for a complete proof.

Lemma 2.1. Let $G \colon [0, \infty) \to \mathbb{R}$ be an increasing and continuously differentiable function. The functions $\psi$ and $\varphi_G$ have the following analytical properties:

  (i) $\psi$ is strictly increasing; it is strictly concave on $(0, \hat{b})$ and strictly convex on $(\hat{b}, \infty)$ , where $\hat{b} \in (0, \infty)$ is its unique inflection point;

  (ii) $\varphi_G$ is strictly decreasing and strictly convex.

The value of the inflection point of $\psi$ is known explicitly; see, e.g., [Reference Renaud and Simard15, (4)]. Our analysis does not depend on this specific value.

Recall from (1.1) that $X_t = x + \mu t + \sigma W_t$ . Define $U = \lbrace U_t , t \geq 0\rbrace$ by

(2.3) \begin{equation} \textrm{d}U_t = (\mu - F(U_t))\,\textrm{d}t + \sigma\textrm{d}\,W_t, \qquad U_0 = x .\end{equation}

Under our assumptions on F, there exists a unique strong solution to this last stochastic differential equation. This is what we call a nonlinear Ornstein–Uhlenbeck process.

Now, for $a \geq 0$ , define the following first-passage stopping times:

\begin{equation*} \tau_a = \inf\{t > 0 \colon X_t = a\}, \qquad \tau^F_a = \inf\{t > 0 \colon U_t = a\}.\end{equation*}

It is well known that for $0 \leq x \leq b$ we have ${\mathbb E_{x}}\big[\textrm{e}^{-q\tau_b}{\mathbf 1}_{\{\tau_b < \tau_0\}}\big] = {\psi(x)}/{\psi(b)}$ , and for $x \geq 0$ we have ${\mathbb E_{x}}\big[\textrm{e}^{-q\tau^F_0}{\mathbf 1}_{\{\tau^F_0 < \infty\}}\big] = \varphi_F(x)$ , since it is assumed that $\varphi_F(0)=1$ .

Finally, define

(2.4) \begin{equation} I_F(x) = {\mathbb E_{x}}\bigg[\int_0^\infty\textrm{e}^{-qt}F(U_t)\,\textrm{d}t\bigg] , \qquad x \in \mathbb{R}.\end{equation}

As alluded to above, the analysis of this functional is of paramount importance for the solution of our control problem. The main difficulty in computing this functional lies in the fact that the dynamics of U also depend on F.
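Although $I_F$ is characterized below as the solution of an ODE, it can also be approximated directly from the definition (2.4) by simulating (2.3) with an Euler–Maruyama scheme. The following sketch is ours and purely illustrative: the affine choice $F(x) = R + Kx$ (for which (4.4) below gives a closed form to compare against), the parameter values, and the truncation of the time horizon are all assumptions of the example.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters and affine bound F(x) = R + K x (our choice; cf. Section 4.1).
mu, sigma, q, K, R = 1.0, 0.5, 0.1, 0.5, 0.2
F = lambda u: R + K * u

def I_F_monte_carlo(x0, n_paths=10000, T=60.0, dt=0.01):
    # Euler-Maruyama simulation of (2.3) and a left-endpoint approximation of (2.4),
    # truncated at time T (the neglected tail is of order exp(-q T)).
    u = np.full(n_paths, x0)
    total = np.zeros(n_paths)
    disc = 1.0
    for _ in range(int(T / dt)):
        total += disc * F(u) * dt
        u += (mu - F(u)) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        disc *= np.exp(-q * dt)
    return total.mean()

x0 = 1.0
closed_form = K / (q + K) * (x0 + mu / q) + R / (q + K)   # equation (4.4)
print(I_F_monte_carlo(x0), closed_form)                   # the two values should be close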

Proposition 2.1. The function $I_F \colon \mathbb{R} \to \mathbb{R}$ is a twice continuously differentiable, increasing, and concave solution to the following ODE: for $x > 0$ ,

(2.5) \begin{equation} \Gamma_F(f) = -F . \end{equation}

Moreover, $0 \leq I'_{\!\!F}(x) \leq 1$ for all $x \geq 0$ .

Proof. First, we prove that $I_F$ is a solution to (2.5). Fix $b>0$ and define $\tau \;:\!=\; \tau_0^F \wedge \tau_b^F$ . By the continuity of F, it is known that there exists a twice continuously differentiable solution $f \colon \mathbb{R} \to \mathbb{R}$ to (2.5); see, e.g., [Reference Somasundaram16]. Applying Itô’s lemma to $\textrm{e}^{-q(t\wedge\tau)}f(U_{t\wedge\tau})$ and taking expectations, we can write

\[ \mathbb{E}_x \big[\textrm{e}^{-q(t\wedge\tau)}f(U_{t\wedge\tau})\big] = f(x) - {\mathbb E_{x}}\bigg[\int_0^{t\wedge\tau}\textrm{e}^{-qs}F(U_s)\,\textrm{d}s\bigg] + {\mathbb E_{x}}\bigg[\int_0^{t\wedge\tau}\sigma\textrm{e}^{-qs}f'(U_s)\,\textrm{d}W_s\bigg]. \]

As $f'$ is continuous on [0, b] and hence bounded, the expectation of the stochastic integral is equal to zero. Letting $t \to \infty$ , we obtain

\begin{equation*} f(0){\mathbb E_{x}}\big[\textrm{e}^{-q\tau_0^F}{\mathbf 1}_{\{\tau_0^F < \tau_b^F\}}\big] + f(b){\mathbb E_{x}}\big[\textrm{e}^{-q\tau_b^F}{\mathbf 1}_{\{\tau_b^F < \tau_0^F\}}\big] = f(x) - {\mathbb E_{x}}\bigg[\int_0^\tau\textrm{e}^{-qs}F(U_s)\,\textrm{d}s\bigg]. \end{equation*}

Since $x \mapsto {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_0^F}{\mathbf 1}_{\{\tau_0^F<\tau_b^F\}}\big]$ and $x \mapsto {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_b^F}{\mathbf 1}_{\{\tau_b^F<\tau_0^F\}}\big]$ are solutions to $\Gamma_F(f) = 0$ on (0, b), we deduce that $x \mapsto {\mathbb E_{x}}\big[\int_0^\tau\textrm{e}^{-qs}F(U_s)\,\textrm{d}s\big]$ satisfies (2.5) on (0, b). Finally, splitting the interval and then using the strong Markov property at $\tau$ , we get

\begin{equation*} I_F(x) = {\mathbb E_{x}}\bigg[\int_0^\tau\textrm{e}^{-qs}F(U_s)\,\textrm{d}s\bigg] + I_F(0){\mathbb E_{x}}\big[\textrm{e}^{-q\tau_0^F}{\mathbf 1}_{\{\tau_0^F<\tau_b^F\}}\big] + I_F(b){\mathbb E_{x}}\big[\textrm{e}^{-q\tau_b^F}{\mathbf 1}_{\{\tau_b^F<\tau_0^F\}}\big] \end{equation*}

for all $x \in (0, b)$ , which proves that $I_F$ is a solution to (2.5) on (0, b). Since b is arbitrary, this proves that $I_F$ is a solution to (2.5) on $(0,\infty)$ .

It remains to prove that $I_F$ is increasing, concave, and such that $0 \leq I'_{\!\!F}(x) \leq 1$ for all $x > 0$ . To do so, we use the notation $U^{x}$ for the solution to

(2.6) \begin{equation} \textrm{d}U_t^x = (\mu - F(U_t^x))\,\textrm{d}t + \sigma\,\textrm{d}W_t, \qquad U_0^x = x. \end{equation}

These are the dynamics given in (2.3), with the starting point made explicit in the notation.

Fix $x < y$ . Note that $U_0^x < U_0^y$ and define $\kappa = \inf\{t > 0 \colon U_t^x = U_t^y\}$ . Since $U_t^x < U_t^y$ for all $0 \leq t < \kappa$ and F is increasing,

\begin{align*} I_F(y) & = \mathbb{E}\bigg[\int_0^\infty\textrm{e}^{-qt}F(U_t^{y})\,\textrm{d}t\bigg] \\[5pt] & = \mathbb{E}\bigg[\int_0^\kappa\textrm{e}^{-qt}F(U_t^{y})\,\textrm{d}t\bigg] + \mathbb{E}\bigg[\int_\kappa^\infty\textrm{e}^{-qt}F(U_t^{y})\,\textrm{d}t\bigg] \\[5pt] & \geq \mathbb{E}\bigg[\int_0^\kappa\textrm{e}^{-qt}F(U_t^{x})\,\textrm{d}t\bigg] + \mathbb{E}\bigg[\int_\kappa^\infty\textrm{e}^{-qt}F(U_t^{x})\,\textrm{d}t\bigg] = I_F(x) , \end{align*}

where we used the fact that, for $t \geq \kappa$ , $U_t^x = U_t^y$ . This proves that $I_F$ is increasing.

Now, fix $x, y \in \mathbb{R}$ and $\lambda \in [0,1]$ . Define $z = \lambda x + (1 - \lambda)y$ and $Y_t^z = \lambda U_t^x + (1 - \lambda) U_t^y$ . By linearity, we have $\textrm{d}Y_t^z = (\mu - l_t^{Y})\,\textrm{d}t + \sigma\,\textrm{d}W_t$ , $Y_0^z = z$ , where $l_t^{Y} \;:\!=\; \lambda F(U_t^x) + (1 - \lambda) F(U_t^y)$ . Since F is concave, we have $l_t^Y \leq F(Y_t^z)$ for all $t \geq 0$ . We want to prove that, almost surely, $Y_t^z \geq U_t^z$ for all $t \geq 0$ . First, for all $t \geq 0$ , we have

(2.7) \begin{equation} Y_t^z - U_t^z = \int_0^t(F(U_s^z) - l_s^{Y})\,\textrm{d}s. \end{equation}

Note that, by the concavity of F, since $U_0^z = Y_0^z = z$ , we have $F(U_0^z) \geq l_0^{Y}$ . Now define $\tau = \inf\{t > 0 \colon F(U_t^z) > l_t^{Y}\}$ . On $\{\tau = +\infty\}$ we have $Y_t^z = U_t^z$ for all t. On $\{\tau < \infty\}$ , since the mapping $s \mapsto F(U_s^z) - l_s^{Y}$ is continuous almost surely, it follows from (2.7) that there exists $\varepsilon > 0$ for which $Y_t^z > U_t^z$ for all $t \in (\tau, \tau + \varepsilon)$ . But we can apply the same argument at any later time point: if there exists $s>\tau$ for which $Y_s^z = U_s^z$ , then either $Y_t^z=U_t^z$ for all $t > s$ , or there exist $s' \geq s$ and $\varepsilon' > 0$ for which

\begin{equation*} \left\{ \begin{array}{l@{\quad}l} Y_t^z = U_t^z \quad & \mbox{if } s \leq t \leq s', \\[5pt] Y_t^z > U_t^z & \mbox{if } s' < t < s'+\varepsilon'. \end{array} \right. \end{equation*}

This proves that $Y_t^z \geq U_t^z$ for all $t \geq 0$ , which is equivalent to

\begin{equation*} \int_0^t F(U_s^z)\,\textrm{d}s \geq \int_0^t(\lambda F(U_s^x) + (1-\lambda) F(U_s^y))\,\textrm{d}s. \end{equation*}

Since this is true for all $t \geq 0$ , we further have that, for all $t \geq 0$ ,

(2.8) \begin{equation} \int_0^t\textrm{e}^{-qs}F(U_s^z)\,\textrm{d}s \geq \int_0^t\textrm{e}^{-qs}(\lambda F(U_s^x) + (1-\lambda)F(U_s^y))\,\textrm{d}s . \end{equation}

Letting $t \to \infty$ it follows that $I_F(\lambda x + (1-\lambda)y) \geq \lambda I_F(x) + (1-\lambda)I_F(y)$ , proving the concavity of $I_F$ .

Let $x, h \geq 0$ and define $\kappa^h \;:\!=\; \inf\{t > 0 \colon U_t^{x+h} = U_t^x\}$ . We can write

\begin{align*} I_F(x+h) - I_F(x) & = \mathbb{E}\bigg[\int_0^\infty\textrm{e}^{-qt}(F(U_t^{x+h}) - F(U_t^{x}))\,\textrm{d}t\bigg] \\[5pt] & = \mathbb{E}\bigg[\!\int_0^{\kappa^h}\textrm{e}^{-qt}(F(U_t^{x+h}) - F(U_t^{x}))\,\textrm{d}t\!\bigg] + \mathbb{E}\bigg[\!\int_{\kappa^h}^{\infty}\textrm{e}^{-qt}(F(U_t^{x+h}) - F(U_t^{x}))\,\textrm{d}t\!\bigg]. \end{align*}

The second expectation is zero: on $\{\kappa^h < \infty\}$ the integrand vanishes, because $U_t^{x+h} = U_t^{x}$ for all $t \geq \kappa^h$ , while on $\{\kappa^h = \infty\}$ the domain of integration is empty. Now, on $\{\kappa^h < \infty\}$ we have

\begin{equation*} \int_0^{\kappa^h}(F(U_s^{x+h}) - F(U_s^{x}))\,\textrm{d}s = h, \end{equation*}

which implies that $\int_0^{\kappa^h}\textrm{e}^{-qs}(F(U_s^{x+h}) - F(U_s^{x}))\,\textrm{d}s \leq h$ . On $\{\kappa^h = \infty\}$ , we have

\begin{equation*} \int_0^{t}\textrm{e}^{-qs}(F(U_s^{x+h}) - F(U_s^{x}))\,\textrm{d}s < h \end{equation*}

for all $t \geq 0$ , which implies that $\int_0^{\infty}\textrm{e}^{-qs}(F(U_s^{x+h}) - F(U_s^{x}))\,\textrm{d}s \leq h$ . In conclusion, we have

\begin{equation*} \mathbb{E}\bigg[\int_0^{\kappa^h}\textrm{e}^{-qt}(F(U_t^{x+h}) - F(U_t^{x}))\,\textrm{d}t\bigg] \leq h. \end{equation*}

Hence $I_F(x+h) - I_F(x) \leq h$ ; dividing by h and letting $h \to 0$ , we find that $I'_{\!\!F}(x) \leq 1$ .

Finally, here is the verification lemma for the stochastic control problem. The proof is given in Appendix A.

Lemma 2.2. Let $\pi^* \in \Pi^F$ be such that $V_{\pi^*} \in C^2(0, \infty)$ , $V'_{\!\!\pi^*}$ is bounded, and, for all $x > 0$ ,

(2.9) \begin{equation} \frac{\sigma^2}{2}V''_{\!\!\!\pi^*}(x) + \mu V'_{\!\!\pi^*}(x) - qV_{\pi^*}(x) + \sup_{0 \leq u \leq F(x)} u(1 - V'_{\!\!\pi^*}(x)) = 0. \end{equation}

Then $\pi^*$ is an optimal strategy: for all $\pi \in \Pi^F$ and all $x \in \mathbb{R}$ , $V_{\pi^*}(x) \geq V_{\pi}(x)$ . In this case, $V \in C^2(0, +\infty)$ , $V'$ is bounded, and V satisfies (2.9).
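Note that, since the map $u \mapsto u(1 - V'_{\!\!\pi^*}(x))$ is linear, the supremum in (2.9) is attained at an endpoint of the interval $[0, F(x)]$ :

\begin{equation*} \sup_{0 \leq u \leq F(x)} u(1 - V'_{\!\!\pi^*}(x)) = F(x)\max\big(1 - V'_{\!\!\pi^*}(x), 0\big) , \end{equation*}

attained at $u = F(x)$ when $V'_{\!\!\pi^*}(x) \leq 1$ and at $u = 0$ otherwise.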

3. Generalized mean-reverting strategies

Since the Hamilton–Jacobi–Bellman equation in (2.9) is linear with respect to the control variable, we expect a bang-bang strategy to be optimal. Further, since for modelling reasons we expect the optimal value function V to be concave, an optimal strategy should be of the form

\begin{equation*} l_s^\pi = \left\{ \begin{array}{l@{\quad}l} F(U_s^\pi) \quad & \mbox{if } U_s^\pi > b, \\[5pt] 0 & \mbox{if } U_s^\pi < b \end{array}\right.\end{equation*}

for some $b \geq 0$ to be determined.

Consequently, and following the line of reasoning in [Reference Renaud and Simard15], let us define the family of generalized mean-reverting strategies. For a fixed $b \geq 0$ , define the generalized mean-reverting strategy $\pi_b$ and the corresponding controlled process $U^b \;:\!=\; U^{\pi_b}$ given by

(3.1) \begin{equation} \textrm{d}U_t^b = \big(\mu - F(U_t^b){\mathbf 1}_{\{U_t^b > b\}}\big)\,\textrm{d}t + \sigma\,\textrm{d}W_t.\end{equation}

In other words, the control rate $l^b \;:\!=\; l^{\pi_b}$ associated with $\pi_b$ is given by $l^b_t = F(U_t^b){\mathbf 1}_{\{U_t^b > b\}}$ . Similarly, we define the corresponding value function by $V_b \;:\!=\; V_{\pi_b}$ .

Note that if $b = 0$ then $U^0 = U$ , with U as defined in (2.3).

Remark 3.1. As discussed in Remark 1.1, there exists a unique strong solution to the SDE given in (2.3). When $b>0$ , the drift function is not necessarily continuous, but it can be shown, using, for example, the same steps as in [Reference Renaud and Simard15], that a strong solution also exists for (3.1).

The next proposition gives the value function for a generalized mean-reverting strategy.

Proposition 3.1. The value function $V_0$ of the generalized mean-reverting strategy $\pi_0$ is continuously differentiable and given by $V_0(x) = I_F(x) - I_F(0)\varphi_F(x)$ , $x \geq 0$ . If $b > 0$ , then the value function $V_b$ of the generalized mean-reverting strategy $\pi_b$ is continuously differentiable and given by

\begin{equation*} V_b(x) = \left\{ \begin{array}{l@{\quad}l} C_1(b) \psi(x) & \mbox{if } 0<x<b, \\[5pt] I_F(x) + C_2(b)\varphi_F(x) & \mbox{if } x \geq b, \end{array}\right. \end{equation*}

where

\begin{equation*} C_1(b) = \frac{I'_{\!\!F}(b)\varphi_F(b) - I_F(b)\varphi'_{\!\!F}(b)}{\psi'(b)\varphi_F(b) - \psi(b)\varphi'_{\!\!F}(b)}, \qquad C_2(b) = \frac{I'_{\!\!F}(b)\psi(b) - I_F(b)\psi'(b)}{\psi'(b)\varphi_F(b) - \psi(b)\varphi'_{\!\!F}(b)}. \end{equation*}

Proof. The proof follows the same steps as the one for [Reference Renaud and Simard15, Proposition 2.1]. Fix $b > 0$ . Using the strong Markov property, we have, for $x \leq b$ ,

\begin{equation*} V_b(x) = {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_b}{\mathbf 1}_{\{\tau_b < \tau_0\}}\big]V_b(b). \end{equation*}

For $x > b$ , using the strong Markov property and the equality (also obtained from the strong Markov property)

\begin{equation*} \mathbb{E}_x\big[\textrm{e}^{-q\tau_b^F}\mathbf{1}_{\{\tau_b^F < \infty\}}\big] \mathbb{E}_b\big[\textrm{e}^{-q\tau_0^F}\mathbf{1}_{\{\tau_0^F < \infty\}}\big] = \mathbb{E}_x\big[\textrm{e}^{-q\tau_0^F}\mathbf{1}_{\{\tau_0^F < \infty\}}\big], \end{equation*}

we get

\begin{equation*} V_b(x) = {\mathbb E_{x}}\bigg[\int_0^\infty\textrm{e}^{-qt}F(U_t)\,\textrm{d}t\bigg] + {\mathbb E_{x}}\big[\textrm{e}^{-q\tau^F_0}{\mathbf 1}_{\{\tau^F_0 < \infty\}}\big]\Bigg( \frac{V_b(b) - {\mathbb E_{b}}\big[\int_0^\infty\textrm{e}^{-qt}F(U_t)\,\textrm{d}t\big]} {{\mathbb E_{b}}\big[\textrm{e}^{-q\tau^F_0}{\mathbf 1}_{\{\tau^F_0 < \infty\}}\big]}\Bigg). \end{equation*}

Consequently, we can write

(3.2) \begin{equation} V_b(x) = \left\{ \begin{array}{l@{\quad}l} ({\psi(x)}/{\psi(b)})V_b(b) & \mbox{if } 0 \leq x \leq b, \\[5pt] I_F(x) + ({\varphi_F(x)}/{\varphi_F(b)})(V_b(b) - I_F(b)) & \mbox{if } x \geq b. \end{array}\right. \end{equation}

To conclude, we need to compute $V_b(b)$ . For n sufficiently large, consider the strategy $\pi_b^n$ that applies the maximal control rate whenever the controlled process is above b, and keeps applying it until the process falls below $b - 1/n$ ; the maximal control rate is applied again once the controlled process reaches b once more. Note that $\pi_b^n$ is admissible. We denote its value function by $V_b^n$ . We can show that

(3.3) \begin{equation} \lim_{n \to \infty} V_b^n(b) = V_b(b). \end{equation}

First, we prove that

(3.4) \begin{equation} \int_0^{\tau_0^{\pi_b^n}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b^n} \xrightarrow[n \to \infty]{} \int_0^{\tau_0^{\pi_b}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b} \end{equation}

almost surely. Note that $\tau_0^{\pi_b^n} \leq \tau_0^{\pi_b}$ , and so

\begin{align*} \Bigg|\int_0^{\tau_0^{\pi_b^n}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b^n} - \int_0^{\tau_0^{\pi_b}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b}\Bigg| & = \Bigg|\int_0^{\tau_0^{\pi_b^n}}\textrm{e}^{-qt}\,\big(\textrm{d}L_t^{\pi_b^n} - \textrm{d}L_t^{\pi_b}\big) - \int_{\tau_0^{\pi_b^n}}^{\tau_0^{\pi_b}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b}\Bigg| \\[5pt] & \leq \frac{1}{n} + \Bigg|\int_{\tau_0^{\pi_b^n}}^{\tau_0^{\pi_b}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b}\Bigg|, \end{align*}

where we used the triangle inequality and the fact that, since $0 \leq U_t^{\pi_b} - U_t^{\pi_b^n} \leq 1/n$ for all $t \leq \tau_0^{\pi_b^n}$ , we have $0 \leq L_t^{\pi_b^n} - L_t^{\pi_b} \leq 1/n$ for all $t \leq \tau_0^{\pi_b^n}$ . Moreover, since $U_{\tau_0^{\pi_b^n}}^{\pi_b} \leq 1/n$ , the sequence of stopping times $\tau_0^{\pi_b^n}$ converges to $\tau_0^{\pi_b}$ , and this proves (3.4). Second, if F is a bounded function, then (3.3) follows from the dominated convergence theorem. If F is not bounded then, for n sufficiently large,

\begin{equation*} \int_0^{\tau_0^{\pi_b^n}}\textrm{e}^{-qt}\,\textrm{d}L_t^{\pi_b^n} \leq \int_0^{\tau_0}\textrm{e}^{-qt}F(X_t)\,\textrm{d}t, \end{equation*}

where $\tau_0 = \inf\{t > 0 \colon X_t = 0\}$ . To apply the dominated convergence theorem, we show that this upper bound is integrable. Since

\begin{equation*} {\mathbb E_{b}}\bigg[\int_0^{\tau_0}\textrm{e}^{-qt}F(X_t)\,\textrm{d}t\bigg] = {\mathbb E_{b}}\bigg[\int_0^{\infty}\textrm{e}^{-qt}F(X_t)\,\textrm{d}t\bigg] \big(1 - {\mathbb E_{b}}\big[\textrm{e}^{-q\tau_0}{\mathbf 1}_{\{\tau_0 < \infty\}}\big]\big), \end{equation*}

using Fubini’s theorem, Jensen’s inequality (F is concave), and the bound $F(y) \leq F(0) + F'(0)y$ (also by concavity), we get

\[ {\mathbb E_{b}}\bigg[\int_0^{\infty}\textrm{e}^{-qt}F(X_t)\,\textrm{d}t\bigg] \leq \int_0^\infty\textrm{e}^{-qt}F({\mathbb E_{b}}[X_t])\,\textrm{d}t = \int_0^\infty\textrm{e}^{-qt}F(b + \mu t)\,\textrm{d}t \leq \frac{F(0)}{q} + F'(0)\bigg(\frac{b}{q} + \frac{\mu}{q^2}\bigg) < \infty. \]

This concludes the proof of (3.3).

Next, using similar arguments to above, we can write

\begin{equation*} V_b^n(b - 1/n) = \psi_b(b-1/n) V_b^n(b), \end{equation*}

where $\psi_b(x)=\psi(x)/\psi(b)$ , and

\begin{align*} V_b^n(b) & = {\mathbb E_{b}}{\bigg[\int_0^{\tau_{b-1/n}^{F}}\textrm{e}^{-qt}F(U_t^{\pi_b^n})\,\textrm{d}t\bigg]} + {\mathbb E_{b}}{\big[\textrm{e}^{-q\tau_{b-1/n}^{F}}{\mathbf 1}_{\{\tau_{b-1/n}^{F} < \infty\}}\big]}V_b^n(b-1/n) \\[5pt] & = I_F(b) + \frac{\varphi_F(b)}{\varphi_F(b-1/n)}(V_b^n(b-1/n) - I_F(b-1/n)). \end{align*}

Solving for $V_b^n(b)$ , we find

\begin{equation*} V_b^n(b) = \frac{I_F(b-1/n)\varphi_F(b) - I_F(b)\varphi_F(b-1/n)} {\psi_b(b-1/n)\varphi_F(b) - \psi_b(b)\varphi_F(b-1/n)}. \end{equation*}

Define

\begin{equation*} G(y) = I_F(b-y)\varphi_F(b) - I_F(b) \varphi_F(b-y), \quad H(y) = \psi_b(b-y) \varphi_F(b) - \psi_b(b) \varphi_F(b-y). \end{equation*}

Note that $G(0) = H(0) = 0$ . As $\psi_b$ , $\varphi_F$ , and $I_F$ are differentiable functions, dividing the numerator and the denominator by $1/n$ and taking the limit yields

\begin{equation*} V_b(b) = \lim_{n \to \infty} V_b^n(b) = \frac{G'(0\!+\!)}{H'(0\!+\!)}, \end{equation*}

which leads to

(3.5) \begin{equation} V_b(b) = \frac{I'_{\!\!F}(b)\varphi_F(b) - I_F(b)\varphi'_{\!\!F}(b)}{\psi_b'(b)\varphi_F(b) - \psi_b(b)\varphi'_{\!\!F}(b)}. \end{equation}

Substituting (3.5) into (3.2), we get

(3.6) \begin{equation} V_b(x) = \left\{ \begin{array}{l@{\quad}l} K_1(b) \psi_b(x) & \mbox{if } 0 \leq x \leq b, \\[5pt] I_F(x) + K_2(b) \varphi_F(x) & \mbox{if } x \geq b, \end{array}\right. \end{equation}

where

\begin{equation*} K_1(b) = \frac{I'_{\!\!F}(b)\varphi_F(b) - I_F(b)\varphi'_{\!\!F}(b)}{\psi_b'(b)\varphi_F(b) - \psi_b(b)\varphi'_{\!\!F}(b)}, \qquad K_2(b) = \frac{I'_{\!\!F}(b)\psi_b(b) - I_F(b)\psi'_{\!\!b}(b)}{\psi_b'(b)\varphi_F(b) - \psi_b(b)\varphi'_{\!\!F}(b)}. \end{equation*}

Using the definition of $\psi_b$ , (3.6) can be rewritten as

\begin{equation*} V_b(x) = \left\{ \begin{array}{l@{\quad}l} C_1(b) \psi(x) & \mbox{if } 0<x<b, \\[5pt] I_F(x) + C_2(b)\varphi_F(x) & \mbox{if } x \geq b. \end{array}\right. \end{equation*}

It is straightforward to check that $V_b \in C^1(0, +\infty)$ , since elementary algebraic manipulations lead to $V'_{\!\!b}(b\!-\!) = V'_{\!\!b}(b\!+\!)$ .

When $b=0$ , using Markovian arguments as above, we can verify that

\begin{equation*} V_0(x) = I_F(x) - I_F(0)\varphi_F(x), \qquad x>0, \end{equation*}

from which it is clear that $V_0 \in C^1(0, +\infty)$ .
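To make Proposition 3.1 concrete, the following minimal sketch (ours, not part of the paper) assembles $V_b$ from user-supplied callables for $\psi$ , $\varphi_F$ , $I_F$ and their derivatives; how these building blocks are obtained (closed forms as in Section 4.1, or numerical ODE solutions) depends on the choice of F and is assumed here.

def value_of_mean_reverting_strategy(x, b, psi, dpsi, phi_F, dphi_F, I_F, dI_F):
    # V_b(x) as in Proposition 3.1. The callables are assumed to implement
    # psi and psi' (see (2.2)), the decreasing solution phi_F of (2.1) with G = F
    # and its derivative, and the functional I_F of (2.4) and its derivative.
    if b == 0:
        return I_F(x) - I_F(0.0) * phi_F(x)
    denom = dpsi(b) * phi_F(b) - psi(b) * dphi_F(b)
    C1 = (dI_F(b) * phi_F(b) - I_F(b) * dphi_F(b)) / denom   # C_1(b)
    C2 = (dI_F(b) * psi(b) - I_F(b) * dpsi(b)) / denom       # C_2(b)
    return C1 * psi(x) if x < b else I_F(x) + C2 * phi_F(x)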

4. Main results

We are now ready to provide a solution to the general control problem. As mentioned before, depending on the set of parameters, an optimal strategy will be given by the generalized mean-reverting strategy $\pi_0$ or by a generalized mean-reverting strategy $\pi_{b^\ast}$ for a barrier level $b^\ast$ to be determined.

First, let us consider the situation in which the parameters are such that

\begin{equation*} I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) \leq 1.\end{equation*}

Recalling from Proposition 3.1 that $V_0(x) = I_F(x) - I_F(0) \varphi_F(x)$ , $x \geq 0$ , we deduce that $V'_{\!\!0}(0) \leq 1$ .

Also, $V_0$ is a concave function. Indeed, using the notation introduced in (2.6) we can write

\begin{equation*} V_0(x) = \mathbb{E}\bigg[\int_0^{\tau_0^x}\textrm{e}^{-qt}F(U_t^x)\,\textrm{d}t\bigg] ,\end{equation*}

where $\tau^x_0 = \inf\{t > 0 \colon U^x_t = 0\}$ . Since the inequality in (2.8) holds for all $t \geq 0$ , it also holds true for the stopping time $\tau_0^F$ . As a consequence, $V_0$ is concave and, further, $V'_{\!\!0}(x) \leq 1$ for all $x \geq 0$ .

In conclusion, all the conditions in the verification lemma are satisfied and thus the generalized mean-reverting strategy $\pi_0$ is optimal.

Now let us consider the situation in which the parameters are such that

\begin{equation*} I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1.\end{equation*}

From the Hamilton–Jacobi–Bellman equation (2.9), we expect an optimal barrier level $b^*$ to be given by $V'_{\!\!b^*}(b^*) = 1$ , which is equivalent to

(4.1) \begin{equation} I_F(b^*) - \frac{I'_{\!\!F}(b^*)\varphi_F(b^*)}{\varphi'_{\!\!F}(b^*)} = \frac{\psi(b^*)}{\psi'(b^*)} - \frac{\varphi_F(b^*)}{\varphi'_{\!\!F}(b^*)}.\end{equation}

Recall from Lemma 2.1 that $\hat{b}$ is the unique inflection point of $\psi$ .

Proposition 4.1. If $I'_{\!\!F}(0) - I_F(0)\varphi'_{\!\!F}(0) > 1$ , then there exists a solution $b^* \in (0,\hat{b}]$ to (4.1).

Proof. Define

\begin{equation*} g(y) = I_F(y) - \frac{I'_{\!\!F}(y) \varphi_F(y)}{\varphi'_{\!\!F}(y)}, \qquad h(y) = \frac{\psi(y)}{\psi'(y)} - \frac{\varphi_F(y)}{\varphi'_{\!\!F}(y)}. \end{equation*}

We see that $g(0) > h(0)$ is equivalent to $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1$ .

We will show that $g(\hat{b}) \leq h(\hat{b})$ . The result will follow from the intermediate value theorem. First, we have the following inequality:

(4.2) \begin{equation} \frac{\psi(\hat{b})}{\psi'(\hat{b})} - \frac{\varphi_F(\hat{b})}{\varphi'_{\!\!F}(\hat{b})} \geq \frac{F(\hat{b})}{q}. \end{equation}

Indeed, by the definition of $\hat{b}$ and $\psi$ we have ${\psi(\hat{b})}/{\psi'(\hat{b})} = {\mu}/{q}$ . Also, by the definition of $\varphi_F$ we have

\begin{equation*} \frac{\sigma^2}{2}\frac{\varphi''_{\!\!F}(\hat{b})}{\varphi'_{\!\!F}(\hat{b})} + \mu - q\frac{\varphi_F(\hat{b})}{\varphi'_{\!\!F}(\hat{b})} - F(\hat{b}) = 0. \end{equation*}

Since $\varphi_F$ is convex and decreasing, (4.2) follows.

Now, using Proposition 2.1, we can write

\begin{equation*} I_F(\hat{b}) = \frac{\sigma^2}{2q}I^{\prime\prime}_F(\hat{b}) + \frac{\mu}{q} I'_{\!\!F}(\hat{b}) - \frac{F(\hat{b})}{q}(I'_{\!\!F}(\hat{b}) - 1). \end{equation*}

Since $I_F$ is concave, it follows that

\begin{equation*} I_F(\hat{b}) \leq \frac{\mu}{q} I'_{\!\!F}(\hat{b}) - \frac{F(\hat{b})}{q}(I'_{\!\!F}(\hat{b}) - 1). \end{equation*}

Also, since $0 \leq I'_{\!\!F}(\hat{b}) \leq 1$ , using (4.2) yields

\begin{equation*} I_F(\hat{b}) \leq \frac{\mu}{q}I'_{\!\!F}(\hat{b}) - \bigg(\frac{\psi(\hat{b})}{\psi'(\hat{b})} - \frac{\varphi_F(\hat{b})}{\varphi'_{\!\!F}(\hat{b})}\bigg) (I'_{\!\!F}(\hat{b}) - 1). \end{equation*}

This inequality is equivalent to $g(\hat{b}) \leq h(\hat{b})$ .

The next result states that if $b^*$ is a solution to (4.1), as in the previous proposition, then $\pi_{b^*}$ satisfies the conditions of the verification lemma.

Proposition 4.2. If $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1$ and if $b^* \in (0, \hat{b}]$ is a solution to (4.1), then $V_{b^*} \in C^2(0, +\infty)$ and is concave.

Proof. This proof relies heavily on the analytical properties of $\psi$ , $\varphi_F$ , and $I_F$ obtained in Lemma 2.1 and Proposition 2.1.

First, we show that $V_{b^*} \in C^2(0, +\infty)$ . From Proposition 3.1, we deduce that

\begin{equation*} V''_{\!\!b^*}(x) = \left\{ \begin{array}{l@{\quad}l} C_1(b^*) \psi''(x) & \mbox{if } 0 < x < b^*, \\[5pt] I^{\prime\prime}_F(x) + C_2(b^*)\varphi''_{\!\!F}(x) & \mbox{if } x > b^*. \end{array}\right. \end{equation*}

Therefore, since $\psi$ , $\varphi_F$ , and $I_F$ are twice continuously differentiable functions, it is sufficient to show that $V''_{\!\!b^*}(b^*-) = V''_{\!\!b^*}(b^*+)$ . We also have that $\psi$ , $\varphi_F$ , and $I_F$ are solutions to second-order ODEs, so it is equivalent to show that

\begin{align*} \mu[C_1(b^*)\psi'(b^*) - C_2(b^*)\varphi'_{\!\!F}(b^*) - I'_{\!\!F}(b^*)] & - q[C_1(b^*)\psi(b^*) - C_2(b^*)\varphi_F(b^*) - I_F(b^*)] \\[5pt] & + F(b^*)[C_2(b^*)\varphi'_{\!\!F}(b^*) + I'_{\!\!F}(b^*) - 1] = 0. \end{align*}

The statement then follows from the continuity of $V_{b^*}$ and $V'_{\!\!b^*}$ at $x = b^*$ together with $V'_{\!\!b^*}(b^*) = 1$ , which make each of the three brackets above vanish.

Now, let us show that $V_{b^*}$ is concave. Since $V'_{\!\!b^*}(b^*) = 1$ , it follows directly that

(4.3) \begin{equation} C_1(b^*) = \frac{1}{\psi'(b^*)}, \qquad C_2(b^*) = \frac{1 - I'_{\!\!F}(b^*)}{\varphi'_{\!\!F}(b^*)}. \end{equation}

Using the analytical properties of $\psi$ , $\varphi_F$ , and $I_F$ , it is clear that $C_2(b^*) \leq 0 < C_1(b^*)$ . Since $b^* \leq \hat{b}$ , $\psi$ is concave on $(0, b^*)$ , and so $V''_{\!\!b^*}(x) \leq 0$ for all $x \in (0, b^*)$ . Finally, since $I_F$ is concave, $\varphi_F$ is convex, and $C_2(b^*) \leq 0$ , we have $V''_{\!\!b^*}(x) \leq 0$ for all $x \in (b^*,\infty)$ . In other words, $V_{b^*}$ is concave.

We are now ready to state the main result, which is a solution to the general control problem.

Theorem 4.1. If $I'_{\!\!F}(0) - I_F(0)\varphi'_{\!\!F}(0) \leq 1$ , then $\pi_0$ is an optimal strategy and the optimal value function is given by $V(x) = I_F(x) - I_F(0) \varphi_F(x)$ , $x \geq 0$ .

If $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1$ , then $\pi_{b^*}$ is an optimal strategy, with $b^*$ a solution to (4.1), and the optimal value function is given by

\begin{equation*} V(x) = \left\{ \begin{array}{l@{\quad}l} {\psi(x)}/{\psi'(b^*)} & \mbox{if } 0 \leq x \leq b^*, \\[5pt] I_F(x) + (({1 - I'_{\!\!F}(b^*)})/{\varphi'_{\!\!F}(b^*)}) \varphi_F(x) & \mbox{if } x \geq b^*. \end{array}\right. \end{equation*}

Proof. All that is left to justify is the expression for the optimal value function V under the condition $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1$ . In that case, it suffices to use the general expression for $V_b$ obtained in Proposition 3.1 together with the expressions for $C_1(b^*)$ and $C_2(b^*)$ given in (4.3).

As announced, given an increasing and concave function F, and recalling that $\psi$ is independent of F and always known explicitly, see (2.2), this solution to the control problem is explicit up to the computations of the functions $\varphi_F$ and $I_F$ . It is interesting to note that these functions only depend on the dynamics of the nonlinear Ornstein–Uhlenbeck process U as given in (2.3); they are also solutions to ODEs.
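As a complement, the optimality condition (4.1) can be solved numerically once $\psi$ , $\varphi_F$ , $I_F$ and their derivatives are available as callables. The sketch below is ours and rests on two assumptions: the building blocks are supplied by the user, and, in line with Proposition 4.1, the function $g - h$ from its proof changes sign on $(0, \hat{b}]$ , so that a bracketing root-finder applies.

from scipy.optimize import brentq

def solve_control_problem(psi, dpsi, phi_F, dphi_F, I_F, dI_F, b_hat):
    # Sketch of Theorem 4.1: returns the optimal barrier level and the optimal value
    # function, given the building blocks as callables; b_hat is the inflection
    # point of psi (Lemma 2.1).
    if dI_F(0.0) - I_F(0.0) * dphi_F(0.0) <= 1.0:
        return 0.0, (lambda x: I_F(x) - I_F(0.0) * phi_F(x))   # pi_0 is optimal

    def gap(b):
        # g(b) - h(b), with g and h as in the proof of Proposition 4.1.
        g = I_F(b) - dI_F(b) * phi_F(b) / dphi_F(b)
        h = psi(b) / dpsi(b) - phi_F(b) / dphi_F(b)
        return g - h

    b_star = brentq(gap, 1e-8, b_hat)   # equation (4.1)

    def V(x):
        # Optimal value function of Theorem 4.1.
        if x <= b_star:
            return psi(x) / dpsi(b_star)
        return I_F(x) + (1.0 - dI_F(b_star)) / dphi_F(b_star) * phi_F(x)

    return b_star, V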

4.1. Solution to the problem with an affine bound

If we choose $F(x) = R+Kx$ , with $K > 0$ and $R \geq 0$ , then U is such that

\begin{equation*} \textrm{d}U_t = (\mu - R - KU_t)\,\textrm{d}t + \sigma\,\textrm{d}W_t ,\end{equation*}

i.e., it is a standard Ornstein–Uhlenbeck process. In this case, $\varphi_F$ is known explicitly (see [Reference Borodin and Salminen7, (2.0.1)]):

\begin{equation*}\varphi_F(x) = \frac{H_K^{(q)}(x;\; \mu - R, \sigma)}{H_K^{(q)}(0;\; \mu - R, \sigma)}, \qquad x \geq 0,\end{equation*}

where, using the notation in [Reference Renaud and Simard15],

\begin{equation*}H_K^{(q)}(x;\; m, \sigma) \;:\!=\; \textrm{e}^{K(x - m/K)^2/(2\sigma^2)}D_{-q/K}\bigg(\bigg(\frac{x - m/K}{\sigma}\bigg)\sqrt{2K}\bigg),\end{equation*}

where $D_{- \lambda}$ is the parabolic cylinder function given by

\begin{equation*} D_{- \lambda}(x) \;:\!=\; \frac{1}{\Gamma(\lambda)} \textrm{e}^{-x^2/4} \int_0^\infty t^{\lambda - 1} \textrm{e}^{-xt-t^2/2}\,\textrm{d}t, \qquad x \in \mathbb{R}.\end{equation*}

On the other hand, we can compute the expectation in the definition of $I_F$ :

(4.4) \begin{equation} I_F(x) = \frac{K}{q+K}\bigg(x + \frac{\mu}{q}\bigg) + \frac{R}{q+K}.\end{equation}
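This expression can be checked directly (a short verification of ours, not spelled out in the paper): with $F(x) = R + Kx$ , the mean $m(t) \;:\!=\; {\mathbb E_{x}}[U_t]$ solves $m'(t) = \mu - R - Km(t)$ with $m(0) = x$ , so that ${\mathbb E_{x}}[F(U_t)] = R + Km(t) = \mu + (Kx + R - \mu)\textrm{e}^{-Kt}$ and, by Fubini’s theorem,

\begin{equation*} I_F(x) = \int_0^\infty \textrm{e}^{-qt}\big(\mu + (Kx + R - \mu)\textrm{e}^{-Kt}\big)\,\textrm{d}t = \frac{\mu}{q} + \frac{Kx + R - \mu}{q+K} = \frac{K}{q+K}\bigg(x + \frac{\mu}{q}\bigg) + \frac{R}{q+K}. \end{equation*}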

The following corollary is a generalization of the results obtained in both [Reference Asmussen and Taksar5, Reference Jeanblanc-Picqué and Shiryaev11] and [Reference Renaud and Simard15] for a constant bound and a linear bound on the control rate, respectively.

Corollary 4.1. Set $F(x)=R+Kx$ , with $K > 0$ and $R \geq 0$ . Define

\begin{align*}\Delta = -\frac{H_K^{(q)}(0;\; \mu-R, \sigma)}{H_K^{(q) \prime} (0;\; \mu-R, \sigma)}.\end{align*}

If $\Delta \geq {K \mu}/{q^2} + {R}/{q}$ , then the mean-reverting strategy $\pi_0$ is an optimal strategy and the optimal value function is given, for $x \geq 0$ , by

\begin{equation*} V(x) = \frac{Kx}{q+K} + \bigg[\frac{K}{q+K}\bigg(\frac{\mu}{q}\bigg) + \frac{R}{q+K}\bigg] \bigg(1- \frac{H_K^{(q)}(x;\; \mu - R, \sigma)}{H_K^{(q)}(0;\; \mu - R, \sigma)} \bigg) . \end{equation*}

If $\Delta < {K \mu}/{q^2} + {R}/{q}$ , then there exists a (unique) solution $b^* \in (0,\hat{b}]$ to

\begin{equation*} \frac{\psi(b)}{\psi'(b)} - \bigg(b + \frac{\mu}{q}\bigg) - \frac{R}{K} = -\frac{q}{K}\bigg(\frac{\psi(b)}{\psi'(b)} - \frac{H_K^{(q)}(b;\;\mu - R,\sigma)}{H_K^{(q)\prime}(b;\;\mu - R,\sigma)}\bigg) , \end{equation*}

the mean-reverting strategy $\pi_{b^*}$ is an optimal strategy, and the optimal value function is given by

\begin{equation*} V(x) = \left\{ \begin{array}{l@{\quad}l} \dfrac{\psi(x)}{\psi'(b^*)} & \textit{if } 0 \leq x \leq b^*, \\[5pt] \dfrac{K}{q+K}\bigg(x + \dfrac{\mu}{q}\bigg) + \dfrac{q}{q+K} \bigg(\dfrac{R}{q} + \dfrac{H_K^{(q)}(x;\;\mu - R,\sigma)}{H_K^{(q)\prime}(b^*;\; \mu - R,\sigma)}\bigg) & \textit{if } x \geq b^*. \end{array}\right. \end{equation*}
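For numerical work, the parabolic cylinder function is available in standard scientific libraries (e.g., scipy.special.pbdv), so the quantities appearing in Corollary 4.1 can be evaluated directly. The following sketch is ours: the parameter values are illustrative, $H_K^{(q)}$ is coded from the expression displayed above, and its derivative is approximated by a central difference.

import numpy as np
from scipy.special import pbdv   # pbdv(v, x) returns (D_v(x), d/dx D_v(x))

# Illustrative parameters (ours, not from the paper).
mu, sigma, q, K, R = 1.0, 0.5, 0.1, 0.5, 0.2
m = mu - R

def H(x):
    # H_K^{(q)}(x; mu - R, sigma) as displayed before Corollary 4.1.
    y = (x - m / K) / sigma
    return np.exp(K * y**2 / 2.0) * pbdv(-q / K, y * np.sqrt(2.0 * K))[0]

def dH(x, h=1e-6):
    return (H(x + h) - H(x - h)) / (2.0 * h)

Delta = -H(0.0) / dH(0.0)
print(Delta, K * mu / q**2 + R / q)   # pi_0 is optimal iff Delta >= K*mu/q^2 + R/q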

4.2. Solution to the problem with a capped linear bound

If we choose $F(x) = \min(Kx, R)$ , with $K, R > 0$ , then U is such that

\begin{equation*} \textrm{d}U_t = (\mu - \min(KU_t, R))\,\textrm{d}t + \sigma\,\textrm{d}W_t .\end{equation*}

Therefore, when $U_t > R/K$ , U evolves like a Brownian motion with drift $\mu - R$ and, when $U_t < R/K$ , like an Ornstein–Uhlenbeck process.

Note that F is not continuously differentiable at $x = R/K$ , but there exists a decreasing sequence of continuously differentiable and concave functions $F_n$ such that, for all $x \geq 0$ , $F_n(x) \to F(x)$ when $n \to \infty$ . By the dominated convergence theorem, we thus have $\varphi_{F_n} \to \varphi_F$ and $I_{F_n} \to I_F$ . Also, since F is continuous, $I_F$ and $\varphi_F$ are (at least) twice continuously differentiable, as we recall that $\varphi_F$ and $I_F$ are solutions of a second-order ODE.

We can compute $\varphi_F$ and $I_F$ using the strong Markov property repeatedly:

\begin{align*} \varphi_F(x) & = \left\{ \begin{array}{l@{\quad}l} B(x) + C(x) \varphi_F(R/K) & \mbox{if } 0 \leq x \leq R/K, \\[5pt] A(x) \varphi_F(R/K) & \mbox{if } x \geq R/K; \end{array} \right. \\[5pt] I_F(x) & = \left\{ \begin{array}{l@{\quad}l} I_{K}(x) - D(x)\left(I_{K}(R/K) - I_F(R/K)\right) & \mbox{if } x \leq R/K, \\[5pt] ({R}/{q})\left(1 - A(x)\right) + A(x) I_F(R/K) & \mbox{if } x \geq R/K, \end{array} \right. \end{align*}

where

\begin{align*} A(x) & = {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_{R/K}^F}{\mathbf 1}_{\{\tau_{R/K}^F < \infty\}}\big] \\[5pt] & = \tilde{\psi}(x - R/K) - \frac{2 q}{\sqrt{(\mu - R)^2 + 2\sigma^2 q} - (\mu - R)}\hat{\psi}(x - R/K), \\[5pt] B(x) & = {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_0^F}{\mathbf 1}_{\{\tau_0^F < \tau_{R/K}^F\}}\big] = \frac{S({q}/{K};\sqrt{{2}/{K}}({R-\mu})/{\sigma};({x-\mu/K})/{(\sigma/\sqrt{2K})})} {S({q}/{K};\sqrt{{2}/{K}}({R-\mu})/{\sigma};\sqrt{{2}/{K}}({-\mu}/{\sigma}))}, \\[5pt] C(x) & = {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_{R/K}^F}{\mathbf 1}_{\{\tau_{R/K}^F < \tau_0^F\}}\big] = \frac{S({q}/{K};({x - \mu/K})/({\sigma/\sqrt{2K}});\sqrt{{2}/{K}}({-\mu}/{\sigma}))} {S({q}/{K};\sqrt{{2}/{K}}({R-\mu})/{\sigma};\sqrt{{2}/{K}}({-\mu}/{\sigma}))}, \\[5pt] D(x) & = {\mathbb E_{x}}\big[\textrm{e}^{-q\tau_{R/K}^F}{\mathbf 1}_{\{\tau_{R/K}^F < \infty\}}\big] = \frac{H_K^{(q)}(x;\; \mu, -\sigma)}{H_K^{(q)}(R/K;\; \mu, -\sigma)}, \end{align*}

where $\hat{\psi}$ is defined as in (2.2) but with $\mu$ replaced here by $\mu-R$ , where

\begin{equation*} \widetilde{\psi}(x) \;:\!=\; 1 + \frac{2q}{\sigma^2}\int_0^x \hat{\psi}(y)\,\textrm{d}y, \end{equation*}

where $S(\nu, x, y) \;:\!=\; ({\Gamma(\nu)}/{\pi})\textrm{e}^{(x^2 + y^2)/4}(D_{-\nu}(\!-x)D_{-\nu}(y)-D_{-\nu}(x)D_{-\nu}(\!-y))$ , and where

\begin{equation*} I_K(x) \;:\!=\; \frac{K}{q+K}\bigg(x + \frac{\mu}{q}\bigg). \end{equation*}

Note that A is the solution to a well-known first-passage problem for a Brownian motion with drift $\mu-R$ ; see, e.g., [Reference Borodin and Salminen7] or [Reference Kyprianou12]. The expressions for B, C, and D are deduced from [Reference Borodin and Salminen7, (3.0.5(a)), (3.0.5(b)), and (2.0.1)], respectively.

Note that $I_K$ is the function $I_F$ when $F(x) = Kx$ . See (4.4) when $R = 0$ .

Since $\varphi_F$ and $I_F$ are continuously differentiable, matching the one-sided derivatives of the two-piece expressions above at $x = R/K$ , we can deduce that

\begin{align*} \varphi_F(R/K) & = \frac{B'(R/K)}{A'(R/K) - C'(R/K)}, \\[5pt] I_F(R/K) & = \frac{K/(q+K) - D'(R/K) I_{K}(R/K) + (R/q)A'(R/K)}{A'(R/K) - D'(R/K)}.\end{align*}

We are ready to apply the main result to the particular case given by $F(x) = \min(Kx, R)$ .

Corollary 4.2. Set $F(x) = \min(Kx, R)$ , with $K,R > 0$ .

If $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) \leq 1$ , then the generalized mean-reverting strategy $\pi_0$ is optimal and the optimal value function is given, for $x \geq 0$ , by

\begin{equation*} V(x) = \left\{ \begin{array}{l@{\quad}l} I_{K}(x) - D(x)(I_{K}({R}/{K}) - I_F({R}/{K})) - I_F(0)(B(x) + C(x) \varphi_F({R}/{K})) & \textit{if } x \leq {R}/{K}, \\[5pt] {R}/{q} + A(x)(I_F({R}/{K}) - I_F(0)\varphi_F({R}/{K}) - {R}/{q}) & \textit{if } x \geq {R}/{K}. \end{array} \right. \end{equation*}

If $I'_{\!\!F}(0) - I_F(0) \varphi'_{\!\!F}(0) > 1$ , then the generalized mean-reverting strategy $\pi_{b^*}$ is optimal, where $b^* \in (0, \hat{b}]$ is the unique solution of (4.1). The optimal value function depends on whether $b^* \leq {R}/{K}$ or $b^* > {R}/{K}$ . If $b^* \leq {R}/{K}$ , then the optimal value function is given by

\begin{equation*} V(x) = \left\{ \begin{array}{l@{\quad}l} {\psi(x)}/{\psi'(b^*)} & \textit{if } x \leq b^*, \\[5pt] I_{K}(x) - D(x)(I_{K}({R}/{K}) - I_F({R}/{K})) - (({1-I'_{\!\!F}(b^*)})/{\varphi'_{\!\!F}(b^*)})(B(x)+C(x)\varphi_F({R}/{K})) & \textit{if } x\in[b^*,{R}/{K}], \\[5pt] {R}/{q} + A(x)(I_F({R}/{K}) + (({1 - I'_{\!\!F}(b^*)})/{\varphi'_{\!\!F}(b^*)})\varphi_F({R}/{K}) - {R}/{q}) & \textit{if } x \geq {R}/{K}. \end{array} \right. \end{equation*}

If, instead, $b^* > {R}/{K}$ , then the optimal value function is given by

\begin{equation*} V(x) = \left\{ \begin{array}{l@{\quad}l} {\psi(x)}/{\psi'(b^*)} & \textit{if } x \leq b^*, \\[5pt] {R}/{q} + A(x)(I_F({R}/{K}) + (({1 - I'_{\!\!F}(b^*)})/{\varphi'_{\!\!F}(b^*)})\varphi_F({R}/{K}) - {R}/{q}) & \textit{if } x \geq b^*. \end{array} \right. \end{equation*}

Appendix A. Proof of the verification lemma

Proof of Lemma 2.2. The second statement of the lemma is a direct consequence of the definition of the optimal value function.

Now, let $\pi$ be an arbitrary admissible strategy. Applying Itô’s lemma to the continuous semimartingale $(t, U_t^\pi)$ , using the function $g(t,y) \;:\!=\; \textrm{e}^{-qt}V_{\pi^*}(y)$ , we find

\begin{align*} \textrm{e}^{-q(t\wedge\tau_0^\pi)}V_{\pi^*}(U_{t\wedge\tau_0^\pi}^\pi) & = V_{\pi^*}(x) \\[5pt] & \quad + \int_0^{t\wedge\tau_0^\pi}\textrm{e}^{-qs}\bigg(\frac{\sigma^2}{2} V''_{\!\!\!\pi^*}(U_s^\pi) + \mu V'_{\!\!\pi^*}(U_s^\pi) - qV_{\pi^*}(U_s^\pi) - l_s^\pi V'_{\!\!\pi^*}(U_s^\pi)\bigg)\,\textrm{d}s \\[5pt] & \quad + \int_0^{t \wedge \tau_0^\pi} \sigma \textrm{e}^{-qs} V'_{\!\!\pi^*}(U_s^\pi)\,\textrm{d}W_s. \end{align*}

Taking expectations on both sides, we find

\begin{align*} V_{\pi^*}(x) & = {\mathbb E_{x}}\big[\textrm{e}^{-q(t\wedge\tau_0^\pi)}V_{\pi^*}(U_{t\wedge\tau_0^\pi}^\pi)\big] \\[5pt] & \quad - {\mathbb E_{x}}\bigg[\int_0^{t\wedge\tau_0^\pi}\textrm{e}^{-qs}\bigg(\frac{\sigma^2}{2} V''_{\!\!\!\pi^*}(U_s^\pi) + \mu V'_{\!\!\pi^*}(U_s^\pi) - qV_{\pi^*}(U_s^\pi) - l_s^\pi V'_{\!\!\pi^*}(U_s^\pi)\bigg)\, \textrm{d}s\bigg] \\[5pt] & \geq {\mathbb E_{x}}\big[\textrm{e}^{-q(t\wedge\tau_0^\pi)}V_{\pi^*}(U_{t\wedge\tau_0^\pi}^\pi)\big] + {\mathbb E_{x}}\bigg[\int_0^{t\wedge\tau_0^\pi}\textrm{e}^{-qs}l_s^\pi\,\textrm{d}s\bigg], \end{align*}

where the inequality is obtained directly from the fact that $V_{\pi^*}$ satisfies (2.9). Note also that the expectation of the stochastic integral is 0 because $V'_{\!\!\pi^*}$ is bounded. Letting $t \to \infty$ , working, for example, separately on $\{\tau_0^\pi < \infty\}$ and $\{\tau_0^\pi = \infty\}$ , we get $V_{\pi^*}(x) \geq V_\pi(x)$ for all $x > 0$ .

Acknowledgement

The authors are grateful to an anonymous referee for comments and suggestions which have significantly improved the paper.

Funding information

Funding in support of this work was provided by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) and a PhD Scholarship from the Fonds de Recherche du Québec – Nature et Technologies (FRQNT).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Albrecher, H., Azcue, P. and Muler, N. (2022). Optimal ratcheting of dividends in a Brownian risk model. SIAM J. Fin. Math. 13, 657–701.
Albrecher, H. and Thonhauser, S. (2009). Optimality results for dividend problems in insurance. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 103, 295–320.
Alvarez, L. H. R. and Shepp, L. A. (1998). Optimal harvesting of stochastically fluctuating populations. J. Math. Biol. 37, 155–177.
Angoshtari, B., Bayraktar, E. and Young, V. R. (2019). Optimal dividend distribution under drawdown and ratcheting constraints on dividend rates. SIAM J. Fin. Math. 2, 547–577.
Asmussen, S. and Taksar, M. (1997). Controlled diffusion models for optimal dividend pay-out. Insurance Math. Econom. 20, 1–15.
Avanzi, B. and Wong, B. (2012). On a mean reverting dividend strategy with Brownian motion. Insurance Math. Econom. 51, 229–238.
Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion—Facts and Formulae, 2nd edn. Birkhäuser, Basel.
Choulli, T., Taksar, M. and Zhou, X. Y. (2004). Interplay between dividend rate and business constraints for a financial corporation. Ann. Appl. Prob. 14, 1810–1837.
de Finetti, B. (1957). Su un'impostazione alternativa della teoria collettiva del rischio. Transactions of the XVth International Congress of Actuaries 2, 433–443.
Ekström, E. and Lindensjö, K. (2023). De Finetti's control problem with competition. Appl. Math. Optim. 87, 1–26.
Jeanblanc-Picqué, M. and Shiryaev, A. N. (1995). Optimization of the flow of dividends. Uspekhi Mat. Nauk 50, 25–46.
Kyprianou, A. E. (2014). Fluctuations of Lévy Processes with Applications: Introductory Lectures, 2nd edn. Springer, Heidelberg.
Kyprianou, A. E., Loeffen, R. L. and Pérez, J.-L. (2012). Optimal control with absolutely continuous strategies for spectrally negative Lévy processes. J. Appl. Prob. 49, 150–166.
Øksendal, B. (2003). Stochastic Differential Equations: An Introduction with Applications, 5th edn. Springer, New York.
Renaud, J.-F. and Simard, C. (2021). A stochastic control problem with linearly bounded control rates in a Brownian model. SIAM J. Control Optim. 59, 3103–3117.
Somasundaram, D. (2001). Ordinary Differential Equations: A First Course, 1st edn. Alpha Science International, Oxford.