
Some results on the supremum and on the first passage time of the generalized telegraph process

Published online by Cambridge University Press:  02 December 2024

Barbara Martinucci*
Affiliation:
University of Salerno
Paola Paraggio*
Affiliation:
University of Salerno
Shelemyahu Zacks*
Affiliation:
Binghamton University
*Postal address: Fisciano (SA), I-84084, Italy.
**Postal address: Binghamton, NY 13902-6000, USA. Email address: shelly@math.binghamton.edu

Abstract

We analyze the process M(t) representing the maximum of the one-dimensional telegraph process X(t) with exponentially distributed upward random times and generally distributed downward random times. The evolution of M(t) is governed by an alternating renewal of two phases: a rising phase R and a constant phase C. During a rising phase, X(t) moves upward, whereas, during a constant phase, it moves upward and downward, continuing to move until it attains the maximal level previously reached. Under some choices of the distribution of the downward times, we are able to determine the distribution of C, which allows us to obtain some bounds for the survival function of M(t). In the particular case of exponential downward random times, we derive an explicit expression for the survival function of M(t). Finally, the moments of the first passage time $\Theta_w$ of the process X(t) through a fixed boundary $w>0$ are analyzed.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The one-dimensional (integrated) telegraph process is a suitable mathematical model to describe the random motion of a particle along the real line. The particle moves with finite velocity, and the direction of the motion is reversed according to the arrival epochs of a homogeneous Poisson process. This process was first studied by Goldstein [Reference Goldstein18] and Kac [Reference Kac23], while some properties of the solution of the Goldstein–Kac telegraph equation were analyzed by Bartlett [Reference Bartlett3].

Starting from these papers, the telegraph process and its generalizations have drawn the attention of many scientists, especially since they represent an alternative to diffusion processes, which are often unsuitable for describing natural phenomena in the life sciences. For instance, in Beghin et al. [Reference Beghin, Nieddu and Orsingher4] and in López and Ratanov [Reference López and Ratanov27], the authors analyze the asymmetric telegraph process, whereas Di Crescenzo and Martinucci [Reference Di Crescenzo and Martinucci12] and Martinucci and Meoli [Reference Martinucci, Meoli and Zacks28] treat the case of a general distribution for the random times between consecutive reversals of direction. The telegraph process with an arbitrary number of velocities is analyzed in Kolesnik [Reference Kolesnik24], whereas Stadje and Zacks [Reference Stadje and Zacks37] and De Gregorio [Reference De Gregorio10] consider the case of random velocities. The telegraph process perturbed by jumps is studied in Ratanov [Reference Ratanov35], and the large deviation principle applied to the telegraph process can be found in De Gregorio and Macci [Reference De Gregorio and Macci11]. See also Garra et al. [Reference Orsingher, Garra and Zeifman31] and Cinque and Orsingher [Reference Cinque and Orsingher7] for some multidimensional extensions of the telegraph process, and Di Crescenzo et al. ([Reference Di Crescenzo, Martinucci and Zacks13, Reference Di Crescenzo, Martinucci, Paraggio and Zacks14]) for the telegraph process confined by boundaries.

The rising interest in the telegraph process concerns not only the theoretical description of the model but also its application in several fields, such as biology (see Hillen and Othmer [Reference Hillen and Othmer21] for the use of the telegraph equation to model chemotactic behavior), physics (see, for instance, Weiss [Reference Weiss40] for applications in electromagnetic theory), ecology (see Holmes et al. [Reference Holmes, Lewis, Banks and Veit22] for a model of the dispersal of wild animals) and financial market modeling (see Kolesnik and Ratanov [Reference Kolesnik and Ratanov25] and Pogorui et al. [Reference Pogorui, Swishchuk and Rodríguez-Dagnino32]).

Recently, Cinque and Orsingher ([Reference Cinque and Orsingher5, Reference Cinque and Orsingher6]) have addressed the problem of finding the distribution of the maximum level reached by the particle up to time t in the case of the standard telegraph process. A similar topic has also been studied by Masoliver and Weiss [Reference Masoliver and Weiss29], who analyze the process representing the maximum displacement (i.e. the difference between the maximum and the minimum) of a telegraph process, and by Ratanov [Reference Ratanov36], where some explicit formulae for the distributions of the running maximum/minimum, first passage times, and telegraphic meanders are obtained. In general, there are few processes for which explicit expressions for the distribution of the maximum and that of the first passage time through a fixed boundary are known. However, these distributions lend themselves to important applications. For instance, the motion of some microorganisms is often driven by a telegraph-type equation (see Komin et al. [Reference Komin, Erdmann and Schimansky-Geier26]), and it may be of interest to find the maximum displacement reached by such organisms or the first time instant at which the motion reaches a fixed boundary representing a critical threshold. In addition, the study of the maximum may be of interest in finance (for asset pricing models based on telegraph processes, see Di Crescenzo and Pellerey [Reference Di Crescenzo and Pellerey15]), and also in geology (for ground displacements sometimes described as the superposition of an asymmetric telegraph process and a diffusive component, see Travaglino et al. [Reference Travaglino, Di Crescenzo, Martinucci and Scarpa38]).

In the present paper, we aim to analyze the process M(t) which represents the maximum level reached by the particle during the time interval [0, t] in the case of constant velocities with equal absolute value. The evolution of M(t) is represented by an alternating renewal of two phases: a rising phase R and a constant phase C. In particular, the constant phase C represents the duration of an interval in which the particle moves upward and downward, starting with a downward movement, until it reaches the maximal level previously attained. Note that the constant phase C can be regarded as the sojourn time (see, for instance, Ray [Reference Ray34]) of the process M(t) in the previously reached maximal level. The problem of finding the distribution of the sojourn time is of great interest in many research areas (see, for instance, Dębicki et al. [Reference Dębicki, Liu and Michna9] and Foss and Miyazawa [Reference Foss and Miyazawa17]). In our context, the distribution of the constant phase C is related to the survival function of M(t). Moreover, the moments of the first passage time $\Theta_w$ of the telegraph process (or equivalently of M(t)) can be determined starting from those of C.

The paper is organized as follows. In Section 2, we describe the telegraph process with exponentially distributed upward random times $U_i$ and generally distributed downward random times $D_i$ , and we introduce some basic definitions. In Section 3, we present the general reasoning for finding the distribution of the maximum M(t), together with some general results regarding the moments of the first passage time $\Theta_w$ . Then, in Sections 4 and 5, we specialize the results obtained for a general distribution of D to the cases when the downward random times have the following distributions: (i) exponential, (ii) Erlang, (iii) weighted exponential, and (iv) mixture of exponentials.

2. The model

Let $\{X(t);\;t\ge 0\}$ be a one-dimensional integrated telegraph process. Such a process describes the motion of a particle which starts from the origin at time $t=0$ and then alternates between upward motion with velocity $v=1$ and downward motion with velocity $v=-1$. The first motion of the particle is upward, and the duration of this motion is denoted by $U_1$. Then the particle spends a random length of time, denoted by $D_1$, moving downward, before changing its velocity back from $v=-1$ to $v=1$, and so on. More precisely, we denote by $U_i$, $i\in \mathbb N$, the random variables representing the durations of intervals spent by the particle moving upward and by $D_i$, $i\in \mathbb N$, those representing the downward periods. We assume that the sequences of positive, independent and identically distributed (i.i.d.) random times $\left\{U_{1},U_{2},\cdots\right\}$ and $\left\{D_{1},D_{2},\cdots\right\}$ are in turn mutually independent. In particular, the random variables $U_i$ are assumed to be exponentially distributed with parameter $\lambda$, whereas the variables $D_i$, for any $i\in \mathbb N$, follow a general absolutely continuous distribution function G, with probability density g.

Denoting by $T_n$ the nth random instant at which the motion changes velocity, we have

\begin{align*}T_{2n}=U^{(n)}+D^{(n)}, \qquad T_{2n+1}=T_{2n}+U_{n+1}, \qquad n=0,1,\ldots,\end{align*}

where $U^{(0)}=D^{(0)}=0$ and

\begin{align*}U^{(n)}\;:\!=\;U_{1}+\ldots+U_{n},\qquad D^{(n)}\;:\!=\;D_{1}+\ldots+D_{n},\qquad n=1,\ldots.\end{align*}

Hence the position of the particle at time t can be expressed as

\begin{align*} X(t) =\int_0^t v\,(\!-\!1)^{\Lambda_s} \,\textrm{d}s, \end{align*}

where $v=1$ and

\begin{align*}\Lambda_t\;:\!=\;\sum_{n=1}^{+\infty}{\mathbf{1}}_{\{T_n\leq t\}},\qquad \Lambda_0=0,\end{align*}

is the alternating counting process characterized by the random times $T_1,T_2,\cdots$, which counts the number of velocity changes of the particle in [0, t].

Let us consider the process M(t) which represents the maximum level reached by the particle during the time interval [0, t], i.e.

\begin{equation*}M(t)=\sup\nolimits_{0\le s\le t} X(s).\end{equation*}

The evolution of M(t) is governed by an alternating renewal of two phases: a rising phase R and a constant phase C, as shown in Figure 1. During a rising phase of the supremum, the particle moves with positive velocity, whereas during a constant phase C of M(t), the particle moves upward and downward, starting with a downward movement and then continuing to move until it attains the maximal level previously reached. Because of the memoryless property, the rising phases $R_i$, for any $i\in\mathbb N$, are independent and exponentially distributed random variables with parameter $\lambda$. Moreover, the random variables $C_i$, $i\in\mathbb N$, are i.i.d. Obtaining their common distribution is the task of the next sections. Obviously, since during a constant phase the maximal level of the particle does not change, the distribution of M(t), for any fixed time t, coincides with the distribution of the total time in [0, t] spent in rising phases.

Figure 1. The telegraph process X(t) and the corresponding supremum process M(t).
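To make the alternating structure of Figure 1 concrete, a sample path can be simulated directly (the following is our illustrative sketch, not part of the original analysis; it assumes NumPy, and all parameter values are arbitrary). It generates a piecewise-linear path of X(t) with unit speed, exponential upward times and user-supplied downward times, together with the running maximum M(t):

import numpy as np

rng = np.random.default_rng(seed=42)

def telegraph_path(lam, sample_down, horizon):
    # Alternating motion: upward phases ~ Exp(lam), downward phases drawn from
    # sample_down(); the particle starts at the origin moving upward with speed 1.
    times, pos = [0.0], [0.0]
    moving_up = True
    while times[-1] < horizon:
        duration = rng.exponential(1.0 / lam) if moving_up else sample_down()
        times.append(times[-1] + duration)
        pos.append(pos[-1] + (duration if moving_up else -duration))
        moving_up = not moving_up
    times, pos = np.array(times), np.array(pos)
    # X is piecewise linear, so the running maximum at the breakpoints is the
    # cumulative maximum of the positions recorded so far.
    return times, pos, np.maximum.accumulate(pos)

# Example: exponential downward times with mu = 1 > lambda = 0.5.
t, x, m = telegraph_path(0.5, lambda: rng.exponential(1.0), horizon=50.0)
print(m[-1])  # maximum level reached up to the horizon

The constant phases of M(t) are visible as the flat stretches of the cumulative maximum between successive record times.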

3. The distribution of the supremum

Let us introduce the auxiliary compound Poisson process

(1) \begin{equation}Y(t)=\sum_{n=0}^{N(t)}D_n, \qquad t\ge 0,\end{equation}

where $D_0=0$ and

(2) \begin{equation}N(t)\;:\!=\;\max\left\{n\in\mathbb N_0\;:\sum_{i=1}^n U_i\le t\right\},\end{equation}

which is a Poisson process with intensity $\lambda$. By construction, the condition $Y(t)=s-t$, $0\le t\le s$, means that during the interval [0, s] the particle moves upward for a total time t and downward for the remaining time $s-t$. From Equation (1), the probability law of Y(t) is characterized by a discrete component

\begin{align*}\mathbb P(Y(t)=0) = \mathbb P(N(t)=0)=\textrm{e}^{-\lambda t}\end{align*}

and an absolutely continuous component

(3) \begin{equation}h(u,t)\;:\!=\;\frac{\textrm{d}}{\textrm{d}u} \mathbb P(Y(t)\leq u)=\sum_{n=1}^{+\infty}p(n,\lambda t)g^{(n)}(u),\quad t>0,\end{equation}

where

(4) \begin{equation}p(n,\lambda t)\;:\!=\;\mathbb P(N(t)=n)=\frac{(\lambda t)^n}{n!}\textrm{e}^{-\lambda t},\qquad n\in {\mathbb N}_{0},\end{equation}

is the probability distribution of the Poisson process N(t) (see Equation (2)), and $g^{(n)}$ is the n-fold convolution of the density g of the random variables $D_i$ .

Denoting by $D_1$ the duration of the first downward period, or equivalently the length of the first downward movement, let us set

\begin{equation*}T\;:\!=\;T(D_1)\;:\!=\;\inf\{t\;:\; Y(t)=-D_1+t\}.\end{equation*}

The stopping time $T(D_1)$ represents the smallest time such that

\begin{equation*}\sum_{i=1}^{N(T(D_1))}D_i-\sum_{i=1}^{N(T(D_1))+1}U_i+D_1=0,\end{equation*}

so that $T(D_1)$ corresponds to half the time taken by the particle to reach the previous maximal level. Hence, denoting by $C\;:\!=\;C(D_1)$ the duration of the constant phase, we have

(5) \begin{equation}C(D_1)\stackrel{d}{=}2\cdot T(D_1),\end{equation}

where $\overset{d}{=}$ denotes equality in distribution. Moreover, from Equation (5), a lower bound for $T(D_1)$ easily follows. Specifically, since the duration of a constant phase is at least equal to $2\cdot D_1$ , we have $T(D_1)\ge D_1$ .

Note that, because of the assumption of i.i.d. downward times, we have $T\equiv T(D)$ , where the random variable T(D) has finite moments if and only if (see Zacks [Reference Zacks, Perry, Bshouty and Bar-Lev39])

(6) \begin{equation}\mathbb E(D)<\mathbb E(U).\end{equation}
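The stopping time T can be sampled directly from its definition, without simulating the whole trajectory (a minimal sketch, ours, assuming NumPy; the helper name sample_T is hypothetical). Between the jumps of Y, the level $D_1+Y(t)$ to be caught up by t is constant, so the crossing occurs at the current level whenever it precedes the next Poisson epoch:

import numpy as np

rng = np.random.default_rng(seed=7)

def sample_T(lam, sample_D):
    # T = inf{t : Y(t) = -D1 + t}: the target level D1 + Y(t) increases by a
    # fresh downward time at each Poisson(lam) epoch of the process Y.
    level = sample_D()                     # D1
    t = 0.0
    while True:
        next_jump = t + rng.exponential(1.0 / lam)
        if next_jump >= level:             # t reaches the level first: T = level
            return level
        t = next_jump
        level += sample_D()                # Y jumps before the crossing

# Exponential downward times with mu > lambda; by Equations (18) and (47) below,
# E(C) = 2 E(T) should be close to 2/(mu - lam).
lam, mu = 0.5, 1.0
sims = [sample_T(lam, lambda: rng.exponential(1.0 / mu)) for _ in range(100_000)]
print(2 * np.mean(sims), 2 / (mu - lam))

Note that the algorithm returns a value no smaller than the initial level, in agreement with the bound $T(D_1)\ge D_1$.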

Theorem 1. The probability density of the stopping time T is given by

(7) \begin{equation}f_T(t)=\frac{\textrm{d}}{\textrm{d}t} \mathbb P(T\leq t)=e^{-\lambda t}g(t)+\frac{1}{t}\sum_{n=1}^{+\infty}p(n,\lambda t)\int_0^t x g^{(n)}(t-x)g(x)\textrm{d}x, \qquad t\ge 0,\end{equation}

where $p(n,\lambda t)$ is as defined in Equation (4) and $g^{(n)}(x)$ denotes the n-fold convolution of the density g.

Proof. The conditional density of the stopping time $T(D_1)$ , given $D_1=x$ , can be expressed as follows:

(8) \begin{equation}f_T(t|D_1=x)=\frac{x}{t}h(t-x,t), \qquad t>x>0,\end{equation}

where h(u, t) is as defined in Equation (3).

Moreover, the probability that $T(D_1)=D_1$ , conditional on the duration of the first downward movement $D_1$ , can be obtained by means of the compound process Y(t). Indeed, we have

(9) \begin{equation}\mathbb P\left(T(D_1)=D_1|D_1\right)=\mathbb P\left(Y(D_1)=0\right)=e^{-\lambda D_1}.\end{equation}

Hence, by Equations (8) and (9), and recalling Equation (3), we immediately obtain Equation (7).

Using Equation (5), and recalling Theorem 1, we immediately obtain the following corollary.

Corollary 1. For $t>0$ , the probability density of the constant phase C can be expressed as

(10) \begin{equation}f_C(t)\;:\!=\;\frac{\textrm{d}}{\textrm{d}t} \mathbb P(C\leq t)=\frac{1}{2} e^{-\frac{\lambda t}{2}}g\Big(\frac{t}{2}\Big)+\frac{1}{t}\sum_{n=1}^{+\infty}p\Big(n,\frac{\lambda t}{2}\Big)\int_0^{t/2} x\,g^{(n)}\!\Big(\frac{t}{2}-x\Big)g(x)\textrm{d}x.\end{equation}

In the sequel we shall denote by $F_C(t)\;:\!=\;\mathbb P(C\leq t)$ , $t>0$ , the distribution function of the random variable C and by $F_C^{(n)}(t)$ its n-fold convolution.

Aiming to formulate the distribution of the supremum process M(t), let us introduce the compound Poisson process

(11) \begin{equation}Z(t)=\sum_{n=0}^{N(t)} C_n,\end{equation}

where N(t) is as defined in Equation (2), $C_0=0$, and the random variables $C_n$, $n\in\mathbb N$, are i.i.d. with probability density given in Equation (10). By Equation (11), we immediately get the following remark.

Remark 1. The moment generating function of the process Z(t) is given by

(12) \begin{equation}\psi_{Z}(t,s)\;:\!=\;\mathbb E(e^{sZ(t)})= e^{-\lambda t [1-\psi_C(s)]},\end{equation}

where

(13) \begin{equation}\psi_C(s)\;:\!=\;\mathbb E(e^{sC})\end{equation}

is the moment generating function of the random variable C. Moreover, the expected value and the variance of Z(t) can be expressed in terms of the moments of C. In particular, we have

(14) \begin{equation}\mathbb E(Z(t))=\lambda t \mathbb E(C),\qquad Var(Z(t))=\lambda t \mathbb E(C^2).\end{equation}

The following theorem provides the expression for the survival distribution of M(t).

Theorem 2. For $t>0$ , the survival function of M(t) is given by

(15) \begin{equation}\mathbb P (M(t)>w)=\sum_{n=0}^{+\infty}p(n,\lambda w) F_C^{(n)}(t-w),\end{equation}

where $p(n,\lambda w)$ is as defined in Equation (4) and $F_C^{(n)}(t)$ is the n-fold convolution of the distribution function of C.

Proof. By Equation (11), the distribution function of Z(t) can be expressed as

\begin{equation*}H_Z(z,t)\;:\!=\;\mathbb P(Z(t)\le z)=\sum_{n=0}^{+\infty}p(n,\lambda t) F_C^{(n)}(z).\end{equation*}

Hence, the proof immediately follows from noting that

\begin{equation*}\mathbb P (M(t)>w)=H_Z(t-w,w).\end{equation*}

The relevance of Theorem 2 lies in its validity for any distribution of D. However, the expression given in Equation (15) may lead to complicated calculations because of the presence of the n-fold convolution of the distribution function of C. It is therefore useful to obtain bounds for the survival function of M(t); these are provided in the following proposition.

Proposition 1. For $t\geq w>0$ , the following relationships hold:

\begin{align*}L(t,w)\leq\mathbb P(M(t)>w)\leq U(t,w),\end{align*}

where

(16) \begin{equation}L(t,w)\;:\!=\;\sum_{k=0}^{+\infty}\frac{(\lambda w)^k e^{-\lambda w}}{k!}\left[F_C\left(\frac{t-w}{k}\right)\right]^k,\qquad U(t,w)\;:\!=\;\exp\left[-\lambda w (1-F_C(t-w))\right],\end{equation}

and $F_C(t)$ is the distribution function of the constant phase C.

Proof. The result easily follows from considering $C_{(k)}\;:\!=\;\max\{C_1,\ldots,C_k\}$ , $k\in\mathbb N$ , and recalling Equation (11). Indeed, we have

\begin{align*}\mathbb P(Z(t)\le w)&=\sum_{k=0}^{+\infty}\frac{(\lambda t)^ke^{-\lambda t}}{k!}\cdot \mathbb P\left(\sum_{n=0}^k C_n\le w\right)\\[5pt]& \quad \leq \sum_{k=0}^{+\infty}\frac{(\lambda t)^k e^{-\lambda t}}{k!}\left[F_C(w)\right]^k=\exp\left[-\lambda t(1-F_C(w))\right],\end{align*}

and, similarly,

\begin{equation*}\mathbb P(Z(t)\le w)\geq\sum_{k=0}^{+\infty}\frac{(\lambda t)^k e^{-\lambda t}}{k!}\mathbb P\left(C_{(k)}\le \frac{w}{k}\right)=\sum_{k=0}^{+\infty}\frac{(\lambda t)^k e^{-\lambda t}}{k!}\left[F_C\left(\frac{w}{k}\right)\right]^k.\end{equation*}

The stated bounds then follow on replacing t with w and w with $t-w$, since $\mathbb P(M(t)>w)=\mathbb P(Z(w)\le t-w)$ by the proof of Theorem 2.
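For numerical purposes, the bounds of Proposition 1 only require the distribution function $F_C$. A minimal sketch (ours; F_C is any user-supplied callable, for instance obtained by numerically integrating Equation (10)):

from math import exp, log, lgamma

def survival_bounds(lam, F_C, t, w, kmax=200):
    # L(t, w) and U(t, w) of Equation (16); the k = 0 term of L equals
    # exp(-lam * w), with the convention [F_C((t - w)/0)]^0 = 1.
    U = exp(-lam * w * (1.0 - F_C(t - w)))
    L = exp(-lam * w)
    for k in range(1, kmax):
        log_poisson = k * log(lam * w) - lam * w - lgamma(k + 1)
        L += exp(log_poisson) * F_C((t - w) / k) ** k
    return L, U

The closed-form expressions for $F_C$ obtained in Sections 4.1-4.4 below can be plugged in directly.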

3.1. First passage time distribution

Let $\Theta_w$ be the first passage time of the process M(t) through a fixed boundary $w>0$ , i.e.

(17) \begin{equation}\Theta_w\;:\!=\;\inf\{t\ge 0: M(t)=w\}.\end{equation}

Note that the random variable $\Theta_w$ is identified with the first time instant at which the process X(t) crosses a fixed level w. The relationship between the moments of $\Theta_w$ and those of the constant phase C is shown in the following proposition.

Proposition 2. For $w>0$ , the expected value and the variance of the first passage time $\Theta_w$ are given by

(18) \begin{equation}{\mathbb E}(\Theta_w)={\mathbb E}(Z(w))=\lambda w {\mathbb E}(C),\end{equation}
(19) \begin{equation}Var(\Theta_w)=2 w {\mathbb E}(Z(w))+ Var(Z(w))=2\lambda w^2 {\mathbb E}(C)+\lambda w {\mathbb E}(C^2),\end{equation}

where the compound Poisson process Z(t) is as defined in Equation (11).

Proof. By using integration by parts, we have

\begin{equation*}\begin{aligned}\psi_Z(w,-s)&=\int_0^{+\infty}e^{-sx}h_Z(x,w)\textrm{d}x=s\int_0^{+\infty}e^{-sx}\mathbb P(Z(w)\le x)\textrm{d}x\\[5pt] &=s\int_0^{+\infty}e^{-sx}\mathbb P(M(x+w)>w)\textrm{d}x=s e^{sw}\int_{w}^{+\infty}e^{-sy}\mathbb P(M(y)>w)\textrm{d}y.\end{aligned}\end{equation*}

Hence, the equality

(20) \begin{equation}\frac{e^{-sw} \psi_Z(w,-s)}{s}=\int_{w}^{+\infty}e^{-sy}\mathbb P(M(y)>w)\textrm{d}y\end{equation}

holds, so that

(21) \begin{equation}\int_{w}^{+\infty}e^{-sy}\!\left(1-\mathbb P(M(y)>w)\right)\!\textrm{d}y=\frac{e^{-sw}}{s}-\int_w^{+\infty}e^{-sy}\mathbb P(M(y)>w)\textrm{d}y=\frac{e^{-sw}}{s}(1-\psi_Z(w,-s)).\end{equation}

Since

\begin{align*}\lim_{s\to 0}\int_{w}^{+\infty}e^{-sy}\left(1-\mathbb P(M(y)>w)\right)\textrm{d}y=\int_w^{+\infty}\mathbb P(M(y)\le w)\textrm{d}y=\int_w^{+\infty}\mathbb P(\Theta_w>y)\textrm{d}y=\mathbb E(\Theta_w),\end{align*}

and if we recall that

\begin{align*}\lim_{s\to 0} \frac{e^{-sw}}{s}(1-\psi_Z(w,-s))=-\left.\frac{\textrm{d}}{\textrm{d}s}\psi_Z(w,-s)\right|_{s=0}=\mathbb E(Z(w)),\end{align*}

the proof follows from Equation (21) and the first of the equations in (14).

Moreover, from Equation (20), by differentiating both sides twice with respect to s and evaluating the result at $s=0$, we obtain

\begin{align*}2 w \mathbb E(Z(w))+\mathbb E(Z^2(w))=-2 w \left.\frac{\textrm{d}}{\textrm{d}s}\psi_Z(w,-s)\right\vert_{s=0}+\left.\frac{\textrm{d}^2}{\textrm{d}s^2}\psi_Z(w,-s)\right\vert_{s=0}=\int_w^{+\infty} 2 y\, \mathbb{P}(M(y)\le w)\, \textrm{d}y,\end{align*}

so that

(22) \begin{equation}\mathbb E(\Theta_w^2)= \mathbb E(Z(w))(2w+\mathbb E(Z(w)))+Var (Z(w)).\end{equation}

Hence, Equation (19) follows from Equations (14) and (22).

4. The distribution of the maximum

This section is devoted to the analysis of the distribution of the maximum under different choices of the distribution of the downward random times.

4.1. Exponentially distributed downward random times

Let us assume that the random variables $D_i$ , $i\in {\mathbb N}$ , are exponentially distributed with parameter $\mu>0$ . From now on, we assume $\mu>\lambda$ so that the condition (6) is satisfied. The probability density function of the constant phase C is provided in the following theorem.

Theorem 3. In the case of exponentially distributed downward times, for $t> 0$ , the probability density function of the constant phase random variable is given by

(23) \begin{equation}f_C(t)=\frac{\mu e^{-(\lambda+\mu)t/2}}{t\sqrt{\lambda\mu}}I_1(t\sqrt{\lambda\mu}),\end{equation}

where $I_n(x)$ is the nth-order modified Bessel function of the first kind,

(24) \begin{equation}I_n(x)\;:\!=\;\sum_{m=0}^{+\infty}\frac{1}{\Gamma(m+n+1)m!}\left(\frac x2\right)^{2m+n}.\end{equation}

Proof. If $D\sim Exp(\mu)$ , then $g^{(n)}(x)=\frac{\mu^n x^{n-1}e^{-\mu x}}{(n-1)!}$ , $n\in\mathbb N$ , $x>0$ . Hence, from Equation (7), for $t\ge0$ , it follows that

\begin{eqnarray*}&&\frac{1}{t}\sum_{n=1}^{+\infty} \frac{(\lambda t)^n e^{-\lambda t}}{n!} \int_0^t \frac{x \mu^n (t-x)^{n-1}e^{-\mu (t-x)}}{(n-1)!} \mu e^{-\mu x} \textrm{d}x\\[5pt] && =\frac{\mu e^{-(\lambda+\mu)t}}{t}\sum_{n=1}^{+\infty}\frac{(\lambda\mu t)^n}{n!(n-1)!}\int_0^t x(t-x)^{n-1}\textrm{d}x\\[5pt] && =\mu e^{-(\lambda+\mu)t}\sum_{n=1}^{+\infty} \frac{(\lambda\mu t^2)^n}{n!(n+1)!}=\mu e^{-(\lambda+\mu)t}\left(-1+\frac{I_1(2t\sqrt{\lambda\mu})}{t\sqrt{\lambda\mu}}\right),\end{eqnarray*}

so that the proof follows from Equation (10).

Plots of the density $f_C(t)$ and of the corresponding distribution function $F_C(t)$ are given in Figure 2 for some choices of the parameters.

Figure 2. The density $f_C(t)$ (left) and the distribution function $F_C(t)$ (right) for $\lambda=0.2$ (solid), $\lambda=0.5$ (dashed), $\lambda=0.8$ (dotted), and $\mu=0.5$ .
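Equation (23) is easy to evaluate with standard Bessel routines. As a sanity check (a minimal sketch, ours, assuming SciPy; parameter values arbitrary), the density should integrate to one when $\mu>\lambda$, and its mean should equal $2/(\mu-\lambda)$, the value implied by Equations (18) and (47) below:

import numpy as np
from scipy.special import iv
from scipy.integrate import quad

def f_C(t, lam, mu):
    # Equation (23): density of the constant phase for exponential downward times.
    r = np.sqrt(lam * mu)
    return mu * np.exp(-(lam + mu) * t / 2) * iv(1, r * t) / (r * t)

lam, mu = 0.5, 2.0
mass, _ = quad(f_C, 0, np.inf, args=(lam, mu))
mean, _ = quad(lambda t: t * f_C(t, lam, mu), 0, np.inf)
print(mass, mean, 2 / (mu - lam))  # mass ~ 1, and the two means should agree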

In the following proposition we provide the expression for the n-fold convolution of the density $f_C$ .

Proposition 3. In the case of exponentially distributed downward times, for $t>0$ and $n\in\mathbb N$ , the n-fold convolution of the density $f_C$ is given by

(25) \begin{equation}f_C^{(n)}(t)=\frac{n}{t}\left(\frac{\mu}{\lambda}\right)^{n/2} e^{-(\lambda+\mu)t/2}I_n(t\sqrt{\lambda\mu}).\end{equation}

Proof. The proof is by induction on n. The equality holds for $n=1$ . We suppose that Equation (25) holds for $n\in\mathbb N$ , and we show it is then true for $n+1$ . We have

\begin{equation*}\begin{aligned}f_C^{(n+1)}(t)&=\int_0^tf_C^{(n)}(x)f_C(t-x)\textrm{d}x\\[5pt] &=\int_0^t \frac{n}{x}\left(\frac{\mu}{\lambda}\right)^{n/2} e^{-(\lambda+\mu)x/2}I_n(x\sqrt{\lambda\mu})\frac{\mu e^{-(\lambda+\mu)(t-x)/2}}{(t-x)\sqrt{\lambda\mu}}I_1((t-x)\sqrt{\lambda\mu})\textrm{d}x\\[5pt] &=\frac{n+1}{t}\left(\frac{\mu}{\lambda}\right)^{(n+1)/2}e^{-(\lambda+\mu)t/2}I_{n+1}(t\sqrt{\lambda\mu}),\end{aligned}\end{equation*}

where the last equality follows from the identity (cf. Equation 11 in Section $2.15.19$ of Prudnikov et al. [Reference Prudnikov, Brychkov and Marichev33])

\begin{equation*}\int_0^a \frac{1}{x(a-x)}I_n(ac-cx)I_m(cx)\textrm{d}x=\frac{n+m}{anm}I_{m+n}(ac),\qquad a,n,m>0.\end{equation*}

Starting from Proposition 3, we now provide an explicit expression for the density $h_Z(z,t)$ of the compound process Z(t).

Proposition 4. In the case of exponentially distributed downward times, for $t>0$ , the density function of Z(t) is given by

\begin{equation*}h_Z(z,t)\;:\!=\;\frac{\textrm{d}}{\textrm{d}z} \mathbb P(Z(t)\le z) =\frac{e^{-\lambda t}e^{-(\lambda+\mu)z/2}}{z}(t\sqrt{\lambda\mu})\left(2\frac{t}{z}+1\right)^{-1/2}I_1\left(z\sqrt{\lambda\mu\left(1+\frac{2t}{z}\right)}\right).\end{equation*}

Moreover, we have $\mathbb P(Z(t)=0)=e^{-\lambda t}$ .

Proof. The equality follows from straightforward calculations. Indeed, for any $t> 0$ ,

\begin{equation*}\begin{aligned}h_Z(z,t)&=\sum_{n=0}^{+\infty}\frac{(\lambda t)^n e^{-\lambda t}}{n!} \, \frac{\mu^n}{(\lambda\mu)^{n/2}}\, \frac{n}{z}\, e^{-(\lambda+\mu)z/2}I_n(z\sqrt{\lambda\mu})\\[5pt] &=\frac{e^{-\lambda t}e^{-(\lambda+\mu)z/2}}{z}\sum_{n=1}^{+\infty}\frac{(t\sqrt{\lambda\mu})^{n}}{(n-1)!}I_n(z\sqrt{\lambda\mu})\\[5pt] &=\frac{e^{-\lambda t}e^{-(\lambda+\mu)z/2}}{z}t\sqrt{\lambda\mu}\left(2\frac{t}{z}+1\right)^{-1/2}I_1\left(\sqrt{\lambda\mu z^2+2\lambda\mu tz}\right)\\[5pt] &=\frac{e^{-\lambda t}e^{-(\lambda+\mu)z/2}}{z}t\sqrt{\lambda\mu}\left(2\frac{t}{z}+1\right)^{-1/2}I_1\left(z\sqrt{\lambda\mu(1+2t/z)} \right),\end{aligned}\end{equation*}

where we have used the following relation (cf. Equation 4 in Section $5.8.3$ of Prudnikov et al. [Reference Prudnikov, Brychkov and Marichev33]):

\begin{equation*}\sum_{k=0}^{+\infty}\frac{t^k}{k!}I_{n+k}(z)=\left(2\frac{t}{z}+1\right)^{-n/2}I_n\left(\sqrt{z^2+2tz}\right).\end{equation*}
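Propositions 3 and 4 can be cross-checked numerically, since $h_Z(z,t)=\sum_{n\ge 1}p(n,\lambda t)f_C^{(n)}(z)$. A short sketch (ours, assuming SciPy; parameter values arbitrary):

import numpy as np
from math import exp, factorial
from scipy.special import iv

def fC_nfold(n, z, lam, mu):
    # Equation (25): n-fold convolution of the density f_C.
    return (n / z) * (mu / lam) ** (n / 2) * np.exp(-(lam + mu) * z / 2) * iv(n, z * np.sqrt(lam * mu))

def hZ(z, t, lam, mu):
    # Proposition 4: closed form for the absolutely continuous part of Z(t).
    r = np.sqrt(lam * mu)
    return (np.exp(-lam * t - (lam + mu) * z / 2) / z) * (t * r) \
        * (2 * t / z + 1) ** (-0.5) * iv(1, z * r * np.sqrt(1 + 2 * t / z))

lam, mu, z, t = 0.5, 1.0, 2.0, 1.5
series = sum(exp(-lam * t) * (lam * t) ** n / factorial(n) * fC_nfold(n, z, lam, mu)
             for n in range(1, 60))
print(series, hZ(z, t, lam, mu))  # the two values should coincide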

Some plots of the density $h_Z(z,t)$ are provided in Figure 3 for different choices of the parameters.

Figure 3. Left: the density $h_Z(z,t)$ as a function of t for $z=2$ , $\lambda=0.2$ (solid), $\lambda=0.5$ (dashed), $\lambda=0.8$ (dotted), and $\mu=0.8$ . Right: the density $h_Z(z,t)$ as a function of z for $\mu=0.3$ , $\lambda=0.5$ , $t=1$ (solid), $t=2$ (dashed), and $t=3$ (dotted).

The following proposition provides an expression for the survival function of the supremum M(t) in terms of a series of products of modified Bessel functions and generalized Laguerre polynomials.

Proposition 5. In the case of exponentially distributed downward times, for $t>0$ and $0\le w< t$ , the survival function of the supremum M(t) can be expressed as

(26) \begin{eqnarray}&& \mathbb P (M(t)>w)= 1- \frac{2 w \sqrt{\lambda \mu} }{(\lambda+\mu) \sqrt{t^2-w^2}} \textrm{e}^{-\frac{(\lambda+\mu)t}{2}}\textrm{e}^{\frac{(\mu-\lambda)w}{2}} \nonumber \\[5pt] && \times \sum_{j=0}^{+\infty} j! \left(-\frac{2 \sqrt{\lambda\mu}}{(\lambda+\mu)^2\sqrt{t^2-w^2}}\right)^j\, I_{j+1}\left(\sqrt{\lambda\mu (t^2-w^2)}\right)\, L_{j}^{-2j-1}(t(\lambda+\mu)),\end{eqnarray}

where $L_n^k(x)$ , $n\in {\mathbb N}$ , denotes the generalized Laguerre polynomial. Moreover, $\mathbb P (M(t)=t)=e^{-\lambda t}$ .

The proof of Proposition 5 is given in Appendix A.

The following proposition provides an asymptotic result concerning the survival probability.

Proposition 6. For $w>0$ , we have

\begin{equation*}\lim_{t\to+\infty} \mathbb {P}\left(M(t)>w\right)=1.\end{equation*}

Proof. This follows immediately from Equation (26), if we recall Equation (9.7.1) of [Reference Abramowitz and Stegun1] and the definition of generalized Laguerre polynomials (see, for instance, [Reference Andrews2, p. 220]).

In the case $\lambda=\mu$ , the following proposition provides an expression for the survival function of the supremum M(t) in terms of the Marcum Q-function, which is a special function occurring as a complementary cumulative distribution function for a non-central chi-squared random variable.

Proposition 7. In the case of exponentially distributed downward times, for $t>0$ , $0\le w<t$ , and $\lambda=\mu$ , the survival function of the supremum M(t) is given by

(27) \begin{equation}\mathbb P\left(M(t)>w\right)= e^{-\lambda t}I_0\left(\lambda\sqrt{t-w}\sqrt{t+w}\right)+2\left[1-Q_{1}\left(\sqrt{\lambda(t+w)},\sqrt{\lambda(t-w)}\right)\right],\end{equation}

where $Q_1\left(a,b\right)$ denotes the Marcum Q-function of order 1, defined as

\begin{align*}Q_1(a,b)=\int_b^{+\infty}x\exp\left(-\frac{x^2+a^2}{2}\right)I_0(ax)\textrm dx,\qquad b>0.\end{align*}

Moreover, $\mathbb P (M(t)=t)=e^{-\lambda t}$ .

Proof. From the definition of the compound Poisson process Z(t) (see Equation (11)), and by Proposition 4, we have that, for $0\le w\le t$ ,

\begin{equation*}\begin{aligned}\mathbb P (M(t)>w)&=H_Z(t-w,w)=\int_0^{t-w}h_Z(x,w)\textrm{d}x\\[5pt] &=e^{-\lambda w}w\sqrt{\lambda\mu}\int_0^{t-w}\frac{e^{-\frac{x(\lambda+\mu)}{2}}}{x}\, \frac{1}{\sqrt{2\frac{w}{x}+1}}I_1\left(x\sqrt{\lambda\mu\left(1+2\frac{w}{x}\right)}\right)\textrm{d}x+{e^{-\lambda w}}.\end{aligned}\end{equation*}

Making use of integration by parts, we have

(28) \begin{align}&\mathbb P\left(M(t)>w\right)=e^{-\lambda t}I_0\left(\lambda \sqrt{t-w}\sqrt{t+w}\right)\nonumber\\[5pt] &+\lambda\int_0^{t-w}e^{-\lambda(x+w)}\left[I_0\left(\lambda\sqrt{x}\sqrt{x+2w}\right)-\frac{\sqrt{x}}{\sqrt{x+2w}}I_1\left(\lambda \sqrt{x}\sqrt{x+2w}\right)\right]\textrm dx\nonumber \\[5pt] &=e^{-\lambda t}I_0\left(\lambda \sqrt{t-w}\sqrt{t+w}\right)+\lambda \int_w^t e^{-\lambda y}\!\left[I_0\left(\lambda\sqrt{y-w}\sqrt{y+w}\right)-\frac{\sqrt{y-w}}{\sqrt{y+w}}I_1\!\left(\lambda\sqrt{y-w}\sqrt{y+w}\right)\!\right]\!\textrm dy.\end{align}

Considering the definition of the modified Bessel function of the first kind, we have

(29) \begin{align}&\int_w^t e^{-\lambda y/2}\left(I_0\left(\lambda\sqrt{t+w}\sqrt{y-w}\right)-\sqrt{\frac{y-w}{t+w}}I_1\left( \lambda\sqrt{t+w}\sqrt{y-w}\right)\right)\textrm dy\nonumber\\[5pt] &=\frac{2 e^{-\lambda w/2}}{\sqrt{t+w}}\int_0^{\sqrt{t-w}}z e^{-\lambda z^2/2}\left(\sqrt{t+w}I_0\left(\lambda z \sqrt{t+w}\right)-zI_1\left(\lambda z \sqrt{t+w}\right)\right)\textrm dz\nonumber\\[5pt] &=\frac{2e^{-\lambda w/2}}{\sqrt{t+w}}\int_0^{\sqrt{t-w}}z e^{-\lambda/2 z^2}\left(\sum_{n=0}^{+\infty}\frac{\left(\sqrt{t+w}\right)^{2n+1}\left(\frac{\lambda}{2}z\right)^{2n}}{n!n!}-z \sum_{n=0}^{+\infty}\frac{\left(\sqrt{t+w}\right)^{2n+1}\left(\frac{\lambda}{2}z\right)^{2n+1}}{n!(n+1)!}\right)\textrm dz\nonumber\\[5pt] &=\frac{2e^{-\lambda w/2}}{\lambda}\left[\sum_{n=0}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^n}{n!n!}\gamma\left(n+1,\frac{\lambda}{2}(t-w)\right)-\sum_{n=0}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^n}{n!(n+1)!}\gamma\left(n+2,\frac{\lambda}{2}(t-w)\right)\right]\!,\end{align}

where $\gamma(s,x)$ denotes the lower incomplete gamma function. Hence, recalling the recurrence relation (see, for instance, Equation (6.5.22) of [Reference Abramowitz and Stegun1])

(30) \begin{equation}\gamma(s+1,x)=s\gamma(s,x)-x^se^{-x},\end{equation}

we find that Equation (29) becomes

\begin{align*}&\frac{2e^{-\lambda w/2}}{\lambda}\left[\sum_{n=0}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^n}{n!n!}\gamma\left(n+1,\frac{\lambda(t-w)}{2}\right)\right.\\[5pt] &\left.-\sum_{n=0}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^n}{n!(n+1)!}\left((n+1)\gamma\left(n+1,\frac{\lambda (t-w)}{2}\right)-e^{-\lambda(t-w)/2}\left(\frac\lambda2(t-w)\right)^{n+1}\right)\right]\\[5pt] &=\frac{2e^{-\lambda w/2}}{\lambda}\left[\sum_{n=0}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^n\left(\frac{\lambda}{2}(t-w)\right)^{n+1}}{n!(n+1)!}e^{-\lambda(t-w)/2}\right]\\[5pt] &=\frac{2e^{-\lambda t/2}}{\lambda}\frac{\sqrt{t-w}}{\sqrt{t+w}}I_1\left(\lambda\sqrt{t-w}\sqrt{t+w}\right).\end{align*}

Hence

\begin{align*}&\int_w^t e^{-\lambda t/2}e^{-\lambda y/2}I_0\left(\lambda \sqrt{y-w}\sqrt{t+w}\right)\textrm dy \\[5pt] &= \int_w^t e^{-\lambda y}\left[I_0\left(\lambda\sqrt{y-w}\sqrt{y+w}\right)-\frac{\sqrt{y-w}}{\sqrt{y+w}}I_1\left(\lambda \sqrt{y-w}\sqrt{y+w}\right)\right]\textrm dy.\end{align*}

Finally, from Equation (28) it follows that

\begin{equation*}\begin{aligned}\mathbb{P}\left(M(t)>w\right)&=e^{-\lambda t}I_0\left(\lambda \sqrt{t-w}\sqrt{t+w}\right)+\lambda \int_w^t e^{-\lambda t/2}e^{-\lambda y/2}I_0\left(\lambda \sqrt{y-w}\sqrt{t+w}\right)\textrm dy\\[5pt] &= e^{-\lambda t}I_0\left(\lambda \sqrt{t-w}\sqrt{t+w}\right) + \int_0^{\lambda(t-w)}e^{-\lambda/2(t+w)-x/2}I_0\left(\sqrt{\lambda x}\sqrt{t+w}\right)\textrm dx,\end{aligned}\end{equation*}

and the desired result immediately follows from recalling the definition of the Marcum Q-function.
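Equation (27) is convenient numerically because the Marcum Q-function of order 1 coincides with the tail of a noncentral chi-squared law with 2 degrees of freedom: $Q_1(a,b)=\mathbb P(X>b^2)$ for $X\sim\chi'^2(2,a^2)$. A minimal sketch (ours, assuming SciPy):

import numpy as np
from scipy.special import iv
from scipy.stats import ncx2

def surv_M_eq_rates(t, w, lam):
    # Equation (27), case lambda = mu; Q_1(a, b) = ncx2.sf(b^2, df=2, nc=a^2).
    Q1 = ncx2.sf(lam * (t - w), df=2, nc=lam * (t + w))
    return np.exp(-lam * t) * iv(0, lam * np.sqrt((t - w) * (t + w))) + 2 * (1 - Q1)

lam, t = 1.0, 3.0
print(surv_M_eq_rates(t, 0.0, lam))       # ~1, since M(t) > 0 almost surely
print(surv_M_eq_rates(t, t - 1e-9, lam))  # ~exp(-lam*t), the no-switch probability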

The result obtained in Proposition 7 is in agreement with that given in Cinque and Orsingher [Reference Cinque and Orsingher6] (cf. Equation (2.26)), as proven in the following remark.

Remark 2. The survival function of M(t), obtained by considering the distribution function given in Equation (2.26) of [Reference Cinque and Orsingher6] for $c_1=c_2=1$ , can be identified with Equation (27). Indeed,

\begin{align*} &\mathbb P\left(M(t)>w\right)=1-e^{-\lambda t}\sum_{j=0}^{+\infty}I_{j+1}\left(\lambda\sqrt{(t-w)(t+w)}\right)\left[\left(\sqrt{\frac{t+w}{t-w}}\right)^{j+1}-\left(\sqrt{\frac{t-w}{t+w}}\right)^{j+1}\right]\\[5pt] &=1-e^{-\lambda t}\sum_{m=1}^{+\infty}\frac{\left(\frac{\lambda}{2}\right)^m}{m!}(t+w)^m\sum_{j=0}^{m-1}\frac{\left(\frac\lambda 2\right)^j}{j!}(t-w)^j+e^{-\lambda t}\sum_{j=0}^{+\infty}\frac{\left(\frac{\lambda}{2}\right)^j}{j!}(t+w)^j\sum_{h=j+1}^{+\infty}\frac{\left(\frac{\lambda}{2}\right)^h}{h!}(t-w)^h\\[5pt] &=1+e^{-\lambda t}\left(-1+e^{\lambda/2(t-w)}\right)+e^{\lambda/2(t-w)}e^{-\lambda t}\sum_{m=1}^{+\infty}\frac{\left(\frac{\lambda}{2}\right)^m}{m!}(t+w)^m\\[5pt] & \times \left[\frac{\gamma(m+1,\frac\lambda 2(t-w))}{\Gamma\left(m+1\right)}+\frac{\gamma(m,\frac\lambda 2(t-w))}{\Gamma\left(m\right)}-1\right]. \end{align*}

Hence, by Equation (30), the survival function can be expressed as

\begin{equation*}\begin{aligned}\mathbb P\left(M(t)>w\right)=&-2e^{-\lambda t}+2e^{-\frac{\lambda}{2}(t+w)}+e^{-\lambda t}I_0\left(\lambda\sqrt{t-w}\sqrt{t+w}\right)\\[5pt] &+2e^{-\frac{\lambda}{2}(t+w)}\sum_{m=1}^{+\infty}\frac{\left(\frac{\lambda}{2}(t+w)\right)^m}{m!}\gamma\left(m+1,\frac{\lambda}{2}(t-w)\right)\\[5pt] &=e^{-\lambda t}I_0\left(\lambda\sqrt{t-w}\sqrt{t+w}\right)+2\left[1-Q_1\left(\sqrt{\lambda(t+w)},\sqrt{\lambda(t-w)}\right)\right],\end{aligned}\end{equation*}

where the last equality holds by virtue of Equation (61.2.5) of Hansen [Reference Hansen20].

Figure 4 shows the behavior of the distribution function $\mathbb P(M(t)\le w)$ for some choices of the parameters, obtained as the complement of the survival function (26).

In the following proposition, we provide the expression for the expected value of M(t).

Proposition 8. In the case of exponentially distributed downward times, the expected value of the supremum M(t) is given by

(31) \begin{equation}\begin{aligned}\mathbb E\left(M(t)\right)&=\frac{1-e^{-\lambda t}}{\lambda}+\frac{\mu}{2\lambda} \sum_{h=0}^{+\infty}\frac{\left(\frac{\lambda\mu}{4}\right)^h }{h!(h+1)!}\sum_{r=0}^{h} \binom{h}{r}\left(\frac{2}{\lambda}\right)^r (r+1)!\\[5pt] &\times \left[\left(\frac{2}{\lambda+\mu}\right)^{1+2h-r}\gamma\left(1+2h-r,\frac{(\lambda+\mu)t}{2}\right)-e^{-\lambda t}t^{1+2h-r} \right.\\[5pt] &\times \left.\sum_{k=0}^{r+1} \frac{(\lambda t)^{k}}{k!}\beta\left(2h-r+1,k+1\right) {_1}F_1\left(1+2h-r;\;2+2h+k-r;\;\frac{(\lambda-\mu)t}{2}\right)\right],\end{aligned}\end{equation}

where ${_1}F_1(a;\;b;\;z)$ is the confluent hypergeometric function

(32) \begin{equation}{_1}F_1(a,b;\;z)\;:\!=\;\sum_{k=0}^{+\infty}\frac{(a)_kz^k}{(b)_kk!},\end{equation}

$\gamma(s,x)$ is the lower incomplete gamma function, and $\beta(x,y)$ denotes the beta function.

Figure 4. The distribution function $\mathbb P(M(t)\le w)$ , in the case of exponentially distributed downward times, when $t=30$ . Left: $\mu=0.5$ , $\lambda=0.5$ (solid), $\lambda=0.3$ (dashed), and $\lambda=0.2$ (dotted). Right: $\lambda=0.5$ , $\mu=0.5$ (solid), $\mu=0.7$ (dashed), and $\mu=0.9$ (dotted).

The proof of Proposition 8 is given in Appendix B.

Remark 3. In the case $\lambda=\mu$ , the expected value of M(t) is given by

(33) \begin{equation}\mathbb E\left(M(t)\right)=e^{-\lambda t}t\left(I_0(\lambda t)+I_1\left(\lambda t\right)\right),\qquad t\ge 0.\end{equation}

The result follows from Equation (31) if we note that for $\mu=\lambda$ ,

\begin{align*}&\sum_{k=0}^{r+1}\frac{(\lambda t)^k}{k!}\beta\left(2h-r+1,k+1\right) {_1F_1}\left(1+2h-r;\;2+2h+k-r;\;0\right)\\[5pt] &\qquad =\frac{(\lambda t)^{1-2(h+1)+r}\left[(2h-r)!\Gamma\left(3+h,\lambda t\right)+(2h+2)!\Gamma(1+2h-r,\lambda t)\right]}{(2h+2)!}.\end{align*}

We remark that Equation (33) coincides with Equation (5.10) of Cinque and Orsingher [Reference Cinque and Orsingher5].
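Equation (33) can be checked against Equation (27): since $0\le M(t)\le t$, we have $\mathbb E(M(t))=\int_0^t \mathbb P(M(t)>w)\,\textrm{d}w$. A self-contained sketch (ours, assuming SciPy):

import numpy as np
from scipy.special import iv
from scipy.stats import ncx2
from scipy.integrate import quad

lam, t = 1.0, 3.0

def surv(w):
    # Equation (27) with mu = lambda.
    Q1 = ncx2.sf(lam * (t - w), df=2, nc=lam * (t + w))
    return np.exp(-lam * t) * iv(0, lam * np.sqrt((t - w) * (t + w))) + 2 * (1 - Q1)

mean_from_survival, _ = quad(surv, 0.0, t)
mean_from_eq33 = np.exp(-lam * t) * t * (iv(0, lam * t) + iv(1, lam * t))
print(mean_from_survival, mean_from_eq33)  # the two values should agree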

4.2. Erlang distributed downward random times

Let us now consider the case $D\sim Erlang(\mu,k)$ with $k\in \mathbb N$ , $\mu>0$ . In the sequel we shall assume $k/\mu<1/\lambda$ , which ensures the validity of the condition (6). The density of D is given by

\begin{align*}g(t)=\frac{\mu^k t^{k-1}e^{-\mu t}}{(k-1)!}, \quad t\ge 0.\end{align*}

In the following proposition, we provide the expression for the density of the random variable C.

Proposition 9. In the case of Erlang distributed downward random times, for any $t>0$ , the density of the constant phase C is given by

(34) \begin{equation}f_C(t)=\frac{1}{2}\mu^k k \left(\frac{t}{2}\right)^{k-1} e^{-(\lambda+\mu)t/2} W_{k,k+1}\Big[\lambda \mu^k \left(\frac{t}{2}\right)^{k+1}\Big],\end{equation}

where

(35) \begin{equation}W_{\rho,\beta}(z)\;:\!=\;\sum_{j=0}^{+\infty} \frac{z^j}{j! \Gamma(\rho j+ \beta)},\quad \rho>-1, \beta\in {\mathbb C},\end{equation}

is the Wright function.

Proof. From Equation (3), the density of Y(t) is

\begin{align*}h(u,t)=\frac{e^{-\lambda t -\mu u}}{u} \sum_{n=1}^{+\infty}\frac{(\lambda t \mu^k u^k)^n}{n! (nk-1)!},\quad 0<u<t.\end{align*}

Hence, taking into account Equation (7), the probability density of T, for any $t\ge 0$ , is

\begin{eqnarray*}&& f_T(t)=\textrm{e}^{-\lambda t} g(t)+\int_0^t \frac{x}{t}h(t-x,t)g(x)\textrm{d} x\\[5pt] && =\textrm{e}^{-\lambda t} g(t) + \int_0^t\frac{x}{t}\left(\frac{e^{-\lambda t -\mu (t-x)}}{t-x} \sum_{n=1}^{+\infty}\frac{(\lambda t \mu^k (t-x)^k)^n}{n! (nk-1)!} \right) \frac{\mu^kx^{k-1}e^{-\mu x}}{(k-1)!}\textrm{d}x \\[5pt] && =\textrm{e}^{-\lambda t} g(t) + \frac{\mu^k}{t(k-1)!}e^{-\lambda t-\mu t}\sum_{n=1}^{+\infty}\frac{(\lambda t \mu^k)^n}{n! (nk-1)!}\int_0^t x^k(t-x)^{kn-1}\textrm{d}x\\[5pt] && =\textrm{e}^{-\lambda t} g(t) + \frac{\mu^k}{t(k-1)!}e^{-\lambda t-\mu t}\sum_{n=1}^{+\infty}\frac{(\lambda t \mu^k)^n}{n! (nk-1)!}\cdot\frac{t^{k(n+1)}k!(nk-1)!}{(k+kn)!}\\[5pt] && =\textrm{e}^{-\lambda t} g(t)+ \mu^k k t^{k-1} e^{-t(\lambda+\mu)}\sum_{n=1}^{+\infty}\frac{(\lambda\mu^k t^{k+1})^n}{n!(k+kn)!}.\end{eqnarray*}

Finally, by Equation (10), we have for $t\ge 0$

\begin{eqnarray*}&& f_C(t)=\frac{1}{2} e^{-(\lambda+\mu)t/2} \frac{\mu^k ({t}/{2})^{k-1}}{(k-1)!}+\frac{1}{2}\mu^k k \left(\frac{t}{2}\right)^{k-1}e^{-(\lambda+\mu)t/2}\sum_{n=1}^{+\infty} \frac{(\lambda \mu^k \left({t}/{2}\right)^{k+1})^n}{n!(k+kn)!}\\[5pt] && =\frac{1}{2}\mu^k k \left(\frac{t}{2}\right)^{k-1}e^{-(\lambda+\mu)t/2} \sum_{n=0}^{+\infty} \frac{(\lambda \mu^k \left({t}/{2}\right)^{k+1})^n}{n!(k+kn)!},\end{eqnarray*}

so that the proof immediately follows from Equation (35).

Remark 4. Note that for $k=1$ Equation (34) becomes

\begin{align*}\begin{aligned}f_C(t)\big|_{k=1}=&\frac12\mu e^{-(\lambda+\mu)t/2}\,W_{1,2}\left(\lambda\mu\left(\frac t2\right)^2\right)=\frac12\mu e^{-(\lambda+\mu)t/2}\,\sum_{m=0}^{+\infty}\frac{\left(\lambda\mu\left(\frac t2\right)^2\right)^m}{m!(m+1)!}\\[5pt] =&\mu e^{-(\lambda+\mu)t/2}\frac{I_1\left(\sqrt{\lambda\mu}t\right)}{\sqrt{\lambda\mu}t},\quad t\ge 0,\end{aligned}\end{align*}

which is the density of C in the case of exponentially distributed downward random times.

Proposition 10. In the case of Erlang distributed downward times, the upper bound U(t, w) provided in Proposition 1 has the following expression: for any $t>w$ ,

(36) \begin{equation}U(t,w)=\exp\left\{-\lambda w\left[1-k\left(\frac{\mu}{\lambda+\mu}\right)^k\sum_{j=0}^{+\infty}\frac{(\lambda\mu^k)^j\gamma\left(j+jk+k,\frac{(\lambda+\mu)(t-w)}{2}\right)}{j!(\lambda+\mu)^{j+jk}\Gamma\left(jk+k+1\right)} \right]\right\},\end{equation}

where $\gamma(s,x)$ is the lower incomplete gamma function.

Proof. From Equation (34), the distribution function of C is given by

(37) \begin{equation}F_C(t)=\int_0^t f_C(s)\textrm{d}s=k\left(\frac{\mu}{\lambda+\mu}\right)^k\sum_{j=0}^{+\infty}\frac{(\lambda\mu^k)^j\gamma\left(j+jk+k,\frac{(\lambda+\mu)t}{2}\right)}{j!(\lambda+\mu)^{j+jk}\Gamma\left(jk+k+1\right)}.\end{equation}

Hence, the result immediately follows from Proposition 1.

In Figure 5 we provide plots of the lower bound L(t, w) defined in Equation (16) with $F_C(w)$ as given in Equation (37) and U(t, w) as given in Equation (36), for different choices of parameters.

Figure 5. The lower bound L(t, w) and the upper bound U(t, w) of the survival function of M(t), in the case of Erlang distributed downward times, for $\mu=9$ , $k=2$ , $t=3$ , $\lambda=2$ (left), and $\lambda=3$ (right).
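The series (37) is straightforward to evaluate if the terms are accumulated in log space to avoid overflow; $\gamma(s,x)$ is recovered from the regularized lower incomplete gamma function. A sketch (ours, assuming SciPy) with the parameter values of Figure 5:

import numpy as np
from scipy.special import gammainc, gammaln

def F_C_erlang(t, lam, mu, k, jmax=300, tol=1e-15):
    # Equation (37); gammainc(s, x) = gamma(s, x) / Gamma(s) is regularized.
    total = 0.0
    for j in range(jmax):
        s = j + j * k + k
        log_coef = (j * np.log(lam) + j * k * np.log(mu)
                    - (j + j * k) * np.log(lam + mu)
                    - gammaln(j + 1) - gammaln(j * k + k + 1) + gammaln(s))
        term = np.exp(log_coef) * gammainc(s, (lam + mu) * t / 2)
        total += term
        if j > 5 and term < tol:
            break
    return k * (mu / (lam + mu)) ** k * total

lam, mu, k, t, w = 2.0, 9.0, 2, 3.0, 1.0
print(F_C_erlang(1e6, lam, mu, k))                             # ~1 when k/mu < 1/lambda
print(np.exp(-lam * w * (1 - F_C_erlang(t - w, lam, mu, k))))  # U(t, w) of (36)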

4.3. Weighted exponentially distributed downward random times

Let us assume that $D\sim WE(\alpha,\mu)$ , with $\alpha,\mu>0$ , so that the density of D is given by

(38) \begin{equation}g(t)=\frac{\alpha+1}{\alpha}\mu e^{-\mu t} (1-e^{-\alpha\mu t}), \quad t>0.\end{equation}

A random variable having such a density is said to have a weighted exponential distribution (see Gupta and Kundu [Reference Gupta and Kundu19] and Das and Kundu [Reference Das and Kundu8]). Note that the exponential distribution can be obtained from Equation (38) by letting $\alpha\to +\infty$ . We assume $0<\lambda\le \frac{\mu}{2}$ , $\alpha>0$ (or $\frac{\mu}{2}<\lambda<\mu$ , $\alpha>\frac{\mu-2\lambda}{\lambda-\mu}$ ), so that the condition (6) is fulfilled. In the following theorem we obtain the expression for the density of the constant phase C.

Theorem 4. In the case of weighted exponentially distributed downward times, the probability density function of the random variable C is given by

(39) \begin{equation}f_C(t)=\frac{(1+\alpha)\mu}{2\alpha}e^{-\frac{(\lambda+\mu)t}{2}-\frac{\alpha\mu t}{4}}\sqrt{\pi\alpha\mu\frac{t}{2}}\sum_{n=0}^{+\infty}\frac{\left(\lambda\frac{t^2}{4}\right)^n}{n!(n+1)!}\left(\frac{(1+\alpha)\mu}{\alpha}\right)^nI_{n+1/2}\left(\frac{\alpha\mu t}{4}\right),\quad t\ge 0.\end{equation}

Proof. From Condition 4 of [Reference Gupta and Kundu19], by considering two independent exponentially distributed random variables, i.e. $U\sim Exp(\mu)$ and $V\sim Exp(\mu(1+\alpha))$ , we can easily see that

\begin{align*}U+V\overset{d}{=}D \sim WE(\alpha,\mu).\end{align*}

Hence, if $\{D_1,\ldots,D_n\}$ is a collection of n independent random variables such that $D_i\sim WE(\alpha,\mu)$ for any $i=1,\ldots,n$ , then

\begin{equation*}\sum_{i=1}^n D_i\overset{d}{=}\widetilde U +\widetilde V,\end{equation*}

where $\widetilde U\sim Erlang(\mu(1+\alpha),n)$ and $\widetilde V\sim Erlang (\mu,n)$ . Moreover, denoting by $f_{\widetilde U}(x)$ and $f_{\widetilde V}(x)$ the densities of $\widetilde U$ and $\widetilde V$ respectively, we have

\begin{equation*}\begin{aligned}g^{(n)}(t)&=\int_{-\infty}^{+\infty} f_{\widetilde U}(x)f_{\widetilde V}(t-x)\textrm{d}x=\int_0^t \frac{\mu^n (1+\alpha)^nx^{n-1}e^{-\mu(1+\alpha)x}}{(n-1)!}\cdot \frac{\mu^n(t-x)^{n-1}e^{-\mu(t-x)}}{(n-1)!}\textrm{d}x\\[5pt] &=\frac{\mu^{2n}(1+\alpha)^n}{[(n-1)!]^2}\int_0^t x^{n-1}e^{-\mu(1+\alpha)x}(t-x)^{n-1}e^{-\mu(t-x)}\textrm{d}x\\[5pt] &=\left[\frac{\mu(1+\alpha)}{\alpha}\right]^n\frac{e^{-\frac{1}{2}(2+\alpha)\mu t}}{(n-1)!}\sqrt{\frac{\alpha\mu \pi }{t}} t^n I_{n-1/2}\left(\frac{\alpha\mu t}{2}\right).\end{aligned}\end{equation*}

Hence, from Equation (7), it follows that

\begin{equation*}\begin{aligned}f_T(t)&=\mu e^{-(\lambda +\mu)t}\frac{\alpha+1}{\alpha}(1-e^{-\alpha\mu t})-\frac{1}{t}\sum_{n=1}^{+\infty}\frac{(\lambda t)^n e^{-\lambda t}}{n!}\int_0^te^{-\mu t-\alpha\mu t-1/2\alpha\mu x}\left(-1+e^{\alpha\mu(t-x)}\right)\\[5pt] &\times \sqrt{\frac{\alpha\mu\pi}{x}}\left(\frac{(1+\alpha)\mu x}{\alpha}\right)^{n+1}(x-t)I_{n-1/2}\left(\frac{\alpha\mu x}{2}\right)\frac{1}{n! \, x}\textrm{d}x\\[5pt] &=\mu e^{-(\lambda +\mu)t}\frac{\alpha+1}{\alpha}(1-e^{-\alpha\mu t}) -\frac{e^{-\mu t-\lambda t -\alpha\mu t}}{t}\sqrt{\alpha\mu \pi}\\[5pt] &\times \sum_{n=1}^{+\infty}\frac{(\lambda t)^n}{n!(n-1)!}\left(\frac{(1+\alpha)\mu}{\alpha}\right)^{n+1}\int_0^t e^{\frac{\alpha\mu x}{2}}x^{n-1/2}(t-x) I_{n-1/2}\left(\frac{\alpha\mu x}{2}\right)\textrm{d}x\\[5pt] &+\frac{e^{-\mu t-\lambda t}}{t}\sqrt{\alpha\mu \pi}\sum_{n=1}^{+\infty}\frac{(\lambda t)^n}{n!(n-1)!}\left(\frac{(1+\alpha)\mu}{\alpha}\right)^{n+1}\!\int_0^t e^{-\frac{\alpha\mu x}{2}}x^{n-1/2}(t-x) I_{n-1/2}\!\left(\frac{\alpha\mu x}{2}\right)\!\textrm{d}x\\[5pt] &=\mu e^{-(\lambda +\mu)t}\frac{\alpha+1}{\alpha}(1-e^{-\alpha\mu t})+e^{-\mu t-\lambda t-\frac{\alpha\mu t}{2}}\sqrt{\alpha\mu\pi t}\sum_{n=1}^{+\infty}\frac{(\lambda t^2)^n}{n!(n+1)!}\left(\frac{(1+\alpha)\mu}{\alpha}\right)^{n+1}\\[5pt] &\times I_{n+1/2}\left(\frac{\alpha\mu t}{2}\right) =e^{-\mu t-\lambda t-\frac{\alpha\mu t}{2}}\sqrt{\alpha\mu\pi t}\sum_{n=0}^{+\infty}\frac{(\lambda t^2)^n}{n!(n+1)!}\left(\frac{(1+\alpha)\mu}{\alpha}\right)^{n+1}I_{n+1/2}\left(\frac{\alpha\mu t}{2}\right).\end{aligned}\end{equation*}

Finally, the result follows from taking into account Equation (10).

Remark 5. For $\alpha\to +\infty$ , since (see, for instance, Equation (9.7.1) of [Reference Abramowitz and Stegun1])

\begin{align*}I_{n+1/2}\left(\frac{\alpha \mu t}{4}\right)\propto \frac{e^{\alpha\mu t/4}}{\sqrt{\frac{\pi\alpha\mu t}{2}}},\end{align*}

we have that

\begin{align*}\lim_{\alpha\to+\infty} f_C(t)=\mu e^{-(\lambda+\mu)t/2} \frac{I_1\left(t\sqrt{\lambda\mu}\right)}{\sqrt{\lambda\mu}t},\quad t\ge 0,\end{align*}

which is identical to the density function of C in the case of exponentially distributed downward random times.
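Remark 5 can be illustrated numerically (a minimal sketch, ours, assuming SciPy): for large $\alpha$ the density (39) approaches the exponential-case density (23). Exponentially scaled Bessel functions keep the evaluation stable for large $\alpha$:

import numpy as np
from math import factorial
from scipy.special import iv, ive

def f_C_we(t, lam, mu, alpha, nmax=60):
    # Equation (39); ive(v, x) = iv(v, x) * exp(-x) absorbs the factor
    # exp(-alpha*mu*t/4) appearing in the prefactor of (39).
    pref = ((1 + alpha) * mu / (2 * alpha) * np.exp(-(lam + mu) * t / 2)
            * np.sqrt(np.pi * alpha * mu * t / 2))
    s = sum((lam * t * t / 4) ** n / (factorial(n) * factorial(n + 1))
            * ((1 + alpha) * mu / alpha) ** n * ive(n + 0.5, alpha * mu * t / 4)
            for n in range(nmax))
    return pref * s

def f_C_exp(t, lam, mu):
    # Equation (23), the exponential-case density.
    r = np.sqrt(lam * mu)
    return mu * np.exp(-(lam + mu) * t / 2) * iv(1, r * t) / (r * t)

lam, mu, t = 0.5, 2.0, 1.5
for alpha in (1.0, 10.0, 100.0, 1000.0):
    print(alpha, f_C_we(t, lam, mu, alpha), f_C_exp(t, lam, mu))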

The following proposition provides the expression for the upper bound for the survival function of M(t).

Proposition 11. In the case of weighted exponentially distributed downward times, the upper bound U(t, w) (see Proposition 1) has the following expression:

(40) \begin{equation}\begin{aligned}U(t,w)&=\exp\left\{-\lambda w \left[1- \frac{(1+\alpha)\mu}{2\alpha}\frac{\sqrt{2\pi}\alpha\mu}{(\lambda+\mu+\alpha\mu)^2}\sum_{n=0}^{+\infty}\frac{1}{n!(n+1)!}\left(\frac{(1+\alpha)\mu\lambda}{\alpha}\right)^n\right. \right.\\[5pt] &\times \left. \left. \frac{1}{(\lambda+\mu+\alpha\mu)^{3n}}\left(\frac{\alpha\mu}{2}\right)^n\sum_{k=0}^{+\infty}\left(\frac{\alpha\mu/2}{\lambda+\mu+\alpha\mu}\right)^{2k}\frac{1}{\Gamma\left(k+n+3/2\right)k!} \right.\right.\\[5pt] &\left.\left. \times\gamma\left(2+2k+3n,(\lambda+\alpha\mu+\mu)\frac{t-w}{2}\right)\right]\right\},\quad t>w>0,\end{aligned}\end{equation}

where $\gamma(s,x)$ is the lower incomplete gamma function.

Proof. Recalling Equation (24), from Equation (39) we have that the distribution of C, for $t>w>0$ , is given by

(41) \begin{equation}\begin{aligned}F_C(w)&=\frac{(1+\alpha)\mu}{2\alpha}\sqrt{\pi\alpha\mu} \sum_{n=0}^{+\infty} \frac{1}{n!(n+1)!}\left(\frac{(1+\alpha)\mu\lambda}{\alpha}\right)^n\\[5pt] & \times\int_0^w \exp\left(-\frac{(\lambda+\mu)t}{2}-\frac{\alpha\mu t}{4}\right)\left(\frac{t}{2}\right)^{2n+1/2} I_{n+1/2}\left(\frac{\alpha\mu t}{2}\right)\textrm dt\\[5pt] &=\frac{(1+\alpha)\mu}{2\alpha}\sqrt{\pi\alpha\mu} \sum_{n=0}^{+\infty} \frac{1}{n!(n+1)!}\left(\frac{(1+\alpha)\mu\lambda}{\alpha}\right)^n\\[5pt] &\times \sum_{k=0}^{+\infty}\frac{1}{\Gamma(k+n+3/2)k!}\int_0^w \exp\left(-\frac{(\lambda+\mu)t}{2}-\frac{\alpha\mu t}{4}\right)\left(\frac{t}{2}\right)^{2n+1/2}\left(\frac{\alpha\mu t}{4}\right)^{2k+n+1/2}\textrm dt\\[5pt] &=\frac{(1+\alpha)\mu}{2\alpha}\sqrt{\pi\alpha\mu} \sum_{n=0}^{+\infty} \frac{1}{n!(n+1)!}\left(\frac{(1+\alpha)\mu\lambda}{\alpha}\right)^n\\[5pt] &\times \sum_{k=0}^{+\infty}\frac{2^{1/2-2k-n}}{\Gamma\left(k+n+3/2\right)k!}\cdot\frac{(\alpha\mu)^{1/2+2k+n}}{(\lambda+\mu+\alpha\mu)^{2+2k+3n}} \,\gamma\left(2+2k+3n,(\lambda+\alpha\mu+\mu)\frac{w}{2}\right).\end{aligned}\end{equation}

Hence, the result follows from Proposition 1.

Figure 6 shows plots of the lower bound L(t, w) defined in Equation (16) with $F_C(w)$ as given in Equation (41) and the upper bound U(t, w) given in Equation (40), in the case of weighted exponentially distributed downward times, for various choices of the parameters.

Figure 6. The lower bound L(t, w) and the upper bound U(t, w) of the survival function of M(t), in the case of weighted exponentially distributed downward times, for $\alpha=1$ , $\lambda=4$ , $t=3$ , $\mu=10$ (left), and $\mu=12$ (right).

4.4. Mixed exponential downward random times

This section is devoted to the case in which the downward time D is distributed as a mixture of two exponential distributions, so that the density of D is given by

(42) \begin{equation}f_D(t)=b_1\mu_1e^{-\mu_1 t}+b_2\mu_2e^{-\mu_2 t},\quad t\ge 0,\end{equation}

with $b_i\ge 0$ , for $i=1,2$ and $b_1+b_2=1$ . Throughout the section we assume that

\begin{align*}\frac{b_1}{\mu_1}+\frac{b_2}{\mu_2}<\frac{1}{\lambda},\end{align*}

so that the condition (6) is fulfilled.

The following theorem provides the expression for the density of the constant phase C.

Theorem 5. In the case of mixed exponential downward times (see Equation (42)), for $t\geq 0$ , the probability density function of the constant phase C is given by

(43) \begin{align}&f_C(t)=\frac{1}{2}\sum_{n=0}^{+\infty}\left(\frac{\lambda t^2}{4}\right)^n \frac{e^{-\lambda t/2}}{(n+1)!}\sum_{k=0}^n \frac{\binom{n}{k}}{n!}b_2^n\mu_2^n\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k\nonumber \\[5pt]&\times \left[b_1\mu_1e^{-\mu_1 t/2}{_1F_1}\left(n-k,n+2,(\mu_1-\mu_2)\frac{t}{2}\right) +b_2\mu_2e^{-\mu_2 t/2}{_1F_1}\left(k,n+2,(\mu_2-\mu_1)\frac{t}{2}\right)\right]\!,\end{align}

where ${_1}F_1(a,b;\;z)$ is as defined in Equation (32).

Proof. Let us start with the computation of the n-fold convolution of $f_D$ . Setting $f_i(t)\;:\!=\;\mu_ie^{-\mu_i t}$ for any $i=1,2$ , we have

\begin{equation*}\begin{aligned}f_D^{(n)}(t)=\sum_{k=0}^{n} \binom{n}{k}b_1^kb_2^{n-k}f_1^{(k)}(t)\star f_2^{(n-k)}(t), \quad t\ge 0,\end{aligned}\end{equation*}

where $f\star g$ denotes the convolution of f and g. In particular, we have that

\begin{eqnarray*}&& f_1^{(k)}(t)\star f_2^{(n-k)}(t)=\int_{-\infty}^{+\infty}f_1^{(k)}(x)f_2^{(n-k)}(t-x) \textrm{d}x=\frac{\mu_1^k\mu_2^{n-k}e^{-\mu_2 t}}{(k-1)!(n-k-1)!}\\[5pt] && \times \int_0^t x^{k-1}(t-x)^{n-k-1}e^{-\mu_1 x}e^{\mu_2 x}\textrm{d}x=\frac{\mu_1^k\mu_2^{n-k}e^{-\mu_2 t}t^{n-1} {_1F_1(k,n,(\mu_2-\mu_1)t)}}{(n-1)!}.\end{eqnarray*}

Hence, $f_D^{(n)}(t)$ can be expressed as follows:

\begin{align*}f_D^{(n)}(t)=\sum_{k=0}^n\binom{n}{k}b_1^kb_2^{n-k}\mu_1^k\mu_2^{n-k}e^{-\mu_2 t} t^{n-1} \frac{{_1F_1(k,n,(\mu_2-\mu_1)t)}}{(n-1)!}.\end{align*}

From Equation (7), we have

\begin{equation*}\begin{aligned}f_T(t)&=\frac{1}{t}\sum_{n=0}^{+\infty}p(n,\lambda t)\int_0^t x f_D^{(n)}(t-x)f_D(x)\textrm{d}x=\frac{1}{t}\sum_{n=0}^{+\infty}p(n,\lambda t)\sum_{k=0}^n\frac{\binom{n}{k}}{(n-1)!}b_2^n\mu_2^n \left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k\\[5pt] &\times \left[b_1\mu_1\int_0^t xe^{-\mu_2(t-x)}(t-x)^{n-1}e^{-\mu_1 x}{_1F_1(k,n,(\mu_2-\mu_1)(t-x))}\textrm{d}x\right.\\[5pt] &+\left. b_2\mu_2\int_0^t xe^{-\mu_2(t-x)}(t-x)^{n-1}e^{-\mu_2 x}{_1F_1(k,n,(\mu_2-\mu_1)(t-x))}\textrm{d}x \right].\end{aligned}\end{equation*}

It is not hard to show, using the Kummer transformation ${_1F_1}(a,b,z)=e^{z}\,{_1F_1}(b-a,b,-z)$, that

\begin{equation*}\begin{aligned}&\int_0^t x e^{-\mu_2(t-x)}(t-x)^{n-1}e^{-\mu_1x}{_1F_1(k,n,(\mu_2-\mu_1)(t-x))}\textrm{d}x\\[5pt] &=e^{-\mu_1t}\int_0^t x(t-x)^{n-1}{_1F_1(n-k,n,(\mu_1-\mu_2)(t-x))}\textrm{d}x \\[5pt] &=e^{-\mu_1 t}\frac{t^{n+1}}{n(n+1)}{_1F_1(n-k,n+2,(\mu_1-\mu_2)t)},\end{aligned}\end{equation*}

and

\begin{align*}&\int_0^t x e^{-\mu_2(t-x)}(t-x)^{n-1}e^{-\mu_2x}{_1F_1(k,n,(\mu_2-\mu_1)(t-x))}\textrm{d}x\\[5pt] & = e^{-\mu_2t}\int_0^t x(t-x)^{n-1}{_1F_1(k,n,(\mu_2-\mu_1)(t-x))}\textrm{d}x\\[5pt] & = e^{-\mu_2 t}\frac{t^{n+1}}{n(n+1)}{_1F_1(k,n+2,(\mu_2-\mu_1)t)}.\end{align*}

Hence, Equation (43) follows from Equation (5).

Remark 6. For $b_1=b_2=\frac 12$ and $\mu_1=\mu_2=\mu$ , since $_1F_1(n-k,n+2,0)=1$ , we have that

\begin{align*}\begin{aligned}f_C(t)\Big|_{b_1=b_2=\frac{1}{2},\, \mu_1=\mu_2=\mu}&=\frac{\mu e^{-(\lambda+\mu) t/2}}{2}\sum_{n=0}^{+\infty}\left(\frac{\lambda t^2}{4}\right)^n\frac{\left(\frac\mu 2\right)^n}{n!(n+1)!}\,\sum_{k=0}^n\binom{n}{k}\\[5pt] &=\frac{\mu e^{-(\lambda+\mu) t/2}I_1\left(\sqrt{\lambda\mu}t\right)}{\sqrt{\lambda\mu}\,t},\quad t\ge 0,\end{aligned}\end{align*}

which is the same as the corresponding result in the case of exponentially distributed downward random times.
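The series (43) can be evaluated directly with a standard confluent hypergeometric routine. As a sanity check (ours, assuming SciPy; the parameters are those of Figure 7 and satisfy condition (6)), the density integrates to one:

import numpy as np
from math import comb, factorial
from scipy.special import hyp1f1
from scipy.integrate import quad

def f_C_mix(t, lam, mu1, mu2, b1, nmax=50):
    # Equation (43): constant-phase density for mixed exponential downward times.
    b2 = 1.0 - b1
    total = 0.0
    for n in range(nmax):
        outer = (lam * t * t / 4) ** n * np.exp(-lam * t / 2) / factorial(n + 1)
        inner = 0.0
        for k in range(n + 1):
            c = comb(n, k) / factorial(n) * (b2 * mu2) ** n * (b1 * mu1 / (b2 * mu2)) ** k
            inner += c * (b1 * mu1 * np.exp(-mu1 * t / 2) * hyp1f1(n - k, n + 2, (mu1 - mu2) * t / 2)
                          + b2 * mu2 * np.exp(-mu2 * t / 2) * hyp1f1(k, n + 2, (mu2 - mu1) * t / 2))
        total += outer * inner
    return total / 2.0

lam, mu1, mu2, b1 = 1.0, 2.0, 3.0, 0.5   # E(D) = 5/12 < 1/lambda
mass, _ = quad(f_C_mix, 0, 60, args=(lam, mu1, mu2, b1))
print(mass)  # ~1

Setting mu1 = mu2 and b1 = 1/2 reproduces the exponential-case density numerically, as in Remark 6.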

Proposition 12. In the case of mixed exponential downward times (Equation (42)), the upper bound U(t, w) given in Proposition 1 has the following expression:

(44) \begin{equation}U(t,w)=\exp\left[-\lambda w \left(1-F_C(t-w)\right) \right],\quad t>w>0,\end{equation}

where

(45) \begin{align}& F_C(w)= b_1\mu_1 \sum_{n=0}^{+\infty} \frac{(b_2\mu_2\lambda)^n}{n!(n+1)!}\left[ \sum_{m=0}^{+\infty}\frac{(m+n-1)!\,{_2}F_1\left(1-n,-n;1-m-n;-\frac{b_1\mu_1}{b_2\mu_2}\right)}{(n-1)! (n+2)_m m! (\lambda+\mu_1)^{m+2n+1}}\right.\nonumber\\[5pt] & \times (\mu_1-\mu_2)^m \,\gamma\left(1+m+2n,\frac{(\lambda+\mu_1)w}{2}\right)+n \sum_{m=0}^{+\infty}\frac{m! {_2}F_1\left(m+1,1-n;\;2;\;-\frac{b_1\mu_1}{b_2\mu_2}\right)}{(n+2)_m m! (\lambda+\mu_2)^{m+2n+1}}\nonumber\\[5pt] & \left.\times (\mu_1-\mu_2)^{m}\,\gamma\left(1+m+2n,\frac{(\lambda+\mu_2)w}{2}\right)\right],\end{align}

$\gamma(s,x)$ is the lower incomplete gamma function, and ${_2}F_1(a,b,c;\;z)$ is the Gauss hypergeometric function, defined as

(46) \begin{equation}{_2}F_1(a,b,c;\;z)\;:\!=\;\sum_{k=0}^{+\infty}\frac{(a)_k(b)_k z^k}{(c)_k k!}.\end{equation}

Proof. From Equation (43), the distribution of C is given by

\begin{eqnarray*}&& F_C(w)=\frac{1}{2}\sum_{n=0}^{+\infty} \frac{b_2^n\mu_2^n}{n!(n+1)!}\sum_{k=0}^n\binom{n}{k}\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k \\[5pt] && \times \left[b_1\mu_1\int_0^w e^{-(\lambda +\mu_1) t/2}{_1F_1}\left(n-k,n+2,\frac{(\mu_1-\mu_2)t}{2}\right)\left(\frac{\lambda t^2}{4}\right)^n\textrm dt\right.\\[5pt] && +\left.b_2\mu_2\int_0^w e^{-(\lambda +\mu_2) t/2}{_1F_1}\left(k,n+2,\frac{(\mu_1-\mu_2)t}{2}\right)\left(\frac{\lambda t^2}{4}\right)^n\textrm dt\right].\end{eqnarray*}

Recalling Equation (32), one has

\begin{align*}&\int_0^w e^{-(\lambda+\mu_1)t/2}{_1F_1\left(n-k,n+2,\frac{(\mu_1-\mu_2)t}{2}\right)\left(\frac{\lambda t^2}{4}\right)^n}\textrm dt\\[5pt] &=\sum_{m=0}^{+\infty}\frac{(n-k)_m}{(n+2)_m m!}\int_0^w e^{-(\lambda+\mu_1)t/2}\left(\frac{(\mu_1-\mu_2)t}{2}\right)^m \left(\frac{\lambda t^2}{4}\right)^n \textrm dt\\[5pt] &=\sum_{m=0}^{+\infty}\frac{(n-k)_m}{(n+2)_m m!} 2\lambda^n \frac{(\mu_1-\mu_2)^m}{(\lambda+\mu_1)^{1+m+2n}}\gamma\left(1+m+2n,\frac{(\lambda+\mu_1)w}{2}\right).\end{align*}

Similarly, we have

\begin{align*}&\int_0^w e^{-(\lambda+\mu_2)t/2}{_1F_1\left(k,n+2,\frac{(\mu_1-\mu_2)t}{2}\right)\left(\frac{\lambda t^2}{4}\right)^n}\textrm dt\\[5pt] &=\sum_{m=0}^{+\infty}\frac{(k)_m}{(n+2)_m m!} 2\lambda^n\frac{(\mu_1-\mu_2)^m}{(\lambda+\mu_2)^{1+m+2n}}\gamma\left(1+m+2n,\frac{(\lambda+\mu_2)w}{2}\right).\end{align*}

The result immediately follows from Proposition 1 and Theorem 5 if we consider that

\begin{align*}&\sum_{k=0}^n\binom{n}{k}\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k (n-k)_m= \frac{(m+n-1)!\, {_2}F_1\left(1-n,-n;\;1-m-n; -\frac{b_1\mu_1}{b_2\mu_2}\right)}{(n-1)!},\\[5pt] &\sum_{k=0}^n\binom{n}{k}\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k (k)_m= \frac{n b_1\mu_1\,m!\, {_2}F_1\left(1+m,1-n;\;2; -\frac{b_1\mu_1}{b_2\mu_2}\right)}{b_2\mu_2}.\end{align*}

In Figure 7, we provide plots of the lower bound L(t, w) defined in Equation (16) with $F_C(w)$ as given in Equation (45) and the upper bound U(t, w) given in Equation (44), for different choices of the parameters.

Figure 7. The lower bound L(t, w) and the upper bound U(t, w) of the survival function of M(t), in the case of mixed exponential downward times, for $b_1=b_2=0.5$ , $t=3$ , $\mu_1=2$ , $\mu_2=3$ , $\lambda=1$ (left), and $\lambda=1.5$ (right).

5. Some results on the moments of the first passage time

This section presents some results concerning the moments of the first passage time, defined in Equation (17), under different choices for the distribution of the downward random times.

5.1. Exponentially distributed downward random times

Let us assume that the random variables $D_i$ , $i\in {\mathbb N}$ , are exponentially distributed with parameter $\mu>\lambda>0$ . We collect some results concerning the first passage time of X(t) through a fixed level $w>0$ .

Theorem 6. In the case of exponentially distributed downward times, the expected value of the first passage time $\Theta_w$ , defined in Equation (17), is given by

(47) \begin{equation}\mathbb E(\Theta_w)=\frac{2\lambda}{\mu-\lambda}w, \quad w\geq 0,\end{equation}

for $\mu>\lambda>0$ .

Proof. Recalling Equation (23), we have

\begin{align*}\psi_C(-s)=\frac{1}{\lambda}\left[\frac{\lambda+\mu}{2}+s-\sqrt{-\lambda\mu+\frac{(\lambda+\mu+2s)^2}{4}}\right],\end{align*}

and

(48) \begin{equation}\psi_{Z}(w,-s)=\exp\left[-\frac{w}{2}\left(\lambda-\mu-2s+\sqrt{(\lambda-\mu)^2+4(\lambda+\mu)s+4s^2}\right)\right].\end{equation}

Hence, for $\lambda<\mu$ , we have

\begin{align*}\mathbb E(Z(w))=-\left.\frac{\textrm{d}}{\textrm{d}s}\psi_Z(w,-s)\right|_{s=0}=\frac{2\lambda}{\mu-\lambda}w,\end{align*}

so that the proof follows from recalling Equation (18).
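As a numerical sanity check, a central finite difference of the Laplace transform (48) at $s=0$ recovers $\mathbb E(Z(w))$, and hence, via Equation (18), the mean in Equation (47). The following minimal Python sketch (parameter values are illustrative, with $\mu>\lambda$) does so.

```python
# A minimal sketch: recovering E(Z(w)) by differentiating the Laplace
# transform in Equation (48) at s = 0 with a central finite difference.
# Parameter values are illustrative only.
import math

lam, mu, w = 1.0, 5.0, 3.0   # requires mu > lam > 0

def lt_Z(s):
    """psi_Z(w, -s) from Equation (48), i.e. E[exp(-s Z(w))]."""
    root = math.sqrt((lam - mu)**2 + 4.0 * (lam + mu) * s + 4.0 * s**2)
    return math.exp(-0.5 * w * (lam - mu - 2.0 * s + root))

h = 1e-6
mean_fd = (lt_Z(-h) - lt_Z(h)) / (2.0 * h)     # -d/ds E[exp(-s Z(w))] at s = 0
print(mean_fd)                                  # finite-difference value
print(2.0 * lam * w / (mu - lam))               # closed form, Equation (47)
```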

Remark 7. Note that for $\lambda\to\mu^-$ ,

\begin{align*}\lim_{\lambda\to\mu^-}\mathbb E(\Theta_w)=+\infty.\end{align*}

This result is in agreement with the one obtained by Orsingher [Reference Orsingher30] in the case of a standard symmetric telegraph process.

Theorem 7. In the case of exponentially distributed downward times, the second-order moment of the first passage time $\Theta_w$ , defined in Equation (17), is given by

\begin{align*}\mathbb E(\Theta^2_w)=\frac{4\lambda\mu w \left[2+(\mu-\lambda)w\right]}{(\mu-\lambda)^3}, \quad w\geq 0,\end{align*}

whereas for the variance of $\Theta_w$ we have

(49) \begin{equation}Var (\Theta_w)=\frac{4\lambda w \left[2\mu+(\mu^2-\lambda^2) w\right]}{(\mu-\lambda)^3},\quad w\geq 0.\end{equation}

Proof. Recalling Equations (22) and (48), we have

\begin{align*}\mathbb E(\Theta^2_w)=\left.2w\frac{\textrm{d}}{\textrm{d}s}\psi_Z(w,s)\right|_{s=0}+\left.\frac{\textrm{d}^2}{\textrm{d}s^2}\psi_Z(w,s)\right|_{s=0}=\frac{4\lambda\mu w \left[2+(\mu-\lambda)w\right]}{(\mu-\lambda)^3}.\end{align*}

Hence, Equation (49) follows from recalling Equation (47).

In Figure 8, we provide plots of the expected value (47) and the variance (49) of the first passage time $\Theta_w$ for suitable choices of the parameters.

Figure 8. The expected value (47) (left) and the variance (49) (right) of the first passage time $\Theta_w$ for $\mu=5$ .

5.2. Erlang distributed downward random times

Let us now consider the case $D\sim Erlang(\mu,k)$ with $k\in \mathbb N$ , $\mu>0$ , and $k/\mu<1/\lambda$ . In the following propositions we provide explicit expressions for the moment generating functions of the constant phase C and of the compound process Z(t).

Proposition 13. In the case of $Erlang(\mu,k)$ distributed downward random times, the moment generating function of C is given by

(50) \begin{equation}\psi_C(s)=\frac{\mu^k k}{(\lambda+\mu-2s)^k}{_1\Psi_1\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}},\end{equation}

where

\begin{align*}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(a_1,A_1)\\[5pt] (b_1,B_1)\end{matrix};\; z\end{bmatrix}\;:\!=\;\sum_{n=0}^{+\infty}\frac{\Gamma(a_1+A_1n)}{\Gamma(b_1+B_1n)}\cdot \frac{z^n}{n!}\end{align*}

denotes the Fox–Wright function.

Proof. We have

(51) \begin{equation}\psi_C(s)=\frac{1}{2}\mu^k k \sum_{n=0}^{+\infty}\frac{(\lambda\mu^k)^n}{n! (k+kn)!}\int_0^{+\infty}e^{st-(\lambda+\mu)t/2}\left(\frac{t}{2}\right)^{(k+1)n+k-1}\textrm{d}t.\end{equation}

Taking into account that

\begin{align*}\int_0^{+\infty}e^{st-(\lambda+\mu)t/2}\left(\frac{t}{2}\right)^{(k+1)n+k-1}\textrm{d}t=\frac{2(k+n+kn-1)!}{(\lambda+\mu-2s)^{n+k(n+1)}},\end{align*}

from Equation (51) we have

\begin{equation*}\begin{aligned}\psi_C(s)&=\frac{\mu^kk}{(\lambda+\mu-2s)^k}\sum_{n=0}^{+\infty}\left(\frac{\lambda \mu^k}{(\lambda+\mu-2s)^{k+1}}\right)^n \frac{(k+n+kn-1)!}{n!(k+kn)!}\\[5pt] &=\frac{\mu^kk}{(\lambda+\mu-2s)^k} {_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}.\end{aligned}\end{equation*}
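The Fox–Wright series above is straightforward to evaluate by truncation; since $\psi_C$ is a moment generating function, $\psi_C(0)=1$ provides a convenient numerical sanity check of Equation (50). The following minimal Python sketch (the helper name and parameter values are ours, purely illustrative) performs it.

```python
# A minimal sketch: truncating the Fox-Wright series and checking that the
# moment generating function (50) equals 1 at s = 0. Parameters are
# illustrative and satisfy k/mu < 1/lambda.
import math
from scipy.special import gammaln

def fox_wright_1psi1(a1, A1, b1, B1, z, n_terms=100):
    """Truncated series: sum_n Gamma(a1 + A1 n) / Gamma(b1 + B1 n) * z^n / n!
    (z > 0 is assumed; logarithms are used to avoid overflow)."""
    return sum(math.exp(gammaln(a1 + A1 * n) - gammaln(b1 + B1 * n)
                        - gammaln(n + 1) + n * math.log(z))
               for n in range(n_terms))

lam, mu, k = 1.0, 5.0, 2      # Erlang(mu, k) downward times, k/mu < 1/lam
z0 = lam * mu**k / (lam + mu)**(k + 1)
psi_C_at_0 = k * mu**k / (lam + mu)**k * fox_wright_1psi1(k, k + 1, k + 1, k, z0)
print(psi_C_at_0)             # should be close to 1
```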

Proposition 14. In the case of $Erlang(\mu,k)$ distributed downward random times, the moment generating function of Z(t) is given by

(52) \begin{equation}\psi_{Z}(t,s)=\exp\left[-\lambda t\left(1-\frac{\mu^k k }{(\lambda+\mu-2s)^k}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right)\right].\end{equation}

Proof. The result follows from Equations (12) and (50), taking into account that the random variables $C_j$ are i.i.d. with distribution as given in Equation (34).

The expected value and the variance of the process Z(t) are given in the following proposition.

Proposition 15. In the case of $Erlang(\mu,k)$ distributed downward random times, for $t>0$ , the expected value of the process Z(t) is given by

(53) \begin{align}\mathbb E(Z(t)) = \frac{\lambda t\mu^k}{(\lambda+\mu)^{k+1}}&\left\{2k \left[k\,{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\right.\right.\nonumber\\[5pt] & \left.\left.+ \frac{\lambda\mu^k (k+1)}{(\lambda+\mu)^{k+1}}\,{_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\right] \right\},\end{align}

and the variance is given by

(54) \begin{align}Var(Z(t))&=\lambda t \frac{4k\mu^k(k+1)}{(\lambda+\mu)^{k+2}}\Bigg\{k {_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\nonumber\\[5pt] &\quad+ \frac{\lambda\mu^k (3k+2)}{(\lambda+\mu)^{k+1}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\nonumber\\[5pt] &\quad+\frac{\lambda^2\mu^{2k}(k+1)}{(\lambda+\mu)^{2k+2}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(3k+2,k+1)\\[5pt] (3k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\Bigg\}.\end{align}

The proof of Proposition 15 is given in Appendix C.
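For illustration, Equation (53) can be checked numerically against a central finite difference of the moment generating function (52) at $s=0$; the following minimal Python sketch (the helper repeats the truncated Fox–Wright series used above, and all parameter values are illustrative) does so.

```python
# A minimal sketch: checking Equation (53) against a finite difference of the
# moment generating function (52) at s = 0. Parameters are illustrative.
import math
from scipy.special import gammaln

def fox_wright_1psi1(a1, A1, b1, B1, z, n_terms=100):
    """Truncated Fox-Wright series (z > 0 assumed)."""
    return sum(math.exp(gammaln(a1 + A1 * n) - gammaln(b1 + B1 * n)
                        - gammaln(n + 1) + n * math.log(z))
               for n in range(n_terms))

lam, mu, k, t = 1.0, 5.0, 2, 3.0   # illustrative, with k/mu < 1/lam

def psi_Z(s):
    """Equation (52) for Erlang(mu, k) downward times."""
    d = lam + mu - 2.0 * s
    z = lam * mu**k / d**(k + 1)
    psi_C = k * mu**k / d**k * fox_wright_1psi1(k, k + 1, k + 1, k, z)
    return math.exp(-lam * t * (1.0 - psi_C))

h = 1e-6
print((psi_Z(h) - psi_Z(-h)) / (2.0 * h))   # numerical E(Z(t))

z0 = lam * mu**k / (lam + mu)**(k + 1)
p1 = fox_wright_1psi1(k, k + 1, k + 1, k, z0)
p2 = fox_wright_1psi1(2 * k + 1, k + 1, 2 * k + 1, k, z0)
mean_Z = (2.0 * lam * t * k * mu**k / (lam + mu)**(k + 1)
          * (k * p1 + lam * mu**k * (k + 1) / (lam + mu)**(k + 1) * p2))
print(mean_Z)                                # Equation (53)
```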

Finally, we provide some results concerning the expected value of the first passage time.

Proposition 16. In the case of $Erlang(\mu,k)$ distributed downward times, the expected value of the first passage time $\Theta_w$ is given by

\begin{align*}\mathbb E(\Theta_w)&= \lambda w\Bigg\{\frac{2\mu^k k}{(\lambda+\mu)^{k+1}} \Bigg[k{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\\[5pt] &+ \frac{\lambda\mu^k (k+1)}{(\lambda+\mu)^{k+1}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\ (2k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu)^{k+1}}\end{bmatrix}\Bigg] \Bigg\}. \end{align*}

Proof. The result follows immediately from Proposition 2 and Equation (53).

Remark 8. When $k=1$ , we have

\begin{align*}&\mathbb E(\Theta_w)\Big|_{k=1}=\lambda w\left\{\frac{2\mu}{(\lambda+\mu)^{2}} \left[{_1\Psi_1}\begin{bmatrix}\begin{matrix}(1,2)\\[5pt] (2,1)\end{matrix}; \displaystyle\frac{\lambda\mu}{(\lambda+\mu)^{2}}\end{bmatrix}+ \frac{2\lambda\mu }{(\lambda+\mu)^{2}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(3,2)\\[5pt] (3,1)\end{matrix}; \displaystyle\frac{\lambda\mu}{(\lambda+\mu)^{2}}\end{bmatrix}\right] \right\}\\[5pt] &=\lambda w\left[\frac{2\mu}{(\lambda+\mu)^2}\left(\frac{\lambda+\mu}{\mu}+\frac{2\lambda(\lambda+\mu)}{\mu(\mu-\lambda)}\right)\right]=\frac{2\lambda}{\mu-\lambda}w,\end{align*}

which is the same as the expected value in the case of exponential downward random times.
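The closed forms used in this reduction are the generating function of the Catalan numbers, $\sum_{n\ge 0} C_n z^n = (1-\sqrt{1-4z})/(2z)$, and its derivative. A minimal Python sketch (with an arbitrary $|z|<\frac{1}{4}$) checking both identities numerically:

```python
# A minimal sketch: for k = 1 the Fox-Wright series reduce to Catalan-number
# series, whose generating function is c(z) = (1 - sqrt(1 - 4z)) / (2z).
import math

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

z = 0.2                                                        # any |z| < 1/4
s1 = sum(catalan(n) * z**n for n in range(200))                # 1Psi1[(1,2);(2,1); z]
s2 = sum((n + 1) * catalan(n + 1) * z**n for n in range(200))  # 1Psi1[(3,2);(3,1); z]

u = math.sqrt(1.0 - 4.0 * z)
print(s1, (1.0 - u) / (2.0 * z))                  # c(z)
print(s2, (1.0 / u - (1.0 - u) / (2.0 * z)) / z)  # c'(z)
```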

Remark 9. From Proposition 16, we can see that the mean of $\Theta_w$ depends linearly on w. Moreover, when $\lambda\to\mu^-$ , as in the exponential case, one has

\begin{align*}\mathbb E(\Theta_w)\to+\infty.\end{align*}

From Equations (53)–(54) and Equation (22), it is possible to get an explicit formula for the second-order moment of the first passage time $\Theta_w$ . The resulting expression is cumbersome, so for brevity we omit it.

In Figure 9, we provide plots of the expected value and the variance of the first passage time $\Theta_w$ for suitable choices of the parameters.

Figure 9. The expected value obtained in Proposition 16 (left) and the variance (right) of the first passage time $\Theta_w$ for $\mu=9$ and $k=2$ .

5.3. Weighted exponentially distributed downward random times

Let us assume that $D\sim WE(\alpha,\mu)$ (see Equation (38)), with $0<\lambda\le \frac{\mu}{2}$ , $\alpha>0$ (or $\frac{\mu}{2}<\lambda<\mu$ , $\alpha>\frac{\mu-2\lambda}{\lambda-\mu}$ ).

Proposition 17. In the case of weighted exponentially distributed downward times, the moment generating function of C is given by

(55) \begin{equation}\begin{aligned}\psi_C(s)&=\frac{{4(1+\alpha)\mu^2}}{\left(2\lambda+(2+\alpha)\mu-4s\right)^{2}}\sum_{n=0}^{+\infty}\left(\frac{{8(1+\alpha)\lambda\mu^2}}{\left(2\lambda+(2+\alpha )\mu-4s\right)^3}\right)^n\frac{(3n+1)!}{(n+1)!(2n+1)!}\\[5pt] &\times {_2 F_1}\left(\frac{3(1+n)}{2},1+\frac{3n}{2},\frac 32 +n, \frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu -4s)^2}\right),\end{aligned}\end{equation}

where ${_2}F_1(a,b,c;\;z)$ is the Gauss hypergeometric function (see Equation (46)).

Proof. Since

\begin{align*} &\int_0^{+\infty}e^{-\left(\frac{\lambda+\mu}{2}+\frac{\alpha\mu}{4}-s\right)t} t^{1/2+2n} I_{n+1/2}\left(\frac{\alpha\mu t}{2}\right)\textrm{d}t\\[5pt] &=\frac{2^{5/2+3n}(\alpha\mu)^{n+1/2}}{(2\lambda+(2+\alpha)\mu-4s)^{2+3n}}\frac{\Gamma\left(3n+2\right)}{\Gamma\left(\frac32+n\right)}\\[5pt] &\quad \times {_2}F_1\left(\frac{3(1+n)}{2}, 1+\frac{3}{2}n,\frac32+n,\frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu-4s)^2}\right), \end{align*}

the proof immediately follows from Equation (13).
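For illustration, Equation (55) can be evaluated numerically by truncating the series; since $\psi_C$ is a moment generating function, the value at $s=0$ should be close to 1. A minimal Python sketch (parameter values and truncation level are illustrative):

```python
# A minimal sketch: truncating Equation (55) and checking psi_C(0) = 1.
# Parameters are illustrative and satisfy 0 < lam <= mu/2, alpha > 0.
import math
from scipy.special import hyp2f1

lam, mu, alpha = 1.0, 5.0, 1.0

def psi_C(s, n_terms=60):
    """Equation (55), truncated."""
    D = 2.0 * lam + (2.0 + alpha) * mu - 4.0 * s
    x = (alpha * mu / D)**2
    z = 8.0 * (1.0 + alpha) * lam * mu**2 / D**3
    total = 0.0
    for n in range(n_terms):
        coeff = (math.factorial(3 * n + 1)
                 / (math.factorial(n + 1) * math.factorial(2 * n + 1)))
        total += z**n * coeff * hyp2f1(1.5 * (1 + n), 1.0 + 1.5 * n, 1.5 + n, x)
    return 4.0 * (1.0 + alpha) * mu**2 / D**2 * total

print(psi_C(0.0))   # should be close to 1
```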

In the following proposition, we provide the expected value of the first passage time $\Theta_w$ .

Proposition 18. In the case of weighted exponentially distributed downward times, the expected value of the first passage time $\Theta_w$ is given by

(56) \begin{equation}\begin{aligned}\mathbb E(\Theta_w)&=\lambda w \left\{\frac{16(1+\alpha)\mu^2}{(2\lambda+(2+\alpha)\mu)^5}\sum_{n=0}^{+\infty}\left(\frac{8(1+\alpha)\lambda\mu^2}{(2\lambda+(2+\alpha)\mu)^3}\right)^n \frac{(2+3n)!}{n!(n+1)!}\right.\\[5pt] &\left.\left[{{_2F_1}\left(\frac{3(1+n)}{2},1+\frac{3}{2}n,\frac{3}{2}+n,\frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)}\right.\right.\\[5pt] &\left.\left.+\frac{3\alpha^2\mu^2(1+n)}{(3+2n)(2\lambda+(2+\alpha)\mu)^2}{_2F_1}{\left(\frac 32 n+2,\frac{5+3n}{2},\frac 52+n, \frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)}\right]\right\},\end{aligned}\end{equation}

where ${_2}F_1(a,b,c;\;z)$ is as defined in Equation (46).

Proof. Recalling Equation (55) and evaluating the first derivative of $\psi_C(s)$ with respect to s at $s=0$ , we get the expected value of C, which is given by

\begin{equation*}\begin{aligned}\mathbb E(C)&=\left.\frac{\textrm{d}}{\textrm{d}s}\psi_C(s)\right|_{s=0}=\frac{16(1+\alpha)\mu^2}{(2\lambda+(2+\alpha)\mu)^5}\sum_{n=0}^{+\infty}\left(\frac{8(1+\alpha)\lambda\mu^2}{(2\lambda+(2+\alpha)\mu)^3}\right)^n \frac{(2+3n)!}{n!(n+1)!}\\[5pt] &\left[{{_2F_1}\left(\frac{3(1+n)}{2},1+\frac{3}{2}n,\frac{3}{2}+n,\frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)}\right.\\[5pt] &\left.+\frac{3\alpha^2\mu^2(1+n)}{(3+2n)(2\lambda+(2+\alpha)\mu)^2}{_2F_1}{\left(\frac 32 n+2,\frac{5+3n}{2},\frac 52+n, \frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)}\right].\end{aligned}\end{equation*}

Hence, the result follows easily from Equation (18).

Remark 10. For $\alpha\to+\infty$ , since

\begin{align*}&{_2F_1}\left(\frac 32 +\frac32 n, 1+\frac{3}{2}n,\frac 32+n,\frac{\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)\propto \frac{(2n)!\Gamma\left(\frac{3}{2}+n\right)}{\Gamma\left(\frac{3}{2}+\frac 32 n\right)\Gamma\left(1+\frac{3}{2}n\right)}\\[5pt] &\qquad \times\left(\frac{(2\lambda +(2+\alpha)\mu)^2-\alpha^2\mu^2}{(2\lambda+(2+\alpha)\mu)^2}\right)^{-2n-1}\end{align*}

(see, for instance, Equation (9) of [Reference Erdélyi16]), we have

\begin{align*}\lim_{\alpha\to+\infty}\psi_C(s)=-\frac{\lambda+\mu-2s}{2\lambda}\left(-1+\sqrt{1-\frac{4\lambda\mu}{(\lambda+\mu-2s)^2}}\right).\end{align*}

Hence

\begin{align*}\lim_{\alpha\to+\infty}\mathbb E(\Theta_w)=\frac{2\lambda}{\mu-\lambda}w,\end{align*}

which corresponds to the expected value of $\Theta_w$ in the case of exponentially distributed downward random times.

Remark 11. From Equation (55) it is possible to get an expression for the second-order moment of $\Theta_w$ . In particular, after some calculations we have that

\begin{align*}\begin{aligned}\mathbb E(C^2)&=\left.\frac{d^2}{d s^2} \psi_C(s)\right|_{s=0}=\frac{8 (1+\alpha)\mu^2\sqrt{\pi}}{(2\lambda+(2+\alpha)\mu)^8}\sum_{n=0}^{+\infty} \left(\frac{(1+\alpha)\lambda \mu^2}{2(2\lambda+(2+\alpha)\mu)^3}\right)^n \frac{(3n+3)!}{n!(n+1)!}\\[5pt] &\times \left[\frac{4(2\lambda+(2+\alpha)\mu)^4}{\Gamma\left(\frac 32 +n\right)} {{_2F_1 \left(\frac{3(1+n)}{2},1+\frac 32 n, \frac 32+n, \frac{\alpha^2\mu^2}{(2\lambda +(2+\alpha)\mu)^2}\right)}}\right.\\[5pt] &+\frac{\alpha^4\mu^4 (4+3n)(5+3n)}{{\Gamma\left(\frac 72 +n\right)}} {{_2F_1 \left(\frac{3(2+n)}{2},\frac 72 +\frac 32 n, \frac 72+n, \frac{\alpha^2\mu^2}{(2\lambda +(2+\alpha)\mu)^2}\right)}}\\[5pt] &\left.+\frac{2\alpha^2\mu^2(2\lambda+(2+\alpha)\mu)^2(7+6n)}{{\Gamma\left(\frac 52 +n\right)}}{{_2F_1 \left(2+\frac 32 n,\frac 52 +\frac 32 n, \frac 52+n, \frac{\alpha^2\mu^2}{(2\lambda +(2+\alpha)\mu)^2}\right)}}\!\right]\!,\end{aligned}\end{align*}

where ${_2}F_1(a,b,c;\;z)$ is as defined in Equation (46).

The resulting expression for $\mathbb E(\Theta_w^2)$ is omitted for brevity. In Figure 10, we provide plots of the expected value and the variance of $\Theta_w$ for some choices of the parameters.

Figure 10. The expected value (56) (left) and the variance (right) of $\Theta_w$ (obtained from Remark 11) for $\alpha=1$ and $\lambda=4$ .

5.4. Mixed exponential downward random times

This section is devoted to the case in which the downward time D is distributed as a mixture of two exponential distributions (see Equation (42)), with $b_i\ge 0$ for $i=1,2$ , $b_1+b_2=1$ , and $\frac{b_1}{\mu_1}+\frac{b_2}{\mu_2}<\frac{1}{\lambda}$ .

Proposition 19. In the case of mixed exponential downward times, the moment generating function of C is given by

(57) \begin{equation}\psi_C(s)= \mathcal C(b_1,b_2,\mu_1,\mu_2,\lambda,s)+\mathcal D(b_1,b_2,\mu_1,\mu_2,\lambda,s),\end{equation}

where

\begin{equation*}\begin{aligned}\!\mathcal C(b_1,b_2,\mu_1,\mu_2,\lambda,s)&\;:\!=\;\frac{b_1\mu_1}{(\lambda+\mu_1-2s)}\!\sum_{n=0}^{+\infty}\frac{(2n)!}{(n+1)!n!}\!\left(\frac{b_2\mu_2\lambda}{(\lambda+\mu_1-2s)^2}\right)^n \!\sum_{k=0}^n\!\binom{n}{k}\!\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k\\[5pt] &\times {_2F_1\left(n-k,2n+1,2+n,\frac{\mu_1-\mu_2}{\lambda+\mu_1-2s}\right)},\end{aligned}\end{equation*}

with ${_2}F_1(a,b,c;\;z)$ as defined in Equation (46), and

\begin{equation*}\begin{aligned}\!\mathcal D(b_1,b_2,\mu_1,\mu_2,\lambda,s)&\;:\!=\;\frac{b_2\mu_2}{(\lambda+\mu_2-2s)}\sum_{n=0}^{+\infty}\frac{(2n)!}{(n+1)!n!}\!\left(\!\frac{b_2\mu_2\lambda}{(\lambda+\mu_2-2s)^2}\!\right)^n \sum_{k=0}^n\!\binom{n}{k}\!\left(\frac{b_1\mu_1}{b_2\mu_2}\!\right)^k\\[5pt] &\times{_2}F_1\left(k,2n+1,2+n,\frac{\mu_2-\mu_1}{\lambda+\mu_2-2s}\right).\end{aligned}\end{equation*}

Proof. The proof immediately follows if we recall that

\begin{equation*}\begin{aligned}&\int_0^{+\infty}e^{st}\left(\frac{\lambda t^2}{4}\right)^n e^{-\lambda t/2-\mu_1t/2}{_1F_1\left(n-k,n+2,\frac{(\mu_1-\mu_2)t}{2}\right)}\textrm{d}t\\[5pt] &=\frac{2(2n)! \lambda^n}{(\lambda+\mu_1-2s)^{2n+1}}{_2F_1\left(n-k,2n+1,2+n,\frac{\mu_1-\mu_2}{\lambda+\mu_1-2s}\right)},\end{aligned}\end{equation*}

and

\begin{equation*}\begin{aligned}&\int_0^{+\infty}e^{st}\left(\frac{\lambda t^2}{4}\right)^n e^{-\lambda t/2-\mu_2t/2}{_1F_1\left(k,n+2,\frac{(\mu_2-\mu_1)t}{2}\right)}\textrm{d}t\\[5pt] &=\frac{2(2n)! \lambda^n}{(\lambda+\mu_2-2s)^{2n+1}}{_2}F_1\left(k,2n+1,2+n,\frac{\mu_2-\mu_1}{\lambda+\mu_2-2s}\right).\end{aligned}\end{equation*}

The expected value of the first passage time $\Theta_w$ is obtained in the following proposition.

Proposition 20. In the case of downward times distributed as in Equation (42), the expected value of the first passage time $\Theta_w$ is given by

\begin{equation*}\mathbb E(\Theta_w)=\lambda w \left(\mathcal{A}(b_1,b_2,\mu_1,\mu_2,\lambda)+\mathcal B (b_1,b_2,\mu_1,\mu_2,\lambda)\right),\end{equation*}

where

\begin{equation*}\begin{aligned}&\mathcal{A}(b_1,b_2,\mu_1,\mu_2,\lambda)\;:\!=\;b_1\mu_1\sum_{n=0}^{+\infty}\frac{(2n)!}{(n+1)!n!}(b_2\mu_2\lambda)^n\sum_{k=0}^n \binom{n}{k}\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k\left(\frac{2(2n+1)}{(\lambda+\mu_1)^{2n+2}}\right.\\[5pt] &\times \left.{_2F_1\left(n-k,2n+1,2+n,\frac{\mu_1-\mu_2}{\lambda+\mu_1}\right)}+\frac{2(\mu_1-\mu_2)(n-k)(2n+1)}{(\lambda+\mu_1)^{2n+3}(2+n)}\right.\\[5pt] &\times\left.{_2F_1\left(n-k+1,2n+2,3+n,\frac{\mu_1-\mu_2}{\lambda+\mu_1}\right)}\right),\end{aligned}\end{equation*}

and

\begin{equation*}\begin{aligned}&\mathcal{B}(b_1,b_2,\mu_1,\mu_2,\lambda)\;:\!=\;b_2\mu_2\sum_{n=0}^{+\infty}\frac{(2n)!}{(n+1)!n!}(b_2\mu_2\lambda)^n\sum_{k=0}^n \binom{n}{k}\left(\frac{b_1\mu_1}{b_2\mu_2}\right)^k\left(\frac{2(2n+1)}{(\lambda+\mu_2)^{2n+2}}\right.\\[5pt] &\times \left.{_2F_1\left(k,2n+1,2+n,\frac{\mu_2-\mu_1}{\lambda+\mu_2}\right)}\right.\\[5pt]&\left. +\frac{2k(\mu_2-\mu_1)(2n+1)}{(\lambda+\mu_2)^{2n+3}(2+n)}{_2F_1\left(k+1,2n+2,3+n,\frac{\mu_2-\mu_1}{\lambda+\mu_2}\right)}\right).\end{aligned}\end{equation*}

Proof. The proof follows from Equation (18), if we recall Equation (57) and note that

\begin{equation*}\mathbb E(C)=\left.\frac{\textrm{d}}{\textrm{d}s}\psi_C(s)\right|_{s=0}=\mathcal{A}(b_1,b_2,\mu_1,\mu_2,\lambda)+\mathcal B (b_1,b_2,\mu_1,\mu_2,\lambda).\end{equation*}

Some plots of $\mathbb E(\Theta_w)$ are provided in Figure 11.

Figure 11. The expected value of $\Theta_w$ obtained in Proposition 20 for $b_1=b_2=0.5$ , $\mu_1=2$ , and $\lambda=1$ .

Remark 12. Note that when $b_1=b_2=\frac 12$ and $\mu_1=\mu_2=\mu$ , we recover the case of exponentially distributed downward random times. Indeed,

\begin{align*}\mathcal A\left(\frac12,\frac12,\mu,\mu,\lambda\right)=\mathcal B\left(\frac12,\frac12,\mu,\mu,\lambda\right)=\frac{1}{\mu-\lambda}.\end{align*}

Hence,

\begin{align*}\mathbb E(\Theta_w)\Big|_{b_1=b_2=\frac12,\,\mu_1=\mu_2=\mu}=\frac{2\lambda}{\mu-\lambda}w,\end{align*}

which corresponds to the expected value of the first passage time $\Theta_w$ in the exponential case.
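For illustration, the series $\mathcal A$ and $\mathcal B$ can be evaluated numerically by truncation; the following minimal Python sketch (function names, truncation level, and parameters are illustrative) reproduces the reduction of Remark 12.

```python
# A minimal sketch: evaluating the series A and B of Proposition 20 by
# truncation, and checking Remark 12: for b1 = b2 = 1/2 and mu1 = mu2 = mu,
# both should be close to 1 / (mu - lambda).
import math
from scipy.special import hyp2f1

def calA(b1, b2, m1, m2, lam, n_terms=60):
    total = 0.0
    for n in range(n_terms):
        cn = math.factorial(2 * n) / (math.factorial(n + 1) * math.factorial(n))
        inner = 0.0
        for k in range(n + 1):
            r = math.comb(n, k) * (b1 * m1 / (b2 * m2))**k
            inner += r * (2.0 * (2 * n + 1) / (lam + m1)**(2 * n + 2)
                          * hyp2f1(n - k, 2 * n + 1, 2 + n, (m1 - m2) / (lam + m1))
                          + 2.0 * (m1 - m2) * (n - k) * (2 * n + 1)
                          / ((lam + m1)**(2 * n + 3) * (2 + n))
                          * hyp2f1(n - k + 1, 2 * n + 2, 3 + n, (m1 - m2) / (lam + m1)))
        total += cn * (b2 * m2 * lam)**n * inner
    return b1 * m1 * total

def calB(b1, b2, m1, m2, lam, n_terms=60):
    # Same structure as A, with the hypergeometric arguments of the second
    # mixture component, as in Proposition 20.
    total = 0.0
    for n in range(n_terms):
        cn = math.factorial(2 * n) / (math.factorial(n + 1) * math.factorial(n))
        inner = 0.0
        for k in range(n + 1):
            r = math.comb(n, k) * (b1 * m1 / (b2 * m2))**k
            inner += r * (2.0 * (2 * n + 1) / (lam + m2)**(2 * n + 2)
                          * hyp2f1(k, 2 * n + 1, 2 + n, (m2 - m1) / (lam + m2))
                          + 2.0 * k * (m2 - m1) * (2 * n + 1)
                          / ((lam + m2)**(2 * n + 3) * (2 + n))
                          * hyp2f1(k + 1, 2 * n + 2, 3 + n, (m2 - m1) / (lam + m2)))
        total += cn * (b2 * m2 * lam)**n * inner
    return b2 * m2 * total

lam, mu = 1.0, 5.0
print(calA(0.5, 0.5, mu, mu, lam), calB(0.5, 0.5, mu, mu, lam))
print(1.0 / (mu - lam))   # Remark 12: both series should match this value
```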

Appendix A. Proof of Proposition 5

We note that

(58) \begin{equation}\begin{aligned}\int_0^{t-w}\frac{e^{-(\lambda+\mu)x/2}}{x}\frac{1}{\sqrt{2\frac{w}{x}+1}}I_1\left(x\sqrt{\lambda\mu\left(1+2\frac{w}{x}\right)}\right)\textrm{d}x\\[5pt] =\frac{\sqrt{\lambda\mu}}{2}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu}{4}\right)^j\frac{1}{j!(j+1)!}\int_0^{t-w}e^{-(\lambda+\mu)x/2}\left(\left(x+w\right)^2-w^2\right)^j \textrm{d}x,\end{aligned}\end{equation}

since

\begin{equation*}\begin{aligned}I_1\left(x\sqrt{\lambda\mu\left(1+2\frac{w}{x}\right)}\right)&=\sum_{j=0}^{+\infty}\frac{\left[\frac{x^2}{4}\left(\lambda\mu(1+2\frac{w}{x})\right)\right]^j}{j!(j+1)!}\left(\frac{x}{2}\sqrt{\lambda\mu\left(1+2\frac{w}{x}\right)}\right)\\[5pt] &=\frac{x}{2}\sqrt{\lambda\mu\left(1+\frac{2w}{x}\right)}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu}{4}\right)^j\frac{(x^2+2wx)^j}{j!(j+1)!}.\end{aligned}\end{equation*}

In order to evaluate the integral appearing on the right-hand side of Equation (58), we first compute the following integral:

(59) \begin{equation}\begin{aligned}&\int_{t-w}^{+\infty}e^{-(\lambda+\mu)x/2}\left((x+w)^2-w^2\right)^j \textrm{d}x=\int_t^{+\infty}e^{-(\lambda+\mu)(y-w)/2}(y^2-w^2)^j \textrm{d}y\\[5pt] &=e^{(\lambda+\mu)w/2}\sum_{r=0}^j \binom{j}{r}(t^2-w^2)^{j-r}\int_t^{+\infty}e^{-(\lambda+\mu)y/2}(y^2-t^2)^r \textrm{d}y\\[5pt] &=e^{(\lambda+\mu)w/2}\sum_{r=0}^j \binom{j}{r}(t^2-w^2)^{j-r}2^{2r+1}\left(\frac{t}{\lambda+\mu}\right)^{1/2+r}\frac{r!}{\sqrt\pi}K_{1/2+r}\left(\frac{t(\lambda+\mu)}{2}\right),\end{aligned}\end{equation}

where $K_{\nu}(x)$ denotes the modified Bessel function of the second kind of order $\nu$ . Note that

\begin{equation*}\begin{aligned}&\frac{e^{-\lambda w}w\lambda\mu}{2}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu}{4}\right)^j \frac{e^{w/2(\lambda+\mu)}}{j!(j+1)!}\sum_{r=0}^j \binom{j}{r}(t^2-w^2)^{j-r}2^{2r+1}\frac{r!}{\sqrt{\pi}}\left(\frac{t}{\lambda+\mu}\right)^{1/2+r}K_{1/2+r}\left(\frac{t(\lambda+\mu)}{2}\right)\\[5pt] &=\frac{e^{-\lambda w}w\lambda\mu}{2}\sum_{j=0}^{+\infty}\frac{(\lambda\mu)^j}{2^{2j}} \frac{e^{w/2(\lambda+\mu)}}{(j+1)!}\sum_{s=0}^j \frac{(t^2-w^2)^s}{s!}2^{{2j-2s+1}}\frac{t^{1/2+j-s}}{\sqrt{\pi}(\lambda+\mu)^{1/2+j-s}}K_{1/2+j-s}\left(\frac{t(\lambda+\mu)}{2}\right)\\[5pt] &=\frac{\lambda\mu w\sqrt{t} e^{w(\mu-\lambda)/2}}{\sqrt{\pi(\lambda+\mu)}}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu t }{\lambda+\mu}\right)^j\frac{1}{(j+1)!}\sum_{s=0}^j \frac{1}{s!}\left(\frac{(t^2-w^2)(\lambda+\mu)}{4t}\right)^s K_{1/2+j-s}\left(\frac{t(\lambda+\mu)}{2}\right).\end{aligned}\end{equation*}

Hence, since

\begin{align*}&\sum_{j=0}^{+\infty}\frac{1}{(j+1)!}\left(\frac{\lambda\mu t}{\lambda+\mu}\right)^j\sum_{s=0}^j \frac{1}{s!}\left(\frac{(t^2-w^2)(\lambda+\mu)}{4t}\right)^s K_{1/2+j-s}\left(\frac{t(\lambda+\mu)}{2}\right)\\[5pt] =&\sum_{s=0}^{+\infty} \frac{1}{s!}\left(\frac{(t^2-w^2)(\lambda+\mu)}{4t}\right)^s\sum_{j=s}^{+\infty}\frac{1}{(j+1)!}\left(\frac{\lambda\mu t}{\lambda+\mu}\right)^j K_{1/2+j-s}\left(\frac{t(\lambda+\mu)}{2}\right)\\[5pt] =&\sum_{s=0}^{+\infty} \frac{1}{s!}\left(\frac{(t^2-w^2)(\lambda+\mu)}{4t}\right)^s\sum_{h=0}^{+\infty}\frac{1}{(h+s+1)!}\left(\frac{\lambda\mu t}{\lambda+\mu}\right)^{h+s} K_{1/2+h}\left(\frac{t(\lambda+\mu)}{2}\right)\\[5pt] =&\sum_{h=0}^{+\infty}\left(\frac{\lambda\mu t}{\lambda+\mu}\right)^{h}K_{1/2+h}\left(\frac{t(\lambda+\mu)}{2}\right)\sum_{s=0}^{+\infty} \frac{1}{s!(h+s+1)!}\left(\frac{(t^2-w^2)(\lambda+\mu)\lambda\mu t}{4t(\lambda+\mu)}\right)^s\\[5pt] =&\sum_{h=0}^{+\infty}\left(\frac{\lambda\mu t}{\lambda+\mu}\right)^h\frac{2}{\sqrt{\lambda\mu (t^2-w^2)}}\!\left(\frac{2}{\sqrt{\lambda\mu (t^2-w^2)}}\right)^hI_{h+1}\!\left(\sqrt{\lambda\mu (t^2-w^2)}\right)K_{1/2+h}\!\left(\!\frac{t(\lambda+\mu)}{2}\!\right)\\[5pt] =&\frac{2}{\sqrt{\lambda\mu (t^2-w^2)}}\sum_{h=0}^{+\infty}\left(\frac{2t\sqrt{\lambda\mu} }{(\lambda+\mu)\sqrt{t^2-w^2}}\right)^h I_{h+1}\left(\sqrt{\lambda\mu (t^2-w^2)}\right)K_{1/2+h}\left(\frac{t(\lambda+\mu)}{2}\right),\end{align*}

we have

(60) \begin{align}&e^{-\lambda w}w\sqrt{\lambda\mu}\int_{t-w}^{+\infty}\frac{e^{-(\lambda+\mu)x/2}}{x}\frac{1}{\sqrt{2\frac{w}{x}+1}}I_1\left(x\sqrt{\lambda\mu\left(1+2\frac{w}{x}\right)}\right)\textrm{d}x\nonumber \\[5pt] =&\frac{2 e^{w/2(\mu-\lambda)}w\sqrt{\lambda\mu t}}{\sqrt{\pi(\lambda+\mu)(t^2-w^2)}}\sum_{h=0}^{+\infty}\left(\frac{2t\sqrt{\lambda\mu} }{(\lambda+\mu)\sqrt{t^2-w^2}}\right)^h I_{h+1}\left(\sqrt{\lambda\mu (t^2-w^2)}\right)K_{1/2+h}\left(\frac{t(\lambda+\mu)}{2}\right).\end{align}

Similarly, it can be proved that

\begin{equation*}\int_{0}^{+\infty}\frac{e^{-(\lambda+\mu)x/2}}{x}\left((x+w)^2-w^2\right)^j \textrm{d}x=e^{w/2(\lambda+\mu)}\frac{j!}{\sqrt{\pi}}\left(\frac{4w}{\lambda+\mu}\right)^{j+1/2}K_{1/2+j}\left(\frac{w(\lambda+\mu)}{2}\right),\end{equation*}

so that

(61) \begin{align}&e^{-\lambda w}w\frac{\lambda\mu}{2}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu}{4}\right)^j\frac{1}{j!(j+1)!}\int_0^{+\infty}\frac{e^{-(\lambda+\mu)x/2}}{x}\left((x+w)^2-w^2\right)^j \textrm{d}x\nonumber\\[5pt] =&\;e^{w/2(\mu-\lambda)}w\lambda\mu\sqrt{\frac{w}{\lambda+\mu}}\sum_{j=0}^{+\infty}\left(\frac{\lambda\mu w}{\lambda+\mu}\right)^j\frac{K_{1/2+j}\left(\frac{w}{2}(\lambda+\mu)\right)}{(j+1)!\sqrt{\pi}}.\end{align}

Finally, from Equations (60) and (61), for $0\le w <t$ , we have

(62) \begin{align}&\mathbb P (M(t)>w)\nonumber\\[5pt] &= \frac{e^{w/2(\mu-\lambda)}w}{\sqrt{(\lambda+\mu)\pi}}\left[\lambda\mu \sqrt{w}\sum_{j=0}^{+\infty}\frac{\left(\frac{\lambda\mu w}{\lambda+\mu}\right)^j}{(j+1)!}K_{1/2+j}\left(\frac{w(\lambda+\mu)}{2}\right) \right.\nonumber\\[5pt] &\left.-\frac{2\sqrt{\lambda\mu t}}{\sqrt{t^2-w^2}}\sum_{j=0}^{+\infty}\left(\frac{2t\sqrt{\lambda\mu}}{(\lambda+\mu)\sqrt{t^2-w^2}}\right)^j\ I_{j+1}\left(\sqrt{\lambda\mu (t^2-w^2)}\right)K_{1/2+j}\left(\frac{t(\lambda+\mu)}{2}\right)\right]+e^{-\lambda w}.\end{align}

Since

(63) \begin{equation}K_{1/2+j}\left(\frac{t(\lambda+\mu)}{2}\right)=\sqrt{\frac{\pi}{t(\lambda+\mu)}}\textrm{e}^{-\frac{t(\lambda+\mu)}{2}} \frac{ j! (\!-\!1)^j }{[t(\lambda+\mu)]^j} L_{j}^{-2j-1}(t(\lambda+\mu)),\end{equation}

where $L_n^k(x)$ , $n\in {\mathbb N}$ , denotes the generalized Laguerre polynomial (see, for instance, [Reference Hansen20, p. 411]), and recalling Equation (5.11.4.10) of Prudnikov [Reference Prudnikov, Brychkov and Marichev33], we have

(64) \begin{eqnarray}&& \frac{e^{w/2(\mu-\lambda)}w}{\sqrt{(\lambda+\mu)\pi}}\lambda\mu \sqrt{w}\sum_{j=0}^{+\infty}\frac{\left(\frac{\lambda\mu w}{\lambda+\mu}\right)^j}{(j+1)!}K_{1/2+j}\left(\frac{w(\lambda+\mu)}{2}\right)\nonumber \\[5pt] && = \frac{e^{-\lambda w}w\lambda\mu}{\lambda+\mu}\sum_{j=0}^{+\infty}\left(-\frac{\lambda\mu}{(\lambda+\mu)^2}\right)^j\,\frac{L_{j}^{-2j-1}(w(\lambda+\mu))}{j+1}=1-e^{-\lambda w}.\end{eqnarray}

Hence, substituting Equation (64) in Equation (62), for $0\le w <t$ we obtain

\begin{align*} &\mathbb P (M(t)>w)\\[5pt] &=1-\frac{2\sqrt{\lambda\mu t}}{\sqrt{t^2-w^2}}\sum_{j=0}^{+\infty}\left(\frac{2t\sqrt{\lambda\mu}}{(\lambda+\mu)\sqrt{t^2-w^2}}\right)^j\ I_{j+1}\left(\sqrt{\lambda\mu (t^2-w^2)}\right)K_{1/2+j}\left(\frac{t(\lambda+\mu)}{2}\right), \end{align*}

so that the statement follows from Equation (63).

Note that, by Equation (5.11.4.10) of [Reference Prudnikov, Brychkov and Marichev33], the right-hand side of (26) tends to $\textrm{e}^{-\lambda t}$ as w approaches t.
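For illustration, identity (63) can also be verified numerically; the following minimal Python sketch (with arbitrary parameter values) compares the two sides using SciPy's Bessel and generalized Laguerre routines.

```python
# A minimal sketch: a numerical check of identity (63), relating the
# half-integer-order Bessel function K and generalized Laguerre polynomials.
import math
from scipy.special import kv, eval_genlaguerre

t, lam, mu, j = 1.5, 1.0, 5.0, 3            # illustrative values
x = t * (lam + mu)                          # the quantity t(lambda + mu)

lhs = kv(0.5 + j, x / 2.0)
rhs = (math.sqrt(math.pi / x) * math.exp(-x / 2.0) * math.factorial(j)
       * (-1.0)**j / x**j * eval_genlaguerre(j, -2 * j - 1, x))
print(lhs, rhs)                             # the two sides of Equation (63)
```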

Appendix B. Proof of Proposition 8

The expected value of M(t) can be computed by considering the integral from 0 to t of the survival function $\mathbb P(M(t)>w)$ , since $0\le M(t)\le t$ . Considering the series expansion of the Bessel function $I_1(\sqrt{x\lambda\mu(x+2w)})$ , we have

\begin{align*}I_1(\sqrt{x\lambda\mu(x+2w)})=\sum_{h=0}^{+\infty}\left(\frac{\sqrt{x\lambda\mu(x+2w)}}{2}\right)^{2h+1}\cdot\frac{1}{h!(h+1)!},\end{align*}

so that, after some algebra,

(65) \begin{eqnarray}&& \mathbb E\left(M(t)\right)=\int_0^t\mathbb P(M(t)>w)\textrm{d}w=\int_0^t\textrm{d}w\int_0^{t-w}h_Z(x,w)\textrm{d}x=\int_0^t\textrm{d}x\int_0^{t-x}h_Z(x,w)\textrm{d}w\nonumber \\[5pt] && =\frac{1-\textrm{e}^{-\lambda t}}{\lambda}+\sqrt{\lambda\mu}\sum_{h=0}^{+\infty}\frac{\left(\frac{\sqrt{\lambda\mu}}{2}\right)^{2h+1}}{h!(h+1)!}\int_{0}^t \textrm{e}^{-\lambda w} w \, \textrm{d}w \int_{0}^{t-w} \textrm{e}^{-\frac{(\lambda+\mu)x}{2}} x^h (x+2w)^h \textrm{d}x. \end{eqnarray}

If we expand the term $(x+2w)^h$ into a finite sum, Equation (65) becomes

(66) \begin{equation}\frac{1-{\textrm{e}}^{-\lambda t}}{\lambda}+ \frac{\mu}{2 \lambda}\sum_{h=0}^{+\infty}\frac{\left(\frac{\lambda\mu}{4}\right)^{h}}{h!(h+1)!}\sum_{j=0}^{h} \binom{h}{j} \left(\frac{2}{\lambda}\right)^j \int_{0}^{t} \textrm{e}^{- \frac{(\lambda+\mu)x}{2}} x^{2 h-j} \gamma(j+2, \lambda(t-x)) \textrm{d}x, \end{equation}

where $\gamma(a,x)$ is the lower incomplete gamma function. Since the lower incomplete gamma function can be expanded into a finite sum

\begin{align*}\gamma(j+2,\lambda y)=(j+1)!\left[1-e^{-\lambda y}\sum_{k=0}^{j+1}\frac{(\lambda y)^k}{k!}\right],\end{align*}

we have that

(67) \begin{equation}\begin{aligned}& \int_{0}^{t} \textrm{e}^{-\frac{(\lambda+\mu)x}{2}} x^{2 h-j} \gamma(j+2, \lambda(t-x)) \textrm{d}x\\[5pt] &=e^{-(\lambda+\mu)t/2}(j+1)!\left(\int_0^t e^{(\lambda+\mu)y/2}(t-y)^{2h-j}\textrm{d}y-\sum_{k=0}^{j+1}\frac{\lambda^k}{k!}\int_0^t e^{(\mu-\lambda)y/2}y^k(t-y)^{2h-j}\textrm dy\right)\\[5pt] &=e^{-(\lambda+\mu)t/2}(j+1)!\left(e^{(\lambda+\mu)t/2}\left(\frac{2}{\lambda+\mu}\right)^{2h-j+1}\gamma\left(2h-j+1,\frac{(\lambda+\mu)t}{2}\right)\right.\\[5pt] &-\left.\sum_{k=0}^{j+1}\lambda^kt^{2h+1-j+k}(2h-j)! \frac{_1F_1\left(k+1,2+2h-j+k,\frac{(\mu-\lambda)t}{2}\right)}{\Gamma\left(2+2h-j+k\right)}\right)\\[5pt] &=(j+1)!\left(\frac{2}{\lambda+\mu}\right)^{2h-j+1}\gamma\left(2h-j+1,\frac{(\lambda+\mu)t}{2}\right)-t^{2h-j+1}(j+1)!e^{-\lambda t}\\[5pt] &\times\sum_{k=0}^{j+1}\frac{\lambda^k t^k}{k!}\beta\left(2h-j+1,k+1\right){_1 F_1\left(2h-j+1,2h-j+k+2,-\frac{(\mu-\lambda)t}{2}\right)},\end{aligned}\end{equation}

where ${_1}F_1(a;\;b;\;z)$ is as defined in Equation (32). Hence, the result immediately follows from substituting Equation (67) in Equation (66).
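For illustration, the finite-sum expansion of the lower incomplete gamma function used in the proof can be checked against SciPy's regularized incomplete gamma function; a minimal sketch with arbitrary values:

```python
# A minimal sketch: checking the finite-sum expansion of gamma(j+2, lambda*y)
# used above against SciPy's regularized lower incomplete gamma P(a, x),
# where gamma(a, x) = P(a, x) * Gamma(a).
import math
from scipy.special import gammainc

j, lam, y = 3, 1.2, 0.8                     # illustrative values

finite_sum = math.factorial(j + 1) * (
    1.0 - math.exp(-lam * y) * sum((lam * y)**k / math.factorial(k)
                                   for k in range(j + 2)))
reference = gammainc(j + 2, lam * y) * math.factorial(j + 1)  # gamma(j+2, lam*y)
print(finite_sum, reference)
```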

Appendix C. Proof of Proposition 15

Considering the logarithm of Equation (52) and differentiating the resulting relation with respect to s, one has

(68) \begin{align}&\frac{1}{\psi_Z(t,s)}\cdot\frac{\textrm{d}}{\textrm{d}s}\psi_Z(t,s)=\lambda t\left\{\frac{2k^2\mu^k}{(\lambda+\mu-2s)^{k+1}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right.\nonumber\\[5pt] &+\left.\frac{\mu^k k}{(\lambda+\mu-2s)^k}\cdot \frac{\textrm{d}}{\textrm{d}s}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right\}.\end{align}

By the chain rule, it follows that

(69) \begin{align}&\frac{\textrm{d}}{\textrm{d}s}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix}; \displaystyle\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\nonumber\\[5pt] &=\frac{\textrm{d}}{\textrm{d}z}\left. {_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\; z\end{bmatrix} \right|_{z=\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}} \cdot \frac{\textrm{d}}{\textrm{d} s}\left[\frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\right]\nonumber\\[5pt] &={_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\cdot \frac{2\lambda\mu^k(k+1)}{(\lambda+\mu-2s)^{k+2}},\end{align}

since

\begin{align*}\begin{aligned}\frac{\textrm{d}}{\textrm{d}z}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\;z\end{bmatrix}&=\frac{\textrm{d}}{\textrm{d}z}\sum_{n=0}^{+\infty}\frac{\Gamma(k+(k+1)n)}{\Gamma(k+1+kn)}\frac{z^n}{n!}=\sum_{n=1}^{+\infty}\frac{\Gamma(k+(k+1)n)}{\Gamma(k+1+kn)}\frac{z^{n-1}}{(n-1)!}\\[5pt] &=\sum_{m=0}^{+\infty}\frac{\Gamma(2k+1+(k+1)m)}{\Gamma(2k+1+km)}\frac{z^m}{m!}={_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix};\;z\end{bmatrix}.\end{aligned}\end{align*}

Hence, Equation (53) follows from substituting Equation (69) into Equation (68) and evaluating the resulting relation at $s=0$ .

By differentiating Equation (68) with respect to s, one gets

(70) \begin{align}&\frac{\textrm{d}}{\textrm{d}s}\left(\frac{1}{\psi_Z(t,s)}\cdot\frac{\textrm{d}}{\textrm{d}s}\psi_Z(t,s)\right)=\lambda t \left[\frac{4k^2\mu^k(k+1)}{(\lambda+\mu-2s)^{k+2}}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right.\nonumber\\[5pt] &+\left. \frac{4k^2\mu^k}{(\lambda+\mu-2s)^{k+1}}\cdot \frac{\textrm{d}}{\textrm{d}s}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right.\nonumber\\[5pt] &\left.+\frac{\mu^k k}{(\lambda+\mu-2s)^k}\cdot \frac{\textrm{d}^2}{\textrm{d}s^2}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\right].\end{align}

By the chain rule, we have that

(71) \begin{align}&\frac{\textrm{d}}{\textrm{d}s}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\nonumber\\&={_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\cdot \frac{2\lambda\mu^k(k+1)}{(\lambda+\mu-2s)^{k+2}},\end{align}

and

(72) \begin{align}&\frac{\textrm{d}^2}{\textrm{d}s^2}{_1\Psi_1}\begin{bmatrix}\begin{matrix}(k,k+1)\\[5pt] (k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\nonumber\\[5pt]& ={_1\Psi_1}\begin{bmatrix}\begin{matrix}(3k+2,k+1)\\[5pt] (3k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\cdot \frac{4\lambda^2\mu^{2k}(k+1)^2}{(\lambda+\mu-2s)^{2k+4}}\nonumber\\[5pt] &+{_1\Psi_1}\begin{bmatrix}\begin{matrix}(2k+1,k+1)\\[5pt] (2k+1,k)\end{matrix};\displaystyle \frac{\lambda\mu^k}{(\lambda+\mu-2s)^{k+1}}\end{bmatrix}\cdot \frac{4\lambda\mu^{k}(k+1)(k+2)}{(\lambda+\mu-2s)^{k+3}}.\end{align}

Finally, by substituting Equations (71) and (72) into Equation (70) and evaluating the resulting relation at $s=0$ , we obtain Equation (54).

Acknowledgements

B. Martinucci and P. Paraggio are members of the group GNCS of the Istituto Nazionale di Alta Matematica (INdAM). The authors thank the anonymous referees for their useful comments, which improved the paper.

Funding information

This work is partially supported by PRIN 2022, under the project ‘Anomalous Phenomena on Regular and Irregular Domains: Approximating Complexity for the Applied Sciences’, and by PRIN 2022 PNRR, under the project ‘Stochastic Models in Biomathematics and Applications’.

Competing interests

There were no competing interests to declare during the preparation or publication process of this article.

References

Abramowitz, M. and Stegun, I. A. (1994). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Dover, New York.
Andrews, L. C. (1998). Special Functions of Mathematics for Engineers, 2nd edn. Oxford University Press.
Bartlett, M. S. (1978). A note on random walks at constant speed. Adv. Appl. Prob. 10, 704–707.
Beghin, L., Nieddu, L. and Orsingher, E. (2001). Probabilistic analysis of the telegrapher's process with drift by means of relativistic transformations. J. Appl. Math. Stoch. Anal. 14, 11–25.
Cinque, F. and Orsingher, E. (2020). On the distribution of the maximum of the telegraph process. Theory Prob. Math. Statist. 102, 73–95.
Cinque, F. and Orsingher, E. (2021). On the exact distributions of the maximum of the asymmetric telegraph process. Stoch. Process. Appl. 142, 601–633.
Cinque, F. and Orsingher, E. (2023). Random motions in $R^3$ with orthogonal directions. Stoch. Process. Appl. 161, 173–200.
Das, S. and Kundu, D. (2016). On weighted exponential distribution and its length biased version. J. Indian Soc. Prob. Statist. 17, 57–77.
Dębicki, K., Liu, P. and Michna, Z. (2020). Sojourn times of Gaussian processes with trend. J. Theoret. Prob. 33, 2119–2166.
De Gregorio, A. (2010). Stochastic velocity motions and processes with random time. Adv. Appl. Prob. 42, 1028–1056.
De Gregorio, A. and Macci, C. (2012). Large deviation principles for telegraph processes. Statist. Prob. Lett. 82, 1874–1882.
Di Crescenzo, A. and Martinucci, B. (2013). On the generalized telegraph process with deterministic jumps. Methodology Comput. Appl. Prob. 15, 215–235.
Di Crescenzo, A., Martinucci, B. and Zacks, S. (2018). Telegraph process with elastic boundary at the origin. Methodology Comput. Appl. Prob. 20, 333–352.
Di Crescenzo, A., Martinucci, B., Paraggio, P. and Zacks, S. (2021). Some results on the telegraph process confined by two non-standard boundaries. Methodology Comput. Appl. Prob. 23, 837–858.
Di Crescenzo, A. and Pellerey, F. (2002). On prices' evolutions based on geometric telegrapher's process. Appl. Stoch. Models Business Industry 18, 171–184.
Erdélyi, A. (1953). Higher Transcendental Functions, Vol. I. McGraw-Hill, New York.
Foss, S. and Miyazawa, M. (2018). Customer sojourn time in GI/GI/1 feedback queue in the presence of heavy tails. J. Statist. Phys. 173, 1195–1226.
Goldstein, S. (1951). On diffusion by discontinuous movements, and on the telegraph equation. Quart. J. Mech. Appl. Math. 4, 129–156.
Gupta, R. D. and Kundu, D. (2009). A new class of weighted exponential distributions. Statistics 43, 621–634.
Hansen, E. R. (1975). A Table of Series and Products. Prentice-Hall, Englewood Cliffs, NJ.
Hillen, T. and Othmer, H. G. (2000). The diffusion limit of transport equations derived from velocity-jump processes. SIAM J. Appl. Math. 61, 751–775.
Holmes, E. E., Lewis, M. A., Banks, J. E. and Veit, R. R. (1994). Partial differential equations in ecology: spatial interactions and population dynamics. Ecology 75, 17–29.
Kac, M. (1974). A stochastic model related to the telegrapher's equation. Rocky Mountain J. Math. 4, 497–509.
Kolesnik, A. (1998). The equations of Markovian random evolution on the line. J. Appl. Prob. 35, 27–35.
Kolesnik, A. D. and Ratanov, N. (2022). Telegraph Processes and Option Pricing, 2nd edn. Springer, Berlin, Heidelberg.
Komin, N., Erdmann, U. and Schimansky-Geier, L. (2004). Random walk theory applied to Daphnia motion. Fluctuation Noise Lett. 4, L151–L159.
López, O. and Ratanov, N. (2012). Option pricing driven by a telegraph process with random jumps. J. Appl. Prob. 49, 838–849.
Martinucci, B., Meoli, A. and Zacks, S. (2022). Some results on the telegraph process driven by gamma components. Adv. Appl. Prob. 54, 808–848.
Masoliver, J. and Weiss, G. H. (1993). On the maximum displacement of a one-dimensional diffusion process described by the telegrapher's equation. Physica A 195, 93–100.
Orsingher, E. (1995). Motions with reflecting and absorbing barriers driven by the telegraph equation. Random Operat. Stoch. Equat. 3, 9–22.
Orsingher, E., Garra, R. and Zeifman, A. I. (2020). Cyclic random motions with orthogonal directions. Markov Process. Relat. Fields 26, 381–402.
Pogorui, A. A., Swishchuk, A. and Rodríguez-Dagnino, R. M. (2021). Transformations of telegraph processes and their financial applications. Risks 9, 147.
Prudnikov, A. P., Brychkov, Y. A. and Marichev, O. I. (1986). Integrals and Series, Vol. 2, Special Functions. Gordon and Breach, New York.
Ray, D. (1963). Sojourn times of diffusion processes. Illinois J. Math. 7, 615–630.
Ratanov, N. (2007). A jump telegraph model for option pricing. Quant. Finance 7, 575–583.
Ratanov, N. (2021). On telegraph processes, their first passage times and running extrema. Statist. Prob. Lett. 174, article no. 109101.
Stadje, W. and Zacks, S. (2004). Telegraph processes with random velocities. J. Appl. Prob. 41, 665–678.
Travaglino, F., Di Crescenzo, A., Martinucci, B. and Scarpa, R. (2018). A new model of Campi Flegrei inflation and deflation episodes based on Brownian motion driven by the telegraph process. Math. Geosci. 50, 961–975.
Zacks, S., Perry, D., Bshouty, D. and Bar-Lev, S. (1999). Distributions of stopping times for compound Poisson processes with positive jumps and linear boundaries. Commun. Statist. Stoch. Models 15, 89–101.
Weiss, G. H. (2002). Some applications of persistent random walks and the telegrapher's equation. Physica A 311, 381–410.