
Trajectory fitting estimation for reflected stochastic linear differential equations of a large signal

Published online by Cambridge University Press:  31 October 2023

Xuekang Zhang*
Affiliation:
Anhui Polytechnic University
Huisheng Shu*
Affiliation:
Donghua University
*Postal address: School of Mathematics-Physics and Finance, and Key Laboratory of Advanced Perception and Intelligent Control of High-end Equipment, Ministry of Education, Anhui Polytechnic University, Wuhu 241000, China. Email address: xkzhang@ahpu.edu.cn
**Postal address: College of Science, Donghua University, Shanghai, 201620, China. Email address: hsshu@dhu.edu.cn

Abstract

In this paper we study drift parameter estimation for reflected stochastic linear differential equations of a large signal. We discuss the consistency and asymptotic distributions of the trajectory fitting estimator (TFE).

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $(\Omega,\mathcal{F},\mathbb{P})$ be a probability space equipped with a right-continuous and increasing family of $\sigma$-algebras $\{\mathcal{F}_t,t\geq 0\}$, and let $\{W_t\}_{t\geq 0}$ be a given standard Brownian motion defined on $(\Omega,\mathcal{F},\mathbb{P})$. In this paper we consider a reflected stochastic linear differential equation of a large signal,

(1) \begin{align} \left\{ \begin{array}{l}{\mathrm{d}} X_t=\dfrac{\theta}{\varepsilon} X_t \,{\mathrm{d}} t+{\mathrm{d}} W_t+{\mathrm{d}} L_t, \\[12pt] X_t\geq 0,\quad 0\leq t\leq T, \end{array}\right. \end{align}

where the initial value $X_0=x_0>0, \varepsilon\in(0,1]$ , $\theta\in\mathbb{R}$ is unknown, and $L=\{L_t,\,t\geq 0\}$ is the minimal increasing non-negative process which makes the reflected stochastic process (1) satisfy $X_t\geq 0$ for all $t\geq 0$ . The process L increases only when X hits the boundary zero, so that

\begin{align*}\int_0^\infty I_{\{X_t>0\}}\,{\mathrm{d}} L_t=0.\end{align*}

It can be easily proved (see e.g. [Reference Harrison9] and [Reference Whitt22]) that the process L has the following explicit expression:

(2) \begin{align}L_{t}=\max\biggl\{ 0,\sup_{u\in[0,t]}\biggl({-}x_0-\dfrac{\theta}{\varepsilon}\int_0^uX_\nu \,{\mathrm{d}} \nu-W_u\biggr)\biggr\}.\end{align}
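Equations (1) and (2) can be illustrated numerically. The following sketch (purely illustrative; the helper name `simulate_reflected` and all parameter values are our own choices, not from the paper) discretizes (1) by an Euler scheme in which each reflection increment is the minimal push keeping the state non-negative, and then checks that the resulting regulator agrees on the grid with the running-max expression (2).

```python
import numpy as np

def simulate_reflected(theta, eps, x0, T, n, rng):
    """Euler sketch for (1): dX = (theta/eps) X dt + dW + dL, reflected at 0.

    At each step dL = max(0, -free) is the minimal push that keeps the
    state non-negative. Returns the path X, the regulator L, and W.
    """
    dt = T / n
    W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    X = np.empty(n + 1)
    L = np.zeros(n + 1)
    X[0] = x0
    for k in range(n):
        free = X[k] + (theta / eps) * X[k] * dt + (W[k + 1] - W[k])
        dL = max(0.0, -free)
        X[k + 1] = free + dL
        L[k + 1] = L[k] + dL
    return X, L, W

rng = np.random.default_rng(0)
theta, eps, x0, T, n = -1.0, 0.5, 0.5, 1.0, 4000
dt = T / n
X, L, W = simulate_reflected(theta, eps, x0, T, n, rng)

# The discrete regulator matches the running-max expression (2):
# L_t = max{0, sup_{u<=t} (-x0 - (theta/eps) int_0^u X dnu - W_u)}.
drift_int = np.concatenate([[0.0], np.cumsum((theta / eps) * X[:-1] * dt)])
L_formula = np.maximum(0.0, np.maximum.accumulate(-x0 - drift_int - W))
assert X.min() >= 0.0
assert np.all(np.diff(L) >= 0.0)
assert np.allclose(L, L_formula, atol=1e-8)
```

The agreement is exact (up to rounding) because the per-step reflection recursion is the discrete Skorokhod/Lindley map, whose solution is precisely the running maximum in (2).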

Usually in applications to financial engineering, queueing systems, storage models, etc., the reflecting barrier is assumed to be zero. This is principally because of the physical restriction of the state processes. For example, inventory levels, stock prices, and interest rates should take non-negative values. We refer to [Reference Bo, Tang, Wang and Yang2], [Reference Bo, Wang and Yang3], [Reference Han, Hu. and Lee8], [Reference Harrison9], [Reference Krugman14], [Reference Linetsky17], [Reference Lions and Sznitman18], [Reference Ricciardi and Sacerdote19], [Reference Ward and Glynn20], [Reference Ward and Glynn21], and [Reference Whitt22] for more details on reflected stochastic differential equations (RSDEs) and their wide applications.

In reality, however, the drift parameter in RSDEs is seldom known. Parametric inference is one of the effective methods for solving this type of problem. In the case of statistical inference for RSDEs driven by Brownian motion, a popular approach is the maximum likelihood estimation method, based on the Girsanov density (see e.g. [Reference Zang and Zhang23], [Reference Zang and Zhang24], and [Reference Zang and Zhu26]). For example, Bo et al. [Reference Bo, Wang, Yang and Zhang4] established the maximum likelihood estimator (MLE) for stationary reflected Ornstein–Uhlenbeck processes (OUPs) and studied the strong consistency and asymptotic normality of the MLE. Jiang and Yang [Reference Jiang and Yang12] considered asymptotic properties of the MLE of the parameter occurring in ergodic reflected Ornstein–Uhlenbeck processes (ROUPs) with a one-sided barrier. Zang and Zhu [Reference Zang and Zhu26] investigated the strong consistency and limiting distribution of the MLE in both the stationary and non-stationary cases for reflected OUPs. The TFE was introduced by Kutoyants [Reference Kutoyants15] as a numerically attractive alternative to the well-investigated MLE. Recently, Zang and Zhang [Reference Zang and Zhang25] used trajectory fitting estimation to investigate the asymptotic behaviour of the estimator for non-stationary reflected OUPs, including strong consistency and asymptotic distribution. Further, they showed that the TFE for ergodic reflected OUPs is not strongly consistent.

On the other hand, trajectory fitting estimation for stochastic processes without reflection has drawn increasing attention (see e.g. [Reference Dietz5], [Reference Dietz and Kutoyants6], [Reference Dietz and Kutoyants7], [Reference Kutoyants15], and [Reference Kutoyants16]). For instance, Abi-ayad and Mourid [Reference Abi-ayad and Mourid1] discussed the strong consistency and Gaussian limit distribution of the TFE for non-recurrent diffusion processes. Jiang and Xie [Reference Jiang and Xie11] studied the asymptotic behaviour of the TFE in stationary OUPs with linear drift.

Motivated by the aforementioned works, in this paper we extend the work of Zang and Zhang [Reference Zang and Zhang25] and study the consistency and asymptotic distributions of the TFE for RSDE (1) based on continuous observation of $X=\{X_t, 0\leq t\leq T \}$. In order to obtain our estimators, we divide the solution of RSDE (1) by $\varepsilon^{{{1}/{2}}}$ and change the time variable to $t_\varepsilon=t\varepsilon^{-1}$, so that $t_\varepsilon\in[0,T_\varepsilon]$ with $T_\varepsilon=T\varepsilon^{-1}$. From the scaling properties of Brownian motion, there exists another standard Brownian motion $\{\widetilde{W}_t\}_{t\geq 0}$ on the enlarged probability space such that $\widetilde{W}_t\stackrel{d}{=}\varepsilon^{-{{1}/{2}}}W_{\varepsilon t}$. Write $Y_{t_\varepsilon}=X_{t_\varepsilon\varepsilon}\varepsilon^{-{{1}/{2}}}$. Then, for the reflected stochastic process (1), we have

(3) \begin{align} \left\{ \begin{array}{l} {\mathrm{d}} Y_{t_\varepsilon}=\theta Y_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon+{\mathrm{d}} \widetilde{W}_{t_\varepsilon}+{\mathrm{d}} \widetilde{L}_{t_\varepsilon}, \\[5pt] Y_{t_\varepsilon}\geq 0,\quad 0\leq t_\varepsilon\leq T_\varepsilon, \\[5pt] Y_0=x_0\varepsilon^{-{{1}/{2}}},\end{array} \right.\end{align}

where $\widetilde{L}_{t_\varepsilon}=\varepsilon^{-{{1}/{2}}}L_{\varepsilon t_\varepsilon}$. It follows from (2) that

(4) \begin{align}\widetilde{L}_{t_\varepsilon}=\max\biggl\{ 0,\sup_{s\in[0,t_\varepsilon]}\biggl({-}x_0\varepsilon^{-{{1}/{2}}}-\theta\int_0^sY_u \,{\mathrm{d}} u-\widetilde{W}_s\biggr)\biggr\}.\end{align}
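The change of variables can be checked on a noise-free path: with $W\equiv 0$ (and hence $L\equiv 0$ when $x_0>0$ and $\theta>0$), (1) gives $X_t=x_0\,{\mathrm{e}}^{\theta t/\varepsilon}$, and the rescaled path $Y_{t_\varepsilon}=\varepsilon^{-{{1}/{2}}}X_{\varepsilon t_\varepsilon}$ should satisfy ${\mathrm{d}} Y=\theta Y\,{\mathrm{d}} t_\varepsilon$ with $Y_0=x_0\varepsilon^{-{{1}/{2}}}$, as in (3). A minimal numerical sketch (all parameter values are our own illustrative choices):

```python
import numpy as np

theta, eps, x0, T = 0.8, 0.25, 1.0, 1.0
t = np.linspace(0.0, T, 200)              # original time grid
X = x0 * np.exp(theta * t / eps)          # solution of (1) when W = L = 0
t_eps = t / eps                           # rescaled time, in [0, T/eps]
Y = X * eps ** -0.5                       # Y_{t_eps} = eps^{-1/2} X_{eps t_eps}

# Y solves dY = theta * Y dt_eps with Y_0 = x0 * eps^{-1/2}:
Y_expected = x0 * eps ** -0.5 * np.exp(theta * t_eps)
assert np.allclose(Y, Y_expected)
dY = np.gradient(Y, t_eps)                # finite-difference derivative
assert np.allclose(dY, theta * Y, rtol=0.05)
```

The large drift $\theta/\varepsilon$ on the short horizon $[0,T]$ thus becomes a unit-rate drift $\theta$ on the long horizon $[0,T_\varepsilon]$, which is the form the estimation is carried out in.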

Let

\begin{align*}A_{t_\varepsilon}=\int_0^{t_\varepsilon} Y_{s} \,{\mathrm{d}} s.\end{align*}

RSDE (3) can be written as

\begin{align*}Y_{t_\varepsilon}=Y_0+\theta A_{t_\varepsilon}+\widetilde{W}_{t_\varepsilon}+\widetilde{L}_{t_\varepsilon}, \quad 0\leq t_\varepsilon\leq T_\varepsilon.\end{align*}

The TFE of $\theta$ should minimize

\begin{align*}\int_0^{T_\varepsilon}|Y_{t_\varepsilon}-(Y_0+\theta A_{t_\varepsilon})|^2 \,{\mathrm{d}} t_\varepsilon.\end{align*}

It can easily be seen that the minimum is attained when $\theta$ is given by

\begin{align*}\widehat{\theta}_\varepsilon&=\dfrac{\int_0^{T_\varepsilon} A_{t_\varepsilon}(Y_{t_\varepsilon}-Y_0) \,{\mathrm{d}} t_{\varepsilon}}{\int_0^{T_\varepsilon}A_{t_\varepsilon}^2 \,{\mathrm{d}} t_\varepsilon}.\end{align*}

By simple calculations, we have

(5) \begin{align}\widehat{\theta}_\varepsilon-\theta=\dfrac{\int_0^{T_\varepsilon} A_{t_\varepsilon}\widetilde{W}_{t_\varepsilon} \,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}A_{t_\varepsilon}^2 \,{\mathrm{d}} t_\varepsilon}+\dfrac{\int_0^{T_\varepsilon} A_{t_\varepsilon}\widetilde{L}_{t_\varepsilon} \,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}A_{t_\varepsilon}^2 \,{\mathrm{d}} t_\varepsilon}.\end{align}
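The estimator $\widehat{\theta}_\varepsilon$ is a ratio of time integrals and is straightforward to evaluate from a discretely sampled path. The sketch below (the helper name `tfe` and the parameter values are ours) approximates the integrals by Riemann sums and, as a sanity check, recovers $\theta$ from the noise-free path $Y_t=Y_0\,{\mathrm{e}}^{\theta t}$, for which $Y_t-Y_0=\theta A_t$ and the ratio equals $\theta$ exactly in the continuum.

```python
import numpy as np

def tfe(Y, dt):
    """Trajectory fitting estimator of theta from a path Y sampled on a
    uniform grid with spacing dt (Riemann-sum version of the ratio above)."""
    A = np.concatenate([[0.0], np.cumsum(Y[:-1]) * dt])   # A_t = int_0^t Y_s ds
    num = np.sum(A * (Y - Y[0])) * dt                     # int A (Y - Y_0) dt
    den = np.sum(A ** 2) * dt                             # int A^2 dt
    return num / den

# Sanity check on a noise-free path: Y_t = Y_0 e^{theta t}, so the
# estimator should recover theta up to discretization error.
theta, Y0, T_eps, n = 0.5, 2.0, 5.0, 20000
t = np.linspace(0.0, T_eps, n + 1)
Y = Y0 * np.exp(theta * t)
assert abs(tfe(Y, T_eps / n) - theta) < 1e-2
```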

2. Consistency of the TFE $\widehat{\theta}_\varepsilon$

In this section we discuss the consistency of the TFE $\widehat{\theta}_\varepsilon$ in both the non-ergodic and ergodic cases. We shall use the notation ‘ $\rightarrow_p$ ’ to denote ‘convergence in probability’ and the notation ‘ $\Rightarrow$ ’ to denote ‘convergence in distribution’. We write ‘ $\stackrel{d}{=}$ ’ for equality in distribution.

We introduce two important lemmas as follows.

Lemma 2.1. (Dietz and Kutoyants [Reference Dietz and Kutoyants6].) If $\varphi_T$ is a probability measure defined on $[0,\infty)$ such that $\varphi_T([0,T])=1$ and $\varphi_T([0,K])\rightarrow0$ as $T\rightarrow\infty$ for each $K>0$ , then

\begin{align*}\lim_{T\rightarrow\infty}\int_0^Tf_t\varphi_T({\mathrm{d}} t)=f_\infty\end{align*}

for every bounded and measurable function $f\colon [0,\infty)\rightarrow\mathbb{R}$ for which the limit $f_\infty\;:\!=\; \lim_{t\rightarrow\infty}f_t$ exists.

Lemma 2.2. (Karatzas and Shreve [Reference Karatzas and Shreve13].) Let $z\geq 0$ be a given number and let $y(\!\cdot\!)=\{y(t);\; 0 \leq t < \infty\}$ be a continuous function with $y(0)= 0$ . There exists a unique continuous function $k(\!\cdot\!)=\{k(t);\; 0 \leq t < \infty\}$ such that

  1. (i) $x(t)\;:\!=\; z+y(t)+k(t)\geq 0, 0\leq t<\infty$ ,

  2. (ii) $k(0)=0$ , $k(\!\cdot\!)$ is non-decreasing,

  3. (iii) $k(\!\cdot\!)$ is flat off $\{t\geq 0;\; x(t) = 0\}$ , that is,

    \begin{align*} \int_0^\infty I_{\{x(s)> 0\}} \,{\mathrm{d}} k(s)=0. \end{align*}

Then the function $k(\!\cdot\!)$ is given by

\begin{align*}k(t)=\max\Bigl[0, \max_{0\leq s\leq t}\{-(z+y(s))\}\Bigr],\quad 0\leq t<\infty.\end{align*}
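The Skorokhod map of Lemma 2.2 is easy to evaluate on a grid as a running maximum. The sketch below (the deterministic example is ours, not from the paper) checks properties (i)–(iii) for $z=1$ and $y(t)=-2t$, for which $k(t)=\max(0,2t-1)$ in closed form.

```python
import numpy as np

# Lemma 2.2 in discrete form: for z >= 0 and continuous y with y(0) = 0,
# k(t) = max[0, max_{s<=t} {-(z + y(s))}] regulates x = z + y + k at zero.
t = np.linspace(0.0, 1.0, 1001)
z = 1.0
y = -2.0 * t
k = np.maximum(0.0, np.maximum.accumulate(-(z + y)))
x = z + y + k

assert np.allclose(k, np.maximum(0.0, 2.0 * t - 1.0))   # closed form
assert x.min() >= -1e-12 and k[0] == 0.0                # (i) and k(0) = 0
assert np.all(np.diff(k) >= -1e-12)                     # (ii) non-decreasing
# (iii) k is flat off {x = 0}: it increases only where x vanishes.
assert np.all((np.diff(k) <= 1e-9) | (x[1:] <= 1e-9))
```

This is exactly the map used in (2): there $z=x_0$ and $y(t)=(\theta/\varepsilon)\int_0^t X_\nu\,{\mathrm{d}}\nu+W_t$, with $k=L$.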

Theorem 2.1.

  1. (a) Under $\theta>0$ , we have

    (6) \begin{align}\lim_{\varepsilon\rightarrow 0}(\widehat{\theta}_\varepsilon-\theta)=0\quad \textit{a.s.}\end{align}
  2. (b) Under $\theta=0$ , we have

    (7) \begin{align}\widehat{\theta}_\varepsilon-\theta\rightarrow_p0\quad\textit{as}\; \text{$ \varepsilon\rightarrow 0$.}\end{align}
  3. (c) Under $\theta<0$ , we have

    (8) \begin{align}\lim_{\varepsilon\rightarrow 0}\widehat{\theta}_\varepsilon=0\quad \textit{a.s.,}\end{align}
    that is, the TFE $\widehat{\theta}_\varepsilon$ is not strongly consistent.

Proof. (a) (i) If $x_0> 0$ , it is easy to see that

(9) \begin{align}{\mathrm{e}}^{-\theta t_\varepsilon}Y_{t_\varepsilon}=x_0\varepsilon^{-{{1}/{2}}}+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s} \,{\mathrm{d}} \widetilde{W}_s+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s} \,{\mathrm{d}} \widetilde{L}_s.\end{align}

Because the process $\widetilde{L}=\{\widetilde{L}_{t_\varepsilon}\}_{t_\varepsilon\geq 0}$ increases only when $Y=\{Y_{t_\varepsilon}\}_{t_\varepsilon\geq 0}$ hits the boundary zero, $\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s} \,{\mathrm{d}} \widetilde{L}_s$ is a continuous non-decreasing process for which

\begin{align*} x_0\varepsilon^{-{{1}/{2}}}+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s} \,{\mathrm{d}} \widetilde{W}_s+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{L}_s\geq 0 ,\end{align*}

and increases only when

\begin{align*} x_0\varepsilon^{-{{1}/{2}}}+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{W}_s+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{L}_s=0.\end{align*}

It follows from Lemma 2.2 that

(10) \begin{align}\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{L}_s=\max\biggl[0,\max_{0\leq s\leq t_\varepsilon} \biggl\{ -x_0\varepsilon^{-{{1}/{2}}}-\int_0^{s}\,{\mathrm{e}}^{-\theta u}\,{\mathrm{d}} \widetilde{W}_u\biggr\} \biggr].\end{align}

For

\begin{align*} M_{t}\;:\!=\; -\int_0^{t}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{W}_s,\end{align*}

by time change for a continuous martingale, there exists another standard Brownian motion $\{\widehat{W}_{t}\}_{t\geq 0}$ on the enlarged probability space such that $M_{t}\stackrel{d}{=}\widehat{W}_{\frac{1-{\mathrm{e}}^{-2\theta t}}{2\theta}}$ . It follows from (10) that

(11) \begin{align}\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{L}_s=\max\biggl[0,-x_0\varepsilon^{-{{1}/{2}}}+\max_{0\leq s\leq \frac{1-{\mathrm{e}}^{-2\theta t_\varepsilon}}{2\theta}} \widehat{W}_{s} \biggr].\end{align}
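The clock in this time change is the quadratic variation $\langle M\rangle_t=\int_0^t {\mathrm{e}}^{-2\theta s}\,{\mathrm{d}} s=(1-{\mathrm{e}}^{-2\theta t})/(2\theta)$, which stays bounded by $1/(2\theta)$ when $\theta>0$. A quick quadrature check of this antiderivative (parameter values are illustrative):

```python
import numpy as np

# <M>_t = int_0^t e^{-2 theta s} ds = (1 - e^{-2 theta t}) / (2 theta).
theta, t_end = 0.7, 3.0
s = np.linspace(0.0, t_end, 20001)
f = np.exp(-2.0 * theta * s)
ds = s[1] - s[0]
clock_numeric = ds * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])  # trapezoid
clock_closed = (1.0 - np.exp(-2.0 * theta * t_end)) / (2.0 * theta)
assert abs(clock_numeric - clock_closed) < 1e-7
assert clock_closed < 1.0 / (2.0 * theta)
```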

Then, under $\theta>0$ and $x_0>0$ , by the fact that $\widehat{W}_s$ , $s\in[0,{{1}/{(2\theta)}}]$ is almost surely finite, we have

(12) \begin{align}\lim_{\varepsilon\rightarrow 0}\varepsilon^{{{1}/{2}}}\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{L}_s=0\quad \text{a.s.}\end{align}

In view of the definition of the quadratic variation of a continuous local martingale, we find that

\begin{align*}\lim_{\varepsilon\rightarrow 0}\langle M,M\rangle_{t_\varepsilon}=\dfrac{1}{2\theta}\quad \text{a.s.}\end{align*}

and

\begin{align*}\breve{M}_{t_\varepsilon}\;:\!=\; M_{t_\varepsilon}^2-\langle M,M\rangle_{t_\varepsilon}\end{align*}

is a continuous local martingale. Writing this as

\begin{align*}M_{t_\varepsilon}^2=\breve{M}_{t_\varepsilon}+\langle M,M\rangle_{t_\varepsilon},\end{align*}

it follows from the convergence theorem of non-negative semi-martingales that

\begin{align*}\lim_{\varepsilon\rightarrow 0}M_{t_\varepsilon}^2<\infty\quad \text{a.s.},\end{align*}

and hence

(13) \begin{align}\lim_{\varepsilon\rightarrow 0}M_{t_\varepsilon}=-\int_0^{\infty}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{W}_s\quad \text{a.s.}\end{align}

Applying (9), (12), and (13), we obtain

(14) \begin{align}\lim_{\varepsilon\rightarrow 0}\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta t_\varepsilon}Y_{t_\varepsilon}=x_0\quad \text{a.s.}\end{align}

Combining Lemma 2.1 with (14) yields

(15) \begin{align}\lim_{\varepsilon\rightarrow 0} \varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}A_{T_\varepsilon}&=\lim_{\varepsilon\rightarrow 0}\int_0^{T_\varepsilon}\bigl(\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta t_\varepsilon}Y_{t_\varepsilon}\bigr)\dfrac{{\mathrm{e}}^{\theta t_\varepsilon}}{\int_0^{T_\varepsilon}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\times {\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &=\dfrac{x_0}{\theta}\quad \text{a.s.}\end{align}
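The averaging step in (15) rests on the elementary limit ${\mathrm{e}}^{-\theta T}\int_0^T {\mathrm{e}}^{\theta t}\,{\mathrm{d}} t=(1-{\mathrm{e}}^{-\theta T})/\theta\rightarrow 1/\theta$ as $T\rightarrow\infty$ for $\theta>0$. A quadrature check (parameter values are illustrative):

```python
import numpy as np

# e^{-theta T} int_0^T e^{theta t} dt = (1 - e^{-theta T}) / theta -> 1/theta.
theta = 1.2
for T in (5.0, 20.0, 50.0):
    t = np.linspace(0.0, T, 200001)
    f = np.exp(theta * t)
    dt = t[1] - t[0]
    val = np.exp(-theta * T) * dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    assert abs(val - (1.0 - np.exp(-theta * T)) / theta) < 1e-6  # trapezoid
assert abs(val - 1.0 / theta) < 1e-6   # already near the limit at T = 50
```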

By (4) and the self-similarity property for the Brownian motion, we get

(16) \begin{align}\widetilde{L}_{t_\varepsilon}&\leq \max\biggl\{ 0,\sup_{0\leq s\leq t_\varepsilon}\bigl({-}x_0\varepsilon^{-{{1}/{2}}}-\widetilde{W}_s\bigr)\biggr\}\stackrel{d}{=}t_{\varepsilon}^{{{1}/{2}}}\max\Bigl\{0,\max_{0\leq u\leq 1}\bigl({-}x_0t^{-{{1}/{2}}}-\breve{W}_{u}\bigr)\Bigr\},\end{align}

where $\{\breve{W}_u,\,u\geq 0\}$ is a fixed Wiener process on the enlarged probability space. Using (16) and the fact that $\breve{W}_u$ , $u\in[0,1]$ is almost surely finite, we see that

\begin{align*}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{L}_{t_\varepsilon}}{t_\varepsilon}=0\quad \text{a.s.}\end{align*}

Obviously,

(17) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{L}_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}=0\quad \text{a.s.}\end{align}

By the strong law of large numbers, we obtain

\begin{align*}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{W}_{t_\varepsilon}}{t_\varepsilon}=0\quad \text{a.s.}\end{align*}

This implies that

(18) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{W}_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}=0\quad \text{a.s.}\end{align}

By (15), (17), (18), and Lemma 2.1, we have

(19) \begin{align} \lim_{\varepsilon\rightarrow 0}(\widehat{\theta}_\varepsilon-\theta) & =\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\frac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\frac{\widetilde{W}_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\frac{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}}{\int_0^{T_\varepsilon}\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}\bigl(\frac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\bigr)^2\frac{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}}{\int_0^{T_\varepsilon}\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\notag \\[5pt] & \quad +\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\frac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\frac{\widetilde{L}_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\frac{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}}{\int_0^{T_\varepsilon}\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}\bigl(\frac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\bigr)^2\frac{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}}{\int_0^{T_\varepsilon}\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\notag\\[5pt] & =0\quad \text{a.s.}\end{align}

This completes the desired proof.

(ii) If $x_0=0$ , by Theorem 2.1 of Zang and Zhang [Reference Zang and Zhang23], it follows that (6) holds. This completes the proof of Theorem 2.1(a).

(b) Under $\theta=0$ , we have $Y_{T_\varepsilon}=x_0\varepsilon^{-{{1}/{2}}}+\widetilde{W}_{T_{\varepsilon}}+\widetilde{L}_{T_{\varepsilon}}$ . Then

\begin{align*}\int_0^{T_\varepsilon}A_{t_\varepsilon}^2\,{\mathrm{d}} t_\varepsilon&=T_\varepsilon^4\int_0^{1}\biggl(\int_0^{t}\biggl(\dfrac{x_0}{\sqrt{T}}+\dfrac{\widetilde{W}_{sT_\varepsilon}}{\sqrt{T_\varepsilon}}+\dfrac{\widetilde{L}_{sT_\varepsilon}}{\sqrt{T_\varepsilon}}\biggr)\,{\mathrm{d}} s\biggr)^2\,{\mathrm{d}} t\end{align*}

and

\begin{align*}&\int_0^{T_\varepsilon}\bigl(\widetilde{W}_{t_{\varepsilon}}+\widetilde{L}_{t_{\varepsilon}}\bigr)A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon =T_\varepsilon^3\int_0^{1}\biggl(\dfrac{\widetilde{W}_{sT_\varepsilon}}{\sqrt{T_\varepsilon}}+\dfrac{\widetilde{L}_{sT_\varepsilon}}{\sqrt{T_\varepsilon}}\biggr)\int_0^{s}\biggl(\dfrac{x_0}{\sqrt{T}}+\dfrac{\widetilde{W}_{uT_\varepsilon}}{\sqrt{T_\varepsilon}}+\dfrac{\widetilde{L}_{uT_\varepsilon}}{\sqrt{T_\varepsilon}}\biggr)\,{\mathrm{d}} u \,{\mathrm{d}} s.\end{align*}

By the scaling properties of Brownian motion and Lemma 2.2, it follows that

\begin{align*}\bigl\{\bigl(\widetilde{W}_{\nu},\widetilde{L}_{\nu}\bigr);\nu\geq 0\bigr\}&\stackrel{d}{=}\Bigl\{T_\varepsilon^{{{1}/{2}}}\Bigl(\widehat{W}_{\nu/T_\varepsilon},\max\Bigl\{0,\max_{0\leq s\leq \nu}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{s/T_\varepsilon}\bigr)\Bigr\}\Bigr);\nu\geq 0 \Bigr\}\notag \\[5pt] & =: \bigl\{T_\varepsilon^{{{1}/{2}}}\bigl(\widehat{W}_{\nu/T_\varepsilon},\widehat{L}_{\nu/T_\varepsilon}\bigr);\nu\geq 0 \bigr\},\end{align*}

where $\{\widehat{W}_\nu,\,\nu\geq 0\}$ is another standard Wiener process on the enlarged probability space. By the continuous mapping theorem, we have

(20) \begin{align}\dfrac{\int_0^{T_\varepsilon}\bigl(\widetilde{W}_{t_{\varepsilon}}+\widetilde{L}_{t_{\varepsilon}}\bigr)A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}A_{t_\varepsilon}^2\,{\mathrm{d}} t_\varepsilon}&\stackrel{d}{=}\dfrac{1}{T_\varepsilon}\dfrac{\int_0^{1}\bigl(\widehat{W}_{s}+\widehat{L}_{s}\bigr)\int_0^{s}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{u}+\widehat{L}_{u}\bigr)\,{\mathrm{d}} u \,{\mathrm{d}} s}{\int_0^{1}\bigl(\int_0^{t}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{s}+\widehat{L}_{s}\bigr)\,{\mathrm{d}} s\bigr)^2\,{\mathrm{d}} t}\notag \\[5pt] &\rightarrow_p 0 \quad \text{{as $\varepsilon\rightarrow 0$,}}\end{align}

where

\begin{align*} \widehat{L}_s=\max\Bigl\{0,\max_{0\leq u\leq s}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{u}\bigr)\Bigr\}.\end{align*}

Combining (5) with (20) implies that (7) holds. This completes the desired proof.

(c) We consider the case $\theta<0$ . We first introduce the reflected OUP

(21) \begin{align}{\mathrm{d}} \bar{Y}_{t_\varepsilon}=\theta\bar{Y}_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon+{\mathrm{d}} \widetilde{W}_{t_\varepsilon}+{\mathrm{d}} \bar{L}_{t_\varepsilon},\quad \bar{Y}_0=0,\quad 0\leq t_\varepsilon\leq T_\varepsilon.\end{align}

It is easy to see that

(22) \begin{align}{\mathrm{e}}^{-\theta t_\varepsilon}\bar{Y}_{t_\varepsilon}=\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \widetilde{W}_s+\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \bar{L}_s.\end{align}

Similarly to the discussion of (10), we see that

(23) \begin{align}\int_0^{t_\varepsilon}\,{\mathrm{e}}^{-\theta s}\,{\mathrm{d}} \bar{L}_s=\max\biggl[0,\max_{0\leq s\leq t_\varepsilon}\biggl\{-\int_0^s \,{\mathrm{e}}^{-\theta u} \,{\mathrm{d}} \widetilde{W}_u\biggr\} \biggr].\end{align}

By (9), (10), (22), and (23), we have

(24) \begin{align}| Y_{T_\varepsilon}-\bar{Y}_{T_\varepsilon}|&={\mathrm{e}}^{\theta T_\varepsilon}\biggl| \max\biggl[ 0, \max_{0\leq s\leq T_\varepsilon}\biggl\{-x_0\varepsilon^{-{{1}/{2}}}-\int_0^s\,{\mathrm{e}}^{-\theta u}\,{\mathrm{d}} \widetilde{W}_u\biggr\}\biggr]\notag\\[5pt] &\quad -\max\biggl[ 0, \max_{0\leq s\leq T_\varepsilon}\biggl\{-\int_0^s\,{\mathrm{e}}^{-\theta u}\,{\mathrm{d}} \widetilde{W}_u\biggr\}\biggr]\biggr|+x_0\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}\notag\\[5pt] &\leq 2x_0\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}.\end{align}

For the linear system (21), according to the mean ergodic theorem (see Hu et al. [Reference Hu, Lee, Lee and Song10]), we have

(25) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{1}{T_\varepsilon}\int_0^{T_\varepsilon} \bar{Y}_s\,{\mathrm{d}} s=\dfrac{1}{\sqrt{-\pi\theta}}\quad \text{a.s.}\end{align}
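The constant in (25) is the mean of the stationary law of the reflected OUP (21), which is half-normal with scale $\sigma=1/\sqrt{-2\theta}$; its mean $\sigma\sqrt{2/\pi}$ equals $1/\sqrt{-\pi\theta}$. A quadrature check of this identity (the value of $\theta$ is our own illustrative choice):

```python
import numpy as np

# Half-normal with scale sigma = 1/sqrt(-2 theta); mean = sigma*sqrt(2/pi).
theta = -1.5
sigma = 1.0 / np.sqrt(-2.0 * theta)
y = np.linspace(0.0, 12.0 * sigma, 400001)
density = np.sqrt(2.0 / np.pi) / sigma * np.exp(-y**2 / (2.0 * sigma**2))
dy = y[1] - y[0]
mean = np.sum(y * density) * dy          # rectangle-rule quadrature
assert abs(mean - 1.0 / np.sqrt(-np.pi * theta)) < 1e-4
```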

Combining (24) with (25) yields

(26) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{A_{T_\varepsilon}}{T_\varepsilon}=\lim_{\varepsilon\rightarrow 0}\dfrac{1}{T_\varepsilon}\int_0^{T_\varepsilon} (Y_s-\bar{Y}_s)\,{\mathrm{d}} s+\lim_{\varepsilon\rightarrow 0}\dfrac{1}{T_\varepsilon}\int_0^{T_\varepsilon} \bar{Y}_s\,{\mathrm{d}} s=\dfrac{1}{\sqrt{-\pi\theta}}\quad \text{a.s.}\end{align}

It follows from Zang and Zhang [Reference Zang and Zhang25] that

\begin{align*} \lim_{\varepsilon\rightarrow 0}\frac{\bar{L}_{T_\varepsilon}}{T_\varepsilon}=\sqrt{\frac{-\theta}{\pi}}\quad \text{a.s.}\end{align*}

Note that

\begin{align*}\bigl|\widetilde{L}_{T_\varepsilon}- \bar{L}_{T_\varepsilon}\bigr|&=\biggl|\max\biggl\{ 0, \sup_{s\in[0,T_\varepsilon]}\biggl( -x_0\varepsilon^{-{{1}/{2}}}-\theta\int_0^sY_u\,{\mathrm{d}} u-\widetilde{W}_s\biggr)\biggr\}\notag\\[5pt] &\quad -\max\biggl\{ 0, \sup_{s\in[0,T_\varepsilon]}\biggl( -\theta\int_0^s\bar{Y}_u\,{\mathrm{d}} u-\widetilde{W}_s\biggr)\biggr\}\biggr|\notag \\[5pt] &\leq x_0\varepsilon^{-{{1}/{2}}}+|\theta|\int_0^{T_\varepsilon} 2x_0 \varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta u}\,{\mathrm{d}} u\notag \\[5pt] &\leq 3x_0\varepsilon^{-{{1}/{2}}}.\end{align*}

Then we have

(27) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{L}_{T_\varepsilon}}{T_\varepsilon}=\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{L}_{T_\varepsilon}-\bar{L}_{T_\varepsilon}}{T_\varepsilon}+\lim_{\varepsilon\rightarrow 0}\dfrac{\bar{L}_{T_\varepsilon}}{T_\varepsilon}=\sqrt{\dfrac{-\theta}{\pi}}\quad \text{a.s.}\end{align}

By (26), (27), Lemma 2.1, and the strong law of large numbers, we have

(28) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\widetilde{W}_{t_\varepsilon}A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}A^2_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}=\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\dfrac{\widetilde{W}_{t_\varepsilon}}{t_\varepsilon}\frac{A_{t_\varepsilon}}{t_\varepsilon}\frac{t_\varepsilon^2}{\int_0^{T_\varepsilon}t_\varepsilon^2\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}\bigl(\frac{A_{t_\varepsilon}}{t_\varepsilon}\bigr)^2\frac{t_\varepsilon^2}{\int_0^{T_\varepsilon}t_\varepsilon^2\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}=0\quad \text{a.s.}\end{align}

and

(29) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\widetilde{L}_{t_\varepsilon}A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}A^2_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}=\lim_{\varepsilon\rightarrow 0}\dfrac{\int_0^{T_\varepsilon}\frac{\widetilde{L}_{t_\varepsilon}}{t_\varepsilon}\frac{A_{t_\varepsilon}}{t_\varepsilon}\frac{t_\varepsilon^2}{\int_0^{T_\varepsilon}t_\varepsilon^2\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\int_0^{T_\varepsilon}\bigl(\frac{A_{t_\varepsilon}}{t_\varepsilon}\bigr)^2\frac{t_\varepsilon^2}{\int_0^{T_\varepsilon}t_\varepsilon^2\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}=-\theta\quad \text{a.s.}\end{align}

By (5), (28), and (29), we can conclude that (8) holds. This completes the proof.

3. Asymptotic distribution of the TFE $\widehat{\theta}_\varepsilon$

In this section we investigate the asymptotic distribution of the TFE $\widehat{\theta}_\varepsilon$ .

Theorem 3.1. Let $\{\widehat{W}_u,\,u\geq 0\}$ be another standard Wiener process on the enlarged probability space.

  1. (a) Assume $\theta>0$ .

    1. (i) If $x_0> 0$ , then

      (30) \begin{align}(\widehat{\theta}_\varepsilon-\theta)\,{\mathrm{e}}^{\theta T_\varepsilon}\Rightarrow \dfrac{2\theta}{x_0}T^{{{1}/{2}}}N\quad \textit{as}\; \text{$ \varepsilon\rightarrow 0$,}\end{align}
      where N is a random variable with the standard normal distribution.
    2. (ii) If $x_0=0$ , then

      (31) \begin{align}\dfrac{{\mathrm{e}}^{\theta T_\varepsilon}}{\sqrt{T_\varepsilon}}(\widehat{\theta}_\varepsilon-\theta)\Rightarrow \dfrac{2\theta N}{\bigl|\widehat{W}_{{{1}/{(2\theta)}}}+\widehat{L}_{{{1}/{(2\theta)}}}\bigr|}\quad \textit{as}\; \text{$ \varepsilon\rightarrow 0$,}\end{align}
      where
      \begin{align*} \widehat{L}_{{{1}/{(2\theta)}}}=\max\Bigl[ 0,\max_{0\leq u\leq {{1}/{(2\theta)}}}(\!-\!\widehat{W}_u)\Bigr],\end{align*}
      and N is a standard normal random variable which is independent of $\widehat{W}_{{{1}/{(2\theta)}}}$ and $\widehat{L}_{{{1}/{(2\theta)}}}$ .

  2. (b) If $\theta =0$ , then

    (32) \begin{align}\varepsilon^{-1}\widehat{\theta}_\varepsilon\Rightarrow \dfrac{1}{T}\dfrac{\int_0^{1}\bigl(\widehat{W}_{s}+\widehat{L}_{s}\bigr)\int_0^{s}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{u}+\widehat{L}_{u}\bigr)\,{\mathrm{d}} u \,{\mathrm{d}} s}{\int_0^{1}\bigl(\int_0^{t}\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_{s}+\widehat{L}_{s}\bigr)\,{\mathrm{d}} s\bigr)^2\,{\mathrm{d}} t}\quad \textit{as}\; \textit{$ \varepsilon\rightarrow 0$,}\end{align}
    where
\begin{equation*} \widehat{L}_s=\max\Bigl\{0,\max_{0\leq u\leq s}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{u}\bigr)\Bigr\}.\end{equation*}

Proof. (a) (i) If $x_0> 0$ , we have

(33) \begin{align}&{\mathrm{e}}^{\theta T_\varepsilon}(\widehat{\theta}_\varepsilon-\theta)\notag \\[5pt] &\quad =\dfrac{\varepsilon \,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}\widetilde{W}_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon+\varepsilon \,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}\widetilde{L}_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\varepsilon \,{\mathrm{e}}^{-2\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}^2\,{\mathrm{d}} t_\varepsilon}\notag \\[5pt] &\quad =\dfrac{T^{{{1}/{2}}}}{\varepsilon \,{\mathrm{e}}^{-2\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}^2\,{\mathrm{d}} t_\varepsilon}\biggl(T_\varepsilon^{-{{1}/{2}}}\widetilde{W}_{T_\varepsilon}\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon \notag \\[5pt] &\qquad+\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}T_\varepsilon^{-{{1}/{2}}}\int_0^{T_\varepsilon}A_{t_\varepsilon}\bigl(\widetilde{W}_{T_\varepsilon}-\widetilde{W}_{t_\varepsilon}\bigr)\,{\mathrm{d}} t_\varepsilon +\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}T_\varepsilon^{-{{1}/{2}}}\int_0^{T_\varepsilon}A_{t_\varepsilon}\widetilde{L}_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\biggr)\notag \\[5pt] &\quad \;:\!=\; I_1(\varepsilon)(I_2(\varepsilon)+I_3(\varepsilon)+I_4(\varepsilon)).\end{align}

We shall study the asymptotic behaviour of $I_i(\varepsilon)$ , $i=1,\ldots,4$ . By (15) and Lemma 2.1, we can see that

\begin{align*} &\lim_{\varepsilon\rightarrow0}\varepsilon \,{\mathrm{e}}^{-2\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}^2\,{\mathrm{d}} t_\varepsilon\\[5pt] &\quad =\lim_{\varepsilon\rightarrow0}\dfrac{\int_0^{T_\varepsilon}\varepsilon^{-1} \,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta T_\varepsilon}}\int_0^{T_\varepsilon}\biggl(\dfrac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\biggr)^2\dfrac{\varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}}{\int_0^{T_\varepsilon} \varepsilon^{-1}\,{\mathrm{e}}^{2\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &\quad =\dfrac{x_0^2}{2\theta^3}\quad \text{a.s.}\end{align*}

It follows that

(34) \begin{align}\lim_{\varepsilon\rightarrow0}I_1(\varepsilon)=\dfrac{2\theta^3}{x_0^2}T^{{{1}/{2}}}\quad \text{a.s.}\end{align}

Now we consider $I_2(\varepsilon)$ . Using (15) and Lemma 2.1 again, we have

(35) \begin{align} &\lim_{\varepsilon\rightarrow0}\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}A_{t_\varepsilon}\,{\mathrm{d}} t_\varepsilon \notag \\[5pt] &\quad =\lim_{\varepsilon\rightarrow0}\dfrac{\int_0^{T_\varepsilon} \varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}}\int_0^{T_\varepsilon}\biggl(\dfrac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\biggr)\dfrac{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}{\int_0^{T_\varepsilon} \varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &\quad =\dfrac{x_0}{\theta^2}\quad \text{a.s.}\end{align}

For the second factor in $I_2(\varepsilon)$ , we find that

\begin{align*}T^{-{{1}/{2}}}_\varepsilon W_{T_\varepsilon}=T_\varepsilon^{-{{1}/{2}}}\bigl(W_{T_\varepsilon}-W_{{T_\varepsilon}^{{{1}/{2}}}}\bigr)+T_\varepsilon^{-{{1}/{2}}}W_{T_\varepsilon^{{{1}/{2}}}}.\end{align*}

It is easy to see that the random variable

\begin{align*} T_\varepsilon^{-{{1}/{2}}}\bigl(W_{T_\varepsilon}-W_{{T_\varepsilon}^{{{1}/{2}}}}\bigr)\end{align*}

has a normal distribution $N\bigl(0,1-T_\varepsilon^{-{{1}/{2}}}\bigr)$ , which converges weakly to a standard normal random variable N as $\varepsilon\rightarrow0$ . By the strong law of large numbers, we see that

\begin{align*}\lim_{\varepsilon\rightarrow0}\dfrac{W_{T_\varepsilon^{{{1}/{2}}}}}{T_\varepsilon^{{{1}/{2}}}}=0\quad \text{a.s.}\end{align*}

Hence we have

(36) \begin{align}T^{-{{1}/{2}}}_\varepsilon W_{T_\varepsilon}\Rightarrow N \quad \text{{as $\varepsilon\rightarrow0$.}}\end{align}

Combining (35) with (36) gives

(37) \begin{align}I_2(\varepsilon)\Rightarrow\dfrac{x_0}{\theta^2}N \quad\text{{as $\varepsilon\rightarrow0$.}}\end{align}

Next, we show that $I_3(\varepsilon)\rightarrow 0$ in probability as $\varepsilon\rightarrow0$ . Note that

(38) \begin{align}|I_3(\varepsilon)|&\leq \varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}T_\varepsilon^{-{{1}/{2}}}\int_0^{T_\varepsilon}\biggl|\int_0^{t_\varepsilon}Y_s\,{\mathrm{d}} s\biggr|\bigl|\widetilde{W}_{T_\varepsilon}-\widetilde{W}_{t_\varepsilon}\bigr|\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &\leq \varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta T_\varepsilon}T_\varepsilon^{-{{1}/{2}}}\int_0^{T_\varepsilon}\biggl(\int_0^{t_\varepsilon}\bigl|\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta s}Y_s\bigr|\,{\mathrm{e}}^{\theta s}\varepsilon^{-{{1}/{2}}}\,{\mathrm{d}} s\biggr)\bigl|\widetilde{W}_{T_\varepsilon}-\widetilde{W}_{t_\varepsilon}\bigr|\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &\leq \dfrac{1}{\theta}\sup_{s\geq 0}\bigl|\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta s}Y_{s}\bigr| T^{-{{1}/{2}}}_\varepsilon \,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}\bigl|\widetilde{W}_{T_\varepsilon}-\widetilde{W}_{t_\varepsilon}\bigr|\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon,\end{align}

which converges to zero in probability as $\varepsilon\rightarrow 0$. In fact, using Markov's inequality, Fubini's theorem, and the bound $\mathbb{E}|W_{T_\varepsilon}-W_{t_\varepsilon}|=\sqrt{2(T_\varepsilon-t_\varepsilon)/\pi}\leq(T_\varepsilon-t_\varepsilon)^{{{1}/{2}}}$, we find that for given $\delta>0$,

\begin{align*}&\mathbb{P}\biggl(T^{-{{1}/{2}}}_\varepsilon \,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}|W_{T_\varepsilon}-W_{t_\varepsilon}|\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon>\delta\biggr)\notag \\[5pt] &\quad \leq\delta^{-1}\mathbb{E}\biggl[T^{-{{1}/{2}}}_\varepsilon \,{\mathrm{e}}^{-\theta T_\varepsilon}\int_0^{T_\varepsilon}|W_{T_\varepsilon}-W_{t_\varepsilon}|\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\biggr]\notag \\[5pt] &\quad=\delta^{-1}T^{-{{1}/{2}}}_\varepsilon\int_0^{T_\varepsilon}\mathbb{E}|W_{T_\varepsilon}-W_{t_\varepsilon}|\,{\mathrm{e}}^{-\theta(T_\varepsilon-t_\varepsilon)}\,{\mathrm{d}} t_\varepsilon\notag \\[5pt] &\quad\leq\delta^{-1} T^{-{{1}/{2}}}_\varepsilon\int_0^{T_\varepsilon}\nu^{{{1}/{2}}}\,{\mathrm{e}}^{-\theta \nu}\,{\mathrm{d}} \nu\notag \\[5pt] &\quad\leq \delta^{-1}T_\varepsilon^{-{{1}/{2}}}\theta^{-{{3}/{2}}}\dfrac{1}{2}\sqrt{\pi},\end{align*}

which tends to zero as $\varepsilon\rightarrow0$ . Finally, we consider $I_4(\varepsilon)$ . Combining the self-similarity property for the Brownian motion with (4) yields

(39) \begin{align}\dfrac{\widetilde{L}_{t_\varepsilon}}{t_{\varepsilon}^{{{1}/{2}}}}&=t_{\varepsilon}^{-{{1}/{2}}}\max\biggl\{ 0,\sup_{0\leq ut_\varepsilon\leq t_\varepsilon}\biggl({-}x_0\varepsilon^{-{{1}/{2}}}-\theta\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widetilde{W}_{ut_\varepsilon}\biggr)\biggr\}\notag \\[5pt] &\stackrel{d}{=}\max\biggl\{0,\max_{0\leq u\leq 1}\biggl({-}x_0t^{-{{1}/{2}}}- \theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widehat{W}_{u}\biggr)\biggr\}\quad \text{a.s.}\end{align}

Note that for any given $u\in(0,1]$ we can choose $\epsilon\in(0,u)$. Then we have

(40) \begin{align}&\lim_{\varepsilon\rightarrow 0}\max_{u\in(0,1]}\biggl({-}x_0t^{-{{1}/{2}}}- \theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widehat{W}_{u}\biggr)\notag \\[5pt] &\quad \leq\lim_{\varepsilon\rightarrow 0}\max_{u\in(0,1]}\biggl({-}\theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r\biggr)+\max_{u\in(0,1]}(\!-\!\widehat{W}_{u})\notag \\[5pt] &\quad \leq -\lim_{\varepsilon\rightarrow 0}\theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{\epsilon t_\varepsilon}Y_r\,{\mathrm{d}} r+\max_{u\in(0,1]}(\!-\!\widehat{W}_{u}).\end{align}

By (15), we have

(41) \begin{align}\lim_{\varepsilon\rightarrow 0}\theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{\epsilon t_\varepsilon}Y_s\,{\mathrm{d}} s&=\theta\lim_{\varepsilon\rightarrow 0} \dfrac{\int_0^{\epsilon t_\varepsilon}Y_s\,{\mathrm{d}} s}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta \epsilon t_\varepsilon}}\dfrac{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta \epsilon t_\varepsilon}}{t_{\varepsilon}^{{{1}/{2}}}}\notag \\[5pt] &=\theta\lim_{\varepsilon\rightarrow 0} \dfrac{\int_0^{\epsilon t_\varepsilon}Y_s\,{\mathrm{d}} s}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta \epsilon t_\varepsilon}}\dfrac{{\mathrm{e}}^{\theta \epsilon t_\varepsilon}}{t^{{{1}/{2}}}}\notag \\[5pt] &=\infty\quad \text{a.s.}\end{align}

By (40), (41), and the fact that $\max_{u\in(0,1]}(\!-\!\widehat{W}_{u})$ is almost surely finite, we have

(42) \begin{align}\lim_{\varepsilon\rightarrow 0}\max_{u\in(0,1]}\biggl({-}x_0t^{-{{1}/{2}}}- \theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widehat{W}_{u}\biggr)=-\infty\quad \text{a.s.}\end{align}

If $u=0$ , we find that

\begin{align*} -x_0t^{-{{1}/{2}}}-\theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widehat{W}_{u}=-x_0t^{-{{1}/{2}}}.\end{align*}

Hence we see that

(43) \begin{align}\lim_{\varepsilon\rightarrow 0}\max_{u\in[0,1]}\biggl({-}x_0t^{-{{1}/{2}}}-\theta t_{\varepsilon}^{-{{1}/{2}}}\int_0^{ut_\varepsilon}Y_r\,{\mathrm{d}} r-\widehat{W}_u\biggr)=-x_0t^{-{{1}/{2}}}\quad \text{a.s.}\end{align}

Applying (39) and (43), we obtain

(44) \begin{align}\lim_{\varepsilon\rightarrow 0}\dfrac{\widetilde{L}_{t_\varepsilon}}{t_{\varepsilon}^{{{1}/{2}}}}=0\quad \text{a.s.}\end{align}
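The representation used in (39) is the one-sided Skorokhod reflection map: given a driving path $Z$, the regulator $L_t=\max\{0,\sup_{0\leq s\leq t}(-Z_s)\}$ is the minimal nondecreasing process keeping $X=Z+L$ non-negative, and it increases only when $X$ sits at zero. A discretized sketch of this map (our own illustration; the initial value, step count, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x0, steps, dt = 0.2, 5000, 1e-3

# Driving path Z_t = x0 + W_t (a Brownian motion started at x0 > 0).
Z = x0 + rng.normal(0.0, np.sqrt(dt), steps).cumsum()

# Skorokhod map: L_t = max(0, sup_{s<=t} (-Z_s)), so X = Z + L stays >= 0.
L = np.maximum(np.maximum.accumulate(-Z), 0.0)
X = Z + L
```

On the grid, $L$ increases exactly at the indices where $-Z$ attains a new running maximum, and at those indices $X=Z+L=0$, which is the complementarity property used throughout the proof.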

By (15) and (44) as well as Lemma 2.1, we have

(45) \begin{align} \lim_{\varepsilon\rightarrow0}I_4(\varepsilon) &=\lim_{\varepsilon\rightarrow0}\int_0^{T_\varepsilon}\dfrac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\dfrac{L_{t_\varepsilon}}{t_\varepsilon^{{{1}/{2}}}}\dfrac{\varepsilon^{-{{1}/{2}}}t_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}{\int_0^{t_\varepsilon}\varepsilon^{-{{1}/{2}}}t_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\dfrac{\int_0^{T_\varepsilon} \varepsilon^{-{{1}/{2}}}t_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{\varepsilon^{-{{1}/{2}}}T_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}}\notag \\[5pt] &\leq\lim_{\varepsilon\rightarrow0}\int_0^{T_\varepsilon}\dfrac{A_{t_\varepsilon}}{\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}\dfrac{L_{t_\varepsilon}}{t_\varepsilon^{{{1}/{2}}}}\dfrac{\varepsilon^{-{{1}/{2}}}t_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}}{\int_0^{t_\varepsilon}\varepsilon^{-{{1}/{2}}}t_\varepsilon^{{{1}/{2}}}\,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}\,{\mathrm{d}} t_\varepsilon\dfrac{\int_0^{T_\varepsilon} \,{\mathrm{e}}^{\theta t_\varepsilon}\,{\mathrm{d}} t_\varepsilon}{{\mathrm{e}}^{\theta T_\varepsilon}}\notag \\[5pt] &=0\quad \text{a.s.} \end{align}

Therefore, by (33), (34), (37), (38), and (45), we conclude that (30) holds. This completes the proof of (i).

(ii) When $x_0=0$, Theorem 2.2 of Zang and Zhang [Reference Zang and Zhang23] implies that (31) holds.

(b) Combining (5) with (20) implies that (32) holds. This completes the proof.

4. Discussion

In this section we discuss the properties of the TFE $\widehat{\theta}_\varepsilon$ and MLE

\begin{align*} \widehat{\theta}_\varepsilon^{\textrm{MLE}}\;:\!=\; \frac{\int_0^{T_\varepsilon}Y_{s} \,{\mathrm{d}} Y_{s}}{\int_0^{T_\varepsilon}Y_{s}^2\,{\mathrm{d}} s}\end{align*}

separately in terms of the range of $\theta$, i.e. $\theta>0$, $\theta=0$, and $\theta<0$. Comparing with the results of Zhang and Shu [Reference Zhang and Shu27], we obtain the following claims.

  1. (a) Under $\theta>0$ , both estimators are consistent. In addition, the following hold.

    1. (i) If $x_0> 0$ , then

      (46) \begin{align}\varepsilon^{-{{1}/{2}}}\,{\mathrm{e}}^{\theta T_\varepsilon}(\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta)\Rightarrow N\biggl(0,\dfrac{2\theta}{x_0^2}\biggr)\quad \text{as $ \varepsilon\rightarrow 0$,}\end{align}
    2. (ii) If $x_0=0$ , then

      (47) \begin{align}{\mathrm{e}}^{\theta T_\varepsilon}(\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta)\Rightarrow \dfrac{\sqrt{2\theta}N}{\bigl|\widehat{W}_{{{1}/{(2\theta)}}}+\widehat{L}_{{{1}/{(2\theta)}}}\bigr|}\quad \text{as $ \varepsilon\rightarrow 0$,}\end{align}
      where
      \begin{align*} \widehat{L}_{{{1}/{(2\theta)}}}=\max\Bigl[ 0,\max_{0\leq u\leq {{1}/{(2\theta)}}}(\!-\!\widehat{W}_u)\Bigr],\end{align*}
      and N is a standard normal random variable independent of $\widehat{W}_{{{1}/{(2\theta)}}}$ and $\widehat{L}_{{{1}/{(2\theta)}}}$. For both estimators, the rate of convergence depends heavily on the true value of the parameter. It can also be seen that the MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ converges in distribution at a faster rate than the TFE $\widehat{\theta}_\varepsilon$.

  2. (b) Under $\theta=0$ , it is easy to see that both estimators are consistent. The MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ has the limiting distribution

    \begin{align*}\varepsilon^{-1}\widehat{\theta}_\varepsilon^{\textrm{MLE}}\Rightarrow\dfrac{1}{T}\dfrac{\int_0^1\bigl(x_0T^{-{{1}/{2}}}+\widehat{W}_u+\widehat{L}_u\bigr)\,{\mathrm{d}} \widehat{W}_u}{x_0^2T^{-1}+\int_0^1\bigl(\widehat{W}_u+\widehat{L}_u\bigr)^2\,{\mathrm{d}} u+2x_0T^{-{{1}/{2}}}\int_0^1\bigl(\widehat{W}_u+\widehat{L}_u\bigr)\,{\mathrm{d}} u}\quad \text{as $ \varepsilon\rightarrow 0$,}\end{align*}
    where
    \begin{align*} \widehat{L}_u=\max\Bigl\{0,\max_{0\leq r\leq u}\bigl({-}x_0T^{-{{1}/{2}}}-\widehat{W}_{r}\bigr)\Bigr\}. \end{align*}
    The limiting distributions of both estimators are neither normal nor mixtures of normals. Further, both estimators have the same order of convergence in this case.
  3. (c) Under $\theta<0$, the MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ of $\theta$ is strongly consistent, whereas the TFE $\widehat{\theta}_\varepsilon$ is not. The MLE $\widehat{\theta}_\varepsilon^{\textrm{MLE}}$ has the limiting distribution

    \begin{align*}\dfrac{\widehat{\theta}_\varepsilon^{\textrm{MLE}}-\theta}{\varepsilon^{{{1}/{2}}}}\Rightarrow N\biggl(0,-\dfrac{2\theta}{x_0^2+T}\biggr)\quad \text{as $ \varepsilon\rightarrow 0$.} \end{align*}
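The behaviour of the MLE in the explosive case $\theta>0$ can be explored by simulation. The sketch below is our own illustration, not taken from the paper: the Euler-type projection scheme for the reflected equation and the Riemann/Itô discretization of $\int_0^{T_\varepsilon}Y_s\,{\mathrm{d}} Y_s/\int_0^{T_\varepsilon}Y_s^2\,{\mathrm{d}} s$ are standard approximations, and all parameter values ($\theta=0.5$, $\varepsilon=1$, $T=8$, the seed) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, eps, x0 = 0.5, 1.0, 1.0   # true drift parameter, scale, initial value
T, n = 8.0, 8000
dt = T / n

# Euler projection scheme for dX = (theta/eps) X dt + dW + dL, X >= 0:
# reflect by clipping at 0 and book the shortfall as the increment of L.
X = np.empty(n + 1)
dL = np.zeros(n)
X[0] = x0
for i in range(n):
    raw = X[i] + (theta / eps) * X[i] * dt + rng.normal(0.0, np.sqrt(dt))
    X[i + 1] = max(raw, 0.0)
    dL[i] = X[i + 1] - raw  # positive only when the unreflected step went negative

# Discretized MLE: subtract the regulator increments from dX, so the numerator
# approximates int X d(X - L); note int X dL = 0 since L grows only at X = 0.
num = np.sum(X[:-1] * (np.diff(X) - dL))
den = np.sum(X[:-1] ** 2 * dt)
theta_mle = eps * num / den

print(f"true theta = {theta}, MLE = {theta_mle:.3f}")
```

Because the information $\int_0^T X_t^2\,{\mathrm{d}} t$ grows exponentially under $\theta>0$, even a single moderately long path pins down $\theta$ quite precisely, in line with the exponential rate in (46).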

Acknowledgements

The authors thank the anonymous referee for their constructive comments and suggestions, which improved the quality of the paper significantly.

Funding information

This work was supported by the National Natural Science Foundation of China (grants 12101004, 62073071, and 12271003), the Natural Science Research Project of Anhui Educational Committee (grants 2023AH030021 and 2023AH010011), and the Research Start-up Foundation for Introducing Talent of Anhui Polytechnic University (grant 2020YQQ064).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Abi-ayad, I. and Mourid, T. (2018). Parametric estimation for non recurrent diffusion processes. Statist. Prob. Lett. 141, 96–102.
Bo, L. J., Tang, D., Wang, Y. J. and Yang, X. W. (2011). On the conditional default probability in a regulated market: a structural approach. Quant. Finance 11, 1695–1702.
Bo, L. J., Wang, Y. J. and Yang, X. W. (2010). Some integral functionals of reflected SDEs and their applications in finance. Quant. Finance 11, 343–348.
Bo, L. J., Wang, Y. J., Yang, X. W. and Zhang, G. N. (2011). Maximum likelihood estimation for reflected Ornstein–Uhlenbeck processes. J. Statist. Planning Infer. 141, 588–596.
Dietz, H. M. (2001). Asymptotic behaviour of trajectory fitting estimators for certain non-ergodic SDE. Statist. Infer. Stoch. Process. 4, 249–258.
Dietz, H. M. and Kutoyants, Y. A. (1997). A class of minimum-distance estimators for diffusion processes with ergodic properties. Statist. Decisions 15, 211–227.
Dietz, H. M. and Kutoyants, Y. A. (2003). Parameter estimation for some non-recurrent solutions of SDE. Statist. Decisions 21, 29–45.
Han, Z., Hu, Y. Z. and Lee, C. (2016). Optimal pricing barriers in a regulated market using reflected diffusion processes. Quant. Finance 16, 639–647.
Harrison, M. (1986). Brownian Motion and Stochastic Flow Systems. John Wiley, New York.
Hu, Y. Z., Lee, C., Lee, M. H. and Song, J. (2015). Parameter estimation for reflected Ornstein–Uhlenbeck processes with discrete observations. Statist. Infer. Stoch. Process. 18, 279–291.
Jiang, H. and Xie, C. (2016). Asymptotic behaviours for the trajectory fitting estimator in Ornstein–Uhlenbeck process with linear drift. Stochastics 88, 336–352.
Jiang, H. and Yang, Q. S. (2022). Moderate deviations for drift parameter estimations in reflected Ornstein–Uhlenbeck process. J. Theoret. Prob. 35, 1262–1283.
Karatzas, I. and Shreve, S. (1991). Brownian Motion and Stochastic Calculus. Springer, New York.
Krugman, P. R. (1991). Target zones and exchange rate dynamics. Quart. J. Econom. 106, 669–682.
Kutoyants, Y. A. (1991). Minimum distance parameter estimation for diffusion type observations. C. R. Acad. Sci. Paris 312, 637–642.
Kutoyants, Y. A. (2004). Statistical Inference for Ergodic Diffusion Processes. Springer, Berlin.
Linetsky, V. (2005). On the transition densities for reflected diffusions. Adv. Appl. Prob. 37, 435–460.
Lions, P. L. and Sznitman, A. S. (1984). Stochastic differential equations with reflecting boundary condition. Commun. Pure Appl. Math. 37, 511–537.
Ricciardi, L. M. and Sacerdote, L. (1987). On the probability densities of an Ornstein–Uhlenbeck process with a reflecting boundary. J. Appl. Prob. 24, 355–369.
Ward, A. R. and Glynn, P. W. (2003). A diffusion approximation for a Markovian queue with reneging. Queueing Systems 43, 103–128.
Ward, A. R. and Glynn, P. W. (2005). A diffusion approximation for a GI/GI/1 queue with balking or reneging. Queueing Systems 50, 371–400.
Whitt, W. (2002). Stochastic-Process Limits. Springer, New York.
Zang, Q. P. and Zhang, L. X. (2016). Parameter estimation for generalized diffusion processes with reflected boundary. Sci. China Math. 59, 1163–1174.
Zang, Q. P. and Zhang, L. X. (2016). A general lower bound of parameter estimation for reflected Ornstein–Uhlenbeck processes. J. Appl. Prob. 53, 22–32.
Zang, Q. P. and Zhang, L. X. (2019). Asymptotic behaviour of the trajectory fitting estimator for reflected Ornstein–Uhlenbeck processes. J. Theoret. Prob. 32, 183–201.
Zang, Q. P. and Zhu, C. L. (2016). Asymptotic behaviour of parametric estimation for nonstationary reflected Ornstein–Uhlenbeck processes. J. Math. Anal. Appl. 444, 839–851.
Zhang, X. K. and Shu, H. S. (2023). Maximum likelihood estimation for the reflected stochastic linear system with a large signal. Brazilian J. Prob. Statist. 37, 351–364.