
Continuous dependence of stationary distributions on parameters for stochastic predator–prey models

Published online by Cambridge University Press:  22 February 2024

Nguyen Duc Toan*
Affiliation:
Vinh University
Nguyen Thanh Dieu*
Affiliation:
Vinh University
Nguyen Huu Du*
Affiliation:
Hanoi National University
Le Ba Dung*
Affiliation:
Hanoi National University
*Postal address: High School for Gifted Students, Vinh University, 182 Le Duan, Vinh, Nghe An, Vietnam. Email: nguyenductoandhv@gmail.com
**Postal address: Department of Mathematics, Vinh University, 182 Le Duan, Vinh, Nghe An, Vietnam. Email: dieunt@vinhuni.edu.vn
***Postal address: Department of Mathematics, Mechanics and Informatics, Hanoi National University, 334 Nguyen Trai, Thanh Xuan, Hanoi, Vietnam

Abstract

This research studies the robustness of permanence and the continuous dependence of the stationary distribution on the parameters for a stochastic predator–prey model with Beddington–DeAngelis functional response. We show that if the model is extinct (resp. permanent) for a parameter, it remains extinct (resp. permanent) in a neighbourhood of this parameter. In the case of extinction, the Lyapunov exponent of the predator quantity is negative and the prey quantity converges almost surely, at an exponential rate, to the saturated situation in which the predator is absent. Under the condition of permanence, the unique stationary distribution converges weakly to the degenerate measure concentrated on the unique limit cycle or on the globally asymptotically stable equilibrium as the diffusion term tends to 0.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Predator–prey relations refer to the relationship between two species in which one species is the hunted food source of the other. This relationship plays an important role in the evolution of ecosystems, existing not only in the simplest life forms on Earth, such as single-celled organisms, but also in complex animal communities. The earliest mathematical model describing this relationship is due to Lotka [Reference Lotka20] and Volterra [Reference Volterra28]. Since then, it has become an interesting topic in mathematical biology [Reference Goel, Maitra and Montroll8, Reference Kingsland17, Reference Rosenzweig and MacArthur24]. In order to describe the predator feeding rate at increasing prey density and to quantify the energy transfer across trophic levels, a functional response is added to the standard Lotka–Volterra equation to make it more realistic. Depending on the characteristics of specific environments, several types of functional response have been proposed. The Holling Type II functional response, introduced in [Reference Holling12] (see [Reference Jeschke, Kopp and Tollrian14] for its most common form), is one where the rate of prey consumption by a predator rises as the prey density increases, but eventually levels off at a plateau (or asymptote) beyond which the rate of consumption remains constant regardless of any further increase in prey density. Similarly, Type III responses are sigmoidal, characterized by a low-density refuge from predation, a mid-density peak in per capita mortality, and then declining mortality owing to predator satiation [Reference Real23]. Independently, [Reference Beddington2, Reference DeAngelis, Goldstein and O’Neill4] proposed a functional response which is similar to Holling Type II but contains an extra term describing mutual interference by predators. It has become one of the most common types of functional response and is well documented in empirical studies.

Let $\alpha=(r, K, m, \beta , \gamma,a , b ,c) $ be the vector of parameters, whose components are appropriate positive constants. Then a deterministic predator–prey model with the Beddington–DeAngelis functional response has the form

(1.1) \begin{equation} \begin{cases} {\textrm{d}} x_{\alpha}(t) = \bigg(rx_{\alpha}(t)\bigg(1-\dfrac{x_{\alpha}(t)}{K}\bigg) - \dfrac{m x_{\alpha}(t)y_{\alpha}(t)}{a+by_{\alpha}(t)+cx_{\alpha}(t)}\bigg)\,{\textrm{d}} t, \\[0.3cm] {\textrm{d}} y_{\alpha}(t) = y_{\alpha}(t)\bigg({-}\gamma + \dfrac{\beta mx_{\alpha}(t)}{a+by_{\alpha}(t)+cx_{\alpha}(t)}\bigg)\,{\textrm{d}} t, \end{cases}\end{equation}

where $x_\alpha (t)$ is the prey population and $y_{\alpha}(t)$ is the predator population at time t (see also [Reference Zhang, Sun and Jin29] and references therein).
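To make the dynamics concrete, here is a minimal numerical sketch of (1.1): forward-Euler integration under an illustrative parameter choice. All numerical values below are assumptions for illustration, not taken from the text; for these values the threshold introduced in (1.2) below is positive and the trajectory settles down to a coexistence state in a bounded region.

```python
# Forward-Euler sketch of the deterministic model (1.1).
# All parameter values below are illustrative assumptions.
r, K, m, beta, gamma, a, b, c = 1.0, 10.0, 1.0, 0.5, 0.2, 1.0, 1.0, 0.5

def vector_field(x, y):
    """Right-hand side of (1.1) with the Beddington-DeAngelis response."""
    denom = a + b * y + c * x
    dx = r * x * (1 - x / K) - m * x * y / denom
    dy = y * (-gamma + beta * m * x / denom)
    return dx, dy

def integrate(x0, y0, T=200.0, h=1e-3):
    x, y = x0, y0
    for _ in range(int(T / h)):
        dx, dy = vector_field(x, y)
        x, y = x + h * dx, y + h * dy
    return x, y

x_end, y_end = integrate(2.0, 1.0)
lam = -gamma + beta * m * K / (a + K * c)  # threshold (1.2), positive here
```

Since the vector field vanishes exactly at the equilibria, the Euler fixed points coincide with the equilibria of (1.1), so for a stable parameter regime the scheme converges to the true coexistence state.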

It is well known that the positive quadrant of the plane $\mathbb{R}^2_+=\{(x,y)\,:\, x\geq 0,\,y\geq 0\}$ and its interior $\mathbb{R}^{2,\textrm{o}}_+=\{(x,y)\,:\, x> 0,\,y> 0\}$ are invariant with respect to (1.1). We denote by $\Phi^\phi_{\alpha }(t)=(x_{\alpha}(t), y_{\alpha}(t))$ the unique solution of (1.1) with initial value $\phi=(x,y)\in\mathbb{R}^2_+$ . Let

\begin{equation*}f(\phi, \alpha)=\bigg(rx\bigg(1-\frac{x}K\bigg)-\frac{mxy}{a+by+cx};\, -\gamma y+\frac{\beta m xy}{a+by+cx}\bigg)^\top.\end{equation*}

Consider the Lyapunov function $V(\phi, \alpha)=\beta x+y$ . It can be seen that

\begin{equation*}\dot{V}(\phi,\alpha)=V_\phi(\phi,\alpha)f(\phi, \alpha)=\beta rx\bigg(1-\frac xK\bigg)-\gamma y\leq \frac{\beta(r+\gamma)^2K}{4r}-\gamma V(\phi, \alpha).\end{equation*}

From this inequality, it is easy to prove that the set

\begin{equation*}\mathcal{R}(\alpha)\,:\!=\,\bigg\{(x,y)\in\mathbb{R}^2_+\,:\, \beta x+ y\leq \frac{\beta(r+\gamma)^2K}{4r\gamma}\bigg\}\end{equation*}

is an attracting set with respect to (1.1).

For any vector parameter $\alpha=(r, K, m , \beta , \gamma , a , b , c )\in\mathbb{R}^8_+$ , construct the threshold value

(1.2) \begin{equation} \lambda_\alpha=-\gamma +\frac{\beta mK}{a +Kc }.\end{equation}

When $\lambda_\alpha>0$ , the system in (1.1) has three non-negative equilibria: (0, 0), (K, 0), and $\big(x^*_{\alpha }, y^*_{\alpha }\big)$ . The long-term behaviour of the model in (1.1) has been classified (see [Reference Hwang13, Reference Zou and Fan30], for example) by using the threshold value $\lambda_\alpha$ as follows.

Lemma 1.1. ([Reference Zou and Fan30].) Let $\lambda_\alpha$ be a threshold value that is defined by (1.2).

  1. (i) If $\lambda_\alpha \leq 0$ then the boundary equilibrium point (K, 0) is globally asymptotically stable.

  2. (ii) If $\lambda_\alpha >0$ and

    \begin{equation*}b \geq\min\bigg\{\frac{c }{\beta }, \frac{m ^2\beta ^2-c ^2\gamma ^2}{\gamma \beta (m \beta -c \gamma )+m r \beta ^2}\bigg\}\end{equation*}
    then the positive equilibrium point $\phi^*_\alpha=(x^*_{\alpha}, y^*_{\alpha})$ of the system in (1.1) is globally asymptotically stable.
  3. (iii) If $\lambda_\alpha>0$ and

    \begin{equation*}b <\min\bigg\{\frac{c }{\beta }, \frac{m ^2\beta ^2-c ^2\gamma ^2}{\gamma \beta (m \beta -c \gamma )+m r \beta ^2}\bigg\}\end{equation*}
    then the positive equilibrium point $\phi_{\alpha }^*=\big(x^*_{\alpha }, y^*_{\alpha }\big)$ is unstable, and there exists exactly one stable limit cycle $\Gamma_{\alpha }$ .
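The case analysis of Lemma 1.1 is easy to mechanize. The helper below is hypothetical (not from the paper); it evaluates the threshold (1.2) and the bound on b and reports which of the three regimes a parameter vector falls into, under the same positivity assumptions on the parameters.

```python
def classify(r, K, m, beta, gamma, a, b, c):
    """Classify the long-term behaviour of (1.1) following Lemma 1.1."""
    lam = -gamma + beta * m * K / (a + K * c)     # threshold (1.2)
    if lam <= 0:
        return "boundary equilibrium (K, 0) globally stable"   # case (i)
    bound = min(c / beta,
                (m**2 * beta**2 - c**2 * gamma**2)
                / (gamma * beta * (m * beta - c * gamma) + m * r * beta**2))
    if b >= bound:
        return "positive equilibrium globally stable"          # case (ii)
    return "unstable equilibrium, unique stable limit cycle"   # case (iii)
```

For example, with all other parameters fixed so that $\lambda_\alpha>0$ , decreasing b through the bound switches the classification from case (ii) to case (iii).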

Furthermore, by the explicit formula for the positive equilibrium point and [Reference Han9, Theorem 1.2, p. 356], we have the following lemma.

Lemma 1.2. Let $\alpha=(r, K, m , \beta , \gamma , a , b , c )$ be the parameters of (1.1) such that $\lambda_\alpha>0$ .

  1. (i) On the set

    \begin{equation*}b \geq\min\bigg\{\frac{c }{\beta }, \frac{m ^2\beta ^2-c ^2\gamma ^2}{\gamma \beta (m \beta -c \gamma )+m r \beta ^2}\bigg\},\end{equation*}
    the mapping $\alpha \to \big(x^*_{\alpha }, y^*_{\alpha }\big)$ is continuous.
  2. (ii) On

    \begin{equation*}b <\min\bigg\{\frac{c }{\beta }, \frac{m ^2\beta ^2-c ^2\gamma ^2}{\gamma \beta (m \beta -c \gamma )+m r \beta ^2}\bigg\},\end{equation*}
    the mapping $\alpha\to \Gamma_{\alpha}$ is continuous in Hausdorff distance.

Let $\mathcal{K}\subset \mathbb{R}^8_+$ be a compact set, and write $\mathcal{R}(\mathcal{K})=\cup_{\alpha\in\mathcal{K}}\mathcal{R}(\alpha)$ . It is clear that $\overline {\mathcal{R}(\mathcal{K})}$ is a compact set. Since f has continuous partial derivatives $\nabla_\phi f$ and $\nabla_\alpha f$ , these derivatives are uniformly bounded on $\overline{\mathcal{R}(\mathcal{K})}\times \mathcal{K}$ . As a consequence, there exists a positive constant $L_1$ such that

\begin{equation*} \|f(\phi_1,\alpha_1)-f(\phi_2, \alpha_2)\|\leq L_1(\|\phi_1-\phi_2\|+\|\alpha_1-\alpha_2\|), \quad \phi_1,\phi_2\in \mathcal{R}(\mathcal{K}),\ \alpha_1, \alpha_2\in \mathcal{K}.\end{equation*}

From this inequality, for $T>0$ and for any $\phi\in \mathcal{R}(\mathcal{K})$ , $\alpha_1, \alpha_2\in \mathcal{K}$ , we have

\begin{align*} \sup_{0\leq t\leq T}\big\|\Phi^\phi_{\alpha_1}(t)-\Phi^\phi_{\alpha_2}(t)\big\| & = \sup_{0\leq t\leq T}\bigg\|\int_0^t\big[f\big(\Phi^\phi_{\alpha_1}(s),\alpha_1\big) - f\big(\Phi^\phi_{\alpha_2}(s),\alpha_2\big)\big]\,{\textrm{d}} s\bigg\| \\ & \leq \int_0^T\big\|f\big(\Phi^\phi_{\alpha_1}(s), \alpha_1\big) - f\big(\Phi^\phi_{\alpha_2}(s), \alpha_2\big)\big\|\,{\textrm{d}} s \\ & \leq L_1 T\|\alpha_1-\alpha_2\| + L_1\int_0^T\sup_{0\leq s\leq t}\big\|\Phi^\phi_{\alpha_1}(s)-\Phi^\phi_{\alpha_2}(s)\big\|\,{\textrm{d}} t.\end{align*}

Applying the Gronwall inequality, we obtain

(1.3) \begin{equation} \sup_{0\leq t\leq T}\big\|\Phi^\phi_{\alpha_1}(t)-\Phi^\phi_{\alpha_2}(t)\big\| \leq L_1 T\|\alpha_1-\alpha_2\|{\textrm{e}}^{L_1 T} \quad \text{ for all } \phi \in \mathcal{R}(\mathcal{K}),\ \alpha_1, \alpha_2\in \mathcal{K}.\end{equation}
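The linear-in- $\|\alpha_1-\alpha_2\|$ bound (1.3) can be observed numerically: halving a small parameter perturbation should roughly halve the sup-norm distance between the corresponding solutions on $[0,T]$ . A minimal sketch, with assumed parameter values and a forward-Euler solver:

```python
# Numerical check of the Lipschitz-in-parameter bound (1.3):
# halving a small perturbation of r roughly halves the sup-norm
# distance between solutions. All parameter values are illustrative.
def solve(r, K, m, beta, gamma, a, b, c, x0=2.0, y0=1.0, T=5.0, h=1e-3):
    path = []
    x, y = x0, y0
    for _ in range(int(T / h)):
        denom = a + b * y + c * x
        dx = r * x * (1 - x / K) - m * x * y / denom
        dy = y * (-gamma + beta * m * x / denom)
        x, y = x + h * dx, y + h * dy
        path.append((x, y))
    return path

base = (1.0, 10.0, 1.0, 0.5, 0.2, 1.0, 1.0, 0.5)

def sup_dist(delta):
    # perturb the first parameter r by delta and compare trajectories
    pert = (base[0] + delta,) + base[1:]
    p1, p2 = solve(*base), solve(*pert)
    return max(max(abs(x1 - x2), abs(y1 - y2))
               for (x1, y1), (x2, y2) in zip(p1, p2))

d1, d2 = sup_dist(0.01), sup_dist(0.005)   # d2 should be about d1 / 2
```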

We now discuss the evolution of the predator–prey model in (1.1) in a random environment, in which some parameters are perturbed by noise (see [Reference Du, Nguyen and Yin7, Reference Ji and Jiang15, Reference Ji, Jiang and Li16, Reference Mao, Sabais and Renshaw22, Reference Rudnicki25, Reference Tuan, Dang and Van27]). Currently, an important way to model the influence of environmental fluctuations in biological systems is to assume that white noise affects the growth rates. This assumption is reasonable and practicable since many small factors are involved in the evolution of such systems; hence, by the central limit theorem, the noise should follow a Gaussian distribution. Thus, in a random environment, the parameters $r,\gamma$ are perturbed as $r\hookrightarrow r+\sigma_1 \dot B_1$ , $\gamma\hookrightarrow \gamma-\sigma_2 \dot B_2$ , where $B_1$ and $B_2$ are two independent Brownian motions. Therefore, (1.1) subjected to environmental white noise can be rewritten as

(1.4) \begin{equation} \begin{cases} {\textrm{d}} x_{\alpha, \sigma}(t) = \bigg(rx_{\alpha, \sigma}(t)\bigg(1 - \dfrac{x_{\alpha, \sigma}(t)}{K}\bigg) - \dfrac{mx_{\alpha, \sigma}(t)y_{\alpha, \sigma}(t)}{a+by_{\alpha, \sigma}(t)+cx_{\alpha, \sigma}(t)}\bigg)\,{\textrm{d}} t + \sigma_1 x_{\alpha, \sigma}(t)\,{\textrm{d}} B_1(t), \\[0.3cm] {\textrm{d}} y_{\alpha, \sigma}(t) = y_{\alpha, \sigma}(t)\bigg({-}\gamma + \dfrac{\beta mx_{\alpha, \sigma}(t)}{a+by_{\alpha, \sigma}(t)+cx_{\alpha, \sigma}(t)}\bigg)\,{\textrm{d}} t + \sigma_2 y_{\alpha, \sigma}(t)\,{\textrm{d}} B_2(t), \end{cases}\end{equation}

where $\sigma=(\sigma_1,\sigma_2)$ . The existence of a stationary distribution and the stochastic bifurcation for (1.4) were considered in [Reference Zou and Fan30], where it was proved that there is a critical point $b^*(\sigma_1;\,\sigma_2)$ , depending on $\sigma_1$ and $\sigma_2$ , such that the system in (1.4) undergoes a stochastic Hopf bifurcation at $b^*(\sigma_1;\,\sigma_2)$ : the shape of the stationary distribution for the system in (1.4) changes from crater-like to peak-like. However, the conditions imposed on the parameters are rather strict; hence, some of the results in [Reference Zou and Fan30] need careful discussion. In [Reference Du, Nguyen and Yin7], a threshold between extinction and permanence (which is also the threshold for the existence of a stationary distribution) was constructed for the system in (1.4). Extinction and permanence have been considered in a more general context in [Reference Benaïm3, Reference Hening and Nguyen10, Reference Hening, Nguyen and Chesson11] by studying the stochastic permanence of Markov processes via Lyapunov exponents (expected per capita growth rates) and Lyapunov functions. Specifically, these results were applied to give a permanence condition for stochastic Kolmogorov equations. Also, in [Reference Benaïm3, Reference Hening and Nguyen10, Reference Hening, Nguyen and Chesson11, Reference Schreiber, Benaïm and Atchadé26] the authors showed that, in the case of stochastic persistence, there exists a unique invariant probability measure $\pi^*$ such that the transition probability $P(t,x, \cdot)$ of the solution of (1.4) converges in total variation norm to $\pi^*$ at an exponential rate. Robust permanence was considered in [Reference Schreiber, Benaïm and Atchadé26] under the name of $\delta$ -perturbation for systems with compact state spaces; [Reference Hening, Nguyen and Chesson11] studied the robustness of discrete systems for stochastic ecological communities.
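Sample paths of (1.4) can be generated with the Euler–Maruyama scheme. A minimal sketch, under assumed parameter values and noise intensities; the small positivity guard is needed because the discrete scheme, unlike the SDE itself, is not guaranteed to stay in the open quadrant.

```python
import math, random

# Euler-Maruyama sketch of the stochastic model (1.4).
# Parameters and noise intensities are illustrative assumptions.
random.seed(1)
r, K, m, beta, gamma, a, b, c = 1.0, 10.0, 1.0, 0.5, 0.2, 1.0, 1.0, 0.5
s1, s2 = 0.2, 0.2   # sigma_1, sigma_2

def em_path(x0, y0, T=50.0, h=1e-3):
    x, y = x0, y0
    for _ in range(int(T / h)):
        denom = a + b * y + c * x
        dW1 = random.gauss(0.0, math.sqrt(h))   # Brownian increment of B_1
        dW2 = random.gauss(0.0, math.sqrt(h))   # Brownian increment of B_2
        x += (r * x * (1 - x / K) - m * x * y / denom) * h + s1 * x * dW1
        y += y * (-gamma + beta * m * x / denom) * h + s2 * y * dW2
        # the SDE preserves positivity, but the discrete scheme need not:
        x, y = max(x, 1e-12), max(y, 1e-12)
    return x, y

x_T, y_T = em_path(2.0, 1.0)
```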

The results obtained in [Reference Benaïm3, Reference Hening and Nguyen10, Reference Hening, Nguyen and Chesson11, Reference Schreiber, Benaïm and Atchadé26] are very strong and it is easy to see that some of the results in [Reference Du, Nguyen and Yin7] can be obtained by careful calculations. However, in these works, the authors have mainly dealt with conditions ensuring the permanence of (1.4) but have not focused on studying the robustness of the system.

This paper continues the study of this model by considering the robustness of permanence and the continuous dependence of the stationary distribution of (1.4) on the parameters, when it exists. This is important in all mathematical models because, in practice, observations do not perfectly reflect biological reality, so threshold estimates are mere approximations. Precisely, we prove that if the model is extinct (resp. permanent) for a parameter, it remains extinct (resp. permanent) in a neighbourhood of this parameter. In the case of extinction, the Lyapunov exponent of the predator quantity is negative and the prey quantity converges almost surely to the saturated situation, where the predator is absent. Further, if $\lambda_{\alpha_0}>0$ and $\alpha \to \alpha_0$ , $\sigma \to 0$ , the stationary distribution $\mu_{\alpha,\sigma}$ of (1.4) converges weakly to the degenerate distribution concentrated on the critical point $\big(x^*_{\alpha_0}, y^*_{\alpha_0}\big)$ or on the limit cycle $\Gamma_{\alpha_0}$ of the system in (1.1). Thus, if the model without noise has a critical point $\big(x^*_{\alpha_0}, y^*_{\alpha_0}\big)$ or a limit cycle $\Gamma_{\alpha_0}$ for the parameter ${\alpha_0}$ , then, when the intensity of the noise is small, the long-term population density is almost concentrated in any neighbourhood of $\big(x^*_{\alpha_0}, y^*_{\alpha_0}\big)$ or of the limit cycle $\Gamma_{\alpha_0}$ , for any parameter close to ${\alpha_0}$ . This is an important conclusion, as small-noise asymptotics are very relevant in mathematical biology.

The paper is organized as follows. The next section discusses the main results. In Section 3, we provide an example to illustrate that when $(\alpha,\sigma) \to (\alpha_0,0)$ , the stationary distribution $\mu_{\alpha,\sigma}$ converges weakly to the degenerate distribution concentrated on $(x^*_{\alpha_0}, y^*_{\alpha_0})$ or on the limit cycle $\Gamma_{\alpha_0}$ .

2. Main result

Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space and let $B_1(t)$ and $B_2(t)$ be two mutually independent Brownian motions. It is well known that both $\mathbb{R}^{2}_+$ and $\mathbb{R}^{2,\textrm{o}}_+$ $\big($ the interior of $\mathbb{R}^2_+\big)$ are invariant with respect to (1.4), i.e. for any initial value $\phi=(x(0),y(0))\in\mathbb{R}^{2}_+$ $\big($ resp. in $\mathbb{R}^{2,\textrm{o}}_+\big)$ , there exists a unique global solution to (1.4) that remains in $\mathbb{R}^{2}_+$ $\big($ resp. in $\mathbb{R}^{2,\textrm{o}}_+\big)$ almost surely [Reference Zou and Fan30]. Denote by $\Phi^\phi_{\alpha, \sigma}(t)=(x_{\alpha, \sigma}(t), y_{\alpha, \sigma}(t))$ the unique solution of (1.4) with initial value $\phi\in\mathbb{R}^2_+$ . For $\alpha=(r, K, m , \beta , \gamma , a , b , c )$ and $\delta>0$ , let

\begin{equation*}{\mathcal{U}}_\delta(\alpha )=\big\{\alpha^{\prime}=(r^{\prime}, K^{\prime},m^{\prime}, \beta^{\prime}, \gamma^{\prime}, a^{\prime}, b^{\prime}, c^{\prime})\in\mathbb{R}^{8}_+ \,:\,\|\alpha^{\prime}-\alpha \|\leq\delta\big\} \end{equation*}

and ${\mathcal{V}}_\delta(\phi_0 )=\big\{\phi\in\mathbb{R}^{2}_+\,:\, \|\phi-\phi_0 \|\leq \delta\big\}$ be the balls with radius $\delta>0$ and center $\alpha$ (resp. $\phi_0$ ). Write $\mathcal{R}_\delta(\alpha)=\bigcup_{\alpha^{\prime}\in{\mathcal{U}}_\delta(\alpha)}\mathcal{R}(\alpha^{\prime})$ . For any $R>0$ , write ${\bf B}_{R}=\{\phi=(x, y)\in \mathbb{R}^2_+\,:\,\|\phi\|\leq R\}$ . Let $C^{2}\big(\mathbb{R}^2, \mathbb{R}_+\big)$ be the family of all non-negative functions $V(\phi)$ on $\mathbb{R}^2 $ which are twice continuously differentiable in $\phi$ . For $V\in C^{2}\big(\mathbb{R}^2, \mathbb{R}_+\big)$ , define the differential operator $\mathcal{L} V$ associated with (1.4) as

\begin{equation*} \mathcal{L} V(\phi)={V}_\phi(\phi) f(\phi, \alpha) + \tfrac{1}{2}\textrm{trace}\big[g^\top(\phi,\sigma) {{V}}_{\phi\phi}(\phi) g(\phi,\sigma)\big],\end{equation*}

where ${V}_\phi(\phi) $ and ${V}_{\phi\phi}(\phi)$ are the gradient and Hessian of $V(\cdot)$ , and g is the diffusion coefficient of (1.4) given by

\begin{equation*} g(\phi,\sigma)=\begin{pmatrix} \sigma_1x & \quad 0\\[2pt]0 & \quad \sigma_2y\end{pmatrix}.\end{equation*}

By virtue of the symmetry of Brownian motions, in the following we are interested only in $\sigma_1\geq 0$ , $\sigma_2\geq 0$ .

Lemma 2.1. Let $\mathcal{K}\subset\mathbb{R}^{8,\textrm{o}}_+$ be a compact set and $\overline \sigma > 0$ . Then, for any $0 \leq p < {2\gamma_*}/{\overline \sigma^2}$ ,

(2.1) \begin{equation} \mathbb{E}\big(V\big(\Phi^\phi_{\alpha,\sigma}(t)\big)\big) \leq {\textrm{e}}^{-H_1t}V(\phi) + \frac{H_2}{H_1} \quad \text{ for all } \alpha\in \mathcal{K}, \,\|\sigma\|\leq \overline \sigma,\, t\geq 0, \end{equation}

where

\begin{equation*} H_{1}=\frac{(1+p)}{2}\bigg(\gamma_*-\frac{p\overline \sigma^2}2\bigg), \qquad H_2=\sup_{\alpha\in\mathcal{K},\|\sigma\|\leq \overline \sigma }\sup_{\phi\in\mathbb{R}^2_+}\{\mathcal{L}V(\phi)+H_1V(\phi)\}, \end{equation*}

$\gamma_*=\inf\{\beta \wedge \gamma\,:\,(r,K,m,\beta,\gamma,a,b,c)\in\mathcal{K}\}$ , and $V(\phi)=(\beta x+y)^{1+p}$ . As a consequence,

(2.2) \begin{equation} \sup\Big\{\mathbb{E}\big\|\Phi^\phi_{\alpha,\sigma}(t)\big\|^{1+p}\,:\,\alpha\in\mathcal{K},\, \|\sigma\|\leq\overline\sigma,\,{t\geq 0}\Big\} < \infty. \end{equation}

Proof. The differential operator $\mathcal{L}V(\phi)$ associated with (1.4) is given by

\begin{align*} \mathcal{L}V(\phi) & = (1+p)(\beta x+y)^p\bigg(\beta rx\bigg(1-\frac{x}K\bigg)-\gamma y\bigg) + \frac{p(1+p)}2(\beta x+y)^{p-1}\big(\beta^2\sigma_1^2x^2\hskip-1mm+\sigma_2^2y^2\big) \\ & \leq (1+p)\bigg({-}\gamma+\frac{p\|\sigma\|^2}{2}\bigg)(\beta x+y)^{1+p} + \beta r(1+p)(\beta x+y)^px\bigg(\frac{\gamma+r}r-\frac{x}K\bigg) \\ & \leq (1+p)\bigg({-}\gamma_*+\frac{p\overline\sigma^2}{2}\bigg)(\beta x+y)^{1+p} + \beta r(1+p)(\beta x+y)^px\bigg(\frac{\gamma+r}r-\frac{x}K\bigg) \\ & \leq H_2-H_1V(\phi), \end{align*}

where

\begin{equation*}H_1=\frac{1+p}2\bigg(\gamma_*-\frac{p\overline \sigma^2}2\bigg), \qquad H_2=\sup_{\alpha\in\mathcal{K},\|\sigma\|\leq \overline \sigma}\sup_{\phi\in\mathbb{R}^2_+} \{\mathcal{L}V(\phi)+H_1V(\phi)\}<\infty.\end{equation*}

Thus, $\mathcal{L}V(\phi)\leq H_2-H_1V(\phi)$ . By a standard argument as in [Reference Dieu, Nguyen, Du and Yin5, Lemma 2.3], it follows that

\begin{equation*} \mathbb{E}\big({\textrm{e}}^{H_1t}V\big(\Phi^\phi_{\alpha,\sigma}(t)\big)\big)\leq V(\phi)+\frac{H_2({\textrm{e}}^{H_1t}-1)}{H_1}. \end{equation*}

Thus, $\mathbb{E}\big(V\big(\Phi^\phi_{\alpha,\sigma}(t)\big)\big)\leq {\textrm{e}}^{-H_1t}V(\phi)+{H_2}/{H_1}$ , i.e. we get (2.1).

By using the inequality $\|\phi\|^{1+p}\leq \max\big\{1,\gamma_*^{-(1+p)}\big\}V(\phi)$ and (2.1), we have

\begin{equation*} \sup\big\{\mathbb{E}\|\Phi^\phi_{\alpha,\sigma}(t)\|^{1+p}\,:\, \alpha\in \mathcal{K},\,\|\sigma\|\leq \overline \sigma,\,t\geq 0\big\}<\infty. \end{equation*}

When the predator is absent, the evolution of the prey follows the stochastic logistic equation on the boundary,

(2.3) \begin{equation} {\textrm{d}}\varphi_{\alpha,\sigma}(t) = r\varphi_{\alpha,\sigma}(t)\bigg(1-\frac{\varphi_{\alpha,\sigma}(t)}K\bigg)\,{\textrm{d}} t + \sigma_1 \varphi_{\alpha,\sigma}(t)\,{\textrm{d}} B_1(t).\end{equation}

Denote by $\varphi_{\alpha,\sigma}(t)$ the solution of (2.3). By the comparison theorem, it can be seen that $x_{\alpha, \sigma}(t)\leq\varphi_{\alpha,\sigma}(t)$ for all $t\geq0$ almost surely (a.s.), provided that $x_{\alpha, \sigma}(0)=\varphi_{\alpha,\sigma}(0)>0$ and $y_{\alpha, \sigma}(0)\geq0$ .

Lemma 2.2. Let $(x_{\alpha,\sigma}(t), y_{\alpha,\sigma}(t))$ be a solution of (1.4) and $\varphi_{\alpha,\sigma}(t)$ a solution of (2.3).

  1. (i) If $r<{\sigma_1 ^2}/2$ then both populations die out exponentially fast, in the sense that the Lyapunov exponents of $\varphi_{\alpha,\sigma}(\cdot)$ , $x_{\alpha, \sigma}(\cdot)$ , and $y_{\alpha, \sigma}(\cdot)$ are negative.

  2. (ii) In the case $r-{\sigma_1^2}/2>0$ , (2.3) has a unique stationary distribution $\nu_{\alpha,\sigma}$ with the density $p_{\alpha,\sigma}(x) = Cx^{\big({2r}/{\sigma_1^2}\big)-2}{\textrm{e}}^{-\big({2r}/{\sigma_1^2K}\big)x}$ , $x\geq 0$ . Further, $\nu_{\alpha,\sigma}$ weakly converges to $\delta_K(\cdot)$ as $\sigma_1\to 0$ , where $\delta_K(\cdot)$ is the Dirac measure with mass at K.

Proof. It is easy to verify that with the initial condition $\varphi_{\alpha,\sigma}(0)=x_0>0$ , (2.3) has the unique solution

(2.4) \begin{equation} \varphi_{\alpha,\sigma}(t) = \frac{K{x_0}\exp\big\{\big(r-{\sigma_1^2}/2\big)t+\sigma_1 B_1(t)\big\}} {K+r{x_0}\int_0^t\exp\big\{\big(r-{\sigma_1^2}/2\big)s+\sigma_1 B_1(s)\big\}\,{\textrm{d}} s} \end{equation}

(see [Reference Kink18], for example).
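As a sanity check on this explicit solution, when $\sigma_1=0$ the expression $K x_0\,{\textrm{e}}^{rt}/\big(K+rx_0\int_0^t {\textrm{e}}^{rs}\,{\textrm{d}} s\big)$ must coincide with the classical closed form of the deterministic logistic equation, $Kx_0{\textrm{e}}^{rt}/(K+x_0({\textrm{e}}^{rt}-1))$ . The sketch below evaluates the integral by a midpoint rule and compares the two; the parameter values are illustrative.

```python
import math

# With sigma_1 = 0, the explicit stochastic-logistic solution reduces to
#   K*x0*exp(r*t) / (K + r*x0 * integral_0^t exp(r*s) ds),
# which must equal the classical logistic solution. Values illustrative.
r, K, x0 = 1.0, 10.0, 2.0

def phi_zero_noise(t, n=100000):
    # midpoint-rule quadrature for the integral in the denominator
    h = t / n
    integral = h * sum(math.exp(r * (i + 0.5) * h) for i in range(n))
    return K * x0 * math.exp(r * t) / (K + r * x0 * integral)

def logistic(t):
    return K * x0 * math.exp(r * t) / (K + x0 * (math.exp(r * t) - 1))
```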

Therefore, by the law of the iterated logarithm, we see that if $ r-{\sigma_1^2}/2< 0$ then

\begin{equation*}\lim_{t\to\infty}\frac{\ln \varphi_{\alpha,\sigma}(t)}t= r-\frac{\sigma_1^2}2<0.\end{equation*}

Using this estimate and the comparison theorem yields

\begin{equation*} \limsup\limits_{t\to\infty}\frac{\ln x_{\alpha, \sigma}(t)}t \leq \lim_{t\to\infty}\frac{\ln \varphi_{\alpha,\sigma}(t)}t= r-\frac{\sigma_1^2}2<0.\end{equation*}

On the other hand, from the second equation in (1.4),

(2.5) \begin{equation} \frac{\ln y_{\alpha,\sigma}(t)}t = \frac{\ln y_{\alpha,\sigma}(0)}t-\gamma-\frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m x_{\alpha,\sigma}(s)}{a+b y_{\alpha,\sigma}(s)+cx_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t. \end{equation}

By using the strong law of large numbers and (2.5), we have

\begin{equation*}\limsup_{t\to\infty}\frac{\ln y_{\alpha,\sigma}(t)}t=-\gamma-\frac{\sigma_2^2}2<0,\end{equation*}

which proves item (i).

Consider now the Fokker–Planck equation with respect to (2.3),

\begin{equation*} \frac{\partial p_{\alpha,\sigma}(x,t)}{\partial t} = -\frac{\partial[rx(1-{x}/K)p_{\alpha,\sigma}(x, t)]}{\partial x} + \frac{\sigma_1^2}2\frac{\partial^2[x^2p_{\alpha,\sigma}(x,t)]}{\partial x^2}. \end{equation*}

It is easy to see that for $r-{\sigma_1^2}/2>0$ this has a unique positive integrable solution $p_{\alpha,\sigma}(x) = Cx^{{2r}/{\sigma_1^2}-2}{\textrm{e}}^{-({2r}/{\sigma_1^2K})x}$ , $x\geq 0$ , which is a stationary density of (2.3), where C is the normalizing constant defined by

\begin{equation*}C=\frac{1}{\Gamma\big(\big({2r}/{\sigma_1^2}\big)-1\big)}\bigg(\frac{2r}{\sigma_1^2K}\bigg)^{({2r}/{\sigma_1^2})-1}\end{equation*}

and $\Gamma(\cdot)$ is the Gamma function. This means that (2.3) has a stationary distribution $\nu_{\alpha,\sigma}$ which is a Gamma distribution with shape $\big({2r}/{\sigma_1^2}\big)-1$ and rate ${2r}/\big({\sigma_1^2K}\big)$ . By direct calculation, its mean and variance satisfy

\begin{align*} \lim_{\sigma_1\to0}\mathbb{E}(\varphi_{\alpha,\sigma}(t)) & = \lim_{\sigma_1\to 0}\bigg[K-\frac{K\sigma_1^2}{2r}\bigg]=K, \\ \lim_{\sigma_1\to0}\textrm{Var}(\varphi_{\alpha,\sigma}(t)) & = \lim_{\sigma_1\to 0}\bigg[\frac{K^2\sigma_1^4}{4r^2}\bigg(\frac{2r}{\sigma_1^2}-1\bigg)\bigg]=0, \end{align*}

where $\varphi_{\alpha,\sigma}(t)$ is distributed according to $\nu_{\alpha,\sigma}$ . These equalities imply that $\nu_{\alpha,\sigma}$ converges weakly to $\delta_K(\cdot)$ as $\sigma_1\to 0$ , which proves item (ii).
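The two limits above use the standard moment formulas of the Gamma distribution: mean $=$ shape/rate and variance $=$ shape/rate $^2$ . A quick numerical confirmation, with illustrative values of r, K, and $\sigma_1$ :

```python
# Moments of the stationary Gamma distribution of (2.3).
# Parameter values are illustrative assumptions.
r, K, s1 = 1.0, 10.0, 0.4

shape = 2 * r / s1**2 - 1      # shape parameter (2r/sigma_1^2) - 1
rate = 2 * r / (s1**2 * K)     # rate parameter 2r/(sigma_1^2 K)

mean = shape / rate            # Gamma mean: should equal K - K*s1^2/(2r)
var = shape / rate**2          # Gamma variance: the displayed expression
```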

By Lemma 2.2(i), from now on we are only interested in the case $r>{\sigma_1 ^2}/2$ . For any $\alpha\in \mathbb{R}^{8,\textrm{o}}_+$ and $\sigma\geq 0$ we define a threshold

\begin{equation*} \lambda_{\alpha,\sigma}=-\gamma-\frac{\sigma_2^2}2+\int_0 ^\infty\frac{m\beta x}{a+cx}p_{\alpha,\sigma}(x)\,{\textrm{d}} x.\end{equation*}

We note that when $\sigma=0$ we have $\lambda_{\alpha,0}=\lambda_\alpha$ as defined by (1.2).
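The threshold $\lambda_{\alpha,\sigma}$ can be evaluated numerically by integrating ${m\beta x}/({a+cx})$ against the stationary Gamma density of (2.3). The sketch below (illustrative parameters, with $\sigma_2=0$ ) uses a midpoint rule with the density computed in log space, and checks that for small $\sigma_1$ the value is close to $\lambda_\alpha$ , consistent with $\lambda_{\alpha,0}=\lambda_\alpha$ .

```python
import math

# Numerical evaluation of lambda_{alpha,sigma}; values illustrative.
r, K, m, beta, gamma, a, c = 1.0, 10.0, 1.0, 0.5, 0.2, 1.0, 0.5
s1, s2 = 0.1, 0.0

shape = 2 * r / s1**2 - 1
rate = 2 * r / (s1**2 * K)

def log_pdf(x):
    # log of the Gamma(shape, rate) density, to avoid overflow
    return (shape * math.log(rate) + (shape - 1) * math.log(x)
            - rate * x - math.lgamma(shape))

# midpoint Riemann sum on (0, 4K]; the density is concentrated near K
n, upper = 200000, 4 * K
h = upper / n
mass = integral = 0.0
for i in range(n):
    x = (i + 0.5) * h
    p = math.exp(log_pdf(x))
    mass += p * h
    integral += m * beta * x / (a + c * x) * p * h

lam_sigma = -gamma - s2**2 / 2 + integral    # threshold lambda_{alpha,sigma}
lam0 = -gamma + beta * m * K / (a + K * c)   # lambda_alpha from (1.2)
```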

Lemma 2.3. The mapping $(\alpha, \sigma)\to \lambda_{\alpha,\sigma}$ is continuous on the domain

\begin{equation*}\mathcal{D}\,:\!=\,\big\{(\alpha,\sigma)\,:\,\alpha=(r,K,m,\beta, \gamma,a,b,c)\in\mathbb{R}^8_+,\,\sigma_1\geq 0,\,r>{\sigma_1^2}/2\big\}.\end{equation*}

Proof. The integrability of the function $x^{\big({2r}/{\sigma_1^2}\big)-2}{\textrm{e}}^{-\big({2r}/{\sigma_1^2K}\big)x}$ , $x> 0$ , depends on the two singular points 0 and $\infty$ . For $\alpha_0=(r_0,K_0,m_0, \beta_0, \gamma_0, a_0,b_0,c_0)\in\mathbb{R}^8_+$ and $\sigma_0$ with $\sigma_{0,1}>0$ and $r_0>\sigma_{0,1}^2/2$ , we can find a sufficiently small $\eta>0$ such that the function

\begin{align*} h\big(x\big)= \begin{cases} x^{\big(\big({2r_0-\eta}\big)/\big({\sigma_{0,1}^2+\eta}\big)\big)-2}{\textrm{e}}^{-\big(\big({2r_0-\eta}\big)/\big({\sigma_{0,1}^2K_0+\eta}\big)\big)x} & \text{ for } 0<x<1, \\ x^{\big(\big({2r_0+\eta}\big)/\big({\sigma_{0,1}^2-\eta}\big)\big)-2}{\textrm{e}}^{-\big(\big({2r_0-\eta}\big)/\big({\sigma_{0,1}^2K_0+\eta}\big)\big)x} & \text{ for } 1\leq x \end{cases} \end{align*}

is integrable on $\mathbb{R}_+$ . Further, for all $\alpha$ sufficiently close to $\alpha_0$ and $\sigma$ sufficiently close to $\sigma_0$ , we have $p_{\alpha,\sigma}(x)\leq h(x)$ for all $x\in\mathbb{R}_+$ . As the function ${m\beta x}/({a+cx})$ is bounded, we can use the Lebesgue dominated convergence theorem to get $\lim_{\alpha\to\alpha_0, \sigma\to\sigma_0}\lambda_{\alpha,\sigma}=\lambda_{\alpha_0,\sigma_0}$ .

The case $\sigma_0=0$ follows from Lemma 2.2(ii).

Theorem 2.1. If $\lambda_{\alpha,\sigma}<0$ then $y_{\alpha, \sigma}(t)$ has the Lyapunov exponent $\lambda_{\alpha,\sigma}$ and $x_{\alpha, \sigma}(t)-\varphi_{\alpha,\sigma}(t)$ converges almost surely to 0 as $t\to \infty$ at an exponential rate.

Proof. Since, for any fixed $A>0$ , the function $h(u) = u/({A+u})$ is increasing in $u>0$ , it follows from (2.5) and the comparison theorem that

\begin{align*} \frac{\ln y_{\alpha,\sigma}(t)}t & = \frac{\ln y_{\alpha,\sigma}(0)}t - \gamma - \frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m x_{\alpha,\sigma}(s)}{a+b y_{\alpha,\sigma}(s)+cx_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t \\ & \leq \frac{\ln y_{\alpha,\sigma}(0)}t - \gamma - \frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m \varphi_{\alpha,\sigma}(s)}{a +c\varphi_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t. \end{align*}

Letting $t\to\infty$ and applying the law of large numbers to the process $\varphi_{\alpha,\sigma}$ , we obtain

(2.6) \begin{equation} \limsup_{t\to\infty}\frac{\ln y_{\alpha,\sigma}(t)}t \leq -\gamma - \frac{\sigma_2^2}2 + \int_0^\infty\frac{\beta m x}{a +cx}p_{\alpha,\sigma}(x)\,{\textrm{d}} x = \lambda_{\alpha,\sigma}. \end{equation}

We now prove that the process $x_{\alpha,\sigma}(t)-\varphi_{\alpha,\sigma}(t)$ converges almost surely to 0 by estimating the rate at which $\varphi_{\alpha,\sigma}(t)-x_{\alpha,\sigma}(t)$ decays as $t\to\infty$ . Using Itô’s formula, we get

\begin{align*} \ln x_{\alpha, \sigma}(t) & = \ln x_0 + \int_0^t\bigg(r\bigg(1-\frac{x_{\alpha, \sigma}(s)}{K}\bigg) - \frac{my_{\alpha, \sigma}(s)}{a+by_{\alpha, \sigma}(s)+cx_{\alpha, \sigma}(s)} - \frac{\sigma_1^2}2\bigg)\,{\textrm{d}} s + \sigma_1 B_1(t), \\ \ln \varphi_{\alpha,\sigma}(t) & = \ln\varphi_{\alpha,\sigma}(0) + \int_0^t\bigg(r\bigg(1-\frac{\varphi_{\alpha,\sigma}(s)}K\bigg) - \frac{\sigma_1^2}2\bigg)\,{\textrm{d}} s + \sigma_1 B_1(t). \end{align*}

From these and the inequalities

\begin{equation*}\frac{my}{a+by+cx}\leq\frac{my}{a}\quad\text{for all}\ x, y>0, \qquad x_{\alpha,\sigma}(t)\leq \varphi_{\alpha,\sigma}(t), \quad t\geq 0,\end{equation*}

we have

(2.7) \begin{align} 0 & \leq \limsup_{t\to\infty}\frac{\ln \varphi_{\alpha,\sigma}(t)-\ln x_{\alpha,\sigma}(t)}{t} \nonumber \\ & \leq \limsup_{t\to\infty}\frac{1}{t}\int_{0}^t\bigg(\frac{r}{K}(x_{\alpha,\sigma}(s)-\varphi_{\alpha,\sigma}(s)) + \frac{m}{a} y_{\alpha,\sigma}(s)\bigg)\,{\textrm{d}} s \nonumber \\ & \leq \frac{m}{a}\limsup_{t\to\infty}\frac{1}{t}\int_{0}^ty_{\alpha,\sigma}(s)\,{\textrm{d}} s = 0. \end{align}

From (2.4), it is easy to see that $\lim_{t\to\infty}({\ln\varphi_{\alpha,\sigma}(t)})/{t}=0$ a.s. Combining this with (2.7), we obtain $\lim_{t\to\infty}({\ln x_{\alpha,\sigma}(t)})/{t}=0$ a.s. Hence, from (2.6),

(2.8) \begin{align} \limsup_{t\to \infty}\dfrac{\ln y_{\alpha,\sigma}(t)-\ln x_{\alpha,\sigma}(t)}{t}\leq \lambda_{\alpha,\sigma}<0. \end{align}

On the other hand,

\begin{align*} {\textrm{d}}\bigg(\frac{1}{x_{\alpha, \sigma}(t)}\bigg) & = \bigg(\frac{\sigma_1^2-r}{x_{\alpha, \sigma}(t)} + \frac{r}{K} + \frac{my_{\alpha, \sigma}(t)}{x_{\alpha, \sigma}(t)(a+by_{\alpha, \sigma}(t)+cx_{\alpha, \sigma}(t))}\bigg)\,{\textrm{d}} t - \frac{\sigma_1}{x_{\alpha,\sigma}(t)}\,{\textrm{d}} B_1(t), \\ {\textrm{d}}\bigg(\frac{1}{\varphi_{\alpha, \sigma}(t)}\bigg) & = \bigg(\frac{\sigma_1 ^2-r}{\varphi_{\alpha, \sigma}(t)}+\frac{r}{K}\bigg)\,{\textrm{d}} t - \frac{\sigma_1}{\varphi_{\alpha,\sigma}(t)}\,{\textrm{d}} B_1(t). \end{align*}

Put

\begin{equation*}z(t)=\dfrac{1}{x_{\alpha,\sigma}(t)}-\dfrac{1}{\varphi_{\alpha,\sigma}(t)}.\end{equation*}

We see that $z(t)\geq 0$ for all $t\geq 0$ , and

\begin{equation*} {\textrm{d}} z(t) = \bigg(\big(\sigma_1^2-r\big)z(t) + \frac{my_{\alpha,\sigma}(t)}{x_{\alpha,\sigma}(t)(a+by_{\alpha,\sigma}(t)+cx_{\alpha,\sigma}(t))}\bigg)\,{\textrm{d}} t - \sigma_1z(t)\,{\textrm{d}} B_1(t). \end{equation*}

In view of the variation of constants formula [Reference Mao21, Theorem 3.1, p. 96], this yields

(2.9) \begin{equation} z(t) = c_1\Psi(t)\int_0^t\Psi^{-1}(s) \frac{my_{\alpha,\sigma}(s)}{x_{\alpha, \sigma}(s)(a+by_{\alpha, \sigma}(s)+cx_{\alpha, \sigma}(s))}\,{\textrm{d}} s, \end{equation}

where $\Psi(t)={\textrm{e}}^{({\sigma_1 ^2}/2-r) t-\sigma_1 B_1(t)}$ . It is easy to see that

(2.10) \begin{equation} \lim_{t\to\infty}\frac{\ln \Psi(t)}t=\frac{\sigma_1 ^2}2-r. \end{equation}

Let $0< \overline \lambda<\min\big\{r-{\sigma_1 ^2}/2,-\lambda_{\alpha,\sigma}\big\}$ be arbitrary, and choose $\varepsilon >0$ such that $0<\overline \lambda+3\varepsilon<\min\big\{r-{\sigma_1 ^2}/2, -\lambda_{\alpha, \sigma}\big\}$ . From (2.10), there are two positive random variables $\eta_1,\eta_2$ such that

(2.11) \begin{equation} \eta_1{\textrm{e}}^{\big({\sigma_1 ^2}/2-r-\varepsilon\big)t}\leq\Psi(t)\leq\eta_2{\textrm{e}}^{\big({\sigma_1 ^2}/2-r+\varepsilon\big)t} \quad \text{ for all } t\geq 0\ \text{a.s.} \end{equation}

Further, it follows from (2.8) that there exists a positive random variable $\eta_3$ that satisfies

(2.12) \begin{equation} \frac{y_{\alpha,\sigma}(t)}{x_{\alpha,\sigma}(t)}\leq \eta_3{\textrm{e}}^{(\lambda_{\alpha, \sigma}+\varepsilon)t} \quad \text{ for all } t\geq 0\ \text{a.s.} \end{equation}

Combining (2.9), (2.11), and (2.12), we get

\begin{align*} {\textrm{e}}^{\overline \lambda t}z(t) \leq \frac{c_1m\eta_2\eta_3}{a\eta_1} {\textrm{e}}^{\big(\overline\lambda+{\sigma_1^2}/2 - r + \varepsilon\big)t} \int_0^t{\textrm{e}}^{\big(r-{\sigma_1 ^2}/2 + \lambda_{\alpha, \sigma}+2\varepsilon\big)s}\,{\textrm{d}} s. \end{align*}

Thus,

\begin{align*} 0 \leq \lim_{t\to\infty}{\textrm{e}}^{\overline\lambda t}z(t) \leq \frac{c_1m\eta_2\eta_3}{a\eta_1\big(r-{\sigma_1^2}/2+\lambda_{\alpha, \sigma}+2\varepsilon\big)} \lim_{t\to\infty}\big({\textrm{e}}^{(\lambda_{\alpha, \sigma}+\overline \lambda+3\varepsilon)t} - {\textrm{e}}^{(\overline \lambda+{\sigma_1 ^2}/2-r+\varepsilon)t}\big) = 0. \end{align*}

As a result,

(2.13) \begin{equation} \lim_{t\to\infty}{\textrm{e}}^{\overline\lambda t}z(t) = \lim_{t\to\infty}{\textrm{e}}^{\overline\lambda t} \bigg(\frac1{x_{\alpha,\sigma}(t)}-\frac1{\varphi_{\alpha,\sigma}(t)}\bigg)=0. \end{equation}

Since $\lim_{t\to\infty}({\ln{\varphi_{\alpha, \sigma}}(t)})/{t}=0$, we have $\lim_{t\to\infty}{\textrm{e}}^{-{\overline\lambda t}/2}\varphi_{\alpha,\sigma}^2(t)=0$. Using (2.13), we get

\begin{align*} \lim_{t\to\infty}{\textrm{e}}^{{\overline\lambda t}/2}({\varphi_{\alpha,\sigma}}(t)-x_{\alpha, \sigma}(t)) & = \lim_{t\to\infty}{\textrm{e}}^{{\overline\lambda t}/2}{\varphi_{\alpha,\sigma}}(t)x_{\alpha, \sigma}(t) \bigg(\frac1{x_{\alpha, \sigma}(t)}-\frac1{\varphi_{\alpha,\sigma}(t)}\bigg) \notag \\ & = \lim_{t\to\infty}{\textrm{e}}^{-{\overline\lambda t}/2}{\varphi_{\alpha,\sigma}}(t)x_{\alpha,\sigma}(t){\textrm{e}}^{\overline\lambda t} \bigg(\frac1{x_{\alpha, \sigma}(t)}-\frac1{\varphi_{\alpha,\sigma}(t)}\bigg)=0. \end{align*}

This means that $x_{\alpha, \sigma}(t)-\varphi_{\alpha,\sigma}(t)$ converges almost surely to 0 as $t\to\infty$ at an exponential rate.

We now turn to the estimate of the Lyapunov exponent of $y_{\alpha,\sigma}$ . From (2.5) we get

\begin{align*} \frac{\ln y_{\alpha,\sigma}(t)}t & = \frac{\ln y_{\alpha,\sigma}(0)}t - \gamma - \frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m x_{\alpha,\sigma}(s)}{a+b y_{\alpha,\sigma}(s)+cx_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t \\ & = \frac{\ln y_{\alpha,\sigma}(0)}t - \gamma - \frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m \varphi_{\alpha,\sigma}(s)}{a+c\varphi_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t \\ & \quad + \frac1t\int_0^t\beta m\bigg(\frac{x_{\alpha,\sigma}(s)}{a+b y_{\alpha,\sigma}(s)+cx_{\alpha,\sigma}(s)} - \frac{\varphi_{\alpha,\sigma}(s)}{a+c\varphi_{\alpha,\sigma}(s)}\bigg)\,{\textrm{d}} s. \end{align*}

Since $\lim_{t\to\infty}(x_{\alpha,\sigma}(t)-\varphi_{\alpha,\sigma}(t))=0$ and $\lim_{t\to\infty}y_{\alpha,\sigma}(t)=0$ , it is easy to see that

\begin{equation*}\lim_{t\to\infty}\frac1t\int_0^t\beta m \bigg(\frac{x_{\alpha,\sigma}(s)}{a+b y_{\alpha,\sigma}(s)+cx_{\alpha,\sigma}(s)} - \frac{\varphi_{\alpha,\sigma}(s)}{a+c\varphi_{\alpha,\sigma}(s)}\bigg)\,{\textrm{d}} s = 0.\end{equation*}

Thus,

\begin{equation*} \lim_{t\to\infty}\frac{\ln y_{\alpha,\sigma}(t)}t = \lim_{t\to\infty}\bigg(\frac{\ln y_{\alpha,\sigma}(0)}t - \gamma - \frac{\sigma_2^2}2 + \frac1t\int_0^t\frac{\beta m \varphi_{\alpha,\sigma}(s)}{a+c\varphi_{\alpha,\sigma}(s)}\,{\textrm{d}} s + \frac{\sigma_2 B_2(t)}t\bigg) = \lambda_{\alpha,\sigma}, \end{equation*}

which completes the proof.

The following lemma is similar to one in [Reference Du, Alexandru, Dang and Yin6].

Lemma 2.4 For $T,\varsigma>0$ and $\alpha_0\in \mathbb{R}^{8,\textrm{o}}_+$, there exist positive numbers $\kappa$ and $\delta$ such that, for all $\alpha\in {\mathcal{U}}_{\delta}(\alpha_0)$ and $0<\|\sigma\|<\kappa$,

(2.14) \begin{equation} \mathbb{P}\big\{\big\|\Phi^\phi_{\alpha,\sigma}(t)-\Phi^\phi_{\alpha_0}(t)\big\| \geq \varsigma\ \text{for some}\ t\in[0,T]\big\} < \exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\},\quad \phi \in \mathcal{R}_{\delta}(\alpha_0). \end{equation}

Proof. Let $0<\delta\leq{\varsigma}/{\big(2L_1 T {\textrm{e}}^{L_1 T}\big)}$. From (1.3) we have

(2.15) \begin{equation} \sup_{0\leq t\leq T}\big\|\Phi^\phi_{\alpha}(t)-\Phi^\phi_{\alpha_0}(t)\big\| \leq\frac\varsigma2 \quad \text{ for all } \alpha\in{\mathcal{U}}_\delta(\alpha_0),\,\phi\in\mathcal{R}_\delta(\alpha_0). \end{equation}

Let $R>0$ be large enough that $\mathcal{R}_\delta(\alpha_0)\subset {\bf B}_R$, and let $h_R(\cdot)$ be a twice-differentiable function with $0\leq h_{R}\leq 1$ such that

\begin{equation*}h_{R}(\phi)= \begin{cases} 1 & \text{if }\|\phi\|\leq R, \\ 0 & \text{if }\|\phi\| > R+1. \end{cases}\end{equation*}
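One concrete choice of such a cutoff (a sketch of a standard construction; the paper does not specify $h_R$ beyond the properties above) is a quintic smoothstep blend from 1 to 0 on the annulus $R\leq\|\phi\|\leq R+1$, which is $C^2$ because the first and second derivatives of the blend vanish at both endpoints:

```python
def h_R(phi_norm: float, R: float) -> float:
    """C^2 cutoff: equals 1 for ||phi|| <= R and 0 for ||phi|| >= R + 1."""
    if phi_norm <= R:
        return 1.0
    if phi_norm >= R + 1.0:
        return 0.0
    u = phi_norm - R                    # u in (0, 1) on the blending annulus
    s = 6*u**5 - 15*u**4 + 10*u**3      # smoothstep: s(0)=0, s(1)=1, s'=s''=0 at 0 and 1
    return 1.0 - s

print([h_R(x, 3.0) for x in (0.0, 3.0, 3.5, 4.0, 10.0)])  # [1.0, 1.0, 0.5, 0.0, 0.0]
```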

Put

\begin{align*} f_h(\phi,\alpha) = h_R(\phi)f(\phi,\alpha) & = \begin{pmatrix}h_R(\phi)f^{(1)}(\phi,\alpha)\\[3pt]h_R(\phi)f^{(2)}(\phi,\alpha)\end{pmatrix}, \\[3pt] g_h(\phi) = h_R(\phi)g(\phi) & = h_R(x,y)\begin{pmatrix}\sigma_1x & \quad 0\\[3pt] 0 & \quad \sigma_2y\end{pmatrix}. \end{align*}

It can be seen that $f_h(\phi,\alpha)$ is a Lipschitz function, i.e. there exists $M>0$ such that

(2.16) \begin{equation} \|f_h(\phi_1,\alpha)-f_h(\phi_2,\alpha)\| \leq M\|\phi_2-\phi_1\| \quad \text{ for all }\alpha\in{\mathcal{U}}_\delta(\alpha_0),\, \phi_1, \phi_2\in \mathbb{R}^2_+. \end{equation}

If we choose $M\geq 2R^2$ we also have

(2.17) \begin{equation} \big\|g_h(\phi)g_h^\top(\phi)\big\|\leq M\|\sigma\|^2 \quad \text{ for all } \phi\in \mathbb{R}^2_+. \end{equation}

Let $\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)$ be the solution starting at $\phi\in\mathcal{R}_{\delta}(\alpha_0)$ of the equation

\begin{equation*} {\textrm{d}}\widetilde{\Phi}(t) = f_h\big(\widetilde{\Phi}(t),\alpha\big)\,{\textrm{d}} t + g_h\big(\widetilde{\Phi}(t)\big)\,{\textrm{d}} B(t), \end{equation*}

where $B(t)=(B_1(t), B_2(t))^\top$ . Define the stopping time $\tau_R^\phi = \inf\big\{t\geq 0\,:\,\big\|\Phi^{\phi}_{\alpha, \sigma}(t)\big\|\geq R\big\}$ .

It is easy to see that $\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)=\Phi^{\phi}_{\alpha, \sigma}(t)$ up to time $\tau_R^\phi$ . Since the state space $\mathcal{R}_{\delta}(\alpha_0)$ of (1.1) is contained in $ {\bf B}_R$ , the solution $\Phi^\phi_\alpha(\cdot)$ of (1.1) is also the solution of the equation ${\textrm{d}}\Phi(t)=f_h({\Phi}(t),\alpha)\,{\textrm{d}} t$ , $t\geq 0$ , with initial value $\phi\in \mathcal{R}_{\delta}(\alpha_0)$ . For all $t\in[0, T]$ , using Itô’s formula we have

\begin{align*} \big\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)-\Phi^{\phi}_{\alpha}(t)\big\|^2 & = 2\int_0^{t}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big)^\top \big(f_h\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s),\alpha\big)-f_h\big(\Phi^\phi_\alpha(s),\alpha\big)\big)\,{\textrm{d}} s \\ & \quad + \int_0^t \textrm{trace}\big(g_h\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)\big) g_h^\top\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)\big)\big)\,{\textrm{d}} s \\ & \quad + 2\int_0^t\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big)^\top {g_h}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)\big)\,{\textrm{d}} B(s). \end{align*}

By the exponential martingale inequality, for $T, \varsigma>0$ there exists a number $\kappa=\kappa(T,\varsigma)$ such that $\mathbb{P}(\widetilde\Omega)\geq 1-\exp\big\{-{\kappa}/{\|\sigma\|^2}\big\}$, where

\begin{align*} \widetilde\Omega & = \bigg\{\sup_{t\in[0, T]}\bigg[\int_0^t\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big)^\top {g_h}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)\big)\,{\textrm{d}} B(s) \\ & \qquad\qquad - \frac1{2\|\sigma\|^2}\int_0^t\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big)^\top {g_h}\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)\big){g^\top_h}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)\big) \big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big)\,{\textrm{d}} s\bigg] \leq \kappa \\ & \qquad \ \text{for all}\ \phi\in \mathcal{R}_{\delta}(\alpha_0)\bigg\}. \end{align*}

From (2.16) and (2.17), this implies that, for all $\omega\in \widetilde\Omega$,

\begin{align*} \big\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)-\Phi^{\phi}_{\alpha}(t)\big\|^2 & \leq 2\int_0^{t}\big\|\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big\| \big\|f_h\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s),\alpha\big)-f_h\big(\Phi^\phi_\alpha(s),\alpha\big)\big\|\,{\textrm{d}} s \\ & \quad + \int_0^t\textrm{trace} \big(g_h\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)\big)g_h^\top\big(\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)\big)\big)\,{\textrm{d}} s \\ & \quad + \frac1{\|\sigma\|^2}\int_0^t\big\|\widetilde{\Phi}^{\phi}_{\alpha,\sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big\|^2 \big\|{g_h}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)\big){g^\top_h}\big(\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)\big)\big\|\,{\textrm{d}} s + 2\kappa \\ & \leq 3M\int_0^t\big\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(s)-\Phi^{\phi}_{\alpha}(s)\big\|^2\,{\textrm{d}} s + M\big\|\sigma\big\|^2 T + 2\kappa. \end{align*}

For all $t\in[0, T]$, an application of the Gronwall–Bellman inequality implies that

\begin{equation*} \big\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)-\Phi^{\phi}_{\alpha}(t)\big\| \leq \sqrt{(M\|\sigma\|^2 T+2\kappa)\exp\{3MT\}} \end{equation*}

on the set $\widetilde\Omega$. Choosing $\kappa$ sufficiently small that $(M\kappa^2 T+2\kappa)\exp\{3MT\}\leq {\varsigma^2}/4$ implies that

(2.18) \begin{equation} \big\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)-\Phi^{\phi}_{\alpha}(t)\big\| \leq \frac{\varsigma}2 \quad \text{ for all $\alpha\in \mathcal{U}_\delta(\alpha_0),\, 0<\|\sigma\|<\kappa$},\,\omega\in \widetilde\Omega. \end{equation}

From (2.18), (2.15), and the triangle inequality, we have $\|\widetilde{\Phi}^{\phi}_{\alpha, \sigma}(t)-\Phi^{\phi}_{\alpha_0}(t)\|\leq\varsigma$ for all $\alpha\in\mathcal{U}_{\delta}(\alpha_0)$, $t\in [0,T]$, $\omega\in \widetilde\Omega$. It also follows from this inequality that when $\omega\in \widetilde \Omega$ we have $\tau_R^\phi>T$, which implies that

\begin{equation*}\mathbb{P}\big\{\big\|\Phi^{\phi}_{\alpha,\sigma}(t)-\Phi^{\phi}_{\alpha_0}(t)\big\|<\varsigma \text{ for all }t\in[0, T]\big\} \geq \mathbb{P}(\widetilde\Omega) \geq 1 - \exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\}\end{equation*}

holds for all $0<\|\sigma\|<\kappa$ and $\alpha\in{\mathcal{U}}_\delta(\alpha_0)$ . This completes the proof.

Theorem 2.2 Let $\alpha_0 = (r_0,K_0,m_0,\beta_0,\gamma_0,a_0,b_0,c_0)$ be a vector of parameters of the system in (1.1) such that $\lambda_{\alpha_0}>0$ and

\begin{equation*}b_0 < \min\bigg\{\frac{c_0}{\beta_0}, \frac{m_0^2\beta_0^2-c_0^2\gamma_0^2}{\gamma_0\beta_0(m_0\beta_0-c_0\gamma_0)+m_0r_0\beta_0^2}\bigg\}.\end{equation*}

Then, there exist $\delta_1>0$ and $\overline \sigma>0$ such that, for all $\alpha\in{\mathcal{U}}_{\delta_1}(\alpha_0)$ and $0<\|\sigma\|\leq \overline \sigma$ , the Markov process $\Phi^{\phi}_{\alpha, \sigma}(t)=(x_{\alpha,\sigma}(t), y_{\alpha,\sigma}(t))$ has a unique stationary distribution $\mu_{\alpha, \sigma}$ . Further, $\mu_{\alpha, \sigma}$ is concentrated on $\mathbb{R}^{2,\circ}_+$ and has a density with respect to the Lebesgue measure. Also, for any open set $\mathcal{V}$ containing $\Gamma_{\alpha_0}$ we have

(2.19) \begin{equation} \lim_{(\alpha, \sigma)\to(\alpha_0, 0)}\mu_{\alpha, \sigma}(\mathcal{V})=1, \end{equation}

where $\Gamma_{\alpha_0}$ is the limit cycle of the system in (1.1) corresponding to the parameter $\alpha_0$ .

Proof. Since

\begin{equation*}\lambda_{\alpha_0} = -\gamma_0 + \frac{K_0m_0\beta_0}{a_0+K_0c_0}>0, \qquad b_0 < \min\bigg\{\frac{c_0}{\beta_0}, \frac{m_0^2\beta_0^2-c_0^2\gamma_0^2}{\gamma_0\beta_0(m_0\beta_0-c_0\gamma_0)+m_0r_0\beta_0^2}\bigg\},\end{equation*}

we can use Lemma 2.3 to find $\delta_1>0$ and $\overline \sigma$ such that

\begin{equation*} \lambda_{\alpha,\sigma} = -\gamma - \frac{\sigma_2^2}2 + \int_0^\infty\frac{m\beta x}{a+cx}p_{\alpha,\sigma}(x)\,{\textrm{d}} x > 0, \quad b < \min\bigg\{\frac{c}{\beta},\frac{m^2\beta^2-c^2\gamma^2}{\gamma\beta(m\beta-c\gamma)+mr\beta^2}\bigg\} \end{equation*}

hold for all $\alpha\in{\mathcal{U}}_{\delta_1}(\alpha_0)$ and $0\leq\|\sigma\|\leq\overline\sigma$. By virtue of [Reference Du, Nguyen and Yin7, Theorem 2.3, p. 192], the Markov process $\Phi^{\phi}_{\alpha, \sigma}(t)=(x_{\alpha, \sigma}(t), y_{\alpha, \sigma}(t))$ has a unique stationary distribution $\mu_{\alpha, \sigma}$ with support $\mathbb{R}^{2,\circ}_+$. Further, by [Reference Arnold and Kliemann1, Reference Kliemann19] and the non-degeneracy of $g(\phi)$, the stationary distribution $\mu_{\alpha, \sigma}$ has a density with respect to the Lebesgue measure on $\mathbb{R}^{2,\circ}_+$.

On the other hand, it follows from (2.2) that for any $\varepsilon>0$ there exists $R=R(\varepsilon)$ such that $\mu_{\alpha, \sigma}({\bf B}_R)\geq 1-\varepsilon $ for every $\alpha\in {\mathcal{U}}_{\delta_1}(\alpha_0)$ and $0\leq \|\sigma\|\leq \overline \sigma$ .

By the assumption of Theorem 2.2, it can be seen from Lemma 1.1(ii) that $\phi^*_{\alpha_0}$ is a source point, i.e. two eigenvalues of the matrix $Df\big(\phi^*_{\alpha_0},\alpha_0\big)$ have positive real parts. Therefore, the Lyapunov equation $Df\big(\phi^*_{\alpha_0},\alpha_0\big)^\top P+PDf\big(\phi^*_{\alpha_0},\alpha_0\big)=I$ has a positive definite solution P. Since $Df(\phi,\alpha)$ is continuous in $\phi,\alpha$ , there exist positive constants $0<\delta_2<\delta_1$ , $0<\delta_3$ , and c such that

\begin{align*} \dot V(\phi,\alpha)=\;{V}_{\phi}(\phi, \alpha)f(\phi, \alpha) & \geq c\|\phi-\phi^*_{\alpha}\|^2, \\ \textrm{trace}\big(g^\top(\phi) Pg(\phi)\big) & \geq c\|\sigma\|^2 \end{align*}

for all $\phi\in \mathcal{V}_{\delta_3}\big(\phi^*_{\alpha_0}\big)$ , $\alpha\in\mathcal{U}_{\delta_2}(\alpha_0)$ , where $V(\phi, \alpha)=(\phi-\phi_{\alpha}^*)^\top P(\phi-\phi_{\alpha}^*)$ . Hence,

\begin{equation*} \mathcal{L}V(\phi,\alpha)={V}_{\phi}(\phi,\alpha)f(\phi,\alpha) + \textrm{trace}\big(g^\top(\phi) Pg(\phi)\big) \geq {c\|\sigma\|^2} \end{equation*}

for all $\phi\in \mathcal{V}_{\delta_3}\big(\phi^*_{\alpha_0}\big), \alpha\in \mathcal{U}_{\delta_2}(\alpha_0)$ .
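In dimension 2 the Lyapunov equation can be solved directly by vectorisation. A minimal numerical sketch, with an illustrative source matrix $A$ whose eigenvalues $0.1\pm0.66\textrm{i}$ have positive real parts (this $A$ is a stand-in, not the actual $Df\big(\phi^*_{\alpha_0},\alpha_0\big)$):

```python
import numpy as np

A = np.array([[0.10, -0.66],
              [0.66,  0.10]])          # eigenvalues 0.1 +/- 0.66i: a "source"

# Solve A^T P + P A = I by row-major vectorisation:
# vec(A^T P + P A) = (kron(I, A^T) + kron(A^T, I)) vec(P)
I2 = np.eye(2)
M = np.kron(I2, A.T) + np.kron(A.T, I2)
P = np.linalg.solve(M, I2.flatten()).reshape(2, 2)

print(P)
```

For this rotation-plus-dilation matrix $A^\top+A=0.2I$, so $P=5I$ solves the equation exactly; for a general source (no two eigenvalues summing to zero) the solver returns the unique symmetric, positive definite solution.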

Writing $S=\mathcal{V}_{\delta_3/2}\big(\phi^*_{\alpha_0}\big)$ and $\mathcal{Z}=\mathcal{V}_{\delta_3}\big(\phi^*_{\alpha_0}\big)$, we now prove that $\lim_{(\alpha,\sigma)\to(\alpha_0, 0)}\mu_{\alpha,\sigma}(S)=0$. First, since $\phi^*_{\alpha_0}$ is a source and $\Gamma_{\alpha_0}$ attracts the orbits of the deterministic system starting outside $S$, there is $T^*$ such that if $\phi\in{\bf B}_R\setminus S$ then $\Phi_\alpha^\phi(t) \in {\bf B}_R\setminus \mathcal{Z}$ for all $t\geq T^*$. Further, for any $T>0$ we have

\begin{align*} \mu_{\alpha,\sigma}(S) & = \int_{{\mathbb{R}^2_+}}P_{\alpha,\sigma}(T,\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) \\ & = \int_{{\bf B}_R\setminus S}P_{\alpha,\sigma}(T,\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) + \int_{S}P_{\alpha,\sigma}(T,\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) + \int_{{\bf B}_R^\textrm{c}}P_{\alpha,\sigma}(T,\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi), \end{align*}

where $P_{\alpha,\sigma}(T,\phi,\cdot)=\mathbb{P}\big(\Phi^\phi_{\alpha,\sigma}(T)\in \cdot\big)$ is the transition probability of the Markov process $\Phi_{\alpha,\sigma}^\phi$ . It is clear that

\begin{equation*} \int_{{\bf B}_R^\textrm{c}}P_{\alpha,\sigma}(T,\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) \leq \mu_{\alpha,\sigma}\big({\bf B}_R^\textrm{c}\big)\leq \varepsilon . \end{equation*}

For $\phi\in S$ , let $\tau_{\alpha,\sigma}^\phi$ be the exit time of $\Phi^\phi_{\alpha, \sigma}(\cdot)$ from $\mathcal{Z}$ , i.e. $\tau_{\alpha, \sigma}^\phi=\inf\big\{t\geq 0\,:\, \Phi^\phi_{\alpha, \sigma}(t)\not\in \mathcal{Z}\big\}$ . By Itô’s formula, we have

\begin{align*} \theta \geq \mathbb{E}V\big(\Phi^\phi_{\alpha,\sigma}\big(\tau_{\alpha,\sigma}^\phi\wedge t\big)\big) - V(\phi) & = \mathbb{E}\int_0^{\tau_{\alpha,\sigma}^\phi\wedge t}\mathcal{L}V\big(\Phi^\phi_{\alpha,\sigma}(s)\big)\,{\textrm{d}} s \\ & \geq {c\|\sigma\|^2}\mathbb{E}\big[\tau_{\alpha,\sigma}^\phi\wedge t\big] \geq {c\|\sigma\|^2 t}\mathbb{P}\big(\tau_{\alpha,\sigma}^\phi\geq t\big), \quad t\geq 0, \end{align*}

where $\theta=\sup\big\{V(\phi,\alpha)\,:\,{\phi\in\mathcal{Z},\alpha\in\mathcal{U}_{\delta_2}(\alpha_0)}\big\}$ . Choosing

\begin{equation*}T_{\sigma}=\max\bigg\{\frac{2\theta}{c\|\sigma\|^2},T^*\bigg\}\end{equation*}

means that $\mathbb{P}\big(\tau_{\alpha, \sigma}^\phi\geq T_{\sigma}\big)\leq\frac12$ for all $\phi\in S$ , $\alpha\in \mathcal{U}_{\delta_2}(\alpha_0)$ . Thus,

(2.20) \begin{align} P_{\alpha,\sigma}\big(T_{\sigma},\phi, S\big) & = \mathbb{P}\big(\Phi^\phi(T_{\sigma})\in S\big) \nonumber \\ & = \mathbb{P}\big(\Phi^\phi(T_{\sigma})\in S,\tau_{\alpha, \sigma}^\phi\geq T_{\sigma}\big) + \mathbb{P}\big(\Phi^\phi(T_{\sigma})\in S,\tau_{\alpha, \sigma}^\phi < T_{\sigma}\big) \nonumber \\ & \leq \frac12 + \mathbb{P}\big(\Phi^\phi(T_{\sigma})\in S,\tau_{\alpha, \sigma}^\phi < T_{\sigma}\big). \end{align}

By using the strong Markov property of the solution, we get

\begin{equation*} \mathbb{P}\big(\Phi^\phi(T_{\sigma})\in S,\tau_{\alpha, \sigma}^\phi<T_{\sigma}\big) = \int_0^{T_{\sigma}}\bigg[\int_{\partial\mathcal{Z}}\mathbb{P}\big\{\Phi_{\alpha,\sigma}^\psi(T_{\sigma}-t)\in S\big\} \mathbb{P}\big\{\Phi_{\alpha,\sigma}^\phi(t)\in {\textrm{d}}\psi\big\}\bigg]P\big\{\tau_{\alpha,\sigma}^\phi\in {\textrm{d}} t\big\}. \end{equation*}

Note that when $\psi\in\partial\mathcal{Z}$ we have $\Phi_\alpha^\psi(T_{\sigma}-t)\not\in\mathcal{Z}$ for all $t\geq 0$ . Therefore, by Lemma 2.4 with $\varsigma=\delta_3/2$ we can find $\overline\sigma>\kappa>0$ and $0<\delta_4\leq \delta_2$ such that $\mathbb{P}\big(\Phi_{\alpha,\sigma}^\psi(T_{\sigma}-t)\in S\big) \leq \exp\big\{-{\kappa}/{\|\sigma\|^2}\big\}$ for $0\leq t\leq T_{\sigma}$ , $0<\|\sigma\|<\kappa$ , and $\alpha\in\mathcal{U}_{\delta_4}(\alpha_0)$ . Hence,

(2.21) \begin{equation} \mathbb{P}\big\{\Phi^\phi(T_{\sigma})\in S,\tau_{\alpha,\sigma}^\phi < T_{\sigma}\big\} \leq T_{\sigma}\exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\}, \quad \alpha\in\mathcal{U}_{\delta_4}(\alpha_0),\ 0<\|\sigma\|<\kappa. \end{equation}

Combining (2.20) and (2.21), we get

\begin{align*} \int_{S}P(T_{\sigma},\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) \leq \frac12\mu_{\alpha,\sigma}(S) + T_{\sigma}\exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\}, \quad \alpha\in\mathcal{U}_{\delta_4}(\alpha_0),\ 0<\|\sigma\|<\kappa. \end{align*}

On the other hand, when $\phi\in {\bf B}_R\setminus S$ we see that $\Phi_\alpha^\phi (T_{\sigma})\not\in \mathcal{Z}$ . Therefore, using (2.14) again we get

\begin{equation*}\int_{{\bf B}_R\setminus S}P(T_{\sigma},\phi,S)\mu_{\alpha,\sigma}({\textrm{d}}\phi) < \exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\}, \quad \alpha\in \mathcal{U}_{\delta_4}(\alpha_0), \ 0<\|\sigma\|<\kappa.\end{equation*}

Summing up, we have

\begin{align*} \mu_{\alpha,\sigma}(S) \leq \frac12\mu_{\alpha,\sigma}(S) + (T_{\sigma}+1)\exp\bigg\{{-}\frac{\kappa}{\|\sigma\|^2}\bigg\} + \varepsilon, \quad \alpha\in \mathcal{U}_{\delta_4}(\alpha_0), \ 0<\|\sigma\|<\kappa. \end{align*}

Noting that $\lim_{(\alpha,\sigma)\to(\alpha_0, 0)}(T_{\sigma}+1)\exp\big\{-{\kappa}/{\|\sigma\|^2}\big\}=0$, we can pass to the limit to get

\begin{equation*}\limsup_{(\alpha,\sigma)\to (\alpha_0,0)}\mu_{\alpha, \sigma}(S) \leq \frac12\limsup_{(\alpha,\sigma)\to (\alpha_0,0)}\mu_{\alpha, \sigma}(S)+\varepsilon.\end{equation*}

Since $\varepsilon>0$ is arbitrary, $\limsup_{(\alpha,\sigma)\to(\alpha_0,0)}\mu_{\alpha,\sigma}(S) = 0$ .

We now prove (2.19). Let $\mathcal{V}$ be an open set containing $\Gamma_{\alpha_0}$. It suffices to show that, for any compact set B intersecting neither $\mathcal{V}$ nor S, we have $\lim_{(\alpha,\sigma)\to(\alpha_0, 0)}\mu_{\alpha, \sigma}(B)=0$. Let $3d=\text{dist}\big(\partial\mathcal{V},\Gamma_{\alpha_0}\big)$ and $\mathcal{V}_d\big(\Gamma_{\alpha_0}\big) = \big\{x\,:\,\text{dist}\big(x,\Gamma_{\alpha_0}\big) < d\big\}$. From Lemma 1.2(ii), there exists $0<\delta_5<{\delta_4}$ such that $\Gamma_\alpha\subset\mathcal{V}_d(\Gamma_{\alpha_0})$ for all $\alpha\in\overline{\mathcal{U}_{\delta_5}}(\alpha_0)$. It is clear that $\text{dist}(B,\mathcal{V}_d(\Gamma_{\alpha_0}))>2d$. Let $\varepsilon>0$ and $R=R(\varepsilon)>0$ be such that $S,B\subset{\bf B}_R$ and $\mu_{\alpha, \sigma}\big({\bf B}_R^\textrm{c}\big)\leq \varepsilon$. Since $\Gamma_{\alpha_0}$ is asymptotically stable, we can find an open neighbourhood U of $\Gamma_{\alpha_0}$ such that $U\subset\mathcal{V}_d(\Gamma_{\alpha_0})$ and if $\phi \in U$ then $\Phi^\phi_{\alpha_0}(t)\in \mathcal{V}_d(\Gamma_{\alpha_0})$ for all $t\geq 0$. Further, since every orbit of the deterministic system starting outside $S$ is attracted to the limit cycle, $\lim_{t\to\infty}\text{dist}(\Phi^\phi_{\alpha_0}(t),\Gamma_{\alpha_0})=0$ for all $\phi\not\in S$. This means that, for every $\phi\in\overline{{{\bf B}}_R}\setminus S$, there exists a $T^\phi$ such that $\Phi_{\alpha_0}^\phi(t)\in U$ for all $t\geq T^\phi$.

By the continuity of solutions on the initial condition, there exists $\delta_{\phi}>0$ such that $\Phi_{ \alpha_0}^{z}(t)\in U$ for all $z\in \mathcal{V}_{\delta_\phi}(\phi)$ and $t>T^\phi$ . Since $\overline {{\bf B}}_R\setminus S$ is compact, there are $\phi_1,\phi_2,\ldots,\phi_n$ such that $\overline {{\bf B}}_R\setminus S\subset \cup_{k=1}^n \mathcal{V}_{\delta_{\phi_k}}(\phi_k)$ . Let $T=\max\big\{T^{\phi_1},T^{\phi_2},\ldots,T^{\phi_n}\big\}$ . It can be seen that $\Phi_{\alpha_0}^\phi(T)\in U\subset\mathcal{V}_d(\Gamma_{\alpha_0})$ for all $\phi\in\overline{{\bf B}}_R\setminus S$ . Similar to the above, we have

\begin{align*} \mu_{\alpha,\sigma}(B) & = \int_{{\bf B}_R\setminus S}P_{\alpha,\sigma}(T,\phi,B)\mu_{\alpha,\sigma}({\textrm{d}}\phi) + \int_{S}P_{\alpha,\sigma}(T,\phi,B)\mu_{\alpha,\sigma}({\textrm{d}}\phi)\\& \quad + \int_{{\bf B}_R^c}P_{\alpha,\sigma}(T,\phi,B)\mu_{\alpha,\sigma}({\textrm{d}}\phi) \\ & \leq \int_{{\bf B}_R\setminus S}P_{\alpha,\sigma}(T,\phi,B)\mu_{\alpha,\sigma}({\textrm{d}}\phi) + \mu_{\alpha,\sigma}(S) + \mu_{\alpha,\sigma}\big({\bf B}_R^\textrm{c}\big) \\ & \leq \int_{{\bf B}_R\setminus S}P_{\alpha,\sigma}(T,\phi,B)\mu_{\alpha,\sigma}({\textrm{d}}\phi) + \mu_{\alpha,\sigma}(S) + \varepsilon. \end{align*}

Using Lemma 2.4 again with $\varsigma=d$ , we can find $\delta_6<\delta_5$ and $\kappa_1$ such that

\begin{equation*} \mathbb{P}\big\{\big\|\Phi^\phi_{\alpha,\sigma}(T)-\Phi^\phi_{\alpha}(T)\big\|\geq\varsigma\big\} \leq \exp\bigg\{{-}\frac{\kappa_1}{\|\sigma\|^2}\bigg\}, \quad \alpha\in\mathcal{U}_{\delta_6}(\alpha_0),\ \phi\in {\bf B}_R\setminus S,\ 0<\|\sigma\|<\kappa_1. \end{equation*}

This inequality implies that

\begin{equation*} \mathbb{P}\big\{\Phi^\phi_{\alpha,\sigma}(T)\in\mathcal{V}\big\} \geq 1 - \exp\bigg\{{-}\frac{\kappa_1}{\|\sigma\|^2}\bigg\}, \quad \alpha\in\mathcal{U}_{\delta_6}(\alpha_0),\ \phi\in {\bf B}_R\setminus S,\ 0<\|\sigma\|<\kappa_1. \end{equation*}

Since $B\cap \mathcal{V}=\emptyset$, $P_{\alpha,\sigma}(T,\phi,B) =\mathbb{P}\big\{\Phi^\phi_{\alpha,\sigma}(T)\in B\big\} \leq \exp\big\{-{\kappa_1}/{\|\sigma\|^2}\big\}$ for all $\alpha\in\mathcal{U}_{\delta_6}(\alpha_0)$, $\phi\in{\bf B}_R\setminus S$, $0<\|\sigma\|<\kappa_1.$ Hence,

\begin{equation*} \mu_{\alpha,\sigma}(B) \leq \exp\bigg\{{-}\frac{\kappa_1}{\|\sigma\|^2}\bigg\}+\mu_{\alpha,\sigma}(S)+\varepsilon, \end{equation*}

and thus

\begin{equation*} \limsup_{(\alpha,\sigma)\to(\alpha_0,0)}\mu_{\alpha,\sigma}(B) \leq \lim_{(\alpha,\sigma)\to(\alpha_0,0)}\bigg(\exp\bigg\{{-}\frac{\kappa_1}{\|\sigma\|^2}\bigg\} + \mu_{\alpha,\sigma}(S) + \varepsilon\bigg) = \varepsilon. \end{equation*}

Since $\varepsilon$ is arbitrary, $\limsup_{(\alpha,\sigma)\to(\alpha_0, 0)}\mu_{\alpha,\sigma}(B)=0$ , which completes the proof.

Corollary 2.1 Suppose that all assumptions of Theorem 2.2 hold. If H is a continuous and bounded function defined on $\mathbb{R}^2_+$ , then

\begin{equation*} \lim_{(\alpha,\sigma)\to(\alpha_0,0)}\int_{\mathbb{R}_+^{2}}H(x)\,{\textrm{d}}\mu_{\alpha,\sigma}(x) = \frac1{T^*}\int_0^{T^*}H(\Phi_{\alpha_0}^{\overline\phi}(t))\,{\textrm{d}} t, \end{equation*}

where $\overline\phi$ is any point on $\Gamma_{\alpha_0}$ and $T^*$ is the period of the limit cycle, i.e. $\Phi_{\alpha _0}^{\overline \phi}(t+T^*)=\Phi_{\alpha_0 }^{\overline \phi}(t)$ .

Proof. Let $\widehat \Phi_{\alpha, \sigma}(\cdot)$ be the stationary solution of (1.4), whose distribution is $\mu_{\alpha, \sigma}$ . We can see that

\begin{equation*}\int_{\mathbb{R}_+^{2}}H(\phi)\,{\textrm{d}}\mu_{\alpha, \sigma}(\phi) = \mathbb{E}H\big(\widehat\Phi_{\alpha,\sigma}(t)\big), \quad \text{for all}\ t\geq 0.\end{equation*}

In particular,

\begin{equation*}\int_{\mathbb{R}_+^{2}}H(\phi)\,{\textrm{d}}\mu_{\alpha,\sigma}(\phi) = \frac1{T^*}\int_0^{T^*}\mathbb{E}H\big(\widehat\Phi_{\alpha,\sigma}(t)\big)\,{\textrm{d}} t,\end{equation*}

where $T^*$ is the period of the cycle. Since the measure $\mu_{\alpha, \sigma}(\cdot)$ becomes concentrated on the cycle $\Gamma_{\alpha_0 }$ as $(\alpha,\sigma)$ approaches $(\alpha_0,0)$ and H is a bounded continuous function, we obtain

\begin{equation*}\lim_{(\alpha,\sigma)\to(\alpha_0,0)}\int_{\mathbb{R}_+^{2}}H(\phi)\,{\textrm{d}}\mu_{\alpha,\sigma}(\phi) = \frac1{T^*}\int_0^{T^*}H\big(\Phi^{\overline\phi}_{\alpha_0}(t)\big)\,{\textrm{d}} t,\end{equation*}

where $\overline\phi$ is any point on the limit cycle $\Gamma_{\alpha_0}$ . This completes the proof.

Theorem 2.3 Let $\alpha_0 =(r_0,K_0,m_0,\beta_0,\gamma_0,a_0,b_0,c_0)$ be a vector of parameters of the system in (1.1) such that $\lambda_{\alpha_0}>0$ and

\begin{equation*}b_0 \geq \min\bigg\{\frac{c_0}{\beta_0}, \frac{m_0^2\beta_0^2-c_0^2\gamma_0^2}{\gamma_0\beta_0(m_0\beta_0-c_0\gamma_0)+m_0r_0\beta_0^2}\bigg\}.\end{equation*}

Then, there exist $\delta>0$ and $\overline \sigma>0$ such that, for all $\alpha\in\mathcal{U}_{\delta}(\alpha_0)$ and $0<\|\sigma\|\leq \overline \sigma$, the process $(x_{\alpha,\sigma}(t), y_{\alpha,\sigma}(t))$ has a stationary distribution $\mu_{\alpha, \sigma}$ concentrated on $\mathbb{R}^{2,\circ}_+$. Further, for any open set $\mathcal{V}$ containing the positive equilibrium point $\phi^*_{\alpha_0 }=(x^*_{\alpha_0 }, y^*_{\alpha_0 })$ of the system in (1.1), $\lim_{(\alpha, \sigma)\to(\alpha_0 , 0)}\mu_{\alpha, \sigma}(\mathcal{V})=1$.

Proof. The proof is quite similar to that of Theorem 2.2, so we omit it here.

3. Numerical example

Consider (1.1) with parameter vector $\alpha_0$ given by $r_0 = 1$, $K_0 = 5$, $m_0 = 9$, $a_0 = 1.75$, $b_0=1$, $c_0 =1$, $\gamma_0 = 0.6$, and $\beta_0 = 0.5$. Direct calculation shows that $\lambda_{\alpha_0}= 2.7582>0$ and that the system has a positive equilibrium $(x^*,y^*)=(0.3,0.233)$, at which $Df(x^*,y^*)$ has the two eigenvalues $0.0016 \pm 0.66\textrm{i}$. Further,

\begin{equation*}b_0<\min\bigg\{\frac{c_0 }{\beta_0 }, \frac{m_0^2\beta_0^2-c_0^2\gamma_0^2}{\gamma_0\beta_0(m_0\beta_0-c_0\gamma_0)+m_0r_0\beta_0^2}\bigg\}.\end{equation*}

Thus, this system has a limit cycle $\Gamma_0$, shown in Figure 1, starting from the point $(0.67,0.25)$. Let $\mathcal{V}$ be an $\varepsilon$-neighbourhood of $\Gamma_0$ with $\varepsilon=0.01$. For $\|\sigma\|\leq 1$ we have $\lambda_{\alpha,\sigma}>0$, which means that (1.4) has a unique stationary distribution $\mu_{\alpha,\sigma}$.
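These numerical claims can be re-derived programmatically. The sketch below recomputes $\lambda_{\alpha_0}$, the threshold for $b_0$, the positive equilibrium, and the spectrum of the Jacobian of the deterministic drift; the closed-form equilibrium and the Jacobian entries are our own reconstruction from the nullcline algebra of (1.1), not formulas quoted from the paper, so small rounding differences from the figures quoted above are expected:

```python
import numpy as np

# Parameters of the numerical example
r, K, m, a, b, c, gamma, beta = 1.0, 5.0, 9.0, 1.75, 1.0, 1.0, 0.6, 0.5

lam = -gamma + K*m*beta / (a + K*c)        # deterministic Lyapunov exponent
thresh = min(c/beta,
             (m**2*beta**2 - c**2*gamma**2)
             / (gamma*beta*(m*beta - c*gamma) + m*r*beta**2))

# Predator nullcline gives a + b*y + c*x = m*beta*x/gamma; substituting into the
# prey nullcline r(1 - x/K) = m*y/(a + b*y + c*x) yields a quadratic in x.
qa, qb, qc = b*r*beta/K, m*beta - gamma*c - b*r*beta, -gamma*a
x_s = (-qb + np.sqrt(qb**2 - 4*qa*qc)) / (2*qa)
y_s = ((m*beta - gamma*c)*x_s - gamma*a) / (gamma*b)

# Jacobian of the drift f at (x_s, y_s), with D = a + b*y + c*x
D = a + b*y_s + c*x_s
J = np.array([
    [r - 2*r*x_s/K - m*y_s*(a + b*y_s)/D**2, -m*x_s*(a + c*x_s)/D**2],
    [m*beta*y_s*(a + b*y_s)/D**2, -gamma + m*beta*x_s*(a + c*x_s)/D**2],
])
tr_J, det_J = np.trace(J), np.linalg.det(J)
print(lam, thresh, (x_s, y_s), np.linalg.eigvals(J))
```

A positive trace and determinant with negative discriminant confirm a complex pair of eigenvalues with positive real part, i.e. a source surrounded by the limit cycle.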

We estimate the probability $\mu_{\alpha,\sigma}(\mathcal{V})$ as $\sigma \to0$. To simplify the simulation, we fix all the other parameters and vary only a; the results are listed in Table 1. Simulated phase portraits of solutions of (1.4) are presented in Figures 1 and 2.
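A minimal Euler–Maruyama sketch of this experiment (our reconstruction: the drift of (1.4) is read off from (2.5) and the computations in the proof of Theorem 2.1, the scheme is applied to $(\ln x,\ln y)$ so that positivity is preserved exactly, and the step size and horizon are illustrative):

```python
import numpy as np

def simulate(alpha, sigma, phi0, T=20.0, dt=1e-3, seed=1):
    """Euler-Maruyama for (1.4) in log coordinates (keeps x, y > 0)."""
    r, K, m, beta, gamma, a, b, c = alpha   # parameter ordering as in Theorem 2.2
    s1, s2 = sigma
    n = round(T / dt)
    rng = np.random.default_rng(seed)
    lx, ly = np.log(phi0[0]), np.log(phi0[1])
    path = np.empty((n + 1, 2))
    path[0] = phi0
    for k in range(n):
        x, y = np.exp(lx), np.exp(ly)
        den = a + b*y + c*x
        # Ito-corrected drifts of ln x and ln y
        lx += (r*(1 - x/K) - m*y/den - s1**2/2)*dt + s1*np.sqrt(dt)*rng.standard_normal()
        ly += (-gamma + beta*m*x/den - s2**2/2)*dt + s2*np.sqrt(dt)*rng.standard_normal()
        path[k + 1] = np.exp(lx), np.exp(ly)
    return path

alpha0 = (1.0, 5.0, 9.0, 0.5, 0.6, 1.75, 1.0, 1.0)   # (r, K, m, beta, gamma, a, b, c)
path = simulate(alpha0, sigma=(0.1, 0.1), phi0=(0.67, 0.25))
print(path.shape, path.min(), path.max())
```

By ergodicity, the fraction of time a long trajectory spends in $\mathcal{V}$ approximates $\mu_{\alpha,\sigma}(\mathcal{V})$; repeating this with smaller $\|\sigma\|$ reproduces the concentration of mass near $\Gamma_0$.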

Table 1. Simulation results for the numerical example.

Figure 1. Left: Limit cycle $\Gamma_0$ of (1.1). Centre: A sample path of (1.4) when $a=1.78$ , $\sigma=0.5$ . Right: A sample path of (1.4) when $a=1.75$ , $\sigma=0.1$ .

Figure 2. Left: A sample path of (1.4) when $a=1.2$ , $\sigma=0.01$ . Right: A sample path of (1.4) when $a=1.71$ , $\sigma=0.001$ .

4. Discussion

Robustness plays an important role in the study of mathematical models in biology and other fields, since it ensures that outputs and forecasts remain accurate even when one or more of the input variables or assumptions vary. In practice, a parametric model is never exactly correct, so we require that the distribution of the estimator changes only slightly when the distribution of the observations deviates slightly from that of the strict parametric model with given parameter values. In this paper we have proved that the predator–prey model perturbed by white noise with Beddington–DeAngelis functional response (1.4) is robustly permanent and that the stationary distribution, when it exists, depends continuously (in the weak topology) on the data. The fact that the long-term population density concentrates in any neighbourhood of $\big(x^*_{\alpha_0}, y^*_{\alpha_0}\big)$ or of $\Gamma_{\alpha_0}$ as the parameter ${\alpha}$ approaches ${\alpha_0}$ is significant in practical problems, since it tells us that the long-term behaviour of the population stays close to the evolution of the corresponding deterministic system.

Acknowledgements

The authors would like to thank the referees and editor for their valuable comments and suggestions, which allowed us to improve the content. The second author would like to thank the Vietnam Institute for Advanced Study in Mathematics (VIASM) for its support, fruitful research environment, and kind hospitality during his research visit.

Funding information

The first, second, and fourth authors would like to thank Vietnam's Ministry of Education and Training for supporting their work with grant number B2023-TDV-02. The third author would like to thank the Vietnam National Foundation for Science and Technology Development (NAFOSTED) for supporting his work with grant number 101.03-2021.29.

Competing interests

There were no competing interests to declare that arose during the preparation or publication process of this article.

References

Arnold, L. and Kliemann, W. (1987). On unique ergodicity for degenerate diffusions. Stochastics 21, 41–61.
Beddington, J. R. (1975). Mutual interference between parasites or predators and its effect on searching efficiency. J. Animal Ecol. 44, 331–340.
Benaïm, M. (2018). Stochastic persistence. Preprint, arXiv:1806.08450.
DeAngelis, D. L., Goldstein, R. A. and O'Neill, R. V. (1975). A model for trophic interaction. Ecology 56, 881–892.
Dieu, N. T., Nguyen, D. H., Du, N. H. and Yin, G. (2016). Classification of asymptotic behavior in a stochastic SIR model. SIAM J. Appl. Dyn. Syst. 15, 1062–1084.
Du, N. H., Alexandru, H., Dang, N. H. and Yin, G. (2021). Dynamical systems under random perturbations with fast switching and slow diffusion: Hyperbolic equilibria and stable limit cycles. J. Differential Equat. 293, 313–358.
Du, N. H., Nguyen, D. H. and Yin, G. (2016). Conditions for permanence and ergodicity of certain stochastic predator–prey models. J. Appl. Prob. 53, 187–202.
Goel, N. S., Maitra, S. C. and Montroll, E. W. (1971). On the Volterra and Other Nonlinear Models of Interacting Populations. Academic Press, New York.
Han, M. (2006). Bifurcation theory of limit cycles of planar systems. In Handbook of Differential Equations: Ordinary Differential Equations Vol. 3, eds A. Cañada, P. Drábek and A. Fonda. Elsevier, Amsterdam, pp. 341–433.
Hening, A. and Nguyen, D. H. (2018). Coexistence and extinction for stochastic Kolmogorov systems. Ann. Appl. Prob. 28, 1893–1942.
Hening, A., Nguyen, D. H. and Chesson, P. (2021). A general theory of coexistence and extinction for stochastic ecological communities. J. Math. Biol. 82, 56.
Holling, C. S. (1959). The components of predation as revealed by a study of small-mammal predation of the European pine sawfly. Canadian Entomologist 91, 293–320.
Hwang, T. W. (2004). Uniqueness of limit cycles of the predator–prey system with Beddington–DeAngelis functional response. J. Math. Anal. Appl. 290, 113–122.
Jeschke, J. M., Kopp, M. and Tollrian, R. (2002). Predator functional responses: Discriminating between handling and digesting prey. Ecol. Monogr. 72, 95–112.
Ji, C. and Jiang, D. (2011). Dynamics of a stochastic density dependent predator–prey system with Beddington–DeAngelis functional response. J. Math. Anal. Appl. 381, 441–453.
Ji, C., Jiang, D. and Li, X. (2011). Qualitative analysis of a stochastic ratio-dependent predator–prey system. J. Comput. Appl. Math. 235, 1326–1341.
Kingsland, S. (1995). Modeling Nature: Episodes in the History of Population Ecology. University of Chicago Press.
Kink, P. (2018). Some analysis of a stochastic logistic growth model. Stoch. Anal. Appl. 36, 240–256.
Kliemann, W. (1987). Recurrence and invariant measures for degenerate diffusions. Ann. Prob. 15, 690–707.
Lotka, A. J. (1910). Contribution to the theory of periodic reactions. J. Phys. Chem. 14, 271–274.
Mao, X. (2007). Stochastic Differential Equations and Applications. Elsevier, Amsterdam.
Mao, X., Sabanis, S. and Renshaw, E. (2003). Asymptotic behaviour of the stochastic Lotka–Volterra model. J. Math. Anal. Appl. 287, 141–156.
Real, L. A. (1977). The kinetics of functional response. Amer. Naturalist 111, 289–300.
Rosenzweig, M. L. and MacArthur, R. H. (1963). Graphical representation and stability conditions of predator–prey interactions. Amer. Naturalist 97, 209–223.
Rudnicki, R. (2003). Long-time behaviour of a stochastic prey–predator model. Stoch. Process. Appl. 108, 93107.10.1016/S0304-4149(03)00090-5CrossRefGoogle Scholar
Schreiber, S. J., Benaïm, M. and Atchadé, K. A. S. (2011). Persistence in fluctuating environments. J. Math. Biol. 62, 655683.10.1007/s00285-010-0349-5CrossRefGoogle ScholarPubMed
Tuan, H. T., Dang, N. H. and Van, V. K. (2012). Dynamics of a stochastic predator–prey model with Beddington–DeAngelis functional response. Sci. Ser. A Math. Sci. (N.S.) 22, 7584.Google Scholar
Volterra, V. (1926). Variazioni e fluttuazioni del numero d’individui in specie animali conviventi. Mem. Acad. Lincei Roma 2, 31113.Google Scholar
Zhang, X. C., Sun, G. Q. and Jin, Z. (2012). Spatial dynamics in a predator–prey model with Beddington–DeAngelis functional response. Phys. Rev. 85, 021924.Google Scholar
Zou, X. and Fan, D. (2013). Stationary distribution and stochastic Hopf bifurcation for a predator–prey system with noises. Discrete Contin. Dyn. Syst. Ser. B 18, 15071519.Google Scholar
Table 1. Simulation results for numerical example.

Figure 1. Left: Limit cycle $\Gamma_0$ of (1.1). Centre: A sample path of (1.4) when $a=1.78$, $\sigma=0.5$. Right: A sample path of (1.4) when $a=1.75$, $\sigma=0.1$.

Figure 2. Left: A sample path of (1.4) when $a=1.2$, $\sigma=0.01$. Right: A sample path of (1.4) when $a=1.71$, $\sigma=0.001$.