1. Introduction
Consider an insurance company that invests its assets in a financial market. Such a situation carries two kinds of risk, as summarized by [Reference Norberg24] and [Reference Tang and Tsitsiashvili32]. One kind, called insurance risk in the literature, stems from insurance claims; the other, called financial risk, stems from financial investments. Both types of risk should be dealt with carefully in conducting solvency assessments of insurance companies. In this paper, we use multidimensional risk models to accommodate the two types of risk in both the discrete-time and the continuous-time case.
Major events, such as the COVID-19 crisis and natural catastrophes, significantly affect almost all economic sectors. This applies in particular to financial markets and the insurance industry, so it makes sense to consider interdependencies between financial and insurance risks. In this paper, we propose to describe the dependence structure through Assumption 1, within a framework of general multivariate regular variation (MRV). Under certain conditions, multiple dependence structures satisfy Assumption 1, such as those of Proposition 2 and the relation (14) in this paper. Moreover, Assumption 1 covers scale mixtures, which admit a variety of interpretations in different applications: for instance, they can reflect both individual and common factors of a risk class, and they can serve as models for loss variables. The implications of Assumption 1 are not confined to these cases, however, and further consequences emerge from a careful study of the assumption.
Since the pioneering work of [Reference Norberg24] and [Reference Tang and Tsitsiashvili32], various studies have been performed on discrete-time insurance risk models with financial and insurance risks. Some recent works include [Reference Chen6], [Reference Chen and Yuan9], [Reference Li and Tang21], [Reference Tang and Yang29], and [Reference Yang and Konstantinides33]. All of these works restrict attention to an insurance company with a single business line; however, insurance companies with multiple business lines also deserve special attention. In Section 4 of this paper, we introduce the multidimensional discrete-time risk model (16) with a dependence structure between financial and insurance risks satisfying Assumption 1. We then obtain a precise asymptotic expansion for the ruin probability that holds uniformly over the whole time horizon. Furthermore, we present a two-dimensional numerical example, whose contribution is twofold: first, it provides a specific dependence structure that satisfies Assumption 1; second, it demonstrates the accuracy of the asymptotic expansions for the ruin probability.
For the continuous-time risk model with financial and insurance risks, there is abundant literature on the asymptotic estimation of ruin probabilities; see [Reference Asmussen and Albrecher3], [Reference Cheng, Konstantinides and Wang10], [Reference Heyde and Wang12], [15], [Reference Li19], [Reference Paulsen26], and [Reference Tang, Wang and Yuen28]. However, almost all of these papers assume that financial and insurance risks are independent. Since financial and insurance risks over the same period are exposed to the same (or a similar) macroeconomic environment, this independence assumption is clearly unrealistic. In Section 5 of this paper, under Assumption 1, we construct a multidimensional continuous-time risk model with a dependence structure between insurance and financial risks. Three ruin probabilities in the infinite-time horizon are then investigated.
The rest of the paper consists of four parts. Section 2 describes regular variation and MRV and presents some properties of MRV. Section 3 introduces Assumption 1. Under Assumption 1, in Section 4 we study the d-dimensional discrete-time risk model (16) (for $d\geq1$) and construct a two-dimensional numerical example satisfying Assumption 1. In Section 5 we consider a d-dimensional continuous-time risk model under Assumption 1.
2. Preliminaries
If there exist two positive functions $g(\cdot,\cdot)$ and $f(\cdot,\cdot)$ satisfying
\begin{align*} l_1 = \liminf_{x\rightarrow\infty} \frac{f(x,t)}{g(x,t)} \leq \limsup_{x\rightarrow\infty} \frac{f(x,t)}{g(x,t)} = l_2,\end{align*}
then we say that $f(x,t)\lesssim g(x,t)$ holds if $l_2\leq1$; $f(x,t)\gtrsim g(x,t)$ holds if $l_1\geq1$; $f(x,t)\sim g(x,t)$ holds if $l_1=l_2=1$; and $f(x,t)\asymp g(x,t)$ holds if $0<l_1 \leq l_2< \infty$. All limit relations, unless otherwise stated, hold as $x\rightarrow\infty$. Throughout this paper, C denotes a generic positive constant that does not depend on x and may vary from one appearance to the next. Moreover, for any $x,y\in\mathbb{R}$, we write $x\wedge y=\min\left\{x,y\right\}$, $x\vee y=\max\left\{x,y\right\}$, and $x^{+}=x\vee 0$.
In this paper, random and real vectors, supposed to be d-dimensional, are expressed by bold letters. For two real vectors $\boldsymbol{a}=(a_1, \ldots,a_d)$ and $\boldsymbol{b}=(b_1,\ldots,b_d)$, operations between vectors such as $\boldsymbol{a}>\boldsymbol{b}$, $\boldsymbol{a}\pm\boldsymbol{b}$, $\boldsymbol{a}\boldsymbol{b}$, and $\boldsymbol{a}^{-1}$ are interpreted componentwise, and we let $[\boldsymbol{a},\boldsymbol{b}]=[a_1,b_1]\times\cdots\times[a_d,b_d]$ and $[\boldsymbol{a},\boldsymbol{\infty})=[a_1,\infty)\times\cdots\times[a_d,\infty)$. Additionally, for $y\in\mathbb{R}$, write $y\boldsymbol{a}=\left(ya_1,\ldots,ya_d\right)$ as usual. We also write $\mathbf{0}=(0,\ldots,0)$, $\mathbf{1}=(1,\ldots,1)$, and $\mathbf{e}_{k}=(0,\ldots,1,\ldots,0)$, where the value 1 occurs in the kth spot.
Definition 1. A distribution F on $[0,\infty)$ satisfying $\bar{F}(x)>0$ for all $x\geq0$ is said to be of regular variation, and we write $F\in\mathcal{R}_{-\alpha}$, if
\begin{align}  \lim_{x\rightarrow\infty} \frac{ \bar{F}(xy) }{ \bar{F}(x) } =y^{-\alpha},\qquad y>0, \end{align}
for some $0<\alpha<\infty$.
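As a small numerical aside (our illustration, not part of the original text), consider the hypothetical tail $\bar{F}(x)=x^{-\alpha}(1+1/x)$ for $x\geq1$: the factor $1+1/x$ is slowly varying, so $F\in\mathcal{R}_{-\alpha}$, and the defining limit (1) can be checked directly:

```python
# Hedged numerical sketch (illustrative tail, not from the paper):
# F̄(x) = x^{-alpha} * (1 + 1/x) for x >= 1; the slowly varying factor
# (1 + 1/x) tends to 1, so the defining limit (1) still gives y^{-alpha}.
ALPHA = 2.0

def tail(x, alpha=ALPHA):
    """Survival function of a hypothetical regularly varying distribution."""
    return x ** (-alpha) * (1.0 + 1.0 / x)

y = 3.0
for x in (1e2, 1e4, 1e6):
    ratio = tail(x * y) / tail(x)
    print(x, ratio)   # approaches y^{-alpha} = 3^{-2} ≈ 0.1111 as x grows
```

The slowly varying perturbation makes the ratio differ from $y^{-\alpha}$ at moderate x but vanishingly so as x grows.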
For any distribution $F\in\mathcal{R}_{-\alpha}$ with $0<\alpha<\infty$, we get by [Reference Bingham, Goldie and Teugels5, Proposition 2.2.3] that there exist positive constants $C_F>1$ and $D_F$ such that, for all $x, xy\geq D_F$ and any $0<\varepsilon<\alpha$,
\begin{align} \frac{1}{C_F}(y^{-\alpha+\varepsilon}\wedge y^{-\alpha-\varepsilon}) \leq \frac{\bar{F}(xy)}{\bar{F}(x)} \leq C_F(y^{-\alpha+\varepsilon}\vee y^{-\alpha-\varepsilon}).\end{align}
From the relation (2), it follows that, for $p>\alpha$,
\begin{align} x^{-p}=o(\bar{F}(x)),\qquad x\rightarrow \infty.\end{align}
Definition 2. An $\mathbb{R}_+^d$-valued random vector $\boldsymbol{Z}$ is said to follow a multivariate regularly varying distribution if there exist a Radon measure $\nu$ not identically 0 and a positive normalizing function $a(\!\cdot\!)$ monotonically increasing to $\infty$, such that, as $t\rightarrow\infty$,
\begin{align}  t \, \mathbb{P}\left( \frac{\boldsymbol{Z}}{a(t)} \in \cdot \right)  \stackrel{v}{\longrightarrow} \nu(\!\cdot\!)  \quad\textrm{on } [0,\infty]^d\backslash \{\textbf{0}\}, \end{align}
where $\stackrel{v}{\longrightarrow}$ denotes vague convergence.
From the above definition, one can show that the Radon measure $\nu$ is homogeneous; namely, there exists some $\alpha>0$, called the MRV index, such that the equality
\begin{align} \nu(sB)=s^{-\alpha}\nu(B)\end{align}
holds for every Borel set $B\subset[0,\infty]^d\backslash \{\textbf{0}\}$. For details, see [Reference Resnick27, p. 178]. From [Reference Konstantinides17, p. 459], we obtain $a(t)\in\mathcal{R}_{1/\alpha}$, and hence there exists some distribution $F\in\mathcal{R}_{-\alpha}$ such that
\begin{align*} \bar{F}(t) \sim \frac{1}{a^{\leftarrow}(t)}, \quad \textrm{as } t\rightarrow\infty,\end{align*}
where $a^{\leftarrow}(t)$ is the generalized inverse of a(t). Consequently, the relation (4) can be expressed as follows:
\begin{align} \frac{1}{\bar{F}(x)} \mathbb{P}\left(\frac{\boldsymbol{Z}}{x}\in\cdot\right) \stackrel{v}{\longrightarrow}\nu(\!\cdot\!) ,\quad\textrm{on } [0,\infty]^d\backslash \{\textbf{0}\}.\end{align}
Hence, we write $\boldsymbol{Z}\in \textrm{MRV}(\alpha,F,\nu)$. For more details on MRV, the reader is referred to [Reference Resnick27, Chapter 6] or [Reference Konstantinides17, Chapter 13].
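To make the normalized form (6) concrete, here is a hedged Monte Carlo sketch (our own toy example, not from the paper): take $\boldsymbol{Z}=(R,R/2)$ with R a standard Pareto(α) variable. Then $\boldsymbol{Z}$ is MRV with $\bar{F}(x)=x^{-\alpha}$, and the joint-exceedance ratio in (6) converges to $\nu\left((1,\infty]\times(1,\infty]\right)=2^{-\alpha}$.

```python
import random

# Hedged Monte Carlo sketch of relation (6) (illustrative example, not from
# the paper): Z = (R, R/2) with R Pareto(alpha), P(R > x) = x^{-alpha}, x >= 1.
# Since P(Z1 > x, Z2 > x) = P(R > 2x), the ratio to F̄(x) tends to 2^{-alpha}.
random.seed(42)
alpha, x, n = 2.0, 10.0, 1_000_000
pareto = lambda: random.random() ** (-1.0 / alpha)   # inverse-CDF Pareto sampling

hits = 0
for _ in range(n):
    r = pareto()
    z1, z2 = r, r / 2.0        # comonotone components: asymptotically dependent
    if z1 > x and z2 > x:
        hits += 1

est = hits / n / x ** (-alpha)    # empirical version of the ratio in (6)
print(est)                        # near 2^{-alpha} = 0.25
```

The comonotone construction is the simplest example in which the limit measure charges the interior set $(\mathbf{1},\boldsymbol{\infty})$, the situation required later by Assumption 1.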
Next we present a lemma that is important in the following proofs.
Lemma 1. Let the random vector $\boldsymbol{Z}\in \boldsymbol{MRV}(\alpha,F,\nu)$ for some $\alpha>0$, let $\boldsymbol{\xi}$ be a positive random vector with arbitrarily dependent components satisfying $\mathbb{E}\boldsymbol{\xi}^{\beta}<\boldsymbol{\infty}$ for some $\beta>\alpha$, and let $\left\{\Delta_t,t\in\mathcal{T}\right\}$ be a set of random events such that $\lim_{t\rightarrow t_0}\mathbb{P}(\Delta_t)=0$ for some $t_0$ in the closure of the index set $\mathcal{T}$. Furthermore, assume that $\left\{\boldsymbol{\xi},\left\{\Delta_t,t\in\mathcal{T}\right\}\right\}$ and $\boldsymbol{Z}$ are independent. Then
\begin{align*}  \lim_{t\rightarrow t_0}\limsup_{x\rightarrow\infty}  \frac{\mathbb{P}\left(\boldsymbol{Z}\boldsymbol{\xi}>x\,\textbf{1},\Delta_t\right)}  {\bar{F}(x)}  =0. \end{align*}
Proof. Since $\boldsymbol{Z}\in \textrm{MRV}(\alpha,F,\nu)$, each marginal tail of $\boldsymbol{Z}$ is regularly varying by [Reference Konstantinides17, p. 458]; that is, $Z_k\in\mathcal{R}_{-\alpha}$ for $1\leq k\leq d$. Then, for any fixed $1\leq k\leq d$,
\begin{align*}  \lim_{t\rightarrow t_0}\limsup_{x\rightarrow\infty}  \frac{\mathbb{P}\left(\boldsymbol{Z}\boldsymbol{\xi}>x\,\textbf{1},\Delta_t\right)}  {\bar{F}(x)}  \leq  \lim_{t\rightarrow t_0}\limsup_{x\rightarrow\infty}  \frac{ \mathbb{P}\left( Z_k\xi_k>x, \Delta_t \right) } { \bar{F}(x) }  =0, \end{align*}
where the last step uses [Reference Tang and Yuan30, Lemma 6.2]. This completes the proof.
3. Main assumption
This paper proposes the following assumption to describe certain dependence structures between the random vectors $\mathbf{U}$ and $\mathbf{V}$.
Assumption 1. Let $\left\{\boldsymbol{U}_i\boldsymbol{V}_i =(U_{1i}V_{1i}, \,\ldots, \, U_{di}V_{di} ), i\in\mathbb{N} \right\}$, $d\geq1$, be a sequence of independent and identically distributed (i.i.d.) nonnegative random vectors with generic vector $\boldsymbol{U}\boldsymbol{V} =(U_1V_1, \,\ldots, \, U_dV_d)\in\boldsymbol{MRV}(\alpha,F,\nu)$ such that $\nu\left((\textbf{1},\boldsymbol{\infty})\right)>0$.
Remark 1. From Assumption 1, we can derive that
\begin{align}  \lim_{x\rightarrow\infty}  \frac{\mathbb{P}\left(\bigcap_{k=1}^d\left\{U_k V_k>x\right\}\right)}  {\bar{F}(x)}  =\nu\left((\textbf{1},\boldsymbol{\infty})\right)>0   \end{align}
and
\begin{align}  \lim_{x\rightarrow\infty}  \frac{ \mathbb{P}\left(U_k V_k>x\right) } { \bar{F}(x) }  =\nu\left(\left(\mathbf{e}_{k},\boldsymbol{\infty}\right]\right)  \;:\!=\;a_k >0, \qquad 1\leq k \leq d.   \end{align}
The relation (7) indicates that $U_1V_1,\ldots,U_dV_d$ reveal large joint movements and accordingly are asymptotically dependent. The relation (8) indicates that the tails of the products $U_1V_1,\ldots,U_dV_d$ are regularly varying and that $U_iV_i$ and $U_jV_j$ have comparable tails. Furthermore, note that by (5) and (7), for any $(b_1, \ldots, b_d)\in[0,\infty]^d\backslash \{\textbf{0}\}$,
\begin{align*}  \lim_{x\rightarrow\infty}  \frac{\mathbb{P}\left(\bigcap_{k=1}^d\left\{U_k V_k>b_kx\right\}\right)}  {\bar{F}(x)}  =\nu\left((b_1,\infty]\times\cdots\times(b_d,\infty]\right)>0. \end{align*}
Remark 2. Set $\mathbf{U}=U\mathbf{1}$; then the product $\mathbf{UV}$ can be rewritten as
\begin{align}  \mathbf{UV}=(UV_1, \ldots, UV_d), \end{align}
which is called a scale mixture. This structure arises in various areas, including investment and insurance portfolios, and it admits different interpretations in different applications. For example, [Reference Li and Sun18] developed a method to estimate the tail dependence of heavy-tailed scale mixtures of multivariate distributions, and [Reference Zhu and Li34] studied the asymptotic relations between the value-at-risk and the multivariate tail conditional expectation for such mixtures. Additionally, the class (9) of loss distributions is a subset of the multivariate regularly varying distributions considered in [Reference Joe and Li14], but it still contains a wide variety of loss distributions. Moreover, the scale mixture can reflect both individual factors (via the $V_k$) and a common factor (via U) of a risk class: for instance, U may be a common systemic risk factor associated with macroeconomic conditions, including regulatory and economic environments, while the quantities $V_k$, $1\leq k\leq d$, are individual risks representing individual business risks and assets.
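For the scale mixture (9), a Breiman-type argument (our illustration, under assumptions stated in the comments, not a result quoted from the paper) suggests that for U Pareto(α) independent of bounded factors $V_1,V_2$, the joint tail satisfies $\mathbb{P}(UV_1>x, UV_2>x)\sim\mathbb{E}\left[(V_1\wedge V_2)^{\alpha}\right]\bar{F}(x)$. A hedged Monte Carlo sketch:

```python
import random

# Hedged Monte Carlo sketch of the scale mixture (9) with d = 2 (illustrative
# parameters, not from the paper): U is Pareto(alpha), P(U > x) = x^{-alpha}
# for x >= 1, independent of V = (V1, V2) with V1, V2 i.i.d. Uniform(0, 1).
# Breiman's argument suggests
#     P(U V1 > x, U V2 > x) ~ E[(V1 ∧ V2)^alpha] * P(U > x).
random.seed(7)
alpha, x, n = 2.0, 5.0, 2_000_000

hits = 0
for _ in range(n):
    u = random.random() ** (-1.0 / alpha)        # Pareto(alpha) common factor
    v1, v2 = random.random(), random.random()    # independent individual factors
    if u * v1 > x and u * v2 > x:
        hits += 1

# E[(V1 ∧ V2)^alpha]: the minimum of two Uniform(0,1) has density 2(1 - v),
# so E[min^2] = 2 * (1/3 - 1/4) = 1/6.
est = hits / n / x ** (-alpha)
print(est)    # near 1/6 ≈ 0.1667
```

The common factor U drives both components simultaneously, which is exactly the mechanism producing the asymptotic dependence in relation (7).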
Remark 3. Since the structure $\mathbf{UV}\in\textrm{MRV}(\alpha, F, \nu)$ in Assumption 1 makes no dependence assumption between the random vectors $\mathbf{U}$ and $\mathbf{V}$, it allows for a wide range of dependence structures between $\mathbf{U}$ and $\mathbf{V}$, and many dependence structures satisfy Assumption 1 under certain conditions (such as those of Proposition 2 and the relation (14)). This further enhances the practical and theoretical interest of Assumption 1.
For Assumption 1, the following question arises naturally: under what conditions is the distribution of $\mathbf{UV}$ MRV? This question has received an increasing amount of attention. The paper [Reference Basrak, Davis and Mikosch4] established the MRV of the product $\boldsymbol{UV}$ when $\mathbf{U}$ is MRV and independent of $\mathbf{V}$, as in Proposition 1 (see also [Reference Basrak, Davis and Mikosch4, Appendix], [Reference Fougeres and Mercadier11, Theorem 1], and [Reference Konstantinides and Li16, Lemma 3.1]).
Proposition 1. Let the random vector $\boldsymbol{U}\in {\boldsymbol{MRV}}(\alpha,F,\nu)$ for some $\alpha>0$, and let $\boldsymbol{V}$ be a nonnegative random vector with arbitrarily dependent components satisfying $\mathbb{E}\boldsymbol{V}^p<\boldsymbol{\infty}$ for some $p>\alpha$. Assume that $\boldsymbol{V}$ and $\boldsymbol{U}$ are independent. Then the following relation holds for every Borel set $K\subset[0,\infty]^d\backslash \{\textbf{0}\}$:
\begin{align*}  \lim_{x\rightarrow\infty}\frac{1}{\bar{F}(x)}  \mathbb{P}\left(\frac{\boldsymbol{UV}}{x}\in K\right)  =\mathbb{E}\left[\nu(\boldsymbol{V}^{-1}K)\right], \end{align*}
where
\begin{align*}  \boldsymbol{V}^{-1}K  &=\left\{\left(b_1,\ldots,b_d\right)  \,|\,\left(V_1b_1,\ldots,V_db_d\right)\in K\right\}\\[5pt]   &=\left\{\left(V^{-1}_1a_1,\ldots,V^{-1}_da_d\right), \,  \left(a_1,\ldots,a_d\right)\in K\right\}. \end{align*}
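Proposition 1 can be illustrated numerically (a hedged sketch with hypothetical choices, not taken from the paper). Take $\mathbf{U}=(R_1,R_2)$ with i.i.d. Pareto(α) components, for which the limit measure $\nu$ concentrates on the axes, and $K=\{\boldsymbol{z}\;:\;z_1>1 \textrm{ or } z_2>1\}$; then $\mathbb{E}[\nu(\boldsymbol{V}^{-1}K)]=\mathbb{E}V_1^{\alpha}+\mathbb{E}V_2^{\alpha}$.

```python
import random

# Hedged Monte Carlo sketch of Proposition 1 (illustrative setup, not from the
# paper): U = (R1, R2) with R1, R2 i.i.d. Pareto(alpha), P(R > x) = x^{-alpha},
# is MRV with limit measure nu charging only the axes; V = (V1, V2) has i.i.d.
# Uniform(0, 1) components and is independent of U.  For the "or"-exceedance
# set K = {z : z1 > 1 or z2 > 1}, the proposition gives
#     P(UV/x in K) / F̄(x) -> E[nu(V^{-1}K)] = E[V1^alpha] + E[V2^alpha].
random.seed(11)
alpha, x, n = 2.0, 5.0, 2_000_000
pareto = lambda: random.random() ** (-1.0 / alpha)

hits = 0
for _ in range(n):
    u1, u2 = pareto(), pareto()
    v1, v2 = random.random(), random.random()
    if u1 * v1 > x or u2 * v2 > x:      # the event {UV/x in K}
        hits += 1

est = hits / n / x ** (-alpha)
print(est)    # near E[V1^2] + E[V2^2] = 1/3 + 1/3 = 2/3
```

Because $\nu$ charges only the axes here, a joint-exceedance set would receive limit mass 0; the "or" set K is the natural choice with a positive limit in this asymptotically independent example.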
The paper [Reference Chen, Wang and Zhang7] studied the tail asymptotics of the nonnegative random loss $\sum_{i=1}^d R_iS$, where the stand-alone risk vector $\mathbf{R}=(R_1, \, \ldots, \, R_d)$ follows a multivariate regularly varying distribution with index $\alpha>0$ and is independent of S, which represents the background risk factor. Furthermore, [Reference Chen, Wang and Zhang7] also assumed that $\mathbb{E}S^{\alpha+\delta}<\infty$ for some $\delta>0$. Essentially, the conditions in [Reference Chen, Wang and Zhang7] imply, by Proposition 1, that the random vector $(R_1S, \, \ldots, \, R_dS)$ is MRV.
The hypothesis of independence between $\mathbf{U}$ and $\mathbf{V}$ in Proposition 1 may be too strong in certain settings; if it can be weakened, Assumption 1 becomes broader and more meaningful. The paper [Reference Fougeres and Mercadier11] improved Proposition 1 to the following result, Proposition 2, which is meaningful in the context of actuarial risk theory.
Proposition 2. Assume that
\begin{align}  t \, \mathbb{P}\left( \left( \frac{\mathbf{U}}{a(t)}, \, \mathbf{V} \right)  \in \cdot \right)  \stackrel{v}{\longrightarrow} (\nu \times L) (\!\cdot\!) \end{align}
on the Borel sets of $([0,\infty]^d\backslash \{\textbf{0}\}) \times [0,\infty]^d$, where L is a probability measure on $[0,\infty]^d$ and $\nu$ is a Radon measure on $[0,\infty]^d\backslash \{\textbf{0}\}$ not concentrated at $\infty$. Suppose that, for some $\delta>0$ and every $i=1, \, 2, \, \ldots, \, d$,
\begin{align}  \lim_{\varepsilon\rightarrow0} \, \limsup_{t\rightarrow\infty}  t \mathbb{E}\left[ \left( \frac{|\mathbf{U}| \, |V_i|}{a(t)} \right)^{\delta}  \mathbb{I}_{[|\mathbf{U}| / a(t) \leq \varepsilon ]} \right]  =0, \end{align}
and also that
\begin{align}  \int_{[\mathbf{0}, \, \boldsymbol{\infty}]}  || \mathbf{v} ||^{\alpha} L(\textrm{d}\mathbf{v}) <\infty, \end{align}
where $\mathbb{I}_{[E]}$ is the indicator of the Borel set E and $|| \cdot ||$ denotes some norm on $\mathbb{R}^d$. Then, under the relations (10), (11), and (12), the random vector $\mathbf{UV}$ follows a multivariate regularly varying distribution. More precisely,
\begin{align*}  t \mathbb{P}(\mathbf{UV}\in a(t) \cdot ) \stackrel{v}{\longrightarrow} \nu_L(\!\cdot\!) \end{align*}
as $t\rightarrow\infty$, where $\nu_L$ is the measure defined on $[0,\infty]^d\backslash \{\textbf{0}\}$ by
\begin{align*}  \nu_L(A)  =(\nu \times L) (\{(\mathbf{x},\mathbf{y})\;:\; \mathbf{x}\in \mathbf{y}^{-1}A\})  =\int_{[0, \infty]^d} \nu(\mathbf{y}^{-1}A)L(\textrm{d}\mathbf{y}). \end{align*}
Assume that $(U_1, \ldots, U_d, V)$, $d\geq1$, is a positive random vector. Li [Reference Li19] introduced the following dependence structure among the components of $(U_1, \ldots, U_d, V)$:
• There are some d-variate function $f\;:\; [0,\infty]^d\backslash \{\textbf{0}\} \mapsto (0, \infty)$, some univariate function $h\;:\; (0,\infty) \mapsto (0,\infty)$ satisfying
\begin{align}   0 < \inf_{0 < t < \infty} h(t)   \leq \sup_{0 < t < \infty} h(t) <\infty,  \end{align}
and some distribution F supported on $(0,\infty)$ with an infinite upper endpoint, such that for every $\mathbf{b} \in [0,\infty]^d\backslash \{\textbf{0}\}$ the relation
\begin{align}   \mathbb{P}(U_1>b_1u,\ldots,U_d>b_du \,|\, V = v)   \sim   f(\mathbf{b}) h(v) \bar{F}(u)  \end{align}
holds uniformly for $v\in(0,\infty)$ as $u\rightarrow\infty$.
According to the relation (2.10) in [Reference Li19], taking $\mathbf{b}=\mathbf{e}_{k}$, $1\leq k\leq d$, in (14) yields that the relation
\begin{align} \mathbb{P}(U_k>u|V=v) \sim \frac{1}{\mu}h(v)\bar{F}_k(u)\end{align}
holds uniformly for $v\in(0,\infty)$ as $u\rightarrow\infty$, where $F_k$ is the distribution function of $U_k$ and $\mu=\mathbb{E}[h(V)]\in(0,\infty)$. Therefore, the dependence structure specified in (14) implies that each marginal vector $(U_k,V)$ follows the parallel two-dimensional dependence structure shown in (15). The structure (15) was introduced in [Reference Asimit and Jones2] and further developed in [Reference Asimit and Badescu1] and [Reference Li, Tang and Wu22]; it includes a wide range of commonly used bivariate copulas.
 Now define a new positive random variable $V^*$ with distribution
\begin{align*} \mathbb{P}(V^*\in \textrm{d}v)=\frac{1}{\mu}h(v)\mathbb{P}(V\in \textrm{d}v).\end{align*}
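To make this change of measure concrete, the following minimal sketch samples $V^*$ by acceptance-rejection: draw $V$, then accept with probability $h(V)$ divided by an upper bound on $h$. The concrete choices here are hypothetical illustrations, not part of the paper: $V\sim\textrm{Exp}(\lambda)$ with $\lambda=2$, and the bounded weight $h(v)=1+\gamma(1-2e^{-\lambda v})$ with $\gamma=0.5$, for which $\mu=\mathbb{E}[h(V)]=1$ and $\mathbb{E}[V^*]=\mathbb{E}[Vh(V)]/\mu=0.625$ in closed form.

```python
import math
import random

random.seed(0)
lam, gamma = 2.0, 0.5  # hypothetical parameters, chosen only for illustration

def h(v):
    # bounded weight function, with values in [1 - gamma, 1 + gamma]
    return 1.0 + gamma * (1.0 - 2.0 * math.exp(-lam * v))

def sample_v_star():
    # acceptance-rejection: V* has density h(v)/mu times the Exp(lam) density
    while True:
        v = random.expovariate(lam)
        if random.random() <= h(v) / (1.0 + gamma):
            return v

N = 200_000
samples = [sample_v_star() for _ in range(N)]
mean_v_star = sum(samples) / N
# exact value for these parameters: E[V h(V)] / E[h(V)] = 0.625 / 1
print(mean_v_star)
```

The closed-form mean gives a quick sanity check that the tilted sampler is correct.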
 For simplicity, we introduce a new random variable $\xi$ with distribution function $F\in\mathcal{R}_{-\alpha}$ for $0<\alpha<\infty$, and let $\xi$ and $V^*$ be independent of all other sources of randomness. If
\begin{align*} \mathbb{E}(V^{\alpha+\varepsilon})<\infty\end{align*}
holds for some $\varepsilon>0$, then by (13),
\begin{align*}\mathbb{E}[(V^{*})^{\alpha+\varepsilon}]=\frac{1}{\mu}\int_{0}^{\infty}v^{\alpha+\varepsilon}h(v)\mathbb{P}(V\in \textrm{d}v)\leq C\mathbb{E}(V^{\alpha+\varepsilon})<\infty,\end{align*}
and for $\frac{\alpha}{\alpha+\varepsilon}<p<1$ and large $u>0$,
\begin{align*}\mathbb{P}(V>u^p)\leq \mathbb{E}(V^{\alpha+\varepsilon})u^{-p(\alpha+\varepsilon)}=o(1)\bar{F}(u)\end{align*}
and
\begin{align*}\mathbb{P}(V^*>u^p)\leq \mathbb{E}[(V^{*})^{\alpha+\varepsilon}]u^{-p(\alpha+\varepsilon)}=o(1)\bar{F}(u)\end{align*}
hold. Consequently, by (14),
\begin{align*} \mathbb{P}(\mathbf{U} V>\mathbf{b}u) &=\mathbb{P}(\mathbf{U}V>\mathbf{b}u, 0<V\leq u^p) + \mathbb{P}(\mathbf{U} V>\mathbf{b}u, V> u^p)\\[5pt]  &=\int_{0+}^{u^p}\mathbb{P} \left(\mathbf{U}>\mathbf{b}\frac{u}{v}|V=v\right) \mathbb{P}(V\in \textrm{d}v) +o(1)\bar{F}(u)\\[5pt]  &\sim \mu f(\mathbf{b}) \int_{0+}^{u^p} \bar{F}\left(\frac{u}{v}\right)\mathbb{P}(V^*\in \textrm{d}v)\\[5pt]  &=\mu f(\mathbf{b})\left[ \mathbb{P}(\xi V^*>u)-\mathbb{P}(\xi V^*>u, V^*>u^p) \right]\\[5pt]  &\sim\mu f(\mathbf{b}) \mathbb{E}[(V^*)^{\alpha}]\bar{F}(u),\end{align*}
where $\mathbb{P}(\xi V^*>u)\sim \bar{F}(u)\mathbb{E}[(V^*)^{\alpha}]$ follows from the relation (4.4) in [Reference Tang, Wang and Yuen28]. Therefore, $(U_1V,\ldots,U_dV)$ satisfies Assumption 1.
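The last step rests on the Breiman-type relation $\mathbb{P}(\xi V^*>u)\sim \mathbb{E}[(V^*)^{\alpha}]\bar{F}(u)$, which is easy to check numerically. The sketch below uses hypothetical choices not taken from the paper: $\xi$ standard Pareto with $\alpha=2$, so $\bar{F}(u)=u^{-2}$ for $u\geq1$, and $V^*$ uniform on $(0,2)$, so $\mathbb{E}[(V^*)^{2}]=4/3$.

```python
import random

random.seed(1)
alpha = 2.0
u = 50.0  # threshold well inside the tail

N = 2_000_000
hits = 0
for _ in range(N):
    # Pareto via inversion: P(xi > x) = x^(-alpha) for x >= 1
    xi = (1.0 - random.random()) ** (-1.0 / alpha)
    v_star = 2.0 * random.random()  # uniform on (0, 2)
    if xi * v_star > u:
        hits += 1

empirical = hits / N
predicted = (4.0 / 3.0) * u ** (-alpha)  # E[(V*)^alpha] * bar{F}(u)
print(empirical, predicted)
```

Since $V^*\leq 2$ here, $u/V^*\geq 25\geq 1$, so the relation is in fact exact for this pair and the comparison only measures Monte Carlo noise.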
4. The study of a d-dimensional discrete-time risk model under Assumption 1
 This section considers an insurer who runs multiple lines of business and invests in risky assets along the individual lines. For every $i \in \mathbb{N}=\{1, \, 2, \, \ldots\}$ and integer $d\geq1$, the real-valued random variable $X_{ki}$, $1\leq k \leq d$, denotes the net insurance loss (the aggregate claim amount minus the aggregate premium income) of the $k$th business line of the insurer over the period $i$, and the positive random variable $\theta_{ki}$ denotes the discount factor, related to the return on the investment, of the $k$th business line over the same period. Let $\{(X_{1},\ldots,X_{d}), (X_{1i},\ldots,X_{di}), i\in \mathbb{N}\}$ and $\{(\theta_{1},\ldots,\theta_{d}), (\theta_{1i},\ldots,\theta_{di}), i\in \mathbb{N}\}$ be sequences of i.i.d. random vectors. The stochastic discounted value of the total net insurance losses of the insurance company up to time $n$ can be described as
\begin{align} \mathbf{S}_n =(S_{1n}, \, \ldots , \, S_{dn}) &=\left( \sum_{i=1}^n X_{1i} \prod_{j=1}^{i} \theta_{1j}, \, \ldots, \, \sum_{i=1}^n X_{di} \prod_{j=1}^{i} \theta_{dj} \right)\nonumber\\[5pt]  &=\left( \sum_{i=1}^n X_{1i} Y_{1i}, \, \ldots, \, \sum_{i=1}^n X_{di} Y_{di} \right), \quad n\in \mathbb{N},\end{align}
where multiplication over the empty set is understood to be 1, and $Y_{ki}=\prod_{j=1}^{i} \theta_{kj}$, $1\leq k\leq d$. The vector $\boldsymbol{\rho}x = (\rho_1x, \, \ldots, \, \rho_dx)$ denotes the initial reserves assigned to the different business lines, with positive $\rho_1, \, \ldots, \, \rho_d$ such that $\sum_{k=1}^d \rho_k =1$. For $n\in\mathbb{N}$, the ruin probability is defined as
\begin{align*} \Psi(x, n) &\;:\!=\;\mathbb{P}\left( \bigcap_{k=1}^d \left\{\max_{1\leq m\leq n}{S}_{km} >\rho_k x \right\} \right) =\mathbb{P}\left(\max_{1\leq m\leq n}\mathbf{S}_m >\boldsymbol{\rho}x \right)\\[5pt]  &=\mathbb{P}\left(\max_{1\leq m\leq n}\sum_{i=1}^m \boldsymbol{X}_{i}\boldsymbol{Y}_{i} >\boldsymbol{\rho}x \right),\end{align*}
which is the probability that the insurance company's wealth process falls below zero by time $n$. When $d=1$, [Reference Fougeres and Mercadier11] considered the ruin probability of (16) when the random vector
\begin{equation*} (X_{11}Y_{11}, X_{12}Y_{12}, \,\ldots,\, X_{1n}Y_{1n})\end{equation*}
is MRV, and [Reference Tang and Yuan31] studied the ruin probability of (16) when the random vectors $(X_{1i}, \theta_{1i})$, $i\in\mathbb{N}$, follow a bivariate regular variation structure. When $d=2$, it follows from [Reference Chen and Yang8] that the random vector
\begin{equation*} \left(\sum_{i=1}^mX_{1i}\prod_{j=1}^{i}\theta_{1j}, \sum_{i=1}^mX_{2i}\prod_{j=1}^{i}\theta_{2j}\right)\end{equation*}
still follows a bivariate regular variation structure if the pairs $(X_{1i}\theta_{1i}, X_{2i}\theta_{2i})$, $i\in\mathbb{N}$, follow some bivariate regular variation structure. In this section, we assume that $(X_{1i}^+\theta_{1i}, \ldots, X_{di}^+\theta_{di})$, $i\in\mathbb{N}$ and $d\geq1$, is a sequence of i.i.d. random vectors with generic vector $(X_{1}^+\theta_{1}, \ldots, X_{d}^+\theta_{d})\in \textrm{MRV}(\alpha, F,\nu)$ such that $\nu((\textbf{1},\boldsymbol{\infty}))>0$; we then study the asymptotic formula for the ruin probability uniformly for $n\in\mathbb{N}$.
Theorem 1. Consider the d-dimensional discrete-time risk model (16). For each $i\in\mathbb{N}$, assume that $\boldsymbol{X}_{i}^+ \boldsymbol{\theta}_{i}$ satisfies Assumption 1. If there is a constant $\beta>\alpha$ such that $\mathbb{E}\boldsymbol{\theta}^{\beta}<\textbf{1}$, then it holds uniformly for $n\in\mathbb{N}$ that
\begin{align}  \Psi(x, n)  &=\mathbb{P}\left( \max_{1\leq m\leq n}\sum_{i=1}^m \boldsymbol{X}_{i}  \boldsymbol{Y}_{i} > \boldsymbol{\rho}x \right)\nonumber\\[5pt]   &\sim \mathbb{P}\left( \sum_{i=1}^n\boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)  \sim \bar{F}(x)\sum_{i=1}^n  \mathbb{E}\left[\nu(\boldsymbol{Y}_{i-1}^{-1}(\boldsymbol{\rho}, \boldsymbol{\infty}]) \right]. \end{align}
4.1. Numerical results
 This subsection presents a two-dimensional numerical example to examine the accuracy of Theorem 1. All computations are conducted in R. For simplicity, we assume that $\theta_{1}=\theta_{2}=\theta$, and let $\theta$ be exponentially distributed with parameter $\lambda=2$.
 We first construct a random pair $(X_1, X_2)$. Let the random variable $Z$ have a Pareto distribution function $F$ with scale parameter $\kappa=4$ and shape parameter $\alpha=2$; that is, $F(x) = 1-\left( \frac{\kappa}{\kappa+x} \right)^{\alpha} \in \mathcal{R}_{-\alpha}$, $x>0$. Suppose that the dependence structure between $Z$ and $\theta$ is given by a Farlie–Gumbel–Morgenstern (FGM) copula,
\begin{align} C(u,v)=uv+\gamma uv(1-u)(1-v), \quad \gamma\in[-1,1], \, (u,v)\in[0,1]^2.\end{align}
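Sampling from the FGM copula (18) is straightforward by conditional inversion, since $\partial C/\partial u$ is a quadratic in $v$. The sketch below (a non-authoritative illustration with $\gamma=0.5$, separate from the paper's R computations) draws copula pairs $(u,v)$ and checks Spearman's rho, which equals $\gamma/3$ for the FGM family.

```python
import math
import random

random.seed(2)
gamma = 0.5

def fgm_pair():
    # conditional inversion: solve dC/du = t for v, i.e. a*v^2 - (1+a)*v + t = 0
    u, t = random.random(), random.random()
    a = gamma * (1.0 - 2.0 * u)
    if abs(a) < 1e-12:
        v = t
    else:
        v = ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * t)) / (2.0 * a)
    return u, v

N = 200_000
pairs = [fgm_pair() for _ in range(N)]
mu_u = sum(u for u, _ in pairs) / N
mu_v = sum(v for _, v in pairs) / N
cov = sum((u - mu_u) * (v - mu_v) for u, v in pairs) / N
var_u = sum((u - mu_u) ** 2 for u, _ in pairs) / N
var_v = sum((v - mu_v) ** 2 for _, v in pairs) / N
spearman = cov / math.sqrt(var_u * var_v)  # Pearson corr of uniforms = Spearman's rho
print(spearman)  # should be close to gamma / 3
```

Transforming the margins, e.g. $Z=\kappa\big((1-u)^{-1/\alpha}-1\big)$ and $\theta=-\lambda^{-1}\log(1-v)$, then yields a dependent pair $(Z,\theta)$ with the FGM copula.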
Then, by [Reference Asimit and Badescu1, Example 2.2] or [Reference Li, Tang and Wu22, Example 3.2], the random pair $(Z, \theta)$ satisfies
\begin{align*} \mathbb{P}(Z>z\,\big|\, \theta=t) \sim \bar{F}(z) h(t), \quad\textrm{uniformly for all } t\in[0,\infty),\end{align*}
with $h(t)=1+\gamma(1-2e^{-\lambda t})$. Let $(X_1, X_2)=(\zeta_1Z, \zeta_2Z)$, where $\zeta_k$, $k=1,2$, are uniform on $[0,1]$ and independent of $(Z, \theta)$; in particular, $\mathbb{E}\zeta_k^q<\infty$ for every $q>\alpha$.
 We next verify that $(X_1\theta, X_2\theta)$ satisfies Assumption 1. Using [Reference Li19, Example 3.1], we obtain
\begin{align*} \mathbb{P}(X_1>b_1x, X_2>b_2x\,|\,\theta=t) \sim V(b_1, b_2) h(t)\bar{F}(x), \quad\textrm{uniformly for all } t\in[0,\infty),\end{align*}
where $b_1,\, b_2>0$ are constants, and
\begin{align*} V(b_1, b_2) =\mathbb{E}\left( \frac{\zeta_1^{\alpha}}{b_1^{\alpha}}\bigwedge \frac{\zeta_2^{\alpha}}{b_2^{\alpha}} \right) =\frac{b_1+b_2}{(\alpha+1)(b_1\vee b_2)^{\alpha+1}} -\frac{2b_1b_2}{(\alpha+2)(b_1\vee b_2)^{\alpha+2}}.\end{align*}
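As a quick sanity check on the closed form of $V(b_1,b_2)$ (a sketch written here in Python, not part of the paper's R code), one can compare it with a direct Monte Carlo evaluation of $\mathbb{E}\big(\zeta_1^{\alpha}/b_1^{\alpha}\wedge \zeta_2^{\alpha}/b_2^{\alpha}\big)$ for $\alpha=2$; for instance, $V(1,1)=1/6$ and $V(1,0.5)=1/4$.

```python
import random

random.seed(3)
alpha = 2.0

def V_closed(b1, b2):
    # closed form of V(b1, b2) from the text
    m = max(b1, b2)
    return (b1 + b2) / ((alpha + 1) * m ** (alpha + 1)) \
         - 2.0 * b1 * b2 / ((alpha + 2) * m ** (alpha + 2))

def V_mc(b1, b2, N=1_000_000):
    # direct Monte Carlo evaluation of E(min(zeta1^a / b1^a, zeta2^a / b2^a))
    s = 0.0
    for _ in range(N):
        z1, z2 = random.random(), random.random()
        s += min((z1 / b1) ** alpha, (z2 / b2) ** alpha)
    return s / N

results = []
for b1, b2 in [(1.0, 1.0), (1.0, 0.5)]:
    results.append((V_closed(b1, b2), V_mc(b1, b2)))
print(results)
```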
 Moreover, for $\beta=3>\alpha$, which satisfies $\mathbb{E}\theta^{\beta}=\frac{6}{\lambda^3}<1$, and for $\alpha/\beta<p<1$, we obtain
\begin{align*} &\mathbb{P}(X_1\theta>b_1x, X_2\theta>b_2x)\\[5pt]  &=\mathbb{P}(X_1\theta>b_1x, X_2\theta>b_2x, 0<\theta\leq x^p) +\mathbb{P}(X_1\theta>b_1x, X_2\theta>b_2x, \theta> x^p)\\[5pt]  &=\int_{0+}^{x^p} \mathbb{P}\left(X_1>b_1x/t, X_2>b_2x/t|\theta=t\right) \mathbb{P}(\theta\in \textrm{d}t) +o(1)\bar{F}(x)\\[5pt]  &\sim V(b_1, b_2) \bar{F}(x) \int_{0+}^{x^p} t^{\alpha}h(t) \mathbb{P}(\theta\in \textrm{d}t)\\[5pt]  &\sim V(b_1, b_2) \bar{F}(x) \left[ \mathbb{E}\theta^{\alpha}+\gamma \mathbb{E}\theta^{\alpha} -2\gamma \mathbb{E}\left(\theta^{\alpha}e^{-\lambda\theta}\right) \right] \;:\!=\;\mu V(b_1, b_2) \bar{F}(x),\end{align*}
where $\mu=\mathbb{E}\theta^{\alpha}+\gamma \mathbb{E}\theta^{\alpha}-2\gamma \mathbb{E}\left(\theta^{\alpha}e^{-\lambda\theta}\right)$, and in the second and fourth steps we use the relation $\mathbb{P}(\theta>x^p)\leq x^{-p\beta} \mathbb{E}\theta^\beta=o(1)\bar{F}(x)$ from (3). This implies that $\mathbb{P}(X_1\theta>\cdot, X_2\theta>\cdot)$ possesses a bivariate regularly varying tail. Set $\gamma=0.5$, so that $\mu={11}/{16}$ and $\nu((1,\infty]\times(1,\infty])=\mu V(1,1)={11}/{96}>0$; then $(X_1\theta, X_2\theta)$ satisfies Assumption 1.
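The constants $\mu=11/16$ and $\mu V(1,1)=11/96$ can be verified exactly with rational arithmetic, using the standard exponential moments $\mathbb{E}\theta^{\alpha}=\Gamma(\alpha+1)/\lambda^{\alpha}$ and $\mathbb{E}(\theta^{\alpha}e^{-\lambda\theta})=\lambda\,\Gamma(\alpha+1)/(2\lambda)^{\alpha+1}$ for $\theta\sim\textrm{Exp}(\lambda)$, here with $\alpha=2$, $\lambda=2$, $\gamma=1/2$ (a small check, not the paper's R code):

```python
from fractions import Fraction

lam = Fraction(2)
gamma = Fraction(1, 2)
alpha = 2  # integer shape, so Gamma(alpha + 1) = 2! = 2

# moments of theta ~ Exp(lam)
E_theta_alpha = Fraction(2) / lam**alpha                       # = 1/2
E_theta_alpha_exp = lam * Fraction(2) / (2 * lam)**(alpha + 1) # = 1/16
E_theta_beta = Fraction(6) / lam**3                            # beta = 3: 6/lam^3 = 3/4 < 1

mu = E_theta_alpha + gamma * E_theta_alpha - 2 * gamma * E_theta_alpha_exp
V11 = Fraction(2, 3) - Fraction(2, 4)  # V(1,1) = 2/(alpha+1) - 2/(alpha+2) = 1/6
nu = mu * V11

print(mu, nu)  # 11/16 and 11/96
```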
 Finally, we compare the numerical values of the asymptotic formula (17) with simulated values of $\Psi(x,n)$. Set $n=5$; then the asymptotic formula (17) becomes
\begin{align*} \Psi(x,n) \sim \mu V(\rho_1, \rho_2) \bar{F}(x) \sum_{i=1}^n \left(\mathbb{E}\theta^{\alpha}\right)^{i-1} \;:\!=\; \Psi_1(x,n).\end{align*}
Denote by $\Psi_2(x,n)$ the Monte Carlo estimate of $\Psi(x,n)$. Following [Reference Nelsen25, Exercise 3.23], we can generate an FGM random pair $(Z, \theta)$, and then obtain $(X_1, X_2, \theta)=(Z\zeta_1, Z\zeta_2, \theta)$ by generating two independent uniform $(0,1)$ variates $\zeta_1$ and $\zeta_2$. We simulate $N=10^7$ samples of $\left((X_{11},X_{21}, \theta_{1}), \ldots,(X_{1n},X_{2n}, \theta_{n})\right)$, where, for each $k=1,\ldots,N$, $\left( (X_{11}^{(k)},X_{21}^{(k)}, \theta_{1}^{(k)}), \ldots,(X_{1n}^{(k)},X_{2n}^{(k)}, \theta_{n}^{(k)}) \right)$ denotes the $k$th independent copy of $\left((X_{11},X_{21}, \theta_{1}), \ldots,(X_{1n},X_{2n}, \theta_{n})\right)$. Hence, $\Psi_2(x,n)$ is computed as
\begin{align*} \frac{1}{N} \sum_{k=1}^{N} \mathbb{I}_{\left\{  \max_{1\leq m\leq n}\sum_{i=1}^m X_{1i}^{(k)}\prod_{j=1}^{i}\theta_j^{(k)}  >\rho_1x, \quad  \max_{1\leq m\leq n}\sum_{i=1}^m X_{2i}^{(k)}\prod_{j=1}^{i}\theta_j^{(k)}  >\rho_2x  \right\}}.\end{align*}
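The estimator above can be sketched as follows. This is a scaled-down, non-authoritative stand-in for the paper's R computations: $N=2\times10^5$ rather than $10^7$, and the particular values $x=100$, $n=5$, $\rho_1=\rho_2=0.5$ are illustrative choices, so the resulting ratio is only a rough version of the entries in Table 1.

```python
import math
import random

random.seed(4)
lam, gamma, alpha, kappa = 2.0, 0.5, 2.0, 4.0
n, x, rho1, rho2 = 5, 100.0, 0.5, 0.5

def fgm_pair():
    # (Z, theta) via conditional inversion of the FGM copula, then marginal quantiles
    u, t = random.random(), random.random()
    a = gamma * (1.0 - 2.0 * u)
    v = t if abs(a) < 1e-12 else \
        ((1.0 + a) - math.sqrt((1.0 + a) ** 2 - 4.0 * a * t)) / (2.0 * a)
    z = kappa * ((1.0 - u) ** (-1.0 / alpha) - 1.0)  # Pareto(alpha, kappa) quantile
    theta = -math.log(1.0 - v) / lam                  # Exp(lam) quantile
    return z, theta

def V(b1, b2):
    m = max(b1, b2)
    return (b1 + b2) / ((alpha + 1) * m ** (alpha + 1)) \
         - 2.0 * b1 * b2 / ((alpha + 2) * m ** (alpha + 2))

mu, E_theta_alpha = 11.0 / 16.0, 0.5
psi1 = mu * V(rho1, rho2) * (kappa / (kappa + x)) ** alpha \
     * sum(E_theta_alpha ** (i - 1) for i in range(1, n + 1))

N = 200_000
hits = 0
for _ in range(N):
    y, s1, s2, m1, m2 = 1.0, 0.0, 0.0, 0.0, 0.0
    for _i in range(n):
        z, theta = fgm_pair()
        y *= theta                     # Y_i = product of discount factors
        s1 += random.random() * z * y  # X_{1i} Y_i with zeta_1 uniform on (0,1)
        s2 += random.random() * z * y  # X_{2i} Y_i with zeta_2 uniform on (0,1)
        m1, m2 = max(m1, s1), max(m2, s2)
    if m1 > rho1 * x and m2 > rho2 * x:
        hits += 1

psi2 = hits / N
print(psi1, psi2)
```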
 From Table 1, we observe that the ratio $\Psi_1(x,n)/\Psi_2(x,n)$ approaches 1 as $x$ becomes large. Fixing $x$, we notice that the ruin probability decreases as $\rho_1$ decreases from $0.5$ to $0.1$; that is, assigning the larger reserve share $1-\rho_1$ to the second business line makes ruin of that line, and hence joint ruin of the insurance company, less likely.
Table 1. Asymptotic versus simulated values

4.2. Some lemmas before the proof of Theorem 1
 Clearly, one can derive the following relation for all $x>0$:
\begin{align} \mathbb{P}\left( \boldsymbol{X}_{i}^+ \boldsymbol{Y}_{i} > \boldsymbol{b}x \right) =\mathbb{P}\left( \boldsymbol{X}_{i} \boldsymbol{Y}_{i} > \boldsymbol{b}x\right), \qquad \textrm{for any } \boldsymbol{b}\in[0,\infty]^d\backslash \{\textbf{0}\} \textrm{ and } i \in \mathbb{N}.\end{align}
Lemma 2. If the conditions of Theorem 1 hold, then for every fixed $\boldsymbol{b}\in[0,\infty]^d\backslash \{\textbf{0}\}$ and $n$ we have
\begin{align*}  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i}  > \boldsymbol{b}x \right)  \sim \sum_{i=1}^n \mathbb{P}\left( \boldsymbol{X}_{i}^+ \boldsymbol{Y}_{i}  > \boldsymbol{b}x \right)  \sim \sum_{i=1}^n\bar{F}(x) \mathbb{E}\left[\nu\left( \boldsymbol{Y}_{i-1}^{-1}  (\boldsymbol{b},\boldsymbol{\infty}]\right)\right]. \end{align*}
Proof. For any $1\leq i\leq n$, we have by the conditions of Theorem 1 that $\boldsymbol{X}_{i}^+ \boldsymbol{\theta}_i\in {\textrm{MRV}}(\alpha,F,\nu)$ and
\begin{align}  \mathbb{E}\boldsymbol{Y}_{i}^{\beta}  =\left( \mathbb{E}Y_{1i}^{\beta}, \, \ldots, \, \mathbb{E}Y_{di}^{\beta} \right)  =\left( \prod_{j=1}^i \mathbb{E}\theta_{1j}^{\beta}, \, \ldots, \,  \prod_{j=1}^i \mathbb{E}\theta_{dj}^{\beta} \right)  <\boldsymbol{\infty}. \end{align}
Since $\boldsymbol{X}_{i}^+ \boldsymbol{\theta}_i$ and $\boldsymbol{Y}_{i-1}=\prod_{j=1}^{i-1}\boldsymbol{\theta}_{j}$ are independent, it follows from Proposition 1 that
\begin{align}  \mathbb{P}\left( \boldsymbol{X}_{i}^+ \boldsymbol{Y}_{i} > \boldsymbol{b}x \right)  &=\mathbb{P}\left(\frac{\boldsymbol{X}_{i}^+ \boldsymbol{\theta}_i   \boldsymbol{Y}_{i-1}} {x}  \in (\boldsymbol{b},\boldsymbol{\infty}]\right)  \sim \bar{F}(x) \mathbb{E}\left[\nu\left( \boldsymbol{Y}_{i-1}^{-1}  (\boldsymbol{b},\boldsymbol{\infty}]\right)\right]. \end{align}
Hence, it suffices to show that for every fixed $\boldsymbol{b}\in[0,\infty]^d\backslash \{\textbf{0}\}$ and $n$, the following asymptotic formula holds:
\begin{align}  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i}  > \boldsymbol{b}x \right)  \sim \sum_{i=1}^n \mathbb{P}\left( \boldsymbol{X}_{i}^+ \boldsymbol{Y}_{i}  > \boldsymbol{b}x \right). \end{align}
 We first establish an upper bound for $\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i} > \boldsymbol{b}x \right)$. For an arbitrary $\varepsilon>0$, choose $v_1>0$ small enough that $(1-v_1)^{-\alpha}\leq1+\varepsilon$ and $(1+v_1)^{-\alpha}\geq 1-\varepsilon$. Then
\begin{align*}  &\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i}  > \boldsymbol{b}x \right)  \leq \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+ \boldsymbol{Y}_{i} >  \boldsymbol{b}x \right)\\[5pt]   &= \mathbb{P}\left(  \sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \bigcup_{m=1}^n \left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}  >\boldsymbol{b}(1-v_1)x\right)  \right)\\[5pt]   &\quad+\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \bigcap_{m=1}^n \left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}  >\boldsymbol{b}(1-v_1)x\right)^c\right)\\[5pt]   & \;:\!=\; K_1(x,n)+K_2(x,n). \end{align*}
For $K_1(x,n)$, we have by the relation (21) that
\begin{align*}  K_1(x,n)  &\leq \sum_{m=1}^n  \mathbb{P}\left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}  >\boldsymbol{b}(1-v_1)x\right)\sim \sum_{m=1}^n\bar{F}\left((1-v_1)x\right)  \mathbb{E}\left[\nu\left(\boldsymbol{Y}_{m-1}^{-1}(\boldsymbol{b},  \boldsymbol{\infty}]\right)\right]\\[4pt]   &\sim (1-v_1)^{-\alpha}\bar{F}(x)\sum_{m=1}^n  \mathbb{E}\left[\nu\left( \boldsymbol{Y}_{m-1}^{-1}(\boldsymbol{b},  \boldsymbol{\infty}]\right)\right]\\[4pt]   &\leq (1+\varepsilon)\sum_{m=1}^n \bar{F}(x)  \mathbb{E}\left[\nu\left(  \boldsymbol{Y}_{m-1}^{-1}(\boldsymbol{b},\boldsymbol{\infty}]  \right)\right]  \sim (1+\varepsilon)\sum_{m=1}^n  \mathbb{P}\left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}>\boldsymbol{b}x\right). \end{align*}
For $K_2(x,n)$ we obtain, for some fixed $1\leq k\leq d$ with $b_k>0$,
\begin{align}   &K_2(x,n)  = \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \bigcap_{m=1}^n\bigcup_{l=1}^d \left(X_{lm}^+Y_{lm}\leq b_l(1-v_1)x\right)  \right)\nonumber\\[4pt]   & = \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \bigcup_{j=1}^n \left(X_{kj}^+Y_{kj}>\frac{b_kx}{n}\right),  \bigcap_{m=1}^n\bigcup_{l=1}^d \left(X_{lm}^+Y_{lm}\leq b_l(1-v_1)x\right)  \right)\nonumber\\[4pt]   &\leq \sum_{j=1}^n  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x,  X_{kj}^+Y_{kj}>\frac{b_kx}{n},  \bigcup_{l=1}^d \left(X_{lj}^+Y_{lj}\leq b_l(1-v_1)x\right)  \right)\nonumber\\[4pt]   &\leq \sum_{j=1}^n\sum_{l=1}^d  \mathbb{P}\left(\sum_{i=1}^n X_{li}^+Y_{li}>b_lx,  X_{kj}^+Y_{kj}>\frac{b_kx}{n},  X_{lj}^+Y_{lj}\leq b_l(1-v_1)x  \right)\nonumber\\[4pt]   &\leq \sum_{j=1}^n\sum_{l=1}^d\sum_{1\leq i\leq n, i\neq j}  \mathbb{P}\left(X_{li}^+Y_{li}>\frac{b_lv_1x}{n-1},  X_{kj}^+Y_{kj}>\frac{b_kx}{n}  \right)\nonumber\\[4pt]   &=\left( \sum_{j=1}^n\sum_{l=1}^d\sum_{1\leq i<j\leq n}  +\sum_{j=1}^n\sum_{l=1}^d\sum_{1\leq j<i\leq n} \right)    \mathbb{P}\left(X_{li}^+\theta_{li}Y_{l,i-1}>\frac{b_lv_1x}{n-1},  X_{kj}^+\theta_{kj}Y_{k,j-1}>\frac{b_kx}{n}  \right) \nonumber \\[4pt]   &=o(1)\bar{F}(x)\,, \end{align}
where we applied (20), Lemma 1, and (8) in the last step, using the independence between
 \begin{align*}  X_{li}^+\theta_{li} \quad \textrm{and} \quad (Y_{l,i-1} , X_{kj}^+\theta_{kj}Y_{k,j-1}) \end{align*}
\begin{align*}  X_{li}^+\theta_{li} \quad \textrm{and} \quad (Y_{l,i-1} , X_{kj}^+\theta_{kj}Y_{k,j-1}) \end{align*}
when 
 $i>j$
 and the independence between
$i>j$
 and the independence between 
 $X_{kj}^+\theta_{kj}$
 and
$X_{kj}^+\theta_{kj}$
 and 
 $(Y_{k,j-1} , X_{li}^+\theta_{li}Y_{l,i-1})$
 when
$(Y_{k,j-1} , X_{li}^+\theta_{li}Y_{l,i-1})$
 when 
 $i<j$
. Hence, for large x, we get
$i<j$
. Hence, for large x, we get 
 \begin{align*}  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i} > \boldsymbol{b}x \right)  \leq (1+C\varepsilon) \sum_{i=1}^n  \mathbb{P}\left(\boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x\right). \end{align*}
Finally, we construct the lower bound of $\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i} > \boldsymbol{b}x \right)$. We have
 \begin{align*}  &\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{b}x \right)  \geq \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \bigcup_{m=1}^n \left(\boldsymbol{X}_{m}\boldsymbol{Y}_{m}  >\boldsymbol{b}(1+v_1)x\right)  \right)\\[5pt]   &\geq \sum_{m=1}^n  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x  \right)\\[5pt]   &\qquad-\sum_{1\leq m<k\leq n}  \mathbb{P}\left( \sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x,  \boldsymbol{X}_{k}\boldsymbol{Y}_{k}>\boldsymbol{b}(1+v_1)x  \right)\,, \end{align*}
so we find
 \begin{align*}  &\mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{b}x \right) \\[5pt] &\geq \sum_{m=1}^n  \mathbb{P}\left(\boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x\right)\\[5pt]   &\quad  -\sum_{m=1}^n  \mathbb{P}\left(\boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x,  \bigcup_{k=1}^d \left( \sum_{i=1}^n X_{ki}Y_{ki} \leq b_kx \right)  \right)\\[5pt]   &\quad  -\sum_{1\leq m<k\leq n}  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}>\boldsymbol{b}x,  \boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x,  \boldsymbol{X}_{k}\boldsymbol{Y}_{k}>\boldsymbol{b}(1+v_1)x  \right)\\[5pt]   &\;:\!=\;K'_{\!\!1}(x,n)-K'_{\!\!2}(x,n)-K'_{\!\!3}(x,n). \end{align*}
For $K'_{\!\!1}(x,n)$, we have by the relations (19) and (21) that
 \begin{align*}  &K'_{\!\!1}(x,n)  \sim (1+v_1)^{-\alpha}\bar{F}(x) \sum_{m=1}^n  \mathbb{E}\left(\nu \left(  \boldsymbol{Y}_{m-1}^{-1}\boldsymbol{b},\boldsymbol{\infty}  \right] \right)\\[5pt]   &\geq (1-\varepsilon)\bar{F}(x) \sum_{m=1}^n  \mathbb{E}\left(\nu \left(  \boldsymbol{Y}_{m-1}^{-1}\boldsymbol{b},\boldsymbol{\infty}  \right] \right)\\[5pt]   &\geq (1-C\varepsilon)\sum_{m=1}^n  \mathbb{P}\left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}>\boldsymbol{b}x\right). \end{align*}
For $K'_{\!\!2}(x,n)$, we obtain
 \begin{align*}  &K'_{\!\!2}(x,n)  \leq \sum_{m=1}^n\sum_{k=1}^d  \mathbb{P}\left(\boldsymbol{X}_{m}\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x,  \sum_{i=1}^n X_{ki}Y_{ki}\leq b_kx  \right)\\[5pt]   &\leq \sum_{m=1}^n\sum_{k=1}^d  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  \sum_{1\leq i\neq m\leq n} X_{ki}Y_{ki}\leq-v_1b_kx  \right)\\[5pt]   &\leq \sum_{m=1}^n\sum_{k=1}^d\sum_{1\leq i< m\leq n}  \mathbb{P}\left( X_{km}^+\theta_{km}Y_{k,m-1}>b_k(1+v_1)x,  |X_{ki}|Y_{ki} \geq \frac{v_1b_kx}{n-1} \right) \\[5pt]   &\quad + \sum_{m=1}^n\sum_{k=1}^d\sum_{1\leq m<i\leq n}  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  |X_{ki}|Y_{ki} \geq \frac{v_1b_kx}{n-1} \right) \\[5pt]   & = o(1)\bar{F}(x)  + \sum_{m=1}^n\sum_{k=1}^d\sum_{1\leq m<i\leq n}  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  |X_{ki}|Y_{ki} \geq \frac{v_1b_kx}{n-1} \right) , \end{align*}
where the last equality follows from Lemma 1, (20), and (8) by the independence between $X_{km}^+\theta_{km}$ and $(Y_{k,m-1} , |X_{ki}|Y_{ki})$. For $\alpha/\beta < p <1$ we find that, by Chebyshev’s inequality,
 \begin{align*}  &\sum_{m=1}^n\sum_{k=1}^d\sum_{1\leq m<i\leq n}  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  |X_{ki}|Y_{ki} \geq \frac{v_1b_kx}{n-1} \right)\leq \sum_{m=1}^n\sum_{k=1}^d\\[5pt]   &\sum_{1\leq m<i\leq n}  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  |X_{ki}|Y_{ki} \geq \frac{v_1b_kx}{n-1}, \theta_{km} \leq x^{p} \right)  +dn^3\mathbb{P}\left( \theta_{km}> x^{p} \right)\\[5pt]   &\leq \sum_{m=1}^n\sum_{k=1}^d\sum_{1\leq m<i\leq n}  \mathbb{P}\left( X_{km}^+Y_{km}>b_k(1+v_1)x,  |X_{ki}| \prod_{j=1,j\neq m}^{i}\theta_{kj} \geq \frac{v_1b_kx^{1-p}}{n-1} \right)  \\[5pt]   &+dn^3x^{-p\beta}\mathbb{E}\theta_{km}^{\beta}=o(1) \bar{F}(x), \end{align*}
where the last step is due to (3), Lemma 1, (20), and (8), using the independence between the product $X_{km}^+\theta_{km}$ and $\left( Y_{k,m-1} , |X_{ki}| \prod_{j=1,j\neq m}^{i}\theta_{kj} \right)$. For $K'_{\!\!3}(x,n)$, by Lemma 1 we have
 \begin{align*}  K'_{\!\!3}(x,n)  \leq \sum_{1\leq m<k\leq n}  \mathbb{P}\left(\boldsymbol{X}_{m}^+\boldsymbol{Y}_{m}>\boldsymbol{b}(1+v_1)x,  \boldsymbol{X}_{k}^+\boldsymbol{Y}_{k}>\boldsymbol{b}(1+v_1)x  \right)  =o(1)\bar{F}(x). \end{align*}
Therefore, for large $x$, we obtain
 \begin{align*}  \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i} \boldsymbol{Y}_{i} > \boldsymbol{b}x \right)  \geq (1-C\varepsilon) \sum_{i=1}^n  \mathbb{P}\left(\boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{b}x\right). \end{align*}
This completes the proof.
Lemma 3. If the conditions of Theorem 1 hold, then for every fixed $\boldsymbol{b}\in[0,\infty]^d\backslash \{\textbf{0}\}$ we get
 \begin{align*}  \lim_{N\rightarrow \infty}\limsup_{x\rightarrow\infty}  \frac{\mathbb{P}\left(\sum_{i=N+1}^{\infty}\boldsymbol{X}_i^+\boldsymbol{Y}_i   >\boldsymbol{b}x\right)}  {\bar{F}(x)}  =  \lim_{N\rightarrow \infty}\limsup_{x\rightarrow\infty}  \frac{\sum_{i=N+1}^{\infty}   \mathbb{P}\left(\boldsymbol{X}_i^+\boldsymbol{Y}_i > \boldsymbol{b}x\right)}  {\bar{F}(x)}=0. \end{align*}
Proof. For any fixed $0<p<\beta$ and $0<\varepsilon<\beta-\alpha$, we choose some $0<q<1$ satisfying
 \begin{align*}  \left\{\left[\mathbb{E}\left(\frac{\theta_k}{q}\right)^{\alpha-\varepsilon} \right]\, \bigvee \,  \left[\mathbb{E}\left(\frac{\theta_k}{q}\right)^{\alpha+\varepsilon}\right] \right\}<1,  \quad 1\leq k \leq d. \end{align*}
Then there exists some $n_1>0$ such that the following relation holds:
 \begin{align*}  \sum_{i=n_1+1}^{\infty}  \left\{  \left[\mathbb{E}\left( \frac{\theta_{1}}{q} \right)^{\alpha-\varepsilon}\right]^{i-1}  \bigvee  \left[\mathbb{E}\left( \frac{\theta_{1}}{q} \right)^{\alpha+\varepsilon}\right]^{i-1}  \right\}  \leq C\varepsilon. \end{align*}
Since $X_{1i}^+\theta_{1i}$ is regularly varying by (8), applying [Reference Li20, Lemma 1], for large $x$ we obtain
 \begin{align*}  &\mathbb{P}\left(\sum_{i=n_1+1}^{\infty}\boldsymbol{X}_i^+\boldsymbol{Y}_i  >\boldsymbol{b}x\right)  \leq \mathbb{P}\left(\sum_{i=n_1+1}^{\infty}\boldsymbol{X}_i^+\boldsymbol{Y}_i  >\sum_{i=n_1+1}^{\infty}\boldsymbol{b}(1-q)q^{i-1}x\right)\\[5pt]   &\leq \sum_{i=n_1+1}^{\infty}  \mathbb{P}\left(\boldsymbol{X}_i^+\boldsymbol{\theta}_i\frac{\boldsymbol{Y}_{i-1}}{q^{i-1}}  >\boldsymbol{b}(1-q)x\right)  \leq \sum_{i=n_1+1}^{\infty} \mathbb{P}\left(  X_{1i}^+\theta_{1i}\frac{Y_{1,i-1}}{q^{i-1}}>b_1(1-q)x  \right)\\[5pt]   &\leq C\bar{F}(b_1(1-q)x)\sum_{i=n_1+1}^{\infty} \mathbb{E}\left[  \left( \frac{Y_{1,i-1}}{q^{i-1}} \right)^{\alpha-\varepsilon}  \bigvee  \left( \frac{Y_{1,i-1}}{q^{i-1}} \right)^{\alpha+\varepsilon}  \right]\\[5pt]   &\leq C\bar{F}(x)\sum_{i=n_1+1}^{\infty} \left\{  \left[\mathbb{E}\left( \frac{\theta_{1}}{q} \right)^{\alpha-\varepsilon}\right]^{i-1}  \bigvee  \left[\mathbb{E}\left( \frac{\theta_{1}}{q} \right)^{\alpha+\varepsilon}\right]^{i-1}  \right\}\\[5pt]   &\leq C\varepsilon \bar{F}(x). \end{align*}
This completes the proof.
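The constructive step in the proof, choosing $q$ and then $n_1$, can be made concrete numerically. The Python sketch below is purely illustrative: it assumes a hypothetical lognormal law for $\theta_1$ and invented values $\alpha=1.5$, $\varepsilon=0.2$, checks $\mathbb{E}(\theta_1/q)^{\alpha\pm\varepsilon}<1$ for a trial $q$, and finds how large $n_1$ must be for the geometric tail sum above to drop below a tolerance.

```python
import math

# Hypothetical parameters: theta_1 ~ Lognormal(mu, s), alpha = 1.5, eps = 0.2.
# For a lognormal variable, E[theta^r] = exp(mu*r + s^2*r^2/2).
mu, s = -1.0, 0.3
alpha, eps = 1.5, 0.2

def moment(r):
    """E[theta_1^r] for theta_1 ~ Lognormal(mu, s)."""
    return math.exp(mu * r + 0.5 * s**2 * r**2)

# Pick q in (0,1) so that both E(theta_1/q)^(alpha-eps) and
# E(theta_1/q)^(alpha+eps) are strictly below 1.
q = 0.7
m_lo = moment(alpha - eps) / q**(alpha - eps)
m_hi = moment(alpha + eps) / q**(alpha + eps)
assert max(m_lo, m_hi) < 1.0

# Smallest n_1 with sum_{i > n_1} max(m_lo, m_hi)^(i-1) <= tol,
# via the geometric tail formula r0^n1 / (1 - r0).
r0 = max(m_lo, m_hi)
tol = 1e-3
n1 = math.ceil(math.log(tol * (1 - r0)) / math.log(r0))
tail = r0**n1 / (1 - r0)
print(q, n1, tail)
```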
4.3. Proof of Theorem 1
Proof. First we show that, for any $n_2$ sufficiently large, the relation (17) holds uniformly for $1\leq n\leq n_2$. By Lemma 2, the following relations hold uniformly for $1\leq n\leq n_2$:
 \begin{align*}  \Psi(x,n)=  \mathbb{P}\left(\max_{1\leq m\leq n}\sum_{i=1}^m \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)  &\leq \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)\\[5pt]   &\sim \sum_{i=1}^n \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right) \end{align*}
and
 \begin{align*}  \Psi(x,n)=  &\mathbb{P}\left(\max_{1\leq m\leq n}\sum_{i=1}^m \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)\\[5pt]   &\geq \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)  \sim \sum_{i=1}^n \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right). \end{align*}
Next, we study the uniformity of the relation (17) when $n>n_2$. Clearly,
 \begin{align}   0&<  \nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{i=1}^{\infty}  \mathbb{E}\left( \bigwedge_{k=1}^d Y_{k, i-1}^{\alpha} \right)  =\sum_{i=1}^{\infty}\mathbb{E}\left(  \nu \left(  \bigvee_{k=1}^d Y_{k, i-1}^{-1}\left( \boldsymbol{\rho}, \boldsymbol{\infty}\right]  \right)\right)\nonumber\\[5pt]   &\leq \sum_{i=1}^{\infty} \mathbb{E}\left(  \nu \left(  \boldsymbol{Y}_{i-1}^{-1}\, \boldsymbol{\rho}, \boldsymbol{\infty}  \right]  \right) \leq \sum_{i=1}^{\infty}\mathbb{E}\left(  \nu \left(  \bigwedge_{k=1}^d Y_{k, i-1}^{-1}\left(\boldsymbol{\rho},  \boldsymbol{\infty}\right]  \right)\right)\\[5pt] \nonumber  &\leq \nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{i=1}^{\infty}  \left(\mathbb{E}Y_{1, i-1}^{\alpha} +\cdots+ \mathbb{E}Y_{d, i-1}^{\alpha}\right)  =\nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{k=1}^{d} \sum_{i=1}^{\infty}  \left( \mathbb{E}\theta_k^{\alpha} \right) ^{i-1}\\[5pt]   &<\infty\,\nonumber. \end{align}
On the one hand, by Lemmas 2 and 3 and the relation (24), it holds uniformly for $n>n_2$ that
 \begin{align*}  &\mathbb{P}\left(\max_{1\leq k\leq n}\sum_{i=1}^k \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)  \geq \mathbb{P}\left(\max_{1\leq k\leq n_2}\sum_{i=1}^k  \boldsymbol{X}_{i}\boldsymbol{Y}_{i} > \boldsymbol{\rho}x \right)\\[5pt]   &\geq \mathbb{P}\left(\sum_{i=1}^{n_2}  \boldsymbol{X}_{i}\boldsymbol{Y}_{i} > \boldsymbol{\rho}x \right)  \sim \sum_{i=1}^{n_2} \mathbb{P}\left(  \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i} > \boldsymbol{\rho}x \right)\\[5pt]   &\geq \left(\sum_{i=1}^{n}-\sum_{i=n_2+1}^{\infty}\right)  \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}x \right)\\[5pt]   &\gtrsim \sum_{i=1}^{n}  \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}x \right). \end{align*}
On the other hand, for $v_3$ satisfying $(1-v_3)^{-\alpha}\leq 1+\varepsilon$, it holds uniformly for $n>n_2$ that
 \begin{align*}  &\mathbb{P}\left(\max_{1\leq k\leq n}\sum_{i=1}^k \boldsymbol{X}_{i}\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)  \leq \mathbb{P}\left(\sum_{i=1}^n \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}  >\boldsymbol{\rho}x \right)\\[5pt]   &\leq \mathbb{P}\left( \sum_{i=1}^{n_2}  \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}(1-v_3)x \right)  +2\sum_{k=1}^d \mathbb{P}\left( \sum_{i=n_2+1}^{n}  X_{ki}^+Y_{ki}>\rho_kv_3x \right)\\[5pt]   & \;:\!=\; I_1(x,n_2)+I_2(x,n). \end{align*}
For $I_1(x,n_2)$, we have by Lemma 2 that
 \begin{align*}  I_1(x,n_2)  \sim (1-v_3)^{-\alpha}\bar{F}(x)\sum_{i=1}^{n_2}  \mathbb{E}\left\{\nu(\boldsymbol{Y}_{i-1}^{-1}\boldsymbol{\rho},\boldsymbol{\infty}] \right\}  &\sim (1-v_3)^{-\alpha} \sum_{i=1}^{n_2}  \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}x \right)\\[5pt]   &\leq (1+\varepsilon)\sum_{i=1}^{n}  \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}x \right). \end{align*}
For $I_2(x,n)$, we obtain by Lemmas 2 and 3 and the relation (24) that
 \begin{align*}  I_2(x,n)  \leq 2\sum_{k=1}^d \mathbb{P}\left( \sum_{i=n_2+1}^{\infty}X_{ki}^+Y_{ki}  >\rho_kv_3x \right)  = o(1)\bar{F}(x)  = o(1)\sum_{i=1}^{n}  \mathbb{P}\left( \boldsymbol{X}_{i}^+\boldsymbol{Y}_{i}>\boldsymbol{\rho}x \right). \end{align*}
This completes the proof.
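The equivalence just proved is a multivariate form of the single-big-jump principle: the sum of discounted risks is large essentially only when one discounted term is large. As a rough one-dimensional Monte Carlo illustration (with invented parameters: Pareto insurance risks of index $\alpha=1.5$ and discount factors $\theta_i$ uniform on $(0.5,0.9)$), the ratio of $\mathbb{P}(\sum_{i=1}^n X_iY_i>x)$ to the probability that some single term $X_iY_i$ exceeds $x$ should be close to 1 for large $x$:

```python
import random

random.seed(1)

# Hypothetical 1-d illustration of the single-big-jump principle:
# X_i ~ Pareto(alpha) i.i.d., theta_i uniform on (0.5, 0.9), and
# Y_i = theta_1 * ... * theta_i the cumulative discount.
alpha, n, x = 1.5, 5, 50.0
trials = 200_000

hits_sum, hits_single = 0, 0
for _ in range(trials):
    y, total, single = 1.0, 0.0, False
    for _ in range(n):
        y *= 0.5 + 0.4 * random.random()      # theta in (0.5, 0.9)
        term = random.paretovariate(alpha) * y
        total += term
        if term > x:
            single = True                      # some single term exceeds x
    hits_sum += total > x
    hits_single += single

p_sum = hits_sum / trials
p_single = hits_single / trials
ratio = p_sum / p_single   # >= 1 always, since the terms are nonnegative
print(p_sum, p_single, ratio)
```

Since all terms are nonnegative, a single exceedance forces the sum to exceed $x$, so the ratio is at least 1; heavy tails keep it close to 1.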
5. The study of the $d$-dimensional continuous-time risk model under Assumption 1
In this section we consider an insurance company with $d$ lines of business, for $d\geq1$. Let $x$ be the initial reserve and let
$\boldsymbol{\rho}$ be the allocation vector. For each $1\leq k\leq d$, the price process of the investment portfolio in the $k$th business is expressed by a geometric Lévy process $\{ e^{L_k(t)} , \, t\geq0 \}$; namely, $\{ L_k(t), \, t\geq0 \}$ is a Lévy process that has stationary and independent increments, is stochastically continuous, and starts from 0. Then the insurer’s discounted risk process,
$\mathbf{R}(t)=(R_1(t), \, \ldots, \, R_d(t))$, is given by
 \begin{align} \mathbf{R}(t) =\left( \sum_{i=1}^{N(t)} \!\!A_{1i} e^{-L_1(\tau_i)}-c_1\int_0^t\!\! e^{-L_1(s)} \textrm{d}s, \, \ldots, \, \sum_{i=1}^{N(t)}\!\! A_{di} e^{-L_d(\tau_i)}-c_d\int_0^t\!\! e^{-L_d(s)} \textrm{d}s \right),\end{align}
with $t\geq0$, where $A_{ki}$, $1\leq k \leq d$ and $i\in\mathbb{N}$, denotes the $i$th claim amount from the $k$th business, and $(c_1, \, \ldots, \, c_d)$ denotes the vector of constant premium rates. The successive claim arrival times are denoted by $0<\tau_1<\tau_2<\cdots$, and the claim arrival process $\{ N(t); \, t\geq0 \}$ is a renewal process with the following finite renewal function:
 \begin{align*} \lambda(t) = \mathbb{E}N(t) = \sum_{i=1}^{\infty} \mathbb{P}(\tau_i \leq t)\,.\end{align*}
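In the special case of exponential inter-arrival times with rate $\lambda$ (an illustrative assumption, not required by the model), $N$ is a Poisson process and the renewal function is $\lambda(t)=\lambda t$ exactly; the Monte Carlo estimate in the sketch below should approach this value.

```python
import random

random.seed(0)

# Minimal sketch of the renewal function lambda(t) = E N(t): with
# exponential inter-arrivals of rate lam, N is Poisson and E N(t) = lam*t.
# Parameters are illustrative only.
lam, t_horizon, trials = 2.0, 5.0, 50_000

total_jumps = 0
for _ in range(trials):
    s, n = random.expovariate(lam), 0
    while s <= t_horizon:        # count renewals up to the horizon
        n += 1
        s += random.expovariate(lam)
    total_jumps += n

est = total_jumps / trials       # Monte Carlo estimate of E N(t)
exact = lam * t_horizon          # Poisson case: lambda(t) = lam * t
print(est, exact)
```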
The vectors $ (A_{1i}, \, \ldots, \, A_{di})$, $i\in\mathbb{N}$, form a sequence of i.i.d. nonnegative random vectors with a generic random vector $(A_1,\, \ldots, \, A_d)$. The inter-arrival times are denoted by $\chi_1=\tau_1$ and $\chi_i=\tau_i-\tau_{i-1}$ for $i=2, \, 3, \, \ldots$. The sequence $\{\chi_i, \, i\geq1\}$ is i.i.d. with generic random variable $\chi$. We define the Laplace exponent of the Lévy process $\{ L_k(t), \, t\geq0 \}$, $1\leq k\leq d$, by the formula
 \begin{align} \phi_k(s)=\log \mathbb{E} e^{-sL_k(1)}, \quad s\in(-\infty,\infty).\end{align}
If $\phi_k(s)$ is finite, then $\mathbb{E}e^{-sL_k(t)}=e^{t\phi_k(s)}<\infty$ for $t\geq0$.
In the multivariate risk model, ruin may occur in several ways, so there are several versions of the ruin probability, based on different ruin sets. For the model (25) we adopt the following three ruin times:
 \begin{align} T_{\textrm{max}} =\inf\{t>0: \boldsymbol{\rho} x - \boldsymbol{R}(t) <\mathbf{0} \},\end{align}
which is the first instant when all line reserves become negative simultaneously,
 \begin{align} T_{\textrm{min}}=\inf\left\{t>0: \min_{1\leq k\leq d} \{\rho_kx-R_k(t)\}<0\right\},\end{align}
which is the first instant at which at least one of the business lines falls below zero, and
 \begin{align} T_{\textrm{ult}} =\inf\left\{t>0: \inf_{0\leq s\leq t} \{\rho_1x-R_1(s)\}<0,\ldots, \inf_{0\leq s\leq t} \{\rho_dx-R_d(s)\}<0 \right\},\end{align}
which is the first instant at which all lines have at some point run into deficit, though not necessarily simultaneously. Here, $\inf \emptyset$ is understood as $\infty$ by convention. In this section we study the following three types of ruin probabilities:
\begin{align} \Psi_{\#} ( x, \infty) =\mathbb{P}\left(T_{\#}< \infty \,|\, \mathbf{R}(0)=\boldsymbol{\rho}x \right),\end{align}
where ‘$\#$’ denotes either ‘max’, ‘min’, or ‘ult’. The ruin probability is a significant indicator of the solvency of an insurance company and has been studied extensively by [Reference Asmussen and Albrecher3], [Reference Ji13], [Reference Konstantinides and Li16], [Reference Li19], and others.
Theorem 2. Consider the $d$-dimensional continuous-time risk model (25). Assume that the vectors $\mathbf{A}_i e^{ \mathbf{L}(\tau_{i-1})-\mathbf{L}(\tau_i) } =\left( A_{1i}e^{L_1(\tau_{i-1})-L_1(\tau_i)}, \, \ldots, \, A_{di}e^{L_d(\tau_{i-1})-L_d(\tau_i)}\right)$, $i\in\mathbb{N}$, satisfy Assumption 1. If $\mathbb{E}L_k(1)<\infty$ for $1\leq k\leq d$ and there is a constant $\beta>\alpha$ such that the Laplace exponent of the Lévy process $\{ L_k(t), \ t\geq0 \}$ satisfies $\phi_k(\beta)<0$, then the following asymptotic formulas hold:
\begin{align}  \Psi_{\textrm{max}}(x, \, \infty)  \sim \Psi_{\textrm{ult}}(x, \, \infty)  \sim \bar{F}(x)\sum_{i=1}^{\infty}  \mathbb{E}\left(  \nu \left\{  \left( e^{\mathbf{L}(\tau_{i-1})}\boldsymbol{\rho},  \boldsymbol{\infty}\right]\right\}  \right), \end{align}
\begin{align}  &\Psi_{\textrm{min}}(x, \, \infty) \notag \\[5pt]   &\sim \bar{F}(x)  \sum_{k=1}^{d}(\!-\!1)^{k-1}\sum_{1\leq m_1<\cdots< m_k\leq d}  \sum_{i=1}^{\infty}  \mathbb{E}\left(  \nu \left\{\left(  e^{\mathbf{L}(\tau_{i-1})}\sum_{l=1}^k\rho_{m_l}\boldsymbol{e}_{m_l},  \boldsymbol{\infty}  \right] \right\}  \right). \end{align}
Remark 4. Suppose that $L_k(t)$, $1\leq k\leq d$, follow the jump-diffusion process
 \begin{align}   L_k(t) = r_k t + \sigma_k W_k(t) + \sum_{i=1}^{N(t)} B_{ki},  \end{align}
where $r_k\in \mathbb{R}$ stands for the log return rate, $\sigma_k>0$ denotes the volatility, and $N(t)$ represents the aforementioned claim arrival process. Here, $\{ W_k(t), \, t\geq0 \}$ is a Wiener process and $B_{ki}$ describes the jump sizes. Assume that for $1\leq k \leq d$ all the random sources $\{(A_{ki}, B_{ki}), i\geq1\}$, $\{N(t), t\geq0\}$, and $\{W_k(t), t\geq0\}$ are mutually independent, and that $(A_{1i}e^{-B_{1i}}, \, \ldots, \, A_{di}e^{-B_{di}})$ satisfies Assumption 1. We observe that $(A_{1i}e^{-B_{1i}}, \, \ldots, \, A_{di}e^{-B_{di}})$ and $(e^{-r_1\chi_i-\sigma_1W_1(\chi_i)},\,\ldots,\,e^{-r_d\chi_i-\sigma_dW_d(\chi_i)})$ are independent and
 \begin{equation*}   \mathbb{E}[e^{-\beta r_1\chi_i-\beta\sigma_1W_1(\chi_i)}]<\infty\,,  \end{equation*}
\begin{equation*}   \mathbb{E}[e^{-\beta r_1\chi_i-\beta\sigma_1W_1(\chi_i)}]<\infty\,,  \end{equation*}
for $\beta>\alpha$. Furthermore,
\begin{align*}   &\mathbb{P}\left(\mathbf{A}_ie^{\mathbf{L}(\tau_{i-1})-\mathbf{L}(\tau_{i})}   >\mathbf{b}x\right)\\[5pt]    &=\mathbb{P}\left(A_{1i}e^{-B_{1i}}e^{-r_1\chi_i-\sigma_1W_1(\chi_i)}>b_1x,   \ldots,   A_{di}e^{-B_{di}}e^{-r_d\chi_i-\sigma_dW_d(\chi_i)}>b_dx \right)  \end{align*}
for any $\mathbf{b}\in[0,\infty]^d\setminus \{\mathbf{0}\}$, which implies that the random vector $\boldsymbol{A}_i e^{\mathbf{L}(\tau_{i-1}) - \mathbf{L}(\tau_{i})}$ is MRV by Proposition 1.
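As an illustration of this remark, the jump-diffusion process above is straightforward to simulate. The sketch below generates one path of $L_k(t)=r_kt+\sigma_kW_k(t)+\sum_{i=1}^{N(t)}B_{ki}$ on a uniform grid, with $N$ a homogeneous Poisson process; the parameter values, the `jump_sampler` argument, and the grid sizes are our own illustrative choices, not taken from the model above.

```python
import numpy as np

def simulate_jump_diffusion(r, sigma, lam, jump_sampler, T, n_steps, rng):
    """One path of L(t) = r*t + sigma*W(t) + sum_{i<=N(t)} B_i on [0, T],
    where N is a Poisson process with rate lam and the jump sizes B_i are
    drawn from jump_sampler."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)   # Brownian increments
    dN = rng.poisson(lam * dt, size=n_steps)          # jump counts per step
    jump_sums = np.array([jump_sampler(rng, k).sum() for k in dN])
    L = np.concatenate(([0.0], np.cumsum(r * dt + sigma * dW + jump_sums)))
    return t, L

rng = np.random.default_rng(0)
# hypothetical parameters: 5% log return, 20% volatility, unit jump rate,
# normal jump sizes with mean 0.01 and standard deviation 0.05
t, L = simulate_jump_diffusion(
    r=0.05, sigma=0.2, lam=1.0,
    jump_sampler=lambda rng, k: rng.normal(0.01, 0.05, size=k),
    T=10.0, n_steps=1000, rng=rng)
```

Replacing `jump_sampler` or the rate `lam` changes only the compound-Poisson part; the diffusion component $r_kt+\sigma_kW_k(t)$ is unaffected.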
5.1. Some lemmas before the proof of Theorem 2
Lemma 4. Let the conditions of Theorem 2 hold. Then for any $\mathbf{b}\in[0,\infty]^d\setminus \{\mathbf{0}\}$ and $n$ we get
\begin{align*}  \mathbb{P}\left\{ \sum_{i=1}^{n} \boldsymbol{A}_{i} e^{-\mathbf{L}(\tau_{i})} >  \boldsymbol{b}x \right\}  \sim \sum_{i=1}^{n} \bar{F}(x)  \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{b},  \boldsymbol{\infty}\right]\right\}\right). \end{align*}
Proof. From the conditions of Theorem 2, we obtain by Hölder’s inequality that for any fixed $i\in\mathbb{N}$,
\begin{align}  \mathbb{E}e^{-p\mathbf{L}(\tau_{i-1})}  \leq \left( \mathbb{E}e^{-\beta \mathbf{L}(\tau_{i-1})}\right) ^{p/\beta}  = \left( \mathbb{E}e^{\tau_{i-1}\boldsymbol{\phi}(\beta)}\right) ^{p/\beta}  < \boldsymbol{ \infty},  \qquad p\leq \beta. \end{align}
Hence by Proposition 1 we find
\begin{align}\notag  \mathbb{P}\left(\boldsymbol{A}_{i} e^{-\mathbf{L}(\tau_{i})} > \boldsymbol{b}x \right)   &= \mathbb{P}\left\{  \frac{ \boldsymbol{A}_{i} e^{[\mathbf{L}(\tau_{i-1})-\mathbf{L}(\tau_{i})]}\cdot   e^{-\mathbf{L}(\tau_{i-1})} } {x}  \in (\boldsymbol{b},\boldsymbol{\infty}]  \right\}\\[5pt]   &\sim \bar{F}(x)  \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{b},  \boldsymbol{\infty}\right]\right\}\right). \end{align}
Indeed, in Lemma 2, for each $i=1, \ldots, n$, take $\boldsymbol{X}_{i}=\boldsymbol{A}_{i}$,
\begin{align*}  \boldsymbol{\theta}_{i}=e^{[\mathbf{L}(\tau_{i-1})-\mathbf{L}(\tau_{i})]}\,,  \qquad \boldsymbol{Y}_{i-1}=e^{-\mathbf{L}(\tau_{i-1})}\,; \end{align*}
then, applying Lemma 2 together with (35), we obtain
\begin{align*}  \mathbb{P}\left\{ \sum_{i=1}^{n} \boldsymbol{A}_{i} e^{-\mathbf{L}(\tau_{i})} >  \boldsymbol{b}x \right\}  \sim \sum_{i=1}^{n} \bar{F}(x)  \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{b},  \boldsymbol{\infty}\right]\right\}\right). \end{align*}
The proof is complete.
Lemma 5. Let the conditions of Theorem 2 hold. Then for any $\mathbf{b}\in[0,\infty]^d\setminus \{\mathbf{0}\}$ we get
\begin{align*}  \mathbb{P}\left\{ \sum_{i=1}^{\infty} \boldsymbol{A}_{i} e^{-\mathbf{L}(\tau_{i})}  > \mathbf{b} x \right\}  \sim \bar{F}(x) \sum_{i=1}^{\infty}  \mathbb{E}\left( \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\,\boldsymbol{b},  \boldsymbol{\infty}\right]\right\} \right). \end{align*}
Proof. Take $\boldsymbol{X}_{i}=\boldsymbol{A}_{i}$ and
\begin{align*}  \boldsymbol{\theta}_{i}=e^{[\mathbf{L}(\tau_{i-1})-\mathbf{L}(\tau_{i})]}\,, \qquad  \boldsymbol{Y}_{i-1}=e^{-\mathbf{L}(\tau_{i-1})}\,, \end{align*}
for each $i\geq 1$. Using Theorem 1 with $n=\infty$ and (35), we obtain
\begin{align*}  \mathbb{P}\left\{ \sum_{i=1}^{\infty} \boldsymbol{A}_{i} e^{-\mathbf{L}(\tau_{i})}  > \mathbf{b} x \right\}  \sim \bar{F}(x)\sum_{i=1}^{\infty}  \mathbb{E}\left( \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\,\boldsymbol{b},  \boldsymbol{\infty}\right]\right\} \right). \end{align*}
This completes the proof.
The following lemma plays an important role in the literature on uniform estimates for the ruin probability of insurance risk models with an exponential Lévy process investment return (see for example [Reference Li19, Lemma 4.8] or [Reference Tang, Wang and Yuen28, Lemma 4.6]).
Lemma 6. Let $Z_k$, $1\leq k\leq d$, be an exponential functional of the Lévy process $\{ L_k(t), \, t \geq 0 \}$ defined as
\begin{align*}  Z_k=\int_0^{\infty} e^{-L_k(t)} \textrm{d}t. \end{align*}
If $\mathbb{E}L_k(1)<\infty$, then for every $\beta>0$ satisfying $\phi_k(\beta)<0$ we have $\mathbb{E}Z_k^{\beta}<\infty$.
Proof. From [Reference Tang, Wang and Yuen28], the condition $\phi_k(\beta)<0$ implies that $\mathbb{E}L_k(1)>0$. Since $0<\mathbb{E}L_k(1)<\infty$, we can apply [Reference Maulik and Zwart23, Lemma 2.1] to complete the proof.
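Lemma 6 can be seen at work numerically in the pure-diffusion special case $L(t)=rt+\sigma W(t)$, for which the Laplace exponent is $\phi(\beta)=-\beta r+\beta^2\sigma^2/2$, so $\phi(\beta)<0$ exactly when $0<\beta<2r/\sigma^2$. The following Monte Carlo sketch estimates $\mathbb{E}Z^{\beta}$; the truncation horizon, grid step, path count, and parameter values are our own illustrative choices.

```python
import numpy as np

def exp_functional_moment(r, sigma, beta, T=30.0, dt=0.02, n_paths=1500, seed=1):
    """Monte Carlo estimate of E[Z^beta], Z = int_0^inf exp(-L(t)) dt, for the
    drifted Brownian motion L(t) = r*t + sigma*W(t); the integral is truncated
    at horizon T and discretized with a left-endpoint rule of step dt."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    dL = r * dt + sigma * np.sqrt(dt) * rng.normal(size=(n_paths, n))
    L = np.cumsum(dL, axis=1)
    Z = dt * (1.0 + np.exp(-L[:, :-1]).sum(axis=1))   # the 1.0 is e^{-L(0)}
    return (Z ** beta).mean()

# With r = 0.5, sigma = 0.5: phi(beta) = -0.5*beta + 0.125*beta^2, so
# phi(1) = -0.375 < 0 and Lemma 6 predicts E[Z] < infinity.
m = exp_functional_moment(r=0.5, sigma=0.5, beta=1.0)
```

For comparison, Dufresne’s identity for exponential functionals of Brownian motion gives $Z\overset{d}{=}2/(\sigma^2\gamma_{2r/\sigma^2})$ with $\gamma_{\nu}$ a Gamma$(\nu,1)$ variable, so $\mathbb{E}Z=8/3$ for these parameters; the estimate `m` should land close to that value.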
5.2. Proof of Theorem 2
Proof. From the conditions in Theorem 2 we may conclude that $\phi_k(\alpha)<0$ for $\alpha<\beta$ and $1\leq k\leq d$. Then
\begin{align}  0&<  \nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{i=1}^{\infty}  \mathbb{E}\left(\bigwedge_{k=1}^de^{-\alpha L_k(\tau_{i-1})}\right)  =\sum_{i=1}^{\infty}\mathbb{E}\left(  \nu \left( \bigvee_{k=1}^d e^{{L}_k(\tau_{i-1})}\left(\boldsymbol{\rho},  \boldsymbol{\infty}\right]\right)\right)\nonumber\\[5pt]   &\leq \sum_{i=1}^{\infty} \mathbb{E}\left(  \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]  \right) \leq \sum_{i=1}^{\infty}\mathbb{E}\left(  \nu \left( \bigwedge_{k=1}^d e^{{L}_k(\tau_{i-1})}\left(\boldsymbol{\rho},  \boldsymbol{\infty}\right]\right)\right)\nonumber\\[5pt]   &\leq \nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{i=1}^{\infty} \mathbb{E}\left\{  e^{-\alpha L_1(\tau_{i-1})}+ \cdots + e^{-\alpha L_d(\tau_{i-1})}  \right\}\nonumber\\[5pt]   &=\nu \left(\boldsymbol{\rho},\boldsymbol{\infty}\right]  \sum_{k=1}^{d} \sum_{i=1}^{\infty}  \left[ \mathbb{E}e^{\chi\phi_k(\alpha)}\right] ^{i-1}  <\infty. \end{align}
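The finiteness of the series $\sum_{i=1}^{\infty}[\mathbb{E}e^{\chi\phi_k(\alpha)}]^{i-1}$ can be made concrete: if, for instance, the inter-arrival times are exponential with rate $\lambda$ (an illustrative assumption on our part), then $q=\mathbb{E}e^{\chi\phi_k(\alpha)}=\lambda/(\lambda-\phi_k(\alpha))\in(0,1)$ whenever $\phi_k(\alpha)<0$, so the series is geometric with sum $1/(1-q)$. A quick numerical check with hypothetical values of $\lambda$ and $\phi_k(\alpha)$:

```python
import numpy as np

lam, phi = 2.0, -0.75                  # hypothetical rate and phi_k(alpha) < 0
q_exact = lam / (lam - phi)            # E[e^{chi*phi}] for chi ~ Exp(lam)

rng = np.random.default_rng(42)
chi = rng.exponential(1.0 / lam, size=200_000)
q_mc = np.exp(phi * chi).mean()        # Monte Carlo estimate of the same factor

geom_sum = 1.0 / (1.0 - q_exact)       # sum_{i>=1} q^{i-1}, finite since q < 1
```

In this special case the double sum in the last line of the display is therefore bounded by $d\,\nu(\boldsymbol{\rho},\boldsymbol{\infty}]/(1-q)$.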
Since
\begin{align*}\sum_{i=1}^{n} \mathbb{E}\left( \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho}, \boldsymbol{\infty}\right] \right) \uparrow\sum_{i=1}^{\infty} \mathbb{E}\left( \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho}, \boldsymbol{\infty}\right] \right)\quad \textrm{as } n\rightarrow\infty,\end{align*}
for any $\varepsilon>0$ there exists some large $n_3$ such that
\begin{align}  \sum_{i=1}^{\infty} \mathbb{E}\left(  \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]  \right)  -\sum_{i=1}^{n_3} \mathbb{E}\left(  \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]  \right)  <\varepsilon. \end{align}
We first focus on the relation (31) for $\Psi_{\textrm{max}}(x, \, \infty)$. By (25), (27), and (30), we have
\begin{align*}  \Psi_{\textrm{max}}(x, \, \infty)  =\mathbb{P}\left(\bigcup_{0< s< \infty}  \left\{\boldsymbol{\rho}x +\boldsymbol{c}\int_0^s e^{-\mathbf{L}(u)} \textrm{d}u  -\sum_{i=1}^{\infty}\boldsymbol{A}_{i}  e^{-\mathbf{L}(\tau_i)}\mathbb{I}_{[\tau_i\leq s]}<\textbf{0}  \right\}  \right). \end{align*}
Then, by Lemma 5, it holds that
\begin{align}  \Psi_{\textrm{max}}(x, \, \infty)  \leq \mathbb{P}\left(\sum_{i=1}^{\infty}  \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})} > \boldsymbol{\rho}x\right)  \sim \sum_{i=1}^{\infty} \bar{F}(x)  \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]\right\}\right). \end{align}
Let
\begin{align*}\mathbf{Z}=(Z_1, \ldots, Z_d) =\left( \int_0^{\infty} e^{-L_1(s)} \textrm{d}s, \ldots, \int_0^{\infty} e^{-L_d(s)} \textrm{d}s\right).\end{align*}
By Lemma 1, we have
\begin{align}  &\Psi_{\textrm{max}}(x, \, \infty)  \geq \mathbb{P}\left\{\sum_{i=1}^{n_3} \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})}  >\boldsymbol{\rho}x  +\boldsymbol{c}\int_0^{\infty} e^{-\mathbf{L}(s)} \textrm{d}s\right\}\nonumber\\[5pt]   &\geq \mathbb{P}\left\{\bigcup_{i=1}^{n_3}  \left( \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})}  >\boldsymbol{\rho}x+\boldsymbol{c}\mathbf{Z} \right)  \right\}\nonumber\\[5pt] &  \geq \sum_{i=1}^{n_3} \mathbb{P}\left(  \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})}  >\boldsymbol{\rho}x+\boldsymbol{c} \boldsymbol{Z} \right)  \quad   - \sum_{1\leq i<j\leq n_3} \mathbb{P}\left( \boldsymbol{A}_ie^{-\mathbf{L}(\tau_{i})}>\boldsymbol{\rho}x ,  \boldsymbol{A}_je^{-\mathbf{L}(\tau_{j})}>\boldsymbol{\rho}x  \right) \nonumber\\[5pt] &  \;:\!=\; I_1(x)+o(1)\bar{F}(x). \end{align}
For $I_1(x)$, choose $v_4$ satisfying
\begin{align*}  (1+v_4)^{-\alpha}\geq(1-\varepsilon)\,. \end{align*}
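Such a $v_4$ exists for every $\varepsilon\in(0,1)$: since $v\mapsto(1+v)^{-\alpha}$ is decreasing, the requirement is equivalent to
\begin{align*}  0<v_4\leq(1-\varepsilon)^{-1/\alpha}-1\,, \end{align*}
and the right-hand side is strictly positive.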
By Lemma 4, $F\in\mathcal{R}_{-\alpha}$, and (37), we have
\begin{align*}   &\sum_{i=1}^{n_3} \mathbb{P}\left(   \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})} >\boldsymbol{\rho} (1+v_4) x\right)   \sim (1+v_4)^{-\alpha} \bar{F}(x) \sum_{i=1}^{n_3} \mathbb{E}\left(   \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},   \boldsymbol{\infty}\right]\right\}\right) \\[5pt]    &\geq (1-C\varepsilon)\bar{F}(x)   \sum_{i=1}^{\infty} \mathbb{E}\left(   \nu \left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},   \boldsymbol{\infty}\right]\right)   -C\varepsilon\bar{F}(x).  \end{align*}
From Markov’s inequality, Lemma 6, and the relation (3), we obtain
\begin{align}   \sum_{k=1}^{d} \mathbb{P}\left( c_k Z_k > \rho_k v_4 x\right)   \leq C x^{ -\beta } \sum_{k=1}^{d} \mathbb{E}Z_k^{\beta}   \leq C\varepsilon \bar{F} (x).  \end{align}
Then we find
\begin{align}   I_1(x)   &\geq \sum_{i=1}^{n_3} \mathbb{P}\left(   \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})}   >\boldsymbol{\rho}x+\boldsymbol{c} \boldsymbol{Z},   \boldsymbol{c} \boldsymbol{Z}\leq \boldsymbol{\rho} v_4 x \right)   \nonumber\\[5pt]    &\geq \sum_{i=1}^{n_3} \mathbb{P}\left(   \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})}   >\boldsymbol{\rho} (1+v_4) x,   \boldsymbol{c} \boldsymbol{Z}\leq \boldsymbol{\rho} v_4 x \right)      \geq \sum_{i=1}^{n_3} \mathbb{P}\left(   \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})} >\boldsymbol{\rho} (1+v_4) x\right)   \nonumber\\[5pt]    &-\sum_{i=1}^{n_3} \mathbb{P}\left\lbrace   \boldsymbol{A}_i e^{-\mathbf{L}(\tau_{i})} > \boldsymbol{\rho}x,   \bigcup_{k=1}^d \left( c_k Z_k > \rho_k v_4 x\right)   \right\rbrace   \nonumber\\[5pt]    &\geq (1-C\varepsilon)\bar{F}(x) \sum_{i=1}^{\infty} \mathbb{E}\left(   \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},   \boldsymbol{\infty}\right]\right\}\right).      \end{align}
Combining the relations (38), (39), and (41) yields the relation (31) for $\Psi_{\textrm{max}}(x, \, \infty)$.
Now we establish the relation (32) for $\Psi_{\textrm{min}}(x,\infty)$. We obtain by the inclusion–exclusion principle and Lemma 5 that
\begin{align}  &\Psi_{\textrm{min}}(x, \, \infty)\nonumber\\[5pt] &  \leq \mathbb{P}\left(\bigcup_{k=1}^d \left\{ \sum_{i=1}^{\infty}A_{ki}e^{-L_k(\tau_i)}>\rho_kx\right\}\right)\nonumber\\[5pt]   &= \sum_{k=1}^d (\!-\!1)^{k-1}  \sum_{1\leq m_1<\cdots< m_k\leq d}  \mathbb{P}\left(\sum_{i=1}^{\infty}\boldsymbol{A}_{i}e^{-\mathbf{L}(\tau_i)}  >\left(\sum_{l=1}^k\rho_{m_l}\boldsymbol{e}_{m_l}\right)x\right)\nonumber\\[5pt]   &\sim  \bar{F}(x)  \sum_{k=1}^{d}(\!-\!1)^{k-1}\sum_{1\leq m_1<\cdots< m_k\leq d}  \sum_{i=1}^{\infty}  \mathbb{E}\left(  \nu \left\{\left(  e^{\mathbf{L}(\tau_{i-1})}\sum_{l=1}^k\rho_{m_l}\boldsymbol{e}_{m_l},  \boldsymbol{\infty}  \right] \right\}  \right). \end{align}
By the inclusion–exclusion principle and (40), we get

Combining (42) and (43) gives the relation (32).
To establish the relation (31) for $\Psi_{\textrm{ult}}(x,\infty)$, we note that by Lemma 5,
\begin{align}  \Psi_{\textrm{ult}}(x,\infty)  &=  \mathbb{P}\left\{  \inf_{0< s< \infty}  \left( \rho_1x+c_1\int_0^s e^{-L_1(u)} \textrm{d}u  -\sum_{i=1}^{N(s)}A_{1i}e^{-L_1{(\tau_i)}}  \right)<0,\ldots,\right.\nonumber\\[5pt]   &\left.\qquad\quad  \inf_{0< s< \infty}\left( \rho_dx+c_d\int_0^s e^{-L_d(u)} \textrm{d}u  -\sum_{i=1}^{N(s)}A_{di}e^{-L_d{(\tau_i)}}  \right)<0  \right\}\nonumber\\[5pt]   &\leq  \mathbb{P}\left\{  \sum_{i=1}^{\infty}A_{1i}e^{-L_1{(\tau_i)}}>\rho_1x  ,\ldots,  \sum_{i=1}^{\infty}A_{di}e^{-L_d{(\tau_i)}}>\rho_dx  \right\}  \nonumber\\[5pt] &  =  \mathbb{P}\left\{  \sum_{i=1}^{\infty}  \boldsymbol{A}_{i}e^{-\mathbf{L}{(\tau_i)}}>  \boldsymbol{\rho} x  \right\}\sim  \sum_{i=1}^{\infty} \bar{F}(x) \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]\right\}\right).   \end{align}
Analogously to the reasoning for the lower bound of $\Psi_{\textrm{max}}(x, \, \infty)$, it follows that
\begin{align}  &\Psi_{\textrm{ult}}(x,\infty)  =\mathbb{P}\left\{   \sup_{0<t<\infty} \boldsymbol{R}(t)>\boldsymbol{\rho}x   \right\}\nonumber\\[5pt]   &\geq  \mathbb{P}\left\{  \sum_{i=1}^{\infty}A_{1i}e^{-L_1{(\tau_i)}} -c_1Z_1 > \rho_1x,  \, \ldots, \,  \sum_{i=1}^{\infty}A_{di}e^{-L_d{(\tau_i)}} -c_dZ_d >\rho_dx  \right\}\nonumber\\[5pt]   &\geq  \mathbb{P}\left\{  \sum_{i=1}^{n_3}  \boldsymbol{A}_{i}e^{-\mathbf{L}{(\tau_i)}}  >\boldsymbol{\rho} x + \boldsymbol{c}\mathbf{Z}  \right\}\nonumber\\[5pt] &  \geq (1-C\varepsilon)\bar{F}(x) \sum_{i=1}^{\infty} \mathbb{E}\left(  \nu \left\{\left(e^{\mathbf{L}(\tau_{i-1})}\, \boldsymbol{\rho},  \boldsymbol{\infty}\right]\right\}\right).   \end{align}
Combining the relations (44) and (45) gives the relation (31) for $\Psi_{\textrm{ult}}(x, \, \infty)$. This completes the proof.
Acknowledgements
We are sincerely grateful to the anonymous reviewers, whose insightful comments helped us improve the quality of this paper. We also express our great appreciation to the editor for their kind work.
Funding information
This work was supported by the National Natural Science Foundation of China (Project No. 71271042).
Competing interests
The authors have no competing interests to declare that arose during the preparation or publication of this article.