
Tail asymptotics and precise large deviations for some Poisson cluster processes

Published online by Cambridge University Press:  26 July 2024

Fabien Baeriswyl*
Affiliation:
Département des Opérations, Université de Lausanne and Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université
Valérie Chavez-Demoulin*
Affiliation:
Département des Opérations, Université de Lausanne
Olivier Wintenberger*
Affiliation:
Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université
*Postal address: Département des Opérations, Anthropole, CH-1015 Lausanne, Suisse. Emails: fabien.baeriswyl@unil.ch and fabien.baeriswyl@sorbonne-universite.fr
**Postal address: Département des Opérations, Anthropole, CH-1015 Lausanne, Suisse.
***Postal address: Laboratoire de Probabilités, Statistique et Modélisation, Sorbonne Université, Campus Pierre et Marie Curie, 4 place Jussieu, 75005 Paris, France.

Abstract

We study the tail asymptotics of two functionals (the maximum and the sum of the marks) of a generic cluster in two sub-models of the marked Poisson cluster process, namely the renewal Poisson cluster process and the Hawkes process. Under the hypothesis that the governing components of the processes are regularly varying, we extend results due to [6, 19], relying notably on Karamata’s Tauberian Theorem. We use these asymptotics to derive precise large-deviation results in the spirit of [32] for the processes just mentioned.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In this paper, we study the asymptotic properties of processes exhibiting clustering behaviour. Such processes are common in applications: for instance, earthquakes in seismology, where a main shock can trigger a series of secondary shocks in a specific spatio-temporal neighbourhood; accidents giving rise to a series of subsequent claims in non-life insurance; or heavy rainfall in meteorology, to name a few. We focus on two processes that have been used effectively in these fields. The Hawkes process was introduced in the pioneering works [46, 59] and has found applications in earthquake modelling (see e.g. [41]), in finance (see e.g. [9, 22]), in genome analysis (see [51]) and in insurance (see [55]). The renewal Poisson cluster process is a tool of choice in an insurance context for modelling series of claims arising from a single event (see e.g. [38] for a reference textbook), as well as in teletraffic modelling (see [19]) and in meteorology and weather forecasting (see e.g. [20] or [48]).

The above processes, described heuristically in the specific contexts just mentioned, belong to the class of so-called point processes: for a comprehensive overview, see the monographs [13, 14] or, more recently, and with connections to martingale theory, [8]. Point process theory is an elegant framework describing the properties of random points occurring in general spaces. In both cases, the temporal marked point process $N$ possesses a representation as an infinite sum of Dirac measures (recall that the Dirac measure ${\varepsilon _x}$ at a point $x$ satisfies, for every measurable set $A \in \mathcal{A}$ , ${\varepsilon _x}( A ) = 1$ if $x \in A$ and ${\varepsilon _x}( A ) = 0$ otherwise):

$$N({\cdot}) = \mathop \sum \limits_{i = 1}^\infty {\varepsilon _{{T_i},{A_i}}}({\cdot}),$$

where ${T_i}$ is the (random) time of occurrence of the $i$ th event and ${A_i}$ is its associated mark. The specific temporal marked point processes that we are interested in are cluster point processes. More specifically, we will assume that there exists an immigration process under which independent points arise at a Poissonian rate; each of these immigrant events can then trigger new points, called first-generation offspring events. We will then look at two submodels. One is the renewal Poisson cluster process, which consists of the immigrant events and their first-generation offspring; the term “renewal” comes from the fact that the times of the events form a renewal sequence. The other submodel is the Hawkes process, in which every point of the first generation can itself act as an immigrant event and generate new points, potentially producing a whole cascade of events. Each immigrant event and its associated offspring events (whether direct or indirect children) form a generic cluster.

We will study the tail asymptotics of the partial maxima and sums of a transformation $X = f( A )$ , for some nonnegative real-valued function $\,f$ , of the mark $A$ of any event of $N$ . Determining the behaviour of the maximum and the sum at the level of the cluster decomposition of a process is crucial to obtaining limit theorems for partial maxima and sums of the whole process over finite intervals; see e.g. [29, 54] or [6]. Thus, we describe first a generic cluster from each of the just-mentioned processes.

For the renewal Poisson cluster process, we will consider a distributional representation of the maximum of the marks in the generic cluster, denoted ${H^R}$ :

$${H^R}\stackrel{\textrm{D}}{=} X \vee \bigvee_{j = 1}^{{K_A}} {X_j},$$

where $X$ is a transformation $\,f( A )$ of the mark $A$ of the immigrant event and ${X_j}$ is the transformed mark of the $j$ th first-generation offspring event. The number of offspring events, ${K_A}$ , is random and possibly dependent on $X$ . In particular, we will let the vector $\left( {X,{K_A}} \right)$ be heavy-tailed and assess whether the heavy-tailedness transfers to ${H^R}$ . Details are relegated to Section 2. Note that under the hypothesis that $X$ and ${K_A}$ are independent, the above distributional equation has received early consideration, e.g. in [58] or [28], where it is shown that ${H^R}$ and $X$ belong to the same maximum domain of attraction of some extreme value distribution (MDA for short; see [15, 49] or [17] for references on extreme value theory). A more recent advance in the case where $X$ and ${K_A}$ are dependent is to be found in [5], where a similar conclusion is reached about the MDA. Our emphasis is on the Fréchet MDA, which allows a refinement of the characterisation of the tail asymptotics.

We will also consider tail asymptotics for the sum functional, which, for a generic cluster of the same renewal Poisson cluster process, possesses the distributional representation

\begin{equation*}{D^R}\stackrel{\textrm{D}}{=} X + \mathop \sum \limits_{j = 1}^{{K_A}} {X_j},\end{equation*}

supposing again that $\left( {X,{K_A}} \right)$ is heavy-tailed, and we will again assess whether the heavy-tailedness of $\left( {X,{K_A}} \right)$ transfers to ${D^R}$ . This equation has received consideration under the hypothesis that $X$ and ${K_A}$ are independent; see [19]. We will retrieve their results in our framework. More recently, in the case of arbitrary dependence between $X$ and ${K_A}$ , similar asymptotics have been derived in [47].

We will then derive analogous tail asymptotics for the same functionals of a generic cluster in the context of the Hawkes process. The distributional representation associated with the maximum of the marks in a generic cluster, denoted ${H^H}$ , is given by

$${H^H}\stackrel{\textrm{D}}{=} X \vee \bigvee_{j = 1}^{{L_A}} H_j^H,$$

where ${L_A}$ is the number of first-generation offspring events of the event with mark $A$ acting as immigrant, and $H_j^H$ is the maximum of the marks in the subcluster generated by the $j$ th offspring of the immigrant event considered, this offspring itself acting as an immigrant for further subbranches of the cluster, emphasising once again the cascade structure of the Hawkes process. The equation for ${H^H}$ above is a special case of the higher-order Lindley equation; see [31]. Note that $X = f( A )$ and ${L_A}$ are dependent through $A$ . Letting ${L_A}$ be Poisson-distributed with parameter ${\kappa _A}$ and letting $\left( {X,{\kappa _A}} \right)$ be heavy-tailed, we assess whether this transfers to ${H^H}$ . This functional has received attention in the recent work [5], where it was shown that ${H^H}$ has the same MDA as $X$ .

The distributional representation associated with the sum of the marks in a generic cluster in the Hawkes process, denoted ${D^H}$ , is given by

$${D^H}\stackrel{\textrm{D}}{=} X + \mathop \sum \limits_{j = 1}^{{L_A}} D_j^H.$$

We will again let $\left( {X,{\kappa _A}} \right)$ be heavy-tailed and assess whether this transfers to ${D^H}$ . This distributional equation, with cascade structure, has been extensively studied: see e.g. [4]; but also, as a main stochastic modelling approach to Google’s PageRank algorithm, see [10, 11, 27, 34, 60] and, even more closely related to our results, [47]; in the context of random networks, see [37] or [36]; for a recent theoretical advance as well as an application to queuing systems, see [1] or [18].

The way we will deal with heavy-tailedness is through the classical notion of regular variation, introduced by J. Karamata in the 20th century (see e.g. [30]), which specifies that the functions of interest behave, in a neighbourhood of infinity, like power-law functions. For a thorough, textbook treatment of the topic in univariate settings, see [7]; we rely on [15, 40, 49, 50] for the multivariate case.

The flexibility offered by our approach to specifying the regular variation of the governing components of our processes allows us, in the sequel, to extend results due to [19, 26, 52] or [16], all of which studied the asymptotics of the tail of distributional quantities such as $H$ and $D$ in the above examples, but under various assumptions on the relation between the tails of $X$ and ${K_A}$ for the renewal Poisson cluster process, respectively of $X$ and ${L_A}$ for the Hawkes process.

Finally, we use the results on the tails of $H$ and $D$ to derive (precise) large-deviation principles for our processes of interest in the flavour of [39, 44]. The “precise” terminology refers to the fact that we obtain exact asymptotic equivalences rather than the logarithmic ones obtained under Cramér’s condition. Early results on precise large deviations in the case of non-random maxima and sums can be found in [12, 24, 42, 43]. The case of random maxima and sums of extended regularly varying random variables (a class containing regularly varying random variables) is treated in [32], and we will rely on their results to derive our own precise large-deviation results. Contributions in this area for another subclass of subexponential distributions, namely the class of consistently varying random variables, can be found in [57] or [45]; for precise large-deviation results on (negatively) dependent sequences, see [56] or [35].

The organisation of the paper is as follows: In Section 2, we describe the main processes of interest, those that are part of the Poisson cluster process family; in Section 3, we recall some important notions and characterisations of (multivariate) regular variation; in Section 4, we derive the tail asymptotics for the maximum of the marks in a generic cluster in the renewal Poisson cluster process; in Section 5, we derive the tail asymptotics for the sum of the marks in a generic cluster in the renewal Poisson cluster process; in Section 6, we derive the tail asymptotics for the maximum of the marks in a generic cluster in the Hawkes process; in Section 7, we derive the tail asymptotics for the sum of the marks in a generic cluster in the Hawkes process; in Section 8, we use the results from Section 4 to Section 7 to derive (precise) large-deviation results for our processes of interest.

Notation

Vectors are usually in boldface. By “i.i.d.”, we classically mean independent and identically distributed, and, consistently, “i.d.” means identically distributed. We let $\lceil \cdot \rceil$ denote the upper integer part and $\lfloor \cdot \rfloor$ denote the lower integer part. For two functions $\,f({\cdot})$ and $g({\cdot})$ and $c \in \left\{ {0,\infty } \right\}$ , we write $\,f(x) = \mathcal{O}( {g(x)} )$ as $x \to c$ whenever $\limsup_{x \to c}\left|\, {f(x)/g(x)} \right| \leqslant M$ for some finite $M \gt 0$ ; $\,f(x) = o( {g( x )})$ as $x \to c$ whenever $\lim_{x \to c}\left|\, {f( x )/g( x )} \right| = 0$ ; and $\,f(x)\sim g(x)$ as $x \to c$ whenever $\lim_{x \to c}\, f(x)/g(x) = 1$ . The product of two measures $\mu $ and $\nu $ is written as the tensor product $\mu \otimes \nu $ .

2. Random functionals of clusters

We formally introduce the general Poisson cluster process, a class that includes the processes discussed in Section 1, keeping the spirit of the presentation and (most) notations from [6]. As hinted at in Section 1, this process is made up of two components: an immigration process and an offspring process.

The immigration process, say ${N_0}$ , is a marked homogeneous Poisson process (or marked PRM for short, for marked Poisson random measure), with representation given by:

$${N_0}({\cdot})\; :\!= \mathop \sum \limits_{i = 1}^\infty {\varepsilon _{{{\Gamma }_i},{A_{i0}}}}({\cdot}).$$

This point process has mean measure $\nu {\textrm{Leb}} \otimes F$ , for $\nu \gt 0$ , on the space $\left[ {0,\infty } \right) \times \mathbb{A}$ , where ${\textrm{Leb}}$ is the Lebesgue measure, $F$ is the distribution function common to all marks ${({A_{i0}})_{i \in \mathbb{N}}}$ , which take values in a measurable space $\left( {\mathbb{A},\mathcal{A}} \right)$ , and where $\mathcal{A}$ corresponds to the Borel $\sigma $ -field on $\mathbb{A}$ . In particular, this means that the sequence of times ${({{\Gamma }_i})_{i \in \mathbb{N}}}$ , corresponding to the arrivals of immigrant events, is a homogeneous Poisson process with mean measure $\nu {\textrm{Leb}}$ . Since the space $\mathbb{A}$ can be quite general, applying a transformation $\,f({\cdot})\;:\; \mathbb{A} \to {\mathbb{R}_ + }$ is natural, especially in practical applications. We will work with such transformed marks throughout; that is, we consider only nonnegative transformed marks in our models. For example, in a non-life insurance context, supposing that ${A_{i0}}$ represents the characteristics of the $i$ th accident, $\,f\left( {{A_{i0}}} \right)$ could represent the claim size pertaining to this accident. In subsequent sections, and to ease the notation, we shall denote ${X_{i0}}\;:\!=\; f\left( {{A_{i0}}} \right)$ .
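
To fix ideas, here is a minimal simulation sketch of the immigration process ${N_0}$ on a finite window $[0, T]$ . The immigration rate $\nu$ , the window length $T$ , the Pareto-distributed marks and the identity transformation $f$ are illustrative assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_immigration(nu=2.0, T=100.0, alpha=1.5):
    """Simulate the marked PRM N_0 on [0, T]: homogeneous Poisson immigration
    times with rate nu, and i.i.d. marks A_{i0} (here Pareto(alpha), f = identity)."""
    n_points = rng.poisson(nu * T)                   # number of immigrants in [0, T]
    gammas = np.sort(rng.uniform(0.0, T, n_points))  # immigration times Gamma_i
    marks = rng.pareto(alpha, n_points) + 1.0        # marks A_{i0}, P(A > x) = x^{-alpha}, x >= 1
    X0 = marks                                       # transformed marks X_{i0} = f(A_{i0})
    return gammas, X0

gammas, X0 = simulate_immigration()
print(len(gammas), "immigrants; largest transformed mark:", X0.max())
```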

Conditioning on observing an immigration event at time ${{\Gamma }_i}$ , the marked PRM ${N_0}$ is supplemented with an additional point process in ${M_p}( {[ {0,\infty } ) \times \mathbb{A}} )$ (the space of locally finite point measures on $\left[ {0,\infty } \right) \times \mathbb{A}$ ), which we denote by ${G_{{A_{i0}}}}$ . The cluster of points ${G_{{A_{i0}}}}$ , occurring after time ${{\Gamma }_i}$ , augments ${N_0}$ with triggered offspring points or events.

The offspring cluster process, conditioned on observing an immigrant event $\left( {{{\Gamma }_i},{A_{i0}}} \right)$ , admits the representation

$${G_{{A_{i0}}}}({\cdot}):\!=\; \mathop \sum \limits_{j = 1}^{{K_{{A_{i0}}}}} {\varepsilon _{{T_{ij}},{A_{ij}}}}({\cdot}),$$

where ${({T_{ij}})_{1 \leqslant j \leqslant {K_{{A_{i0}}}}}}$ forms a sequence of nonnegative random variables indicating, for a fixed $j$ , the random time elapsed between the immigrant event occurring at time ${{\Gamma }_i}$ and the $j$ th event of the cluster, and where ${K_{{A_{i0}}}}$ is a random variable with values in ${\mathbb{N}_0}$ corresponding to the number of events in the $i$ th cluster. These events are the offspring of the immigrant event identified by $\left( {{{\Gamma }_i},\;{A_{i0}}} \right)$ . A complete representation of the general Poisson cluster process is given by

$$N({\cdot})\;:\!=\; \mathop \sum \limits_{i = 1}^\infty \mathop \sum \limits_{j = 0}^{{K_{{A_{i0}}}}} {\varepsilon _{{{\Gamma }_i} + {T_{ij}},{A_{ij}}}}({\cdot}),$$

provided we set ${T_{i0}} = 0$ for all $i \in \mathbb{N}$ .

The first functional of interest is the maximum of the marks in the $i$ th cluster, defined by

(1) \begin{align}{H_i}\;:\!=\; \bigvee_{j = 0}^{{K_{{A_{i0}}}}} {X_{ij}}.\end{align}

For ease of notation, we have defined ${X_{i0}} = f\left( {{A_{i0}}} \right)$ ; accordingly, we let ${X_{ij}} = f\left( {{A_{ij}}} \right)$ for the transformation $\,f({\cdot})\;:\;\mathbb{A} \to {\mathbb{R}_ + }$ . The point process associated with the $i$ th cluster is defined by

$${C_i}({\cdot})\;:\!=\; {\varepsilon _{0,{A_{i0}}}}({\cdot}) + {G_{{A_{i0}}}}({\cdot}).$$

It allows us to define the second functional of interest in this paper, namely, the sum of all marks in the $i$ th cluster, by

(2) \begin{align}{D_i}\;:\!=\; \mathop \int \nolimits_{\left[ {0,\infty } \right) \times \mathbb{A}} f(a){C_i}( {{\textrm{d}}t,{\textrm{d}}a} ).\end{align}

In Section 8, we will look at the whole process on a subset of the temporal axis: At the level of the point process $N$ , the sum of all marks in the finite time interval $\left[ {0,\ T} \right]$ , for $T \gt 0$ , is given by

(3) \begin{align}{S_T}\;:\!=\; \mathop \int \nolimits_{[ {0,T} ] \times \mathbb{A}} f(a)N( {{\textrm{d}}t,{\textrm{d}}a} ).\end{align}

From Section 4 to Section 7, we propose tail asymptotics for ${H_i}$ and ${D_i}$ mainly in the settings of two different submodels of the general Poisson cluster process, briefly described in the introduction and formally discussed next. We keep the presentation of [6]; the models are fully described in Example 6.3 of [13], and we refer to the former reference for a complete description. In our work, we also assume that the sequence of marks $\left( {{X_{ij}}} \right)$ is i.i.d.

2.1. Mixed binomial Poisson cluster process

In this model, the assumptions on ${N_0}$ are kept unchanged, and the $i$ th cluster has a representation of the form

$${C_i}({\cdot}) = {{\varepsilon }_{0,{A_{i0}}}}({\cdot}) + {G_{{A_{i0}}}}({\cdot}) = {\varepsilon _{0,{A_{i0}}}}({\cdot}) + \mathop \sum \limits_{j = 1}^{{K_{{A_{i0}}}}} {\varepsilon _{{W_{ij}},{A_{ij}}}}({\cdot}),$$

where ${\left( {{K_{{A_{i0}}}}, {{({W_{ij}})}_{j \geqslant 1}}, {{({A_{ij}})}_{j \geqslant 0}}} \right)_{i \geqslant 0}}$ is an i.i.d. sequence; the sequence ${({A_{ij}})_{j \geqslant 0}}$ is also i.i.d. for any fixed $i = 1, 2, \ldots ;$ and, finally, ${({A_{ij}})_{j \geqslant 1}}$ is independent of both ${K_{{A_{i0}}}}$ and ${({W_{ij}})_{j \geqslant 1}}$ for any $i = 1, 2, \ldots $ . Note that this latter statement does not exclude dependence between ${A_{i0}}$ and ${K_{{A_{i0}}}}$ (respectively, ${({W_{ij}})_{j \geqslant 1}}$ ). Additionally, it is assumed that $\mathbb{E}\left[ {{K_A}} \right] \lt \infty $ , where ${K_A}$ denotes a generic random quantity distributed as ${K_{{A_{i0}}}}.$

2.2. Renewal Poisson cluster process

In this model, the $i$ th cluster has the representation

(4) \begin{align}{C_i}({\cdot}) = {\varepsilon _{0,{A_{i0}}}}({\cdot}) + {G_{{A_{i0}}}}({\cdot}) = {\varepsilon _{0,{A_{i0}}}}({\cdot}) + \mathop \sum \limits_{j = 1}^{{K_{{A_{i0}}}}} {\varepsilon _{{T_{ij}},{A_{ij}}}}({\cdot}),\end{align}

where all the assumptions from Section 2.1 hold, except that now we denote the occurrence time sequence of the offspring events by ${({T_{ij}})_{j \geqslant 1}}$ to emphasise that this forms a renewal sequence—that is, for any fixed $i = 1, 2, \ldots $ , ${T_{ij}} = {W_{i1}} + \cdots + {W_{ij}}.$ Note that this process is such that every Poisson immigrant has only ${K_{{A_{i0}}}}$ first-generation offspring events. These points cannot generate further generations themselves, in contrast with the Hawkes process, which we will introduce next.
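
As an illustration of the cluster structure (4), the following sketch draws one generic cluster of the renewal Poisson cluster process. The distributional choices (Pareto marks, a mark-dependent Poisson number of offspring, exponential waiting times ${W_{ij}}$ , the identity transformation $f$ ) are assumptions made for this example only.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def renewal_cluster(alpha=1.5, base_mean=2.0, wait_rate=1.0):
    """One generic cluster: immigrant mark A_0 and K_A first-generation offspring
    at renewal times T_j = W_1 + ... + W_j, with i.i.d. offspring marks."""
    A0 = rng.pareto(alpha) + 1.0                  # immigrant mark (Pareto, illustrative)
    X0 = A0                                       # transformed mark X = f(A), f = identity here
    K = rng.poisson(base_mean * min(A0, 10.0))    # offspring count K_A, dependent on the mark
    waits = rng.exponential(1.0 / wait_rate, K)   # interarrival times W_j
    times = np.cumsum(waits)                      # renewal times T_j
    X = rng.pareto(alpha, K) + 1.0                # offspring marks X_j, independent of (A_0, K_A)
    H = max(X0, X.max()) if K > 0 else X0         # maximum functional H^R, Equation (5)
    D = X0 + X.sum()                              # sum functional D^R, Equation (6)
    return times, H, D

times, H, D = renewal_cluster()
print("cluster size:", len(times) + 1, " H^R =", H, " D^R =", D)
```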

Applying the transformation $\,f$ on the marks of the events, we will, in Section 4 and Section 5, derive tail asymptotics of generic versions of Equation (1) and Equation (2), given by:

  1. (i) for the maximum,

    (5) \begin{align}{H^R} \stackrel{\textrm{D}}{=} X \vee \bigvee_{j = 1}^{{K_A}} {X_j};\end{align}
  2. (ii) for the sum,

    (6) \begin{align}{D^R}\stackrel{\textrm{D}}{=} X + \mathop \sum \limits_{j = 1}^{{K_A}} {X_j}.\end{align}

We isolate $X\;:\!=\; f( A )$ from the rest of the transformed claims $\left( {{X_j}} \right)\;:\!=\; \left( {f\left( {{A_j}} \right)} \right)$ to emphasise the possible dependence between $X$ and ${K_A}$ .

Remark 1. These two processes have been considered in the monograph [38]. The mixed binomial Poisson cluster process and the renewal Poisson cluster process are very similar in their description, and because their sole difference is the placement of the points along the time axis, we focus—in what follows—on the renewal Poisson cluster process. The results of Section 4 and Section 5 are directly applicable to the mixed binomial Poisson cluster process; the results of Section 8 also apply, upon the use of an alternative justification regarding the leftover effects to be discussed in that section. We refer to [5, 6] for justifications.

2.3. Hawkes process

The specificity of the Hawkes process is that the clusters have a recursive pattern, in the sense that each point, whether immigrant or offspring, has the ability to act as an immigrant and generate a new cluster. To obtain the representation of the $i$ th cluster ${G_{{A_i}}}$ , one typically introduces a time-shift operator ${\theta _t}$ , as in [6]. Let $m({\cdot}) = \mathop \sum \nolimits_{j = 1}^\infty {\varepsilon _{{t_j},{a_j}}}({\cdot})$ be a point measure; the time-shift operator is then defined by

$${\theta _t}m({\cdot}) = \mathop \sum \limits_{j = 1}^\infty {\varepsilon _{{t_j} + t,{a_j}}}({\cdot})$$

for all $t \geqslant 0$ . Then the (recursive) representation of the $i$ th cluster, conditioning on observing an immigration event $\left( {{{\Gamma }_i},{A_{i0}}} \right)\!,$ is given by

$${C_i}({\cdot}) = {\varepsilon _{0,{A_{i0}}}}({\cdot}) + {G_{{A_{i0}}}}({\cdot}) = {\varepsilon _{0,{A_{i0}}}}({\cdot}) + \mathop \sum \limits_{j = 1}^{{L_{{A_{i0}}}}} \left( {{\varepsilon _{\tau^{\!1} _{ij},A^{\!1}_{ij}}}({\cdot}) + {\theta _{\tau^{\!1} _{ij}}}{G_{A^{\!1}_{ij}}}({\cdot})} \right)\!,$$

where, given ${A_{i0}}$ , the first-generation offspring process ${N_{{A_{i0}}}}({\cdot}) = \mathop \sum \nolimits_{j = 1}^{{L_{{A_{i0}}}}} {\varepsilon _{\tau _{ij}^1,A_{ij}^1}}({\cdot})$ is again a Poisson process, this time with (random) mean measure $\smallint h\left( {s,{A_{i0}}} \right)\,{\textrm{d}}s \otimes F$ , and where the sequence ${({G_{A_{ij}^1}})_{j \geqslant 1}}$ is i.i.d. and independent of the first-generation offspring process ${N_{{A_{i0}}}}$ . Note that the sequence of times in the cluster representation ${G_{{A_{i0}}}}$ , here denoted by $\left( {\tau _{ij}^1} \right)$ , is the sequence of times of the first-generation offspring events. The function $h({\cdot})$ is referred to as the fertility function and controls both the displacement and the expected number of offspring of a specific event. Hence, by definition, the number of first-generation offspring events is Poisson and depends on the mark of the event acting as an immigrant to the stream of points considered. Note that the above representation also emphasises the independence among the subclusters attached to the first-generation offspring. There is a connection with Galton-Watson theory that was historically used to show that the Hawkes process is a general Poisson cluster process (see [23]); we define it as part of this family, but the Hawkes process is classically introduced from the self-excitation perspective, that is, from the specification of the function $h({\cdot})$ (see e.g. [21]).
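
The recursive representation above translates directly into a branching simulation: each event with mark $A$ produces a Poisson( ${\kappa _A}$ ) number of first-generation children, each of which spawns its own subcluster. The sketch below is a minimal illustration assuming an exponential fertility function $h(t, A) = {\kappa _A}\lambda {e^{ - \lambda t}}$ , a bounded mark-dependent ${\kappa _A}$ with $\mathbb{E}[{\kappa _A}] \lt 1$ , Pareto marks and $f$ equal to the identity; none of these choices is imposed by the model.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def kappa(a, scale=0.3, cap=2.0):
    """Illustrative branching intensity kappa_A; bounded so that E[kappa_A] < 1 (subcriticality)."""
    return scale * min(a, cap)

def hawkes_cluster(mark=None, alpha=2.5, lam=1.0, t0=0.0):
    """Recursively simulate one generic Hawkes cluster rooted at an event with
    the given mark occurring at time t0. Returns a list of (time, transformed mark) pairs."""
    A = (rng.pareto(alpha) + 1.0) if mark is None else mark
    events = [(t0, A)]                                  # the event itself (f = identity here)
    L = rng.poisson(kappa(A))                           # L_A | A ~ Poisson(kappa_A)
    for _ in range(L):
        tau = t0 + rng.exponential(1.0 / lam)           # displacement drawn from h(., A)/kappa_A = Exp(lam)
        events += hawkes_cluster(rng.pareto(alpha) + 1.0, alpha, lam, tau)  # whole subcluster of this child
    return events

cluster = hawkes_cluster()
marks = np.array([x for _, x in cluster])
print("cluster size:", len(cluster), " H^H =", marks.max(), " D^H =", marks.sum())
```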

We propose in Section 6 and Section 7 tail asymptotics for the generic versions of Equation (1) and Equation (2), which satisfy, in the settings of the Hawkes process, fixed-point distributional equations of the form:

  1. (i) for the maximum,

    (7) \begin{align}{H^H} \stackrel{\textrm{D}}{=} X \vee \bigvee_{j = 1}^{{L_A}} H_j^H;\end{align}
  2. (ii) for the sum,

    (8) \begin{align}{D^H}\stackrel{\textrm{D}}{=} X + \mathop \sum \limits_{j = 1}^{{L_A}} D_j^H,\end{align}

where ${L_A}|A\sim {\textrm{Poisson}}\left( {{\kappa _A}} \right)$ with ${\kappa _A} = \mathop \smallint \nolimits_{\left( {0, \infty } \right)} h\left( {t, A} \right)\,{\textrm{d}}t$ , and where $\left( {H_j^H} \right)$ and $\left( {D_j^H} \right)$ are i.i.d. copies of ${H^H}$ and ${D^H}$ , respectively. In this work, we always assume the subcriticality condition (in the terminology of branching processes) $\mathbb{E}\left[ {{\kappa _A}} \right] \lt 1$ , in order for clusters to be almost surely finite. This also implies that the expected total number of points in a cluster is given by $\frac{1}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}$ , using a geometric series argument (see Chapter 12 in [8]) written out below. As pointed out in [1] and references therein, the combination of the subcriticality assumption, the fact that the random quantities involved in Equation (8) are nonnegative, and the assumption that $\mathbb{E}\left[ X \right] \lt \infty $ (to be made through the index of regular variation of $X$ in further sections) yields the existence and uniqueness of a nonnegative solution to this distributional equation; for Equation (7), a discussion about the existence of potentially multiple solutions to the higher-order Lindley equation can be found in [2]. Lastly, note that Equation (7) and Equation (8) emphasise the cascade structure of the Hawkes process.
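
For completeness, the geometric-series argument goes as follows: since each event produces on average $\mathbb{E}\left[ {{L_A}} \right] = \mathbb{E}\left[ {{\kappa _A}} \right]$ children with fresh i.i.d. marks, the expected number of events in the $n$ th generation of a generic cluster is ${\left( {\mathbb{E}\left[ {{\kappa _A}} \right]} \right)^n}$ , and hence

$$\mathbb{E}\left[ {{\textrm{number of points in a cluster}}} \right] = \mathop \sum \limits_{n = 0}^\infty {\left( {\mathbb{E}\left[ {{\kappa _A}} \right]} \right)^n} = \frac{1}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}},$$

the series converging precisely because of the subcriticality condition $\mathbb{E}\left[ {{\kappa _A}} \right] \lt 1$ .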

3. A word on regular variation

Throughout this paper, we will assume that the governing random components of our processes of interest are regularly varying; that is, roughly speaking, they exhibit heavy tails. More specifically, we will assume that the random vector ${\textbf{X}}$ is regularly varying. For the renewal Poisson cluster process, this amounts to assuming that ${\textbf{X}} = \left( {X, {K_A}} \right)$ is regularly varying, where $X$ and ${K_A}$ are defined as in Section 2.2; for the Hawkes process, this amounts to assuming that ${\textbf{X}} = \left( {X,{\kappa _A}} \right)$ is regularly varying, where $X$ and ${\kappa _A}$ are defined as in Section 2.3. The exact definition of regular variation varies in the literature depending on the context (see e.g. [15, 25, 49, 50, 53]). Hence, we first recall the definition of regular variation that we use in this text in full generality, borrowing notations from [40]. We let $\mathbb{R}_{\boldsymbol 0}^d = {\mathbb{R}^d}\backslash \left\{ {\boldsymbol 0} \right\},$ with ${\boldsymbol 0} = \left( {0, 0, \ldots , 0} \right)$ . We let $\left| \cdot \right|$ be any norm on ${\mathbb{R}^d}$ (all norms being equivalent). Note that in subsequent sections, our framework is restricted to the case where $d = 2$ .

Definition 1. Let ${\textbf{X}}$ be a random vector with values in ${\mathbb{R}^d}$ . Suppose that $\left| {\textbf{X}} \right|$ is regularly varying with index $\alpha \gt 0$ . Let $\left( {{a_n}} \right)$ be a real sequence satisfying $n\mathbb{P}\left( {\left| {\textbf{X}} \right| \gt {a_n}} \right) \to 1$ as $n \to \infty $ . The random vector ${\textbf{X}}$ (and its distribution) is said to be regularly varying if there exists a non-null Radon measure $\mu $ on the Borel $\sigma $ -field of $\mathbb{R}_0^d$ such that, for every $\mu $ -continuity set $A$ , it holds that

$${\mu _n}( A )\; :\!= n\mathbb{P}\left( {a_n^{ - 1}{\textbf{X}} \in A} \right) \to \mu ( A ) \quad {\textrm{as }}n \to \infty .$$

In Definition 1, two remarks are in order:

  1. (i). the regular variation of $\left| {\textbf{X}} \right|$ is univariate; the standard definition applies, namely, that the distribution of $\left| {\textbf{X}} \right|$ has power-law tails; that is, $\mathbb{P}\left( {\left| {\textbf{X}} \right| \gt x} \right) = {x^{ - \alpha }}L(x)$ for $x \gt 0$ , where $L({\cdot})$ is a slowly varying function;

  2. (ii). the kind of convergence that takes place is vague convergence. The limiting measure possesses various nice properties, among which one can cite homogeneity: For any Borel set $B \subset \mathbb{R}_0^d$ and $t \gt 0$ , it holds that $\mu \left( {tB} \right) = {t^{ - \alpha }}\mu \left( B \right)$ .

Rather than using the sequential form as in Definition 1, it is possible to use an alternative continuous form. Additionally, a distinguished characterisation in the literature is through a limiting decomposition into “spectral” and “radial” parts; see [50].

Proposition 1. (Theorem 6.1 in [50].) A random vector ${\textbf{X}}$ with values in ${\mathbb{R}^d}$ is regularly varying with index $\alpha \gt 0$ and non-null Radon measure $\mu $ on $\mathbb{R}_{\boldsymbol 0}^d$ if and only if one of the following relations holds:

  1. (i) (Continuous form): The random variable $\left| {\textbf{X}} \right|$ is regularly varying with index $\alpha \gt 0$ and

    $$\frac{{\mathbb{P}\left( {{x^{ - 1}}{\textbf{X}} \in \cdot } \right)}}{{\mathbb{P}\left( {\left| {\textbf{X}} \right| \gt x} \right)}}\mathop \to \limits^{\textrm{v}} \mu ({\cdot}),as\ x \to \infty .$$
  2. (ii) (Weak convergence to independent radial/spectral decomposition): the following limit holds

    $$\mathbb{P}\left( {\left. {\left( {\frac{{\left| {\textbf{X}} \right|}}{x},\frac{{\textbf{X}}}{{\left| {\textbf{X}} \right|}}} \right) \in \cdot }\,\right|\,\left| {\textbf{X}} \right| \gt x} \right)\mathop \to \limits^{\textrm{w}} \mathbb{P}\left( {\left( {Y,{\Theta }} \right) \in \cdot } \right)\ as\ x \to \infty ,$$
    where $Y\sim Pareto\!\left( \alpha \right)$ with $\alpha \gt 0$ and is independent of ${\boldsymbol\Theta }$ , which takes values on the unit sphere ${\mathbb{S}^{d - 1}}$ defined by ${\mathbb{S}^{d - 1}} = \left\{ {{\textbf{x}} \in {\mathbb{R}^d}\;:\left| {\textbf{x}} \right| = 1} \right\}$ .

In Proposition 1, the notation $\mathop \to \limits^{\textrm{v}} $ refers to vague convergence: We say that a sequence of measures $\left( {{\mu _n}} \right)$ (with ${\mu _n} \in {M_ + }\left( E \right)$ , the space of nonnegative Radon measures on $\left( {E,\mathcal{E}} \right)$ ) converges vaguely to a measure $\mu \in {M_ + }\left( E \right)$ if for all functions $\,f \in C_K^ + \left( E \right)$ , we have $\mathop \smallint \nolimits_E f(x){\mu _n}\left( {{\textrm{d}}x} \right) \to \mathop \smallint \nolimits_E f(x)\mu \left( {{\textrm{d}}x} \right)$ , where $C_K^ + \left( E \right)$ denotes the set of functions $\,f\;:\; E \to {\mathbb{R}_ + }$ that are continuous with compact support. For more details about vague convergence, see e.g. Chapter 3 in [50]. The notation $\mathop \to \limits^{\textrm{w}} $ refers to the standard notion of weak convergence. The above characterisations have various consequences. The first property is a continuous mapping theorem, first proved in [25] in the framework of metric spaces. We use a simplified version fitting our settings, which we partially reproduce from [40]. See also Proposition 4.3 and Corollary 4.2 in [33].

Proposition 2. (Theorem 2.2.30 in [40]; Proposition 4.3 and Corollary 4.2 in [33].) Let ${\textbf{X}}$ be a random vector in ${\mathbb{R}^d}$ , and suppose it is regularly varying with index $\alpha \gt 0$ and non-null Radon measure $\mu $ on $\mathbb{R}_0^d$ . Let $g({\cdot})\;:\;{\mathbb{R}^d} \to \mathbb{R}$ be a non-zero, continuous, and positively homogeneous map of order $\gamma \gt 0$ ; i.e. for every $t \gt 0$ and ${\textbf{x}} \in {\mathbb{R}^d}$ , $g\left( {t{\textbf{x}}} \right) = {t^\gamma }g\left( {\textbf{x}} \right)$ . Then the following limit relation holds:

$$\frac{{\mathbb{P}\left( {{x^{ - 1}}g\left( {\textbf{X}} \right) \in \cdot } \right)}}{{\mathbb{P}\left( {| {{\textbf{X}}{|^\gamma }} \gt x} \right)}}\mathop \to \limits^{\textrm{v}} \mu \left( {{g^{ - 1}}({\cdot})} \right){\textrm{ as }}x \to \infty .$$

Note that for every $\varepsilon \gt 0$ , $\mu ({g^{ - 1}}(\{ x \in \mathbb{R}\;:\left| x \right| \gt \varepsilon \} )) \lt \infty .$ Moreover, if $\mu \!\left( {{g^{ - 1}}({\cdot})} \right)$ is not the null measure on ${\mathbb{R}_0}$ , then $g\left( {\textbf{X}} \right)$ is regularly varying with index $\alpha /\gamma $ and with non-null Radon measure

$$\frac{{\mu \left( {{g^{ - 1}}({\cdot})} \right)}}{{\mu ({g^{ - 1}}(\{ x \in \mathbb{R}\;:\left| x \right| \gt 1\} ))}}.$$

Example 1. It is easily seen that the map defined by the projection on any coordinate of ${\textbf{X}}$ is a continuous mapping satisfying the assumptions of Proposition 2 with $\gamma = 1$ . If $d = 2$ , ${\textbf{X}} = \left( {{X_1},{X_2}} \right)$ and $g\left( {\textbf{X}} \right)\;:\!=\; {X_1}$ , then by the homogeneity property of the limiting Radon measure $\mu $ , as long as

$$\mu (\{ \left( {{x_1},{x_2}} \right) \in \mathbb{R}_0^2\;:\;{x_1} \gt 1\} ) \gt 0,$$

one obtains regular variation of ${X_1}$ with index $\alpha \gt 0$ .
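
As a concrete instance (a standard illustrative example, not taken from this paper): if ${X_1}$ and ${X_2}$ are i.i.d. and nonnegative with $\mathbb{P}\left( {{X_1} \gt x} \right) = {x^{ - \alpha }}$ for $x \geqslant 1$ , then, using the max-norm and choosing $\left( {{a_n}} \right)$ such that $n\mathbb{P}\left( {\left| {\textbf{X}} \right| \gt {a_n}} \right) \to 1$ , the limiting measure $\mu $ concentrates on the two coordinate axes and

$$\mu (\{ \left( {{x_1},{x_2}} \right) \in \mathbb{R}_0^2\;:\;{x_1} \gt 1\} ) = \mathop {{\textrm{lim}}}\limits_{n \to \infty } n\mathbb{P}\left( {{X_1} \gt {a_n}} \right) = \frac{1}{2} \gt 0,$$

since $\mathbb{P}\left( {\left| {\textbf{X}} \right| \gt x} \right) \sim 2{x^{ - \alpha }}$ while $\mathbb{P}\left( {{X_1} \gt x, {X_2} \gt x} \right) = {x^{ - 2\alpha }}$ is of smaller order; Proposition 2 then indeed returns the regular variation of the projection ${X_1}$ with index $\alpha $ .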

A second useful result, due to [53], again in the setting of metric spaces, which we simplify here, shows that one can actually replace the norm $\left| \cdot \right|$ by any modulus. A modulus, as defined in Definition 2.2 of [53], is a function $\rho\; :\; {\mathbb{R}^d} \to \left[ {0,\infty } \right)$ that is non-zero, continuous, and positively homogeneous of order 1. Proposition 3.1 in [53] then ensures the following:

Proposition 3. (Proposition 3.1 in [53].) A random vector ${\textbf{X}}$ with values in ${\mathbb{R}^d}$ is regularly varying with index $\alpha \gt 0$ and non-null Radon measure $\mu $ on $\mathbb{R}_0^d$ if and only if there exist a modulus $\rho $ such that $\rho \left( {\textbf{X}} \right)$ is regularly varying with index $\alpha \gt 0$ and a random vector ${\Theta }$ taking values on ${\mathbb{S}^{d - 1}}\;:\!=\; \left\{ {{\textbf{x}} \in {\mathbb{R}^d}\;:\;\rho \left( {\textbf{x}} \right) = 1} \right\}$ such that

$$\mathbb{P}\left( {\left. {\frac{{\textbf{X}}}{{\rho \left( {\textbf{X}} \right)}} \in \cdot } \right| \rho \left( {\textbf{X}} \right) \gt x} \right)\mathop \to \limits^{\textrm{w}} \mathbb{P}\left( {{\Theta } \in \cdot } \right){\textrm{ as }}x \to \infty .$$

Finally, in subsequent sections we shall also use another characterisation via the regular variation of linear combinations, proven in [3]. We denote the inner product in ${\mathbb{R}^d}$ by $\langle \cdot , \cdot \rangle $ .

Proposition 4. (Proposition 1.1 in [3].) A random vector ${\textbf{X}}$ with values in ${\mathbb{R}^d}$ is regularly varying with noninteger index $\alpha \gt 0$ if and only if there exists a slowly varying function $L({\cdot})$ such that, for all ${\textbf{t}} \in {\mathbb{R}^d}$ ,

$$\mathop {{\textrm{lim}}}\limits_{x \to \infty } \frac{{\mathbb{P}\left( \langle {{\textbf{t}},{\textbf{X}}\rangle \gt \;x} \right)}}{{{x^{ - \alpha }}L(x)}} =\!: w\left( {\textbf{t}} \right)\ {\textrm{exists}},$$

for some function $w({\cdot})$ and there exists one ${{\textbf{t}}_0} \ne 0$ such that $w\left( {{{\textbf{t}}_0}} \right) \gt 0.$

The above result states that a random vector ${\textbf{X}}$ is regularly varying with index $\alpha \gt 0$ if and only if all linear combinations of its components are regularly varying with the same index $\alpha \gt 0$ . Note that it is not strictly necessary for $\alpha $ in Proposition 4 to be noninteger for an equivalence of this kind to hold; however, when $\alpha $ is an integer, there are some caveats that we avoid in the results of the upcoming sections (e.g., with $\alpha $ noninteger, we do not have to consider ${\textbf{t}} \in {\mathbb{R}^d}$ but only ${\textbf{t}} \in \mathbb{R}_ + ^d$ ); see [3].

Finally, the last result of great importance in showing the transfer of regular variation in the subsequent sections is Karamata’s Tauberian Theorem, which can be found as Theorem 8.1.6 in [7]. Let $X$ be a random variable, denote its associated Laplace-Stieltjes transform by ${\varphi _X}( s )\;:\!=\; \mathbb{E}\left[ {{e^{ - sX}}} \right]$ for $s \gt 0$ , and denote its $n$ th derivative by $\varphi _X^{\left( n \right)}( s ) = \mathbb{E}\left[ {{{(\!-\!X)}^n}{e^{ - sX}}} \right]$ . Let ${\Gamma }({\cdot})$ denote the Gamma function.

Theorem 1. (Karamata’s Tauberian Theorem, Theorem 8.1.6 in [7].) The following statements are equivalent:

  1. (i) $X$ is regularly varying with noninteger index $\alpha \gt 0$ and slowly varying function ${L_X}({\cdot})$ , i.e.

    $$\mathbb{P}\,( {X \gt x} ) \sim {x^{ - \alpha }}{L_X}(x){\textrm{ as }}x \to \infty .$$
  2. (ii) For a noninteger index $\alpha \gt 0$ ,

    $$\varphi _X^{\left(\lceil \alpha \rceil\right)}( s ) \sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}{L_X}\left( {{1}/{s}} \right){\textrm{ as }}s \to {0^ + },$$
    for ${L_X}({\cdot})$ a slowly varying function, where ${C_\alpha }\; :\!=\; - {\Gamma }\left( {\alpha + 1} \right){\Gamma }\left( {1 - \alpha } \right)/{\Gamma }\left( {\alpha - \left\lfloor \alpha \right\rfloor } \right)\!.$

Remark 2. Note that when $X$ is regularly varying with index $\alpha \in \left( {n,n + 1} \right)$ , the $\left( {n + 1} \right)$ th moment does not exist. Observe that the above trivially implies that when $\alpha \in \left( {n,n + 1} \right)$ , $|\varphi _X^{\left( {n + 1} \right)}( s )| = |\varphi _X^{\left( {\left\lceil \alpha \right\rceil } \right)}( s )| \to \infty $ as $s \to {0^ + }$ , a property we will use repeatedly in subsequent sections.
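
As a numerical sanity check of Theorem 1, one can compare the two sides for a concrete distribution. The sketch below takes $X$ standard Pareto with $\alpha = 1.5$ (so $\lceil \alpha \rceil = 2$ , $\lfloor \alpha \rfloor = 1$ and ${L_X} \equiv 1$ ) and evaluates $\varphi _X^{\left( 2 \right)}( s ) = \mathbb{E}\left[ {{X^2}{e^{ - sX}}} \right]$ by numerical integration; the choice of distribution and of the evaluation points is made for the illustration only, and the ratio of the two columns tends to 1 as $s \to {0^ + }$ .

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

alpha = 1.5                                  # noninteger tail index in (1, 2)
C_alpha = -gamma(alpha + 1) * gamma(1 - alpha) / gamma(alpha - np.floor(alpha))

def phi2(s):
    """phi_X^{(2)}(s) = E[X^2 e^{-sX}] for X standard Pareto(alpha),
    with density alpha * x^{-alpha-1} on [1, infinity)."""
    integrand = lambda x: x**2 * np.exp(-s * x) * alpha * x**(-alpha - 1.0)
    value, _ = quad(integrand, 1.0, np.inf, limit=200)
    return value

for s in [1e-2, 1e-3, 1e-4]:
    prediction = C_alpha * s**(alpha - np.ceil(alpha))   # Tauberian side: C_alpha * s^{alpha - 2}
    print(f"s={s:.0e}   exact={phi2(s):10.2f}   Karamata={prediction:10.2f}")
```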

4. Tail asymptotics of maximum functional in renewal Poisson cluster process

We now prove a single big-jump principle for the tail asymptotics of the distribution of the maximum functional of a generic cluster in the settings of the renewal Poisson cluster process. As mentioned in Remark 1, the conclusions reached for this process are, of course, valid for the mixed binomial Poisson cluster process.

Proposition 5. Suppose the vector $\left( {X,{K_A}} \right)$ in Equation (5) is regularly varying with index $\alpha \gt 1$ and non-null Radon measure $\mu $ . Then,

$$\mathbb{P}\left( {{H^R} \gt x} \right) \sim \left( {1 + \mathbb{E}\left[ {{K_A}} \right]} \right)\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty .$$

Moreover, if $\mu (\{ \left( {{x_1},{x_2}} \right) \in \mathbb{R}_{ + ,\textbf{0}}^2\;:\; {x_1} \gt 1\} ) \gt 0$ , then ${H^R}$ is regularly varying with index $\alpha \gt 1.\;$

Proof of Proposition 5. The proof can be found in Appendix A. It uses a classical approach via conditioning on ${K_A}$ and Taylor expansions and is given for completeness.

Remark 3. In the proof of Proposition 5, one only needs $X$ to be regularly varying for ${H^R}$ to be regularly varying. However, to keep the same settings in terms of regular variation as for the upcoming results, we make the assumption that $\left( {X,{K_A}} \right)$ is regularly varying, and regular variation of $X$ follows by considering the consequences of this assumption contained in Example 1. The case where $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\left( {{K_A} \gt x} \right)} \right)$ , $x \to \infty $ , $X$ regularly varying and ${K_A}$ a stopping time with respect to ${({A_j})_{j \geqslant 0}}$ is treated in Proposition 3.1 and Corollary 4.2 of [5]. It is proved there that ${H^R}$ is also regularly varying but, more generally, that ${H^R}$ falls in the same MDA as $X$ . What we propose in Proposition 5 is merely a refinement for the Fréchet MDA, describing explicitly the tail of ${H^R}$ when $\mathbb{P}\left( {{K_A} \gt x} \right) = \mathcal{O}\!\left( {\mathbb{P}\,( {X \gt x} )} \right)$ , $x \to \infty $ , and ${K_A}$ depending only on ${X_0}$ .
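
A quick Monte Carlo check of the single big-jump asymptotic of Proposition 5 is sketched below, under the simplifying (and purely illustrative) assumptions that $X$ is standard Pareto with $\alpha = 1.5$ and that ${K_A}$ is Poisson and independent of $X$ , so that the vector $\left( {X,{K_A}} \right)$ is regularly varying with tail driven by $X$ .

```python
import numpy as np

rng = np.random.default_rng(seed=4)
alpha, mean_K, n_sim = 1.5, 2.0, 2_000_000

X = rng.pareto(alpha, n_sim) + 1.0            # immigrant transformed marks, P(X > x) = x^{-alpha}
K = rng.poisson(mean_K, n_sim)                # offspring counts (independent of X in this sketch)

# Offspring maximum: if M is the max of K i.i.d. Pareto(alpha) marks, its cdf is F^K,
# so M can be drawn by inverting F at U^{1/K} with U uniform on (0, 1).
U = rng.uniform(size=n_sim)
off_max = np.where(K > 0, (1.0 - U ** (1.0 / np.maximum(K, 1))) ** (-1.0 / alpha), 0.0)
H = np.maximum(X, off_max)                    # H^R = X v max(X_1, ..., X_{K_A}), Equation (5)

for x in [10.0, 30.0, 100.0]:
    empirical = (H > x).mean()
    predicted = (1.0 + mean_K) * x ** (-alpha)   # (1 + E[K_A]) * P(X > x)
    print(f"x={x:5.0f}   P(H^R > x) ~ {empirical:.2e}   prediction {predicted:.2e}")
```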

5. Tail asymptotics of the sum functional in renewal Poisson cluster process

We now prove a result concerning the sum functional of a generic cluster in the settings of the renewal Poisson cluster process. Again, this extends easily to the mixed-binomial Poisson cluster process.

Proposition 6. Suppose the vector $\left( {X,{K_A}} \right)$ in Equation (6) is regularly varying with noninteger index $\alpha \gt 1.\;$ Then ${D^R}$ is regularly varying with the same index $\alpha $ . More specifically,

$$\mathbb{P}\left( {{D^R} \gt x} \right)\sim \mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right) + \mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty .$$

Proof of Proposition 6. First note that the Laplace-Stieltjes transform of ${D^R}$ in Equation (6) is given by

\begin{align*}{\varphi _{{D^R}}}( s ) \;:\!=\; \mathbb{E}\left[ {{e^{ - sX - s\mathop \sum \nolimits_{j = 1}^{{K_A}} {X_j}}}} \right] & = \mathbb{E}\left[ {\mathbb{E}\left[ {{e^{ - sX}}{e^{ - s\mathop \sum \nolimits_{j = 1}^{{K_A}} {X_j}}} | A} \right]} \right]\\& = \mathbb{E}\left[ {{e^{ - sX}}{e^{{K_A}{\textrm{ log }}\mathbb{E}\left[ {{e^{ - sX}}} \right]}}} \right] = \!:\; \mathbb{E}\left[ {{e^{ - sX + {K_A}{\textrm{ log }}{\varphi _X}( s )}}} \right]\end{align*}

upon recalling that $X \;:\!=\; f( A )$ and ${K_A}$ are independent conditionally on the ancestral mark $A$ and that ${({X_j})_{j \geqslant 1}}$ are i.i.d. and independent of $A$ . We first show that for any noninteger $\alpha \in \left( {n,n + 1} \right)$ , $n \in \mathbb{N}$ ,

$$\varphi _{{D^R}}^{\left( {n + 1} \right)}( s ) \sim \varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s ){\textrm{ as }}s \to {0^ + },$$

where ${\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}}( s )\;:\!=\; \mathbb{E}\left[ {{e^{ - sX - s\mathbb{E}\left[ X \right]{K_A}}}} \right]$ and where $\varphi _X^{\left( n \right)}$ is the $n$ th derivative of the Laplace-Stieltjes transform of a random variable $X$ .

We have to consider the following expression:

(9) \begin{align}\Bigg|\varphi _{{D^R}}^{\left( {n + 1} \right)}( s ) - & \left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right)\Bigg| \notag\\= & \Bigg|\mathbb{E}\left[ {{{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n + 1}}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\notag\\ &+ \mathbb{E}\left[ {{K_A}\frac{{\varphi _X^{\left( {n + 1} \right)}( s )}}{{{\varphi _X}( s )}}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\notag\\ &- \mathbb{E}\left[ {{{( \!-\! X - \mathbb{E}\left[ X \right]{K_A})}^{n + 1}}{e^{ - sX - s\mathbb{E}\left[ X \right]{K_A}}}} \right]\notag\\ & - \mathbb{E}\left[ {{K_A}} \right]\mathbb{E}\left[ {{{(\!-\!X)}^{n + 1}}{e^{ - sX}}} \right] + {C_{n + 1}}\Bigg|\notag\\ =\!: & \left| {{B_1} + {B_2} - {B_3} - {B_4} + {C_{n + 1}}} \right|.\end{align}

Consider first the difference $\left| {{B_1} - {B_3}} \right|$ . The following set of inequalities, directly due to the convexity of the function ${\textrm{log}}\;{\varphi _X}({\cdot})$ , will prove useful in controlling the above difference: For $s \gt 0$ , we have

(10) \begin{align} - s\mathbb{E}\left[ X \right]K \leqslant K\;{\textrm{log}}{\varphi _X}( s ) \leqslant sK\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} \leqslant 0 \leqslant - sK\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} \leqslant - K\;{\textrm{log}}{\varphi _X}( s ) \leqslant s\mathbb{E}\left[ X \right]K.\end{align}
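
For completeness, a short justification of Equation (10) (with $K \geqslant 0$ ): the leftmost inequality is Jensen’s inequality applied to the convex exponential, the next one uses the convexity of ${\textrm{log}}\,{\varphi _X}$ together with ${\textrm{log}}\,{\varphi _X}\left( 0 \right) = 0$ , and the third follows since $\varphi _X^{\left( 1 \right)}( s ) = \mathbb{E}\left[ { - X{e^{ - sX}}} \right] \leqslant 0$ ; the right half of the chain is the mirror image obtained by changing signs. Indeed,

$${\textrm{log}}\,{\varphi _X}( s ) = {\textrm{log}}\,\mathbb{E}\left[ {{e^{ - sX}}} \right] \geqslant \mathbb{E}\left[ { - sX} \right] = - s\mathbb{E}\left[ X \right]\quad {\textrm{and}}\quad 0 = {\textrm{log}}\,{\varphi _X}\left( 0 \right) \geqslant {\textrm{log}}\,{\varphi _X}( s ) - s\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}.$$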

Using the basic decomposition $\left( {{a^{n + 1}} - {b^{n + 1}}} \right) = \left( {a - b} \right)\mathop \sum \nolimits_{k = 0}^n \,{a^{n - k}}{b^k}$ as well as Equation (10) yields

\begin{align*}\big|{B_1} - {B_3}\big| \leqslant\; & \bigg|\left( {\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right]} \right)\\& \cdot \mathbb{E}\left[ {{K_A}\Bigg(\mathop \sum \limits_{k = 0}^n {{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - k}}\left( { - X - \mathbb{E}\left[ X \right]{K_A}}\right)^k \Bigg){e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\bigg|\\ \leqslant\; & \Bigg|\left( {\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right]} \right)\Bigg(\mathbb{E}\left[ {{K_A}{{\bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\bigg)}^n}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\\ & + \mathbb{E}\left[ {{K_A}{{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})}^n}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\\ & + \mathbb{E}\Bigg[ {{K_A}\Bigg(\mathop \sum \limits_{k = 1}^{n - 1} {{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - k}}\left( - X - \mathbb{E}\left[ X \right]{K_{A}}\right)^{k} \Bigg){e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \Bigg]\Bigg)\Bigg|\\ =\! : & \left| {G\!\left( {{B_{11}} + {B_{12}} + {B_{13}}} \right)} \right|,\end{align*}

where $G\;:\!=\; \frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right].$

We then treat each term separately. First consider ${B_{11}}$ . Using the binomial theorem, we have that

$${\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^n} = \mathop \sum \limits_{j = 0}^n \left( {\begin{array}{*{20}{l}}n\\j\end{array}} \right){(\!-\!X)^j}{\Bigg({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^{n - j}}.$$

Using the linearity of expectations, we separate the cases. Let $j = 0$ . Because $G \gt 0$ and ${K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} \leqslant 0$ , using Equation (10) and the basic inequality $x{e^{ - x}} \leqslant {e^{ - 1}}$ , we get:

(11) \begin{align} &\left| {G\mathbb{E}\left[ {{K_A}{{\Bigg({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^n}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]} \right| \notag\\ \leqslant\; & \frac{G}{s}\mathbb{E}\left[ {K_A^{n - 1}\Bigg|{{\Bigg(\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg|\left( { - {K_A}{\textrm{log}}{\varphi _X}( s )} \right){e^{{K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]\notag\\ \leqslant\; & \frac{G}{s}\mathbb{E}\left[ {K_A^{n - 1}\Bigg|{{\Bigg(\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg|{e^{ - 1}}} \right].\end{align}

In order to control the upper bound, we need to control $G/s$ , and we have to distinguish two cases:

Case $\alpha \in (1, 2)$ : We have the identities

\begin{align*}\frac{G}{s} = \frac{{\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right]}}{s} & = \frac{{\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} - \varphi _X^{\left( 1 \right)}( s ) + \varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s}\\& = \varphi _X^{\left( 1 \right)}( s )\left( {\frac{{\frac{1}{{{\varphi _X}( s )}} - 1}}{s}} \right) + \frac{{\varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s}.\end{align*}

The limit as $s \to {0^ + }$ of $\frac{{\frac{1}{{{\varphi _X}( s )}} - 1}}{s}$ is the derivative of $1/{\varphi _X}( s )$ at $s = 0$ and hence is finite; it follows that

$$\varphi _X^{\left( 1 \right)}( s )\left( {\frac{{\frac{1}{{{\varphi _X}( s )}} - 1}}{s}} \right) = \mathcal{O}\!\left( {\varphi _X^{\left( 1 \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Now note that for the second term, if $X$ has negligible tails with respect to $X + \mathbb{E}\left[ X \right]{K_A}$ , then by Lemma 2 it follows that

$$\frac{{\varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s} = o\!\left( {\varphi _{{\textit{X}} + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

If $X$ is regularly varying with the same index as $X + \mathbb{E}\left[ X \right]{K_A}$ , then clearly, by adapting the proof of Lemma 2, it follows that

$$\frac{{\varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s} = \mathcal{O}\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

By a dominated convergence argument, the upper bound in Equation (11) is such that

$$\mathbb{E}\left[ {K_A^{n - 1}\Bigg|{{\Bigg(\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg|\!\left( { - {K_A}{\textrm{log}}{\varphi _X}( s )} \right){e^{{K_A}{\textrm{log}}{\varphi _X}( s )}}} \right] = o\!\left( 1 \right){\textrm{ as }}s \to {0^ + },$$

and combining this with the arguments just given proves that, regardless of whether $X$ is lighter than or as heavy as the modulus $X + \mathbb{E}\left[ X \right]{K_A}$ ,

$${B_{11}} = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Case $\alpha \in \left( {n,n + 1} \right)$ , with $n \in \mathbb{N}\backslash \left\{ 1 \right\}$ : Using the definition of the derivative, as $s \to {0^ + }$ ,

$$\mathop {{\textrm{lim}}}\limits_{s \to {0^ + }} \frac{G}{s} = \mathop {{\textrm{lim}}}\limits_{s \to {0^ + }} \frac{{\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right]}}{s}\quad {\textrm{and}}\quad \mathop {{\textrm{lim}}}\limits_{s \to {0^ + }} \frac{{\frac{{\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}} + \mathbb{E}\left[ X \right]}}{s}}}{{\frac{{\varphi _X^{\left( 2 \right)}( s )}}{{{\varphi _X}( s )}} - \frac{{{{(\varphi _X^{\left( 1 \right)}( s ))}^2}}}{{{{({\varphi _X}( s ))}^2}}}}} = 1,$$

and for this range of $\alpha \in \left( {n,n + 1} \right)\!,$ with $n \in \mathbb{N}\backslash \left\{ 1 \right\}$ ,

$$\frac{{\varphi _X^{\left( 2 \right)}( s )}}{{{\varphi _X}( s )}} - \frac{{{{(\varphi _X^{\left( 1 \right)}( s ))}^2}}}{{{{({\varphi _X}( s ))}^2}}} \lt \infty ,$$

which is finite since $\alpha \in \left( {n,n + 1} \right)$ for $n \geqslant 2$ . Because $\varphi _X^{\left( 1 \right)}( s )$ is finite, Equation (11) is finite. Upon applying Theorem 1, it follows that as $s \to {0^ + }$ ,

$$\left| {G\mathbb{E}\left[ {{K_A}{{\Bigg({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^n}{e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right)\!.$$

The treatment of terms where $j \gt 0$ is easier: It is sufficient to note that whenever $X$ appears in the product, one can always “lose a power”: Suppose without loss of generality that $j = 1$ in the decomposition due to the binomial theorem above; we are left to consider the following term

$$\left| {G\mathbb{E}\left[ {{K_A}\Bigg\{ \left( {\begin{array}{*{20}{l}}n\\1\end{array}} \right){{(\!-\!X)}^1}{{\Bigg({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg\} {e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]} \right|.$$

This is smaller than

$$\frac{G}{s}\mathbb{E}\left[ {\left( {\begin{array}{*{20}{l}}n\\1\end{array}} \right)K_A^n\Bigg|{{\Bigg(\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg|\left( {sX} \right){e^{ - sX}}} \right] \leqslant \frac{G}{s}\mathbb{E}\left[ {\left( {\begin{array}{*{20}{l}}n\\1\end{array}} \right)K_A^n\Bigg|{{\Bigg(\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg|{e^{ - 1}}} \right],$$

and by similar reasoning as above, the expectation as well as the whole of the upper bound are finite. All in all, this shows that as $s \to {0^ + }$ ,

\begin{align*}&\left| {G\mathbb{E}\left[ {{K_A}\Bigg\{ \left( {\begin{array}{*{20}{l}}n\\1\end{array}} \right){{(\!-\!X)}^1}{{\Bigg({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - 1}}\Bigg\} {e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right]} \right|\\&\qquad\qquad\qquad\qquad = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right)\!.\end{align*}

Upon applying the same arguments on all terms making up ${B_{11}}$ , using at times Hölder’s inequality to justify that expectations of the form $\mathbb{E}\left[ {{K_A}{{(\!-\!X)}^{j - 1}}{{({K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}})}^{n - j}}} \right]$ for $2 \leqslant j \leqslant n - 1$ are finite, and noting that one $X$ is factorised as in the reasoning above, this is sufficient to show that

$$\left| {G{B_{11}}} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

A completely analogous approach—omitted for brevity—shows that

$$\left| {G{B_{12}}} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + },$$

replacing only the appeal to Equation (10) by the fact that we can always find an $s \gt 0$ small enough such that $s\mathbb{E}\left[ X \right] \leqslant - 2{\textrm{log}}{\varphi _X}( s )$ , which holds because of the following reasoning: Since ${\varphi _X}( s )$ is differentiable at 0, by the integrability of $X$ , one obtains

$$\mathop {{\textrm{lim}}}\limits_{s \to {0^ + }} - \frac{{{\textrm{log}}{\varphi _X}( s )}}{s} = - \frac{{\varphi _X^{\left( 1 \right)}\left( 0 \right)}}{{{\varphi _X}\left( 0 \right)}} = - \mathbb{E}\left[ { - X} \right] = \mathbb{E}\left[ X \right].$$

By a similar argument, $ - \frac{{2{\textrm{log}}{\varphi _X}( s )}}{s} \to 2\mathbb{E}\left[ X \right]$ as $s \to {0^ + }$ . Hence, there exists an $s \gt 0$ small enough such that

$$s\mathbb{E}\left[ X \right] \leqslant - 2{\textrm{log}}{\varphi _X}( s )\!.$$

Finally, consider $\left| {G{B_{13}}} \right|$ . The sum given can be factorised as

\begin{align*} &\mathop \sum \limits_{k = 1}^{n - 1} {\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^{n - k}}{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})^k}\\&\qquad\qquad = \left( { - X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}} \right)\mathop \sum \limits_{k = 1}^{n - 1} {\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^{n - 1 - k}}{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})^k}.\end{align*}

Now this yields, upon using Equation (10) and the basic inequality $x{e^{ - x}} \leqslant {e^{ - 1}}$ in the last step,

\begin{align*}\left| {G{B_{13}}} \right| & = \frac{G}{s}\mathbb{E}\Bigg[{K_A}\left| {\left( { - sX + {K_A}s\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}} \right)} \right|{e^{ - \left( {sX - {K_A}{\textrm{log}}{\varphi _X}( s )} \right)}}\\& \quad \cdot \mathop \sum \limits_{k = 1}^{n - 1} \Bigg|{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^{n - k - 1}}{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})^k}\Bigg|\Bigg]\\& \leqslant \frac{G}{s}\mathbb{E}\Bigg[{K_A}\left( {sX - {K_A}{\textrm{log}}{\varphi _X}( s )} \right){e^{ - \left( {sX - {K_A}{\textrm{log}}{\varphi _X}( s )} \right)}}\\& \quad \cdot \mathop \sum \limits_{k = 1}^{n - 1} \Bigg|{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)^{n - k - 1}}{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})^k}\Bigg|\Bigg]\\& \leqslant \frac{G}{s}\mathbb{E}\left[ {{K_A}{e^{ - 1}}\mathop \sum \limits_{k = 1}^{n - 1} \Bigg|{{\Bigg(\!-\!X + {K_A}\frac{{\varphi _X^{\left( 1 \right)}( s )}}{{{\varphi _X}( s )}}\Bigg)}^{n - k - 1}}{{(\!-\!X - \mathbb{E}\left[ X \right]{K_A})}^k}\Bigg|} \right].\end{align*}

The highest-order term in the product of the summands above is of power $n$ : again, since $\alpha \in \left( {n,n + 1} \right)$ , Hölder’s inequality shows that the expectation is finite. Overall, this shows once again that

$$\left| {G{B_{13}}} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Collecting all of these bounds, this shows that

$$\left| {{B_1} - {B_3}} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Consider the difference $\left| {{B_2} - {B_4}} \right|,$ and note that one can write it as

\begin{align*}\big| {B_2} - {B_4}\big| & = \Bigg|\frac{{\varphi _X^{\left( {n + 1} \right)}( s )}}{{{\varphi _X}( s )}}\mathbb{E}\left[ {{K_A}\left( {1 - {e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right)} \right] + \varphi _X^{\left( {n + 1} \right)}( s )\left( {\frac{1}{{{\varphi _X}( s )}} - 1} \right)\mathbb{E}\left[ {{K_A}} \right] \Bigg|\\& =\! :\left| {{B_{21}} + {B_{22}}} \right|.\end{align*}

Now by a dominated convergence argument as before, one has that $\mathbb{E}\left[ {{K_A}\left( {1 - {e^{ - sX + {K_A}{\textrm{log}}{\varphi _X}( s )}}} \right)} \right] = o\!\left( 1 \right)$ as $s \to {0^ + }$ and, hence, that

$$\left| {{B_{21}}} \right| = o\!\left( {\varphi _X^{\left( {n + 1} \right)}( s )} \right) = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Similarly, by the integrability of ${K_A}$ ,

$$\left| {{B_{22}}} \right| = o\!\left( {\varphi _X^{\left( {n + 1} \right)}( s )} \right) = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Collecting the above, this implies that

$$\left| {{B_2} - {B_4}} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Lastly, the terms making up ${C_{n + 1}}$ when $\alpha \in \left( {n,n + 1} \right)$ are all the terms (and cross-products) of order strictly lower than $n + 1$ and, consequently, are finite. It follows by Theorem 1 that

$${C_{n + 1}} = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

All in all, this essentially shows that as $s \to {0^ + }$ ,

$$\left| {\varphi _{{D^R}}^{\left( {n + 1} \right)}( s ) - \left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right)} \right| = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s )} \right)$$

and hence that

$$\varphi _{{D^R}}^{\left( {n + 1} \right)}( s )\sim \varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( {n + 1} \right)}( s ){\textrm{ as }}s \to {0^ + },$$

and this equivalence holds for any $\alpha \in \left( {n,n + 1} \right)$ , $n \in \mathbb{N}$ .

Because the modulus $X + \mathbb{E}\left[ X \right]{K_A}$ is regularly varying whenever $\left( {X,{K_A}} \right)$ is—see Remark 4—Karamata’s Theorem (1) implies that

$$\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s ) \sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right){\textrm{ as }}s \to {0^ + }$$

for some slowly varying function ${L_{X + \mathbb{E}\left[ X \right]{K_A}}}({\cdot})$ . Then suppose first that $X$ is not regularly varying and has negligible tails with respect to the modulus $X + \mathbb{E}\left[ X \right]{K_A}$ . Then Lemma 1 yields that

$$\varphi _X^{\left( {n + 1} \right)}( s ) = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + },$$

and hence, this implies that

$$\varphi _{{D^R}}^{\left( {n + 1} \right)}( s ) \sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right)\left( {1 + o\!\left( 1 \right)} \right){\textrm{ as }}s \to {0^ + },$$

which yields by re-applying Karamata’s Tauberian Theorem (1) that

$$\mathbb{P}\left( {{D^R} \gt x} \right) \sim {x^{ - \alpha }}{L_{{D^R}}}(x) \sim {x^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}(x)\left( {1 + o\!\left( 1 \right)} \right){\textrm{ as }}x \to \infty .$$

In the case where $X$ is regularly varying, by Example 1, and because $X$ has the same index $\alpha \gt 1$ as the modulus $X + \mathbb{E}\left[ X \right]{K_A}$ , if the limiting Radon measure is non-null on the correct subspace, Karamata’s Tauberian Theorem (1) yields

$$\varphi _X^{\left( {n + 1} \right)}( s ) \sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}{L_X}\left( {{1}/{s}} \right){\textrm{ as }}s \to {0^ + }.$$

Then for each $n \in \mathbb{N}$ ,

$$\varphi _{{D^R}}^{\left( {n + 1} \right)}( s ) \sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}\left( {{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right) + \mathbb{E}\left[ {{K_A}} \right]{L_X}\left( {{1}/{s}} \right)} \right){\textrm{ as }}s \to {0^ + },$$

and because the sum of two slowly varying functions is still a slowly varying function, ${L_{{D^R}}}({\cdot}) \;:\!=\; {L_{X + \mathbb{E}\left[ X \right]{K_A}}}({\cdot}) + \mathbb{E}\left[ {{K_A}} \right]{L_X}({\cdot})$ is slowly varying. Again applying Karamata’s Tauberian Theorem (1) in the other direction yields

$$\mathbb{P}\left( {{D^R} \gt x} \right) \sim {x^{ - \alpha }}{L_{{D^R}}}(x) \sim {x^{ - \alpha }}\left( {{L_{X + \mathbb{E}\left[ X \right]{K_A}}}(x) + \mathbb{E}\left[ {{K_A}} \right]{L_X}(x)} \right){\textrm{ as }}x \to \infty ,$$

which yields the desired result, and the proof is complete.

Remark 4. Note that the assumption that the random vector $\left( {X,{K_A}} \right)$ is regularly varying with index $\alpha \gt 1$ ensures, by Proposition 3, that $X + \mathbb{E}\left[ X \right]{K_A}$ is regularly varying with the same index $\alpha \gt 1.\;$ Indeed, it can be easily seen that $\rho \left( {X,{K_A}} \right) \;:\!=\; X + \mathbb{E}\left[ X \right]{K_A}$ is a modulus (in the sense made precise in Section 3), provided that $\mathbb{E}\left[ X \right] \ne 0$ , which is a natural assumption to make since $X$ is taken to be nonnegative.

Remark 5. Note that the findings of Proposition 6 are consistent with the findings of [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky19]: In particular, if $X$ and ${K_A}$ are independent—which is the setting in the aforementioned paper—or even if $X$ and ${K_A}$ are asymptotically independent (i.e. if $\mathbb{P}\left( {X \gt x,\mathbb{E}\left[ X \right]{K_A} \gt x} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )\mathbb{P}\left( {\mathbb{E}\left[ X \right]{K_A} \gt x} \right)} \right)$ as $x \to \infty $ ), then the proposed asymptotics of Proposition 6 encompass three cases, depending on the relation between $X$ and ${K_A}$ :

  1. (i) when $\mathbb{P}\left( {{K_A} \gt x} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty $ , then $\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right)\;\sim \;\mathbb{P}\,( {X \gt x} )$ as $x \to \infty .$ From Proposition 6, this means that

    $$\mathbb{P}\left( {{D^R} \gt x} \right)\;\sim \;\mathbb{P}\,( {X \gt x} ) + \mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} ) \sim \left( {\mathbb{E}\left[ {{K_A}} \right] + 1} \right)\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty ,$$
    which is equivalent to Proposition 4.1 in [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky19];
  2. (ii) when $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\left( {{K_A} \gt x} \right)} \right){\textrm{ as }}x \to \infty $ , then $\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right)\;\sim \;{(\mathbb{E}\left[ X \right])^\alpha }\mathbb{P}\left( {{K_A} \gt x} \right)$ as $x \to \infty $ . From Proposition 6 this means that

    $$\mathbb{P}\left( {{D^R} \gt x} \right)\;\sim \;{(\mathbb{E}\left[ X \right])^\alpha }\mathbb{P}\left( {{K_A} \gt x} \right){\textrm{ as }}x \to \infty ,$$
    which is equivalent to Proposition 4.3 in [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky19];
  3. (iii) lastly, when $\mathbb{P}\left( {{K_A} \gt x} \right) \sim c\mathbb{P}\,( {X \gt x} )$ as $x \to \infty $ for some $c \gt 0$ , then $\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right)\;\sim \;\mathbb{P}\,( {X \gt x} ) + c{(\mathbb{E}\left[ X \right])^\alpha }\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty $ . From Proposition 6, this means that

    $$\mathbb{P}\left( {{D^R} \gt x} \right)\;\sim \;\left( {\mathbb{E}\left[ {{K_A}} \right] + 1 + c{{(\mathbb{E}\left[ X \right])}^{\alpha }}} \right)\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty ,$$
    which is equivalent to Lemma 4.7 in [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky19].

Our approach offers a more flexible framework for dependence among the governing components of the clusters, namely $X$ and ${K_A}$ . In this direction, and more closely related to our results, [Reference Olvera-Cravioto47] shows in a recent contribution that

$$\mathbb{P}\left( {{D^R} \gt x} \right) \sim {1_{\{ \mathbb{E}\left[ {{K_A}} \right] \lt \;\infty \} }}\mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} ) + \mathbb{P}\left( {\mathbb{E}\left[ X \right]{K_A} \gt x} \right){\textrm{ as }}x \to \infty ,$$

in the regime where $\left( {X,{K_A}} \right)$ are arbitrarily dependent, and either ${K_A}$ is intermediate regularly varying and $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\left( {{K_A} \gt x} \right)} \right){\textrm{ as }}x \to \infty $ (Theorem 6.10 in [Reference Olvera-Cravioto47]) or $X$ is intermediate regularly varying and $\mathbb{P}\left( {{K_A} \gt x} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty $ (Theorem 6.11 in [Reference Olvera-Cravioto47]). The novelty of our result is that it provides similar asymptotics in the case where ${K_A}$ and $X$ are effectively tail equivalent.

Note that the content of Proposition 6 is a kind of “double” big-jump principle: the heavy-tailedness introduced by letting the vector $\left( {X,{K_A}} \right)$ be regularly varying implies that there are two ways for the sum ${D^R}$ to be large: either through a combination of the dependent variables $X$ and ${K_A}$ or through the classical single big jump coming from the additional term $\mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} )$ accounting for the offspring events.
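To make this “double” big-jump tangible, the following minimal Monte Carlo sketch (not part of the formal development) checks the asymptotic equivalence of Proposition 6 in a simple dependent special case: the marks are Pareto with index $\alpha = 1.5$ and, purely for illustration, the cluster size is taken as $K_A \,|\, X \sim {\textrm{Poisson}}(X)$ , so that $K_A$ is tail equivalent to $X$ . All numerical choices are illustrative assumptions, not prescribed by the proposition.

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, n = 1.5, 400_000                      # Pareto index and number of simulated clusters

pareto = lambda size: rng.uniform(size=size) ** (-1.0 / alpha)   # P(X > x) = x**(-alpha), x >= 1
EX = alpha / (alpha - 1.0)                                       # E[X] = 3

# dependent pair: K_A | X ~ Poisson(X), hence K_A inherits the heavy tail of X
X = pareto(n)
K = rng.poisson(X)

# cluster sums D^R = X + X_1 + ... + X_{K_A}; offspring marks i.i.d., independent of (X, K_A)
offspring = np.bincount(np.repeat(np.arange(n), K), weights=pareto(K.sum()), minlength=n)
D = X + offspring

for x in (50.0, 150.0, 500.0):
    rhs = (X + EX * K > x).mean() + K.mean() * x ** (-alpha)     # right-hand side of Proposition 6
    print(x, (D > x).mean() / rhs)                               # ratios should approach 1
```

The printed ratios compare the empirical tail of ${D^R}$ with the right-hand side of Proposition 6 and should drift towards $1$ as $x$ grows.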

6. Tail asymptotics of the maximum functional in the Hawkes process

We now propose a single big-jump principle concerning the maximum functional of a generic cluster in the setting of the Hawkes process. Recall that $\mathbb{E}\left[ {{L_A}} \right] = \mathbb{E}\left[ {{\kappa _A}} \right] \lt 1.$

Proposition 7. Suppose that the vector $\left( {X,{\kappa _A}} \right)$ in Equation (7) is regularly varying with index $\alpha \gt 1$ and non-null Radon measure $\mu $ . Then,

$$\mathbb{P}\left( {{H^H} \gt x} \right) \sim \frac{1}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty .$$

Moreover, if $\mu (\{ \left( {{x_1},{x_2}} \right) \in \mathbb{R}_{ + ,0}^2:{x_1} \gt 1\} ) \gt 0$ , then ${H^H}$ is regularly varying with index $\alpha \gt 1$ .

Proof of Proposition 7. The proof can be found in Appendix B and follows the same approach as the proof of Proposition 5.

Remark 6. As hinted at in Section 1, a closely related work concerning the maxima of the marks in a generic cluster of the Hawkes process can be found in [Reference Basrak5]. Under the assumption that ${K_A}$ is a stopping time with respect to a filtration, including the information about $\left( {{X_{ij}}} \right)$ , it is shown in their Lemma 4.1 that ${H^H}$ falls in the same MDA as $X$ . What we propose in Proposition 7 is merely a refinement for the Fréchet MDA that describes explicitly the tail of ${H^H}$ .
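As a sanity check of the constant $1/(1 - \mathbb{E}\left[ {{\kappa _A}} \right])$ , here is a minimal Monte Carlo sketch of a generic Hawkes cluster, assuming, purely for illustration, a constant branching mean ${\kappa _A} \equiv 0.5$ and i.i.d. Pareto $(1.5)$ marks (a degenerate special case of the joint regular-variation assumption of Proposition 7); all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, m, n = 1.5, 0.5, 200_000        # mark index, E[kappa_A] = m < 1, number of clusters

pareto = lambda size: rng.uniform(size=size) ** (-1.0 / alpha)   # P(X > x) = x**(-alpha)

def cluster_max():
    # generic cluster: Galton-Watson tree with Poisson(m) offspring per event,
    # every event carrying an independent Pareto(alpha) mark
    best, current = pareto(1)[0], 1
    while current > 0:
        children = rng.poisson(m, size=current).sum()
        if children:
            best = max(best, pareto(children).max())
        current = children
    return best

H = np.array([cluster_max() for _ in range(n)])
for x in (10.0, 30.0, 100.0):
    print(x, (H > x).mean() / (x ** (-alpha) / (1.0 - m)))       # ratios should approach 1
```

The empirical ratios match the single big-jump prediction $\mathbb{P}\left( {{H^H} \gt x} \right) \approx \mathbb{P}\,( {X \gt x} )/(1 - \mathbb{E}\left[ {{\kappa _A}} \right])$ increasingly well as $x$ grows.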

7. Tail asymptotics of the sum functional in the Hawkes process

We now propose another “double” big-jump principle concerning the sum functional of a generic cluster in the setting of the Hawkes process. The tail approximation obtained in Proposition 8 is in fact very similar to the one in Proposition 6, where both a single big-jump principle and a combination of the effects of the dependent variables $X$ and ${\kappa _A}$ yield large values for ${D^H}$ .

Proposition 8. Assume that $\left( {X,{\kappa _A}} \right)$ in Equation (8) has a regularly varying distribution with noninteger index $\alpha \gt 1.\;$ Then $\left( {X,{L_A}} \right)$ is regularly varying with the same index, $\alpha $ . Further, ${D^H}$ is regularly varying with index $\alpha $ . In fact,

$$\mathbb{P}\left( {{D^H} \gt x} \right) \sim \frac{1}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}\mathbb{P}\left( {X + \left( {\frac{{\mathbb{E}\left[ X \right]}}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}} \right){\kappa _A} \gt x} \right){\textrm{ as }}x \to \infty .$$

Proof of Proposition 8. Recall that the assumption that $\left( {X,{\kappa _A}} \right)$ is regularly varying with index $\alpha \gt 1$ is equivalent to the regular variation of the linear combinations ${t_1}X + {t_2}{\kappa _A}$ for all ${t_1},{t_2} \in {\mathbb{R}_ + }$ by Proposition 4. Similarly as in the proof of Proposition 6, if we can show, at any order $\left( {n + 1} \right)$ for $n \in \mathbb{N}$ and for any ${t_1},{t_2} \in {\mathbb{R}_ + }$ , that the behaviour of $\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )\;:\!=\; \frac{{{\partial ^{n + 1}}}}{{\partial {s^{n + 1}}}}\left( {\mathbb{E}\left[ {{e^{ - s\left( {{t_1}X + {t_2}{\kappa _A}} \right)}}} \right]} \right)$ and that of $\varphi _{{t_1}X + {t_2}{L_A}}^{\left( {n + 1} \right)}( s ) \;:\!=\; \frac{{{\partial ^{n + 1}}}}{{\partial {s^{n + 1}}}}\left( {\mathbb{E}\left[ {{e^{ - s\left( {{t_1}X + {t_2}{L_A}} \right)}}} \right]} \right)$ as $s \to {0^ + }$ are comparable, i.e. if

$$\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s ) \sim \varphi _{{t_1}X + {t_2}{L_A}}^{\left( {n + 1} \right)}( s ){\textrm{ as }}s \to {0^ + },$$

then by Karamata’s Theorem (1), we have

$$\mathbb{P}\left( {{t_1}X + {t_2}{\kappa _A} \gt x} \right)\;\sim \;\mathbb{P}\left( {{t_1}X + {t_2}{L_A} \gt x} \right){\textrm{ as }}x \to \infty .$$

But this essentially means, reapplying Proposition 4, that $\left( {X,{L_A}} \right)$ is regularly varying.

The following bounds will be useful (short verifications are given just after the list):

  1. (i) By a Taylor expansion, as $s \to {0^ + }$ ,

    (12) \begin{align}s - \left( {1 - {e^{ - s}}} \right) \leqslant {s^2}/2.\end{align}
  2. (ii) For an $s \gt 0$ small enough,

    (13) \begin{align} - \!\left( {1 - {e^{ - s}}} \right) \leqslant - s/2.\end{align}
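Both bounds are elementary. For Equation (12), the function $g(s) \;:\!=\; 1 - s + s^2/2 - e^{-s}$ satisfies $g(0) = g^{\prime}(0) = 0$ and $g^{\prime\prime}(s) = 1 - e^{-s} \geqslant 0$ for $s \geqslant 0$ , so that

$$s - \left( {1 - {e^{ - s}}} \right) \leqslant \frac{s^2}{2},\qquad s \geqslant 0;$$

for Equation (13), since $(1 - e^{-s})/s \to 1$ as $s \to {0^ + }$ , one has $1 - e^{-s} \geqslant s/2$ , i.e. $ - (1 - e^{-s}) \leqslant - s/2$ , for all $s \gt 0$ small enough.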

First, note that it is possible to write ${\varphi _{{t_1}X + {t_2}{L_A}}}({\cdot})$ as a function of ${\kappa _A}$ instead of ${L_A}$ . Using the Tower property and recalling that ${L_A} \,|\, A \sim {\textrm{Poisson}}\!\left( {{\kappa _A}} \right)$ yields

$${\varphi _{{t_1}X + {t_2}{L_A}}}( s ) = \mathbb{E}\left[ {\mathbb{E}\left[ {{e^{ - s{t_1}X - s{t_2}{L_A}}} \;|\; A} \right]} \right] = \mathbb{E}\left[ {{e^{ - s{t_1}X - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}}} \right].$$
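For completeness, the inner conditional expectation is simply the probability generating function of a Poisson $\left( {{\kappa _A}} \right)$ random variable evaluated at ${e^{ - s{t_2}}}$ :

$$\mathbb{E}\left[ {{e^{ - s{t_2}{L_A}}} \;\Big|\; A} \right] = \mathop \sum \limits_{k = 0}^\infty {e^{ - s{t_2}k}}\,\frac{{\kappa _A^k{e^{ - {\kappa _A}}}}}{{k!}} = {\textrm{exp}}\!\left( { - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}} \right).$$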

From this, and letting $\alpha \in \left( {n,n + 1} \right)$ , $n \geqslant 1$ , simple derivations and collection of terms lead us to consider the difference given by

\begin{align*}&\big|\varphi _{{t_1}X + {t_2}{L_A}}^{\left( {n + 1} \right)}( s ) - \varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )\big|\\&\quad = \Big|\mathbb{E}\left[ {{{(\!-\!{t_1}X)}^{n + 1}}\left( {{e^{ - s{t_1}X - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}} - {e^{ - s{t_1}X - s{t_2}{\kappa _A}}}} \right)} \right]\\&\qquad + {I_1}\mathbb{E}\left[ {{{(\!-\!{t_1}X)}^n}\!\left( { - {t_2}{\kappa _A}} \right)\left( {{e^{ - s{t_1}X - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - {I_2}s{t_2}}} - {e^{ - s{t_1}X - s{t_2}{\kappa _A}}}} \right)} \right]\\ &\qquad + \ldots \\ &\qquad + \;{I_j}\mathbb{E}\left[ {\left( { - {t_1}X} \right){{(\!-\!{t_2}{\kappa _A})}^n}\left( {{e^{ - s{t_1}X - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - {I_k}s{t_2}}} - {e^{ - s{t_1}X - s{t_2}{\kappa _A}}}} \right)} \right]\\ &\qquad + \;\mathbb{E}\left[ {{{(\!-\!{t_2}{\kappa _A})}^{n + 1}}\left( {{e^{ - s{t_1}X - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - \left( {n + 1} \right)s{t_2}}} - {e^{ - s{t_1}X - s{t_2}{\kappa _A}}}} \right)} \right] + {C_{n + 1}}\Big|\\ &\quad =\! : \left| {{B_1} + {B_{21}} + \ldots + {B_{2j}} + {B_3} + {C_{n + 1}}} \right|,\end{align*}

where the constants ${I_1},{I_2}, \ldots ,{I_j},{I_k} \in \mathbb{N}$ appearing in the cross-product terms $\left( {{B_{21}}, \ldots ,{B_{2j}}} \right)$ depend on $n$ .

Consider term ${B_1}$ . Using Equation (12) and Equation (13) and the basic inequality $x{e^{ - x}} \leqslant {e^{ - 1}}$ , one can show that

\begin{align*}\left| {{B_1}} \right|& = \mathbb{E}\left[ {{{({t_1}X)}^{n + 1}}{e^{ - s{t_1}X}}\left( {{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}} - {e^{ - s{t_2}{\kappa _A}}}} \right)} \right]\\&\leqslant \mathbb{E}\left[ {{{({t_1}X)}^{n + 1}}{e^{ - s{t_1}X}}{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}}{\kappa _A}\!\left( {s{t_2} - 1 + {e^{ - s{t_2}}}} \right)} \right]\\&\leqslant \mathbb{E}\left[ {{{({t_1}X)}^{n + 1}}{e^{ - s{t_1}X}}{e^{ - s{t_2}{\kappa _A}/2}}{\kappa _A}{{(s{t_2})}^2}/2} \right]\\&\leqslant \mathbb{E}\left[ {\left( {s{t_1}X{e^{ - s{t_1}X}}} \right)\left( {\frac{{s{t_2}{\kappa _A}}}{2}{e^{ - s{t_2}{\kappa _A}/2}}} \right){t_2}{{({t_1}X)}^n}} \right]\\&\leqslant \mathbb{E}\left[ {{e^{ - 2}}{t_2}{{({t_1}X)}^n}} \right],\end{align*}

and by the finiteness of the $n$ th moment of $X$ when $\alpha \in \left( {n,n + 1} \right)$ , the above expectation is finite. Hence, it follows, using Karamata’s Theorem (1) and Remark 2, that

$${B_1} = o\!\left( {\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Consider one representative of the cross-product terms, say, without loss of generality, ${B_{21}}$ . Proceeding as for term ${B_1}$ , using Equation (12) and Equation (13) and the basic inequality $x{e^{ - x}} \leqslant {e^{ - 1}}$ , we obtain

\begin{align*}\left| {{B_{21}}} \right| & = {I_1}\mathbb{E}\left[ {{{({t_1}X)}^n}\left( {{t_2}{\kappa _A}} \right)\!{e^{ - s{t_1}X}}\left(\! {\left( {{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - s{t_2}}} - {e^{ - s{{\textrm{t}}_2}{\kappa _A} - s{t_1}}}} \right) - \left( {{e^{ - s{t_2}{\kappa _A}}} - {e^{ - s{t_2}{\kappa _A} - s{t_1}}}} \right)} \!\right)} \!\right]\\& \leqslant {I_1}\mathbb{E}\left[ {{{({t_1}X)}^n}{e^{ - s{t_1}X}}\left( {{t_2}\kappa _A^2} \right)\!{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}}\left( {s{t_2} - 1 + {e^{ - s{t_2}}}} \right)} \right]\\& \leqslant {I_1}\mathbb{E}\left[ {{{({t_1}X)}^n}{e^{ - s{t_1}X}}\left( {{t_2}\kappa _A^2} \right)\!{e^{ - s{t_2}{\kappa _A}/2}}\frac{{{{(s{t_2})}^2}}}{2}} \right]\\& \leqslant {I_1}\mathbb{E}\left[ {\left( {s{t_1}X{e^{ - s{t_1}X}}} \right)\left( {\frac{{s{t_2}{\kappa _A}}}{2}{e^{ - s{t_2}{\kappa _A}/2}}} \right){{({t_1}X)}^{n - 1}}t_2^2{\kappa _A}} \right]\\& \leqslant {I_1}\mathbb{E}\left[ {{e^{ - 2}}{{({t_1}X)}^{n - 1}}t_2^2{\kappa _A}} \right].\end{align*}

Using Hölder’s inequality, and because the product of ${X^{n - 1}}$ and ${\kappa _A}$ is of order $n$ , the above expectation is finite. It follows from Karamata’s Theorem (1) and Remark 2 that

$${B_{21}} = o\!\left( {\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + },$$

and the same holds for each cross-product term ${B_{22}}, \ldots ,{B_{2j}}$ .

Consider now ${B_3}$ . With tools similar to those used before, Equation (12) and Equation (13), and the basic inequality ${x^2}{e^{ - x}} \leqslant 4{e^{ - 2}}$ , we obtain

\begin{align*}\left| {{B_3}} \right| & = \mathbb{E}\left[ {{{({t_2}{\kappa _A})}^{n + 1}}{e^{ - s{t_1}X}}\left( {\left( {{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - 2s{t_2}}} - {e^{ - s{t_2}{\kappa _A} - 2s{t_2}}}} \right) - \left( {{e^{ - s{t_2}{\kappa _A}}} - {e^{ - s{t_2}{\kappa _A} - 2s{t_2}}}} \right)} \right)} \right]\\&\leqslant \mathbb{E}\left[ {{{({t_2}{\kappa _A})}^{n + 1}}{e^{ - s{t_1}X}}\left( {{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A} - 2s{t_2}}} - {e^{ - s{t_2}{\kappa _A} - 2s{t_2}}}} \right)} \right]\\&\leqslant \mathbb{E}\left[ {{{({t_2}{\kappa _A})}^{n + 1}}{e^{ - \left( {1 - {e^{ - s{t_2}}}} \right){\kappa _A}}}\left( {s{t_2}{\kappa _A} - \left( {1 - {e^{ - s{t_2}}}} \right)\!{\kappa _A}} \right)} \right]\\& \leqslant 2\mathbb{E}\left[ {t_2^{n + 1}\kappa _A^{n + 2}{{(s{t_2}/2)}^2}{e^{ - s{t_2}{\kappa _A}/2}}} \right]\\& \leqslant 2\mathbb{E}\left[ {{{({t_2}{\kappa _A})}^n}{t_2}\,4{e^{ - 2}}} \right],\end{align*}

which essentially shows, once again, using Karamata’s Theorem (1) and Remark 2, that

$${B_3} = o\!\left( {\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Lastly, the remainder ${C_{n + 1}}$ is made up of terms of order strictly smaller than $n + 1$ . These are finite and, trivially, using Karamata’s Theorem (1) and Remark 2:

$${C_{n + 1}} = o\!\left( {\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Collecting all of these results, it follows that

$$\left| {\varphi _{{t_1}X + {t_2}{L_A}}^{\left( {n + 1} \right)}( s ) - \varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right| = o\!\left( {\varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as\;}}s \to {0^ + },$$

which essentially means that for all $n \in \mathbb{N},$

$$\varphi _{{t_1}X + {t_2}{L_A}}^{\left( {n + 1} \right)}( s ) \sim \varphi _{{t_1}X + {t_2}{\kappa _A}}^{\left( {n + 1} \right)}( s ){\textrm{ as }}s \to {0^ + }.$$

Now, by Karamata’s Theorem (1), this means that for all ${t_1},{t_2} \in {\mathbb{R}_ + },$

$$\mathbb{P}\left( {{t_1}X + {t_2}{L_A} \gt x} \right)\;\sim \;\mathbb{P}\left( {{t_1}X + {t_2}{{\kappa }_A} \gt x} \right){\textrm{ as }}x \to \infty ,$$

and using Proposition 4, this means that $\left( {X,{L_A}} \right)$ is regularly varying with index $\alpha \gt 1.\;$ We conclude by applying Theorem 1 in [Reference Asmussen and Foss1], which yields the desired result.

Remark 7. Proposition 8 essentially amounts to showing that if $\left( {X,{\kappa _A}} \right)$ is regularly varying, then $\left( {X,{L_A}} \right)$ is also regularly varying, with the same index $\alpha \gt 1$ . The equivalence between the regular variation of ${\kappa _A}$ and that of ${L_A}$ is easy to prove and can be found, for example, in [Reference Litvak, Scheinhardt and Volkovich34]. The crucial step to obtain the tail asymptotics of ${D^H}$ and its regular variation in Proposition 8 relies on Theorem 1 in [Reference Asmussen and Foss1]. In their even more general setting, it is assumed that the distribution of $X + c{L_A}$ is intermediate regularly varying for all $c \in \left( {\mathbb{E}\left[ {{D^H}} \right] - \epsilon ,\mathbb{E}\left[ {{D^H}} \right] + \epsilon } \right)$ for some $\epsilon \gt 0$ : this assumption encompasses the case where $\left( {X,{L_A}} \right)$ is regularly varying, but also the cases where $X$ (respectively ${L_A}$ ) is intermediate regularly varying and ${L_A}$ (respectively $X$ ) is lighter, in the sense that $\mathbb{P}\left( {{L_A} \gt x} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty $ (respectively $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\,\left( {{L_A} \gt x} \right)} \right){\textrm{ as }}x \to \infty $ ).

Proposition 8 extends Lemma 5.2 in [Reference Basrak, Wintenberger and Žugec6] by letting $\left( {X,{\kappa _A}} \right)$ be regularly varying, while it is shown in the aforementioned paper that ${D^H}$ is regularly varying in the case where $X$ is itself regularly varying and with noninteger $\alpha \in \left( {0,2} \right)$ . In the aforementioned paper, three cases are distinguished, with various assumptions on the relation between $X$ and ${L_A}$ . Note that we do not cover the case $\alpha \in \left( {0,1} \right)$ in Proposition 8 because it is studied in [Reference Basrak, Wintenberger and Žugec6].

In a recent contribution concerning PageRank, Theorem 4.2 in [Reference Olvera-Cravioto47] provides similar asymptotics, which can be specialised to our case when $X$ and ${K_A}$ are allowed to have any form of dependence but one has a negligible tail with respect to the other. The aforementioned theorem also applies to intermediate regularly varying $X$ and ${K_A}$ . The main connections and specialisations are the following two:

  1. (i) if ${K_A}$ is regularly varying with index $\alpha \gt 1$ and $\mathbb{E}\left[ {{X^{\alpha + \epsilon }}} \right] \lt \infty $ for some $\epsilon \gt 0$ and if $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\,\left( {{K_A} \gt x} \right)} \right){\textrm{ as }}x \to \infty ,$ then

    $$\mathbb{P}\left( {{D^H} \gt x} \right)\;\sim \;\mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} ) + \mathbb{P}\left( {\mathbb{E}\left[ X \right]{K_A} \gt x} \right){\textrm{ as }}x \to \infty ;$$
  2. (ii) if $X$ is regularly varying with index $\alpha \gt 1$ and $\mathbb{E}\left[ {K_A^{\alpha + \epsilon }} \right] \lt \infty $ for some $\epsilon \gt 0$ and if $\mathbb{P}\left( {{K_A} \gt x} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty ,$ then

    $$\mathbb{P}\left( {{D^H} \gt x} \right) \sim \left( {1 + \mathbb{E}\left[ {{K_A}} \right]} \right)\mathbb{P}\,( {X \gt x} ){\textrm{ as }}x \to \infty .$$

Hence, our result essentially extends the above, allowing for tail equivalence between $X$ and ${K_A}$ .
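Before moving on, here is a minimal Monte Carlo sketch of Proposition 8 in a fully dependent special case, assuming, purely for illustration, Pareto $(1.5)$ marks and ${\kappa _A} = 0.2\,X$ (so that $\mathbb{E}\left[ {{\kappa _A}} \right] = 0.6 \lt 1$ and $\left( {X,{\kappa _A}} \right)$ is regularly varying along a ray); none of these choices is prescribed by the proposition.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, c, n = 1.5, 0.2, 200_000              # mark index, kappa_A = c * X, number of clusters
EX = alpha / (alpha - 1.0)                   # E[X] = 3 for Pareto(alpha) on [1, inf)
m = c * EX                                   # E[kappa_A] = 0.6 < 1 (subcritical)

pareto = lambda size: rng.uniform(size=size) ** (-1.0 / alpha)

def cluster_sum():
    # generic Hawkes cluster: each event has mark X_i and Poisson(c * X_i) offspring
    frontier = pareto(1)
    total = frontier.sum()
    while frontier.size:
        frontier = pareto(rng.poisson(c * frontier).sum())
        total += frontier.sum()
    return total

D = np.array([cluster_sum() for _ in range(n)])
scale = 1.0 + c * EX / (1.0 - m)             # here X + (E[X]/(1-E[kappa_A])) kappa_A = scale * X
for x in (30.0, 100.0, 300.0):
    approx = (scale ** alpha) * x ** (-alpha) / (1.0 - m)   # RHS of Prop. 8 in this special case
    print(x, (D > x).mean() / approx)                       # ratios should approach 1
```

Because ${\kappa _A}$ is proportional to $X$ in this sketch, a large mark and a large branching intensity occur together, which is exactly the combined effect captured by the “double” big-jump reading of Proposition 8.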

8. Precise large deviations of cluster process functionals

In this section, we make use of the cluster asymptotics from Section 4 through Section 7 to derive (precise) large deviation results for the renewal Poisson cluster process as well as for the Hawkes process.

Notation-wise, we let

$${N_T} = \left| {\left\{ {\left( {i,j} \right):\;0 \leqslant {{\Gamma }_i} \leqslant T,0 \leqslant {{\Gamma }_i} + {T_{ij}} \leqslant T} \right\}} \right|$$

represent the number of events occurring in the time interval $\left[ {0,T} \right]$ for $T \gt 0,$ and we let

$${J_T} = \left| {\left\{ {\left( {i,j} \right):\;0 \leqslant {{\Gamma }_i} \leqslant T,T \leqslant {{\Gamma }_i} + {T_{ij}}} \right\}} \right|$$

represent the number of (ordered) events coming from clusters that started in the time interval $\left[ {0,T} \right]$ but that occurred after time $T \gt 0$ . We will also need the following decomposition of the maximum: For $x \gt 0,$

(14) \begin{align}\left\{ {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}i\; \leqslant {C_T}}\,{H_i} - \mathop {{\textrm{max}}}\limits_{1{{\; \leqslant }}\;j\; \leqslant {J_T}}\, {X_j} \gt x} \right\} \subseteq \left\{ {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}i\; \leqslant {N_T}}\, {X_i} \gt x} \right\} \subseteq \left\{ {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}i\; \leqslant {C_T}}\,{H_i} \gt x} \right\},\end{align}

where ${C_T} \sim {\textrm{ Poisson}}\!\left( {\nu T} \right)$ is the number of clusters starting in the interval $\left[ {0,T} \right]$ for $T \gt 0$ , and ${H_i}$ is as in Equation (1). This is due to the fact that the immigration process is the classical homogeneous Poisson process with parameter $\nu \gt 0;$ see Section 2. The upper-bounding set in decomposition (14) overshoots by taking the maximum over all the events belonging to clusters initiated before time $T \gt 0$ ; i.e. it includes events occurring after time $T \gt 0$ . This is convenient since ${C_T}$ and $H$ are independent.
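In fact, this independence gives the upper bound in decomposition (14) a fully explicit form: since the cluster maxima ${H_i}$ are i.i.d. copies of $H$ and independent of ${C_T} \sim {\textrm{Poisson}}\!\left( {\nu T} \right)$ ,

$$\mathbb{P}\Big( {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}i\; \leqslant {C_T}}\,{H_i} \gt x} \Big) = 1 - \mathbb{E}\left[ {{{\left( {\mathbb{P}\left( {H \leqslant x} \right)} \right)}^{{C_T}}}} \right] = 1 - {e^{ - \nu T\,\mathbb{P}\left( {H \gt x} \right)}} \sim \nu T\,\mathbb{P}\left( {H \gt x} \right),$$

where the final equivalence holds whenever $\nu T\,\mathbb{P}\left( {H \gt x} \right) \to 0$ , as is the case in the $x$ -regions considered below.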

The precise large deviation results for the sum will necessitate another decomposition. Notation-wise, rewriting Equation (3) using ${N_T}$ yields

$${S_T}\;:\!=\; \mathop \sum \limits_{j = 1}^{{N_T}}\, {X_j},$$

and we let ${\mu _{{S_T}}}$ denote the expectation of ${S_T}$ . Then we can decompose the deviation as:

(15) \begin{align}{S_T} - {\mu _{{S_T}}} & = \mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {\mathop \sum \limits_{j = 1}^{{J_T}} f\left( {{A_j}} \right) - \mathbb{E}\left[ {\mathop \sum \limits_{j = 1}^{{J_T}} f\left( {{A_j}} \right)} \right]} \right)\notag\\& =\! : \mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right)\!.\end{align}

As in decomposition (14), the first difference overshoots by summing the marks of all events belonging to clusters started before time $T \gt 0$ ; the leftover effect of events occurring after time $T \gt 0$ , denoted by ${\varepsilon _T}$ , is then removed in a second step. Again, note that ${C_T}$ and $D$ are independent.

Furthermore, regarding the leftover effect, the following properties hold:

  1. (i) (Property 1) in [Reference Basrak5], for both the renewal Poisson cluster process and the Hawkes process; that is, $\mathbb{E}\left[ {{J_T}} \right] = o\!\left( T \right){\textrm{ as }}T \to \infty ;$

  2. (ii) (Property 2) in [Reference Basrak, Wintenberger and Žugec6], for both the renewal Poisson cluster process and the Hawkes process; that is, $\mathbb{E}\left[ {{\varepsilon _T}} \right] = o\!\left( {\sqrt T } \right){\textrm{ as }}T \to \infty ;$ and hence, in our settings, the condition $\mathbb{E}\left[ {{\varepsilon _T}} \right] = o\!\left( T \right){\textrm{ as }}T \to \infty $ holds as well.

8.1. Large deviations of maxima over an interval $\left[ {0,\boldsymbol{T}} \right]$

We now illustrate how the asymptotics of Proposition 5 and Proposition 7 help to determine the asymptotic behaviour of the whole processes on an interval. In what follows, we let $H$ denote a generic maximum; i.e. it can be either ${H^R}$ or ${H^H}$ from Section 4 and Section 6. At the end of this section, we present some related work.

Proposition 9. Suppose that the conditions of either Proposition 5 or those of Proposition 7 hold. Then, as $T \to \infty $ and for any $\gamma \gt 0,$

$$\mathop {{\textrm{lim}}}_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\,\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\; \leqslant {N_T}}\, {X_i} \gt x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\,( {X \gt x} )}} - 1} \right| = 0.$$

Proof of Proposition 9. Using decomposition (14),

\begin{align*}\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_i} - {{\textrm{max}}}_{1{{\; \leqslant }}j\;{{ \leqslant \;}}{J_T}}\, {X_i} \gt \;x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}} & \leqslant \frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{N_T}}\, {X_i} \gt \;x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}}\\[5pt] &{\leqslant }\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_i} \gt \;x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}}.\end{align*}

Upper bound: By the remark following Theorem 3.1 in [Reference Klüppelberg and Mikosch32] for any $\gamma \gt 0$ ,

$$ \mathop{{\textrm{lim}}}\limits_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_{i}} \gt x} \right)}}{{\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H \gt x} \right)}} - 1} \right| = 0{\textrm{ as }}T \to \infty .$$

Using the asymptotics of Proposition 5 and of Proposition 7,

$$\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H \gt x} \right)\;\sim \;\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\,( {X \gt x} ){\textrm{ as }}T \to \infty $$

for the $x$ -values considered; i.e. when $x \geqslant \gamma \nu T$ for any $\gamma \gt 0$ .

Lower bound:

\begin{align*}&\frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_i} - {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_{j}} \gt } x\right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\, \gt \,x} \right)}}\\ & = \frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_{i\;}} - {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_j} \gt \,x, {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_j}{{\, \leqslant \,}}x\varepsilon } \right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\, \gt \,x} \right)}}\\&\quad + \frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_i} - {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_j} \gt \;x, {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_j} \gt \,x\varepsilon } \right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\, \gt \,x} \right)}}\\ & \geqslant \frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_i} \gt \;x\!\left( {1 + \varepsilon } \right)\!, {{\textrm{max}}}_{1{{\, \leqslant \,}}j\,{{ \leqslant \,}}{J_T}}\, {X_j}{{\; \leqslant \;}}x\varepsilon } \right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\; \gt \;x} \right)}}\\ & \geqslant \frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_i} \gt \;x\!\left( {1 + \varepsilon } \right)} \right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\; \gt \;x} \right)}}\\ &\quad - \frac{{\mathbb{P}\!\left( { {{\textrm{max}}}_{1{{\, \leqslant \,}}i\,{{ \leqslant \,}}{C_T}}\,{H_{i\;}} \gt x\!\left( {1 + \varepsilon } \right)\!, {{\textrm{max}}}_{1{\leqslant }j{\leqslant }{J_T}}\, {X_j} \gt \;x\varepsilon } \right)}}{{\mathbb{E}\!\left[ {{N_T}} \right]\mathbb{P}\!\left( {X\; \gt \;x} \right)}}.\end{align*}

The very last term in the lower bound is bounded above by

$$\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{\; \leqslant \;}}{C_T}}\,{H_i} \gt \;x\!\left( {1 + \varepsilon } \right)\!, {{\textrm{max}}}_{1{{\;\; \leqslant \;}}j\;{{\; \leqslant \;}}{J_T}}\, {X_{j\;}} \gt \;x\varepsilon } \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}}{{\; \leqslant }}\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\;\; \leqslant \;}}j\;{{\; \leqslant \;}}{J_T}}\, {X_j} \gt \;x\varepsilon } \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}}.$$

Conditioning on the values of ${J_T}$ using a union bound and the fact that the ${X_j}$ s are independent,

\begin{align*}\mathbb{P}\left( {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}j\;{{ \leqslant \;}}{J_T}}\, {X_j} \gt x\varepsilon } \right)& = \mathop \sum \limits_{k = 1}^\infty \mathbb{P}\left( {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}j\;{{ \leqslant \;}}k} {X_j} \gt x\varepsilon } \right)\mathbb{P}\left( {{J_T} = k} \right)\\&\leqslant \mathop \sum \limits_{k = 1}^\infty \mathop \sum \limits_{j = 1}^k \mathbb{P}\left( {{X_j} \gt x\varepsilon } \right)\mathbb{P}\left( {{J_T} = k} \right)\\&\leqslant \mathop \sum \limits_{k = 1}^\infty k\mathbb{P}\left( {X \gt x\varepsilon } \right)\mathbb{P}\left( {{J_T} = k} \right)\\&\leqslant \mathbb{E}\left[ {{J_T}} \right]\mathbb{P}\left( {X \gt x\varepsilon } \right)\!.\end{align*}

Using Property 1 and Remark 8, which give $\mathbb{E}\left[ {{J_T}} \right] = o\!\left( T \right)$ and $\mathbb{E}\left[ {{N_T}} \right] = \mathcal{O}\!\left( T \right)$ as $T \to \infty $ , together with the regular variation of $X$ , which ensures that $\mathbb{P}\left( {X \gt x\varepsilon } \right) = \mathcal{O}\!\left( {\mathbb{P}\,( {X \gt x} )} \right)$ uniformly in $x \geqslant \gamma \nu T$ for any fixed $\varepsilon \gt 0$ , it follows that

$$\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}j\;{{ \leqslant \;}}{J_T}}\, {X_j} \gt \;x\varepsilon } \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}} = o\!\left( 1 \right){\textrm{ as }}T \to \infty .$$

This implies that

$$\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_i} - {{\textrm{max}}}_{1{{\; \leqslant \;}}j\;{{ \leqslant \;}}{J_T}}\, {X_j} \gt x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}} \geqslant \frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_i} \gt \;x\!\left( {1 + \varepsilon } \right)} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)}}.$$

Again using the remark following Theorem 3.1 in [Reference Klüppelberg and Mikosch32], it follows, for any $x \geqslant \gamma \nu T$ , that

$$ \mathop{\textrm{lim}}\limits_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( { {{\textrm{max}}}_{1{{\; \leqslant \;}}i\;{{ \leqslant \;}}{C_T}}\,{H_i} \gt \;x\!\left( {1 + \varepsilon } \right)} \right)}}{{\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H\; \gt \;x\!\left( {1 + \varepsilon } \right)} \right)}} - 1} \right| = 0{\textrm{ as }}T \to \infty .$$

Because $H$ is regularly varying with index $\alpha \gt 1$ , it follows that

$$\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H \gt x\!\left( {1 + \varepsilon } \right)} \right) \sim {(1 + \varepsilon )^{ - \alpha }}\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H \gt x} \right){\textrm{ as }}x \to \infty ,$$

and using the asymptotics of Proposition 5 and of Proposition 7,

$$\mathbb{E}\left[ {{C_T}} \right]\mathbb{P}\left( {H \gt x\!\left( {1 + \varepsilon } \right)} \right) \sim {(1 + \varepsilon )^{ - \alpha }}\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\,( {X \gt x} ){\textrm{ as }}T \to \infty .$$

Letting $\varepsilon \to 0$ and collecting the upper and lower bounds yields the desired result.

Remark 8. Note that by the independence of the clusters, we have:

  1. (i) for the renewal Poisson cluster process, $\mathbb{E}\left[ {{N_T}} \right] = \left( {\mathbb{E}\left[ {{K_A}} \right] + 1} \right)\!\nu T$ ;

  2. (ii) for the Hawkes process, $\mathbb{E}\left[ {{N_T}} \right] = \frac{{\nu T}}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}$ (see e.g. Section 12.1 in [Reference Brémaud.8]).

8.2. Large deviations of sums over an interval $\left[ {0,\boldsymbol{T}} \right]$

We finally illustrate how the results of Proposition 6 and Proposition 8 help to derive results for the mixed binomial Poisson cluster process as well as for the Hawkes process on an interval $\left[ {0,T} \right]$ . Note that $D$ denotes a generic sum of the marks.

Proposition 10. Suppose $\mathop {{\textrm{lim}}}\limits_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T}\frac{{\mathbb{P}\left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \gt \;x} \right)}}{{\nu T\,\mathbb{P}\left( {D\; \gt \;x} \right)}} = 0$ for both the mixed binomial Poisson cluster process and the Hawkes process.

  1. (i) Suppose the conditions of Proposition 6 hold for the mixed binomial Poisson cluster process. Then as $T \to \infty $ for all $\gamma \gt 0$ ,

    $$\mathop {{\textrm{lim}}}\limits_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right)}}{{\nu T\left( {\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt \;x} \right) + \mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)} \right)}} - 1} \right| = 0.$$
  2. (ii) Suppose the conditions of Proposition 8 hold. Then as $T \to \infty $ for all $\gamma \gt 0$ ,

    $$\mathop {{\textrm{lim}}}\limits_{T \to \infty } {{\textrm{sup}}}_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X + \left( {\frac{{\mathbb{E}\left[ X \right]}}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}} \right){\kappa _{A\;}} \gt \;x} \right)}} - 1} \right| = 0.$$

Proof of Proposition 10. We use decomposition (15); i.e.

$$\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right) = \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right) \gt x} \right)\!.$$

Upper bound: Note that

$$\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right){\leqslant }\;\mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] \gt x - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right)\!.$$

As $T \to \infty $ , we can rewrite $x \geqslant \gamma \nu T$ as $x \geqslant \gamma '\nu T + \mathbb{E}\left[ {{\varepsilon _T}} \right]$ for some $0 \lt \gamma ' \lt \gamma $ . Hence, under the assumption that $x \geqslant \gamma '\nu T + \mathbb{E}\left[ {{\varepsilon _T}} \right]$ , $x - \mathbb{E}\left[ {{\varepsilon _T}} \right] \geqslant \gamma '\nu T$ , and since ${C_T}\sim {\textrm{Poisson}}\left( {\nu T} \right)$ is independent of $D$ , using Lemma 2.1 and Theorem 3.1 in [Reference Klüppelberg and Mikosch32] yields

$$\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma '\nu T} \left| {\frac{{\mathbb{P}\left( {\mathop \sum \nolimits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \nolimits_{i = 1}^{{C_T}} {D_i}} \right] \gt x - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right)}}{{\nu T\mathbb{P}\left( {D \gt x - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right)}} - 1} \right| = 0.$$

Recall that $D$ is regularly varying with index $\alpha \gt 1$ . Using Property 2, we can write $x - \mathbb{E}\left[ {{\varepsilon _T}} \right] = x - o\!\left( T \right)$ as $T \to \infty $ . By the Potter bounds (see Theorem 1.5.6 in [Reference Bingham, Goldie and Teugels7]), for all $I \gt 1$ and $\eta \gt 0$ there exists ${x_0}$ such that for all $x - o\!\left( T \right) \geqslant {x_0}$ ,

$$\frac{{\mathbb{P}\left( {D\; \gt \;x - o\!\left( T \right)} \right)}}{{\mathbb{P}\left( {D\; \gt \;x} \right)}} \leqslant I\,{\textrm{max}}\left\{ {{{\Big(1 - \frac{{o\!\left( T \right)}}{x}\Big)}^{ - \alpha + \eta }},{{\Big(1 - \frac{{o\!\left( T \right)}}{x}\Big)}^{ - \alpha - \eta }}} \right\}.$$

Because $x \geqslant \gamma \nu T + \mathbb{E}\left[ {{\varepsilon _T}} \right]$ , the upper bound becomes uniformly close to 1 as $T \to \infty $ . In combination with the above, it follows that as $T \to \infty $ , uniformly for $x \geqslant \gamma '\nu T + \mathbb{E}\left[ {{\varepsilon _T}} \right]$ ,

$$\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right) \leqslant \nu T\mathbb{P}\left( {D \gt x} \right)\!.$$

Lower bound: Let $\delta \gt 0$ , and note that

\begin{align*}\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right)& = \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right) \gt x,{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \leqslant x\delta } \right)\\&\quad + \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right) \gt x,{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \gt x\delta } \right)\\ & \geqslant \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right) \gt x,{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \leqslant x\delta } \right)\\ & \geqslant \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] \gt x\!\left( {1 + \delta } \right)} \right)\\ &\quad - \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] - \left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right]} \right) \gt x,{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \gt x\delta } \right)\\ & \geqslant \mathbb{P}\left( {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i} - \mathbb{E}\left[ {\mathop \sum \limits_{i = 1}^{{C_T}} {D_i}} \right] \gt x\!\left( {1 + \delta } \right)} \right) - \mathbb{P}\left( {{\varepsilon _T} - \mathbb{E}\left[ {{\varepsilon _T}} \right] \gt x\delta } \right)\!.\end{align*}

By assumption, the second term is (uniformly) negligible with respect to $\nu T\mathbb{P}\left( {D \gt x} \right)$ for the $x$ -region considered.

Since $x \geqslant \gamma \nu T \geqslant \gamma '\nu T + \mathbb{E}\left[ {{\varepsilon _T}} \right]$ for $T$ large enough, again using Theorem 3.1 in [Reference Klüppelberg and Mikosch32], it follows that

$$\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( {\mathop \sum \nolimits_{i = 1}^{{C_T}} {D_{i\;}} - \;\mathbb{E}\left[ {\mathop \sum \nolimits_{i = 1}^{{C_T}} {D_i}} \right] \gt \;x\!\left( {1 + \delta } \right)} \right)}}{{\nu T\mathbb{P}\left( {D\; \gt \;x\!\left( {1 + \delta } \right)} \right)}} - 1} \right| = 0.$$

Since $D$ is regularly varying with index $\alpha \gt 1$ , letting $\delta \to 0$ yields

$$\nu T\mathbb{P}\left( {D \gt x\!\left( {1 + \delta } \right)} \right) \sim \nu T{(1 + \delta )^{ - \alpha }}\mathbb{P}\left( {D \gt x} \right) \sim \nu T\mathbb{P}\left( {D \gt x} \right)\!.$$

It follows that uniformly for $x \geqslant \gamma \nu T$ and as $T \to \infty $ ,

$$\nu T\mathbb{P}\left( {D \gt x} \right){\leqslant }\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt x} \right)\!.$$

Collecting the upper and lower bounds,

  1. (i) the asymptotics of Proposition 6 yield

    $$\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( {{S_T} - {\mu _{{S_T}\;}} \gt \;x} \right)}}{{\nu T\left( {\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt \;x} \right) + \mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\left( {X\; \gt \;x} \right)} \right)}} - 1} \right| = 0;$$
  2. (ii) the asymptotics of Proposition 8 yield

    $$\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma \nu T} \left| {\frac{{\mathbb{P}\left( {{S_T} - {\mu _{{S_T}}} \gt \;x} \right)}}{{\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\left( {X + \left( {\frac{{\mathbb{E}\left[ X \right]}}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}}} \right){\kappa _A} \gt \;x} \right)}} - 1} \right| = 0;$$
    and recalling that $\frac{{\nu T}}{{1 - \mathbb{E}\left[ {{\kappa _A}} \right]}} = \mathbb{E}\left[ {{N_T}} \right]$ , this concludes the proof.
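As a numerical illustration of Propositions 9 and 10, the following minimal Monte Carlo sketch simulates a renewal Poisson cluster process over $\left[ {0,T} \right]$ under purely illustrative assumptions (independent Poisson $(2)$ cluster sizes, exponential within-cluster gaps, Pareto $(1.5)$ marks, $\nu = 1$ , $T = 500$ , $\gamma = 1$ ), so that the right-hand sides reduce to $\mathbb{E}\left[ {{N_T}} \right]\mathbb{P}\,( {X \gt x} )$ as in Remark 5(i).

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, lam, nu, mu, T, runs = 1.5, 2.0, 1.0, 1.0, 500.0, 4000
x = 1.0 * nu * T                             # deviation level x = gamma * nu * T with gamma = 1

pareto = lambda size: rng.uniform(size=size) ** (-1.0 / alpha)

def one_window():
    # events in [0, T]: Poisson(nu*T) immigrants at uniform times, each with Poisson(lam)
    # offspring arriving after i.i.d. exponential(mu) renewal gaps; all marks Pareto(alpha)
    C = rng.poisson(nu * T)
    if C == 0:
        return 0.0, 0.0
    G = rng.uniform(0.0, T, size=C)                      # immigration times Gamma_i
    K = rng.poisson(lam, size=C)                         # offspring counts K_A
    gaps = rng.exponential(1.0 / mu, size=K.sum())
    csum = np.concatenate(([0.0], np.cumsum(gaps)))
    starts = np.concatenate(([0], np.cumsum(K)[:-1]))    # index of each cluster's first gap
    arrival = np.repeat(G, K) + csum[1:] - np.repeat(csum[starts], K)
    marks = np.concatenate((pareto(C), pareto(K.sum())[arrival <= T]))
    return marks.sum(), marks.max()

res = np.array([one_window() for _ in range(runs)])
S, M = res[:, 0], res[:, 1]
EN = (1.0 + lam) * nu * T                                # E[N_T] as in Remark 8(i)
print("maximum:", (M > x).mean() / (EN * x ** (-alpha)))             # Proposition 9
print("sum    :", (S - S.mean() > x).mean() / (EN * x ** (-alpha)))  # Proposition 10(i)
```

For both the maximum and the centred sum, the printed ratios should approach $1$ as $T$ (and hence $x = \gamma \nu T$ ) grows; at moderate $T$ a deviation of a few percent from boundary and second-order effects is to be expected.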

Remark 9. Early contributions to the (non-uniform) precise large deviation results for non-random sums of i.i.d. regularly varying random variables can be found in [Reference Heyde24, Reference Nagaev42, Reference Nagaev43], or [Reference Nagaev44].

The proofs of Proposition 9 and Proposition 10 rely heavily on the work of [Reference Klüppelberg and Mikosch32], in which the authors show that under the assumption that the process of integer-valued nonnegative random variables ${({N_T})_{T\; \gt \;0}}$ is such that

  1. (i) $\frac{{{N_T}}}{{{\lambda _T}}} \to 1$ in probability as ${\lambda _T} \to \infty $ , where ${\lambda _T} = \mathbb{E}\left[ {{N_T}} \right]$ ;

  2. (ii) and the following limit holds:

    $$\mathop \sum \limits_{k \gt \left( {1 + \delta } \right){\lambda _T}} \mathbb{P}\left( {{N_T} \gt k} \right){(1 + \varepsilon )^k} \to 0{\textrm{ \;as}}\;\;{\lambda _T} \to \infty .$$

Furthermore, if the process $\left( {{N_T}} \right)$ is independent of the sequence $\left( {{X_j}} \right)$ and the distribution of $X$ is extended regularly varying, then, by their Theorem 3.1, for any $\gamma \gt 0$ ,

\begin{align*}&\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma {\lambda _T}} \left| {\frac{{\mathbb{P}\left( {{S_{T\;}} - {\mu _{{S_T}}} \gt \;x} \right)}}{{{\lambda _T}\mathbb{P}\left( {X\; \gt \;x} \right)}} - 1} \right| = 0{\textrm{ \;\;and\;\;\;\;}}\\&\mathop {{\textrm{lim}}}\limits_{T \to \infty } \mathop {{\textrm{sup}}}\limits_{x\; \geqslant \gamma {\lambda _T}} \left| {\frac{{\mathbb{P}\left( {\mathop {{\textrm{max}}}\limits_{1{{\; \leqslant \;}}j\; \leqslant {N_T}}\, {X_j} \gt \;x} \right)}}{{{\lambda _T}\mathbb{P}\left( {X\; \gt \;x} \right)}} - 1} \right| = 0,\end{align*}

where ${S_T} = \mathop \sum \nolimits_{j = 1}^{{N_T}}\, {X_j}.$ Note that the authors show that the Poisson process ${C_T}$ satisfies the assumptions above, but the second condition is difficult to show for more complicated processes. Hence, the trick is to bound the processes at hand in this work by a process governed by an independent variable, in our context, ${C_T}$ , which is Poisson distributed and satisfies the settings of [Reference Klüppelberg and Mikosch32].

Note that the work in [Reference Klüppelberg and Mikosch32] extends the precise large-deviation principles already studied in [Reference Cline and Hsing12] (in the case of non-random sums) to the case of random sums.

In [Reference Tang, Su, Jiang and Zhang57], the authors relax the two assumptions used in [Reference Klüppelberg and Mikosch32] (and mentioned earlier) and reduce them to the single condition that

$$\mathbb{E}\left[ {N_T^{\beta + \epsilon }{\mathbb{1}_{\{ {N_T} \gt \;\left( {1 + \delta } \right){\lambda _T}\} }}} \right] = \mathcal{O}\!\left( {{\lambda _T}} \right){\textrm{ as }}T \to \infty $$

for fixed small $\varepsilon ,\delta \gt 0$ and $\beta $ , named the upper index of extended regular variation, and prove precise large-deviation results similar to those of [Reference Klüppelberg and Mikosch32]. In [Reference Ng, Tang, Yan and Yang45], the authors study another subclass of the subexponential family, namely the consistently varying random variables, and prove similar precise large deviations under the same conditions as [Reference Tang, Su, Jiang and Zhang57].

Under the assumption that the sequence $\left( {{X_j}} \right)$ exhibits negative dependence, i.e.

$$\mathbb{P}\left( {\mathop \bigcap \limits_{j = 1}^n \left\{ {{X_j} \leqslant {x_j}} \right\}} \right) \leqslant M\mathop \prod \limits_{j = 1}^n \mathbb{P}\left( {{X_j} \leqslant {x_j}} \right){\textrm{ and }}\mathbb{P}\left( {\mathop \bigcap \limits_{j = 1}^n \{ {X_j} \gt {x_j}\} } \right) \leqslant M \prod _{j = 1}^n \mathbb{P}\left( {{X_j} \gt {x_j}} \right)$$

for some $M \gt 0$ and all ${x_1}, \ldots ,{x_n} \in \mathbb{R}$ , more recent literature such as [Reference Liu35, Reference Tang56] proposes extensions of, and results similar to, those of [Reference Ng, Tang, Yan and Yang45] for the same class of consistently varying random variables.

While our framework is more restrictive, in that the elements of the sequence ${({X_j})_{1 \leqslant \,j \leqslant {N_T}}}$ are regularly varying (a subclass of the extended regularly varying distributions) and, furthermore, independent, knowledge of the tail asymptotics of the cluster functionals allowed us to derive expressions that resemble known precise large-deviation principles for random maxima and sums of independent random variables, even though, clearly, ${N_T}$ and $\left( {{X_j}} \right)$ are dependent over a time window $\left[ {0,T} \right]$ . This comes at the cost of an extra leftover term $\mathbb{E}\left[ {{\varepsilon _T}} \right]$ in the sums of the marks over a finite time interval, a term that vanishes as $T$ becomes large.

Appendix A

Proof of Proposition 5

By conditioning and using the independence of $X$ and ${X_j}$ , $j \geqslant 1$ , and that of ${K_A}$ and ${X_j}$ , $j \geqslant 1$ , we obtain

(16) \begin{align}\mathbb{P}\left( {{H^R} \gt x} \right) & = 1 - \mathop \sum \limits_{k = 0}^\infty \mathbb{P}\,(X \leqslant x \,|\, {K_A} = k){(\mathbb{P}\left( {X \leqslant x} \right))^k}\mathbb{P}\left( {{K_A} = k} \right)\notag\\& = 1 - \mathop \sum \limits_{k = 0}^\infty \mathbb{P}\,(X \leqslant x \,|\, {K_A} = k)\;{\textrm{exp}}\left( {k\;{\textrm{log}}\left( {1 - \mathbb{P}\,( {X \gt x} )} \right)} \right)\mathbb{P}\left( {{K_A} = k} \right)\!.\end{align}

A Taylor expansion on the exponential term as $x \to \infty $ (and hence, as $\mathbb{P}\,( {X \gt x} ) \to 0$ by the integrability of $X$ ) gives

\begin{align*}{\textrm{exp}}\left( {k{\textrm{ log}}\left( {1 - \mathbb{P}\,( {X \gt x} )} \right)} \right)& = {\textrm{exp}}\left( { - k\mathbb{P}\,( {X \gt x} ) - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)\\& = \left( {1 - k\mathbb{P}\,( {X \gt x} ) + o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)\!,\end{align*}

where the last equality follows from another Taylor expansion of the first exponential term in the second equality as $x \to \infty $ .

Plugging the preceding expansion into Equation (16) yields

\begin{align*}\mathbb{P}\left( {{H^R} \gt x} \right) & = 1 - \mathop \sum \limits_{k = 0}^\infty \mathbb{P}\left( {X \leqslant x,{K_A} = k} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)\\&\quad + \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \leqslant x,{K_A} = k} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)\\&\quad - o\!\left( {\mathbb{P}\,( {X \gt x} )} \right)\mathop \sum \limits_{k = 0}^\infty \mathbb{P}\left( {X \leqslant x,{K_A} = k} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)\\& =\! :\; 1 - {B_1} + {B_2} - {B_3}.\end{align*}

We treat each term separately. For term $1 - {B_1}$ , remarking that $1 = \mathbb{P}\left( {X \leqslant x} \right) + \mathbb{P}\,( {X \gt x} )$ , we obtain

$$1 - {B_1} = \mathbb{P}\,( {X \gt x} ) + \mathop \sum \limits_{k = 0}^\infty \mathbb{P}\left( {X \leqslant x,{K_A} = k} \right)\left( {1 - {\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right)} \right)\!.$$

Using the basic inequality $1 - {e^{ - x}} \leqslant x$ , term $1 - {B_1}$ is bounded by

$$0 \leqslant 1 - {B_1} \leqslant \mathbb{P}\,( {X \gt x} ) + o\!\left( {\mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty .$$

For term ${B_2}$ , we can write

\begin{align*}{B_2}& = \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\left( {\mathbb{P}\left( {X \leqslant x,{K_A} = k} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right) + \mathbb{P}\left( {{K_A} = k} \right) - \mathbb{P}\left( {{K_A} = k} \right)} \right)\\& = \mathbb{P}\,( {X \gt x} )\,\mathbb{E}\left[ {{K_A}} \right] - \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \gt x,{K_A} = k} \right)\\&\quad + \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \leqslant x,{K_A} = k} \right)\left( {{\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right) - 1} \right)\\& =\! : {B_{21}} - {B_{22}} + {B_{23}}.\end{align*}

For term ${B_{22}}$, note that ${B_{22}} \leqslant \mathbb{E}\left[ {{K_A}} \right]\mathbb{P}\,( {X \gt x} )$. Moreover, since the summands $k\mathbb{P}\left( {X \gt x,{K_A} = k} \right)$ are dominated by the summable terms $k\mathbb{P}\left( {{K_A} = k} \right)$ and each tends to $0$ as $x \to \infty $, a dominated convergence argument gives

$$\mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \gt x,{K_A} = k} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{as\;\;}}x \to \infty .$$

For term ${B_{23}}$, which is nonpositive since, for all $k \geqslant 0$, $0 \leqslant {e^{ - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)}} \leqslant 1$, we bound it from below as follows:

\begin{align*} & - \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \leqslant x,{K_A} = k} \right)\\&\qquad\qquad \leqslant \mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left(X { \leqslant x,{K_A} = k} \right)\left( {{\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right) - 1} \right)\\&\qquad\qquad\qquad\qquad \leqslant 0,\end{align*}

and hence, by a dominated convergence argument, we obtain that as $x \to \infty $ ,

$$\mathbb{P}\,( {X \gt x} )\mathop \sum \limits_{k = 0}^\infty k\mathbb{P}\left( {X \leqslant x,{K_A} = k} \right)\left( {{\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\,( {X \gt x} )} \right)} \right) - 1} \right) = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right)\!.$$

Collecting the above results, we conclude that

$${B_2} = \mathbb{P}\,( {X \gt x} )\,\mathbb{E}\left[ {{K_A}} \right] + o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty .$$

Finally, by arguments very similar to those employed for ${B_2}$, which we omit for brevity,

$${B_3} = o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty .$$

Collecting the above, it follows that

$$\mathbb{P}\left( {{H^R} \gt x} \right) = \mathbb{P}\,( {X \gt x} ) + \mathbb{P}\,( {X \gt x} )\;\mathbb{E}\left[ {{K_A}} \right] + o\!\left( {\mathbb{P}\,( {X \gt x} )} \right){\textrm{ as }}x \to \infty .$$

The desired result follows at once by taking the limit as $x \to \infty $ and by using the assumption that the limiting Radon measure is non-null on the subspace $\{ \left( {{x_1},{x_2}} \right) \in \mathbb{R}_{ \textbf{+} ,\textbf{0}}^2\;:\;{x_1} \gt 1\} $, which implies, by means of Example 1, that ${H^R}$ is regularly varying with index $\alpha \gt 1$.
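
As an illustrative aside (not part of the proof), the asymptotic equivalence $\mathbb{P}\left( {{H^R} \gt x} \right) \approx \left( {1 + \mathbb{E}\left[ {{K_A}} \right]} \right)\mathbb{P}\,( {X \gt x} )$ obtained above is easy to check by simulation when, say, the marks are standard Pareto and ${K_A}$ is Poisson and independent of all the marks; these distributional choices and the parameter values below are made only for the illustration, with the functional read off Equation (16) as the maximum of the mark $X$ and of ${K_A}$ further independent copies.

```python
# Illustrative sketch (not part of the proof): empirical check of
# P(H > x) ~ (1 + E[K_A]) * P(X > x) for H = max(X, X_1, ..., X_{K_A}),
# assuming, for illustration only, standard Pareto(alpha) marks and a
# Poisson(lam) count K_A independent of all the marks.
import numpy as np

rng = np.random.default_rng(1)
alpha, lam, n_sim = 1.5, 3.0, 300_000

counts = rng.poisson(lam, size=n_sim)
ancestral = rng.pareto(alpha, size=n_sim) + 1.0          # the mark X
offspring_max = np.array([rng.pareto(alpha, size=k).max() + 1.0 if k > 0 else 0.0
                          for k in counts])
cluster_max = np.maximum(ancestral, offspring_max)       # the functional H

for x in [20.0, 50.0, 100.0]:
    print(f"x = {x:5.0f}   empirical {(cluster_max > x).mean():.2e}   "
          f"(1+E[K_A])*P(X>x) {(1.0 + lam) * x ** (-alpha):.2e}")
```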

Appendix B

Proof of Proposition 7

By conditioning and using the independence of $X$ and ${H^H}$ , as well as that of ${L_A}$ and ${H^H}$ , we obtain, as in the proof of Proposition 5,

$$\mathbb{P}\left( {{H^H} \gt x} \right) = 1 - \mathop \sum \limits_{k = 0}^\infty \,\mathbb{P}\,(X \leqslant x | {L_A} = k){\textrm{exp}}\left( {k\;{\textrm{log}}\left( {1 - \mathbb{P}\left( {{H^H} \gt x} \right)} \right)} \right)\mathbb{P}\left( {{L_A} = k} \right)\!.$$

A Taylor expansion on the exponential term, as $x \to \infty $ (and hence, as $\mathbb{P}\left( {{H^H} \gt x} \right) \to 0$ by the integrability of ${H^H}$ ), yields, as $x \to \infty $ ,

\begin{align*}&{\textrm{exp}}\left( {k\;{\textrm{log}}\left( {1 - \mathbb{P}\left( {{H^H} \gt x} \right)} \right)} \right)\\&\qquad\qquad = \left( {1 - k\mathbb{P}\left( {{H^H} \gt x} \right) + o\!\left( {k\mathbb{P}\left( {{H^H} \gt x} \right)} \right)} \right){\textrm{exp}}\left( { - o\!\left( {k\mathbb{P}\left( {{H^H} \gt x} \right)} \right)} \right)\!.\end{align*}

From here on, the proof follows the same procedure as that of Proposition 5, except that the tail of ${H^H}$, rather than the tail of $X$, appears here. The proof is omitted for brevity, but we retrieve

$$\mathbb{P}\left( {{H^H} \gt x} \right) = \mathbb{P}\,( {X \gt x} ) + \mathbb{E}\left[ {{L_A}} \right]\mathbb{P}\left( {{H^H} \gt x} \right) + o\!\left( {\mathbb{P}\left( {{H^H} \gt x} \right)} \right){\textrm{ as }}x \to \infty ,$$

which yields the desired result.
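
Rearranged, the last display suggests $\mathbb{P}\left( {{H^H} \gt x} \right) \sim \mathbb{P}\,( {X \gt x} )/\left( {1 - \mathbb{E}\left[ {{L_A}} \right]} \right)$ in the subcritical case. As an illustrative aside (not part of the proof), this can be checked by simulating the maximal mark of a subcritical cluster; the sketch below assumes, only for illustration, Poisson offspring counts with mean $\eta \lt 1$ and standard Pareto marks independent of the genealogy, with arbitrary parameter values.

```python
# Illustrative sketch (not part of the proof): empirical check of
# P(H > x) ~ P(X > x) / (1 - E[L_A]) for the maximal mark H of a subcritical
# cluster, assuming, for illustration only, Poisson(eta) offspring counts with
# eta < 1 and i.i.d. standard Pareto(alpha) marks independent of the genealogy.
import numpy as np

rng = np.random.default_rng(2)
alpha, eta, n_sim = 1.5, 0.5, 200_000

def cluster_max() -> float:
    """Maximal mark over one Galton-Watson cluster, grown generation by generation."""
    best, pending = 0.0, 1                    # start from the single ancestor
    while pending > 0:
        marks = rng.pareto(alpha, size=pending) + 1.0
        best = max(best, marks.max())
        pending = int(rng.poisson(eta, size=pending).sum())
    return best

maxima = np.array([cluster_max() for _ in range(n_sim)])

for x in [20.0, 50.0, 100.0]:
    print(f"x = {x:5.0f}   empirical {(maxima > x).mean():.2e}   "
          f"P(X>x)/(1-E[L_A]) {x ** (-alpha) / (1.0 - eta):.2e}")
```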

Appendix C

Proof of Proposition 6

We need the following lemma in order to prove Lemma 2, which is in turn used in the proof of Proposition 6:

Lemma 1. Suppose $\left( {X,{K_A}} \right)$ is regularly varying, with index $\alpha \in \left( {n,n + 1} \right)$ , for $n \in \mathbb{N}$ . Additionally suppose that $X$ has negligible tails with respect to $X + \mathbb{E}\left[ X \right]{K_A};$ i.e. $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right)} \right)$ as $x \to \infty .$ Then

$$\varphi _X^{\left( {n + 1} \right)}( s ) = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Proof Note that

\begin{align*}\varphi _X^{\left( {n + 1} \right)}( s ) & = \mathbb{E}\left[ {{{(\!-\!X)}^{n + 1}}{e^{ - sX}}} \right] = \mathop \int \nolimits_0^\infty {x^{n + 1}}{e^{ - sx}}{\textrm{d}}\left( { - \mathbb{P}\left( {X \geqslant x} \right)} \right)\\& = [ - {x^{n + 1}}{e^{ - sx}}\mathbb{P}\left( {X \geqslant x} \right)]_0^\infty + \mathop \int \nolimits_0^\infty \left( {\left( {n + 1} \right){x^n}{e^{ - sx}} - s{x^{n + 1}}{e^{ - sx}}} \right)\mathbb{P}\left( {X \geqslant x} \right){\textrm{d}}x.\end{align*}

The first term above vanishes; upon substituting $y = sx$, the second term yields

\begin{align*}&\mathop \int \nolimits_0^\infty \left( {\left( {n + 1} \right){x^n}{e^{ - sx}} - s{x^{n + 1}}{e^{ - sx}}} \right)\mathbb{P}\left( {X \geqslant x} \right){\textrm{d}}x\\&\qquad\qquad = \mathop \int \nolimits_0^\infty \left( {\left( {n + 1} \right){{(y/s)}^n}{e^{ - y}} - s{{(y/s)}^{n + 1}}{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right)\frac{{{\textrm{d}}y}}{s}\\&\qquad\qquad\qquad\qquad = {s^{ - \left( {n + 1} \right)}}\mathop \int \nolimits_0^\infty \left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right){\textrm{d}}y.\end{align*}

Fix a small $\varepsilon \gt 0$ and split the above integral into

\begin{align*}& {s^{ - \left( {n + 1} \right)}}\mathop \int \nolimits_0^\infty \left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right){\textrm{d}}y\\&\qquad\qquad= {s^{ - \left( {n + 1} \right)}}\bigg(\mathop \int \nolimits_0^\varepsilon \left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right){\textrm{d}}y\\&\qquad\qquad\qquad\qquad + \mathop \int \nolimits_\varepsilon ^\infty \left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant {y}/{s}} \right){\textrm{d}}y\bigg) =\! : {I_1} + {I_2}.\end{align*}
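
Before treating ${I_1}$ and ${I_2}$, we note, as a purely numerical aside (not part of the proof), that the integration-by-parts and substitution steps above can be sanity-checked for a concrete law; the sketch below takes $X$ standard Pareto with $\alpha = 1.5 \in \left( {n,n + 1} \right)$ for $n = 1$ and a fixed $s$ (all of these are arbitrary illustration choices) and compares $\mathbb{E}\left[ {{X^{n + 1}}{e^{ - sX}}} \right]$ with the substituted integral.

```python
# Numerical sanity check (not part of the proof) of the integration-by-parts and
# substitution identity above:
#   E[X^{n+1} e^{-sX}] = s^{-(n+1)} * int_0^inf ((n+1) y^n - y^{n+1}) e^{-y} P(X >= y/s) dy,
# for X standard Pareto(alpha) on [1, inf) with alpha in (n, n+1); the values of
# alpha, n and s below are arbitrary illustration choices.
import numpy as np
from scipy.integrate import quad

alpha, n, s = 1.5, 1, 0.1
tail = lambda t: 1.0 if t <= 1.0 else t ** (-alpha)      # P(X >= t)

# Left-hand side, from the Pareto density alpha * x^{-alpha-1} on [1, inf).
lhs, _ = quad(lambda x: x ** (n + 1) * np.exp(-s * x) * alpha * x ** (-alpha - 1),
              1.0, np.inf)

# Right-hand side; the integrand has a kink at y = s, so integrate in two pieces.
integrand = lambda y: ((n + 1) * y ** n - y ** (n + 1)) * np.exp(-y) * tail(y / s)
rhs = s ** (-(n + 1)) * (quad(integrand, 0.0, s)[0] + quad(integrand, s, np.inf)[0])

print(f"direct expectation {lhs:.6f}   substituted integral {rhs:.6f}")
```

The two printed values agree up to quadrature error, as the identity is exact.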

Consider integral ${I_2}$ first. For some values $y \in \left[ {\varepsilon ,\infty } \right)$, the expression $\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}$ might be negative, so we bound ${I_2}$ above by its absolute value. Additionally, upon using the hypothesis of negligibility of the tail of $X$ with respect to the tail of $X + \mathbb{E}\left[ X \right]{K_A}$, it follows that for any ${\delta _\varepsilon } \gt 0$ and for any fixed $\varepsilon \gt 0$ and $y \gt \varepsilon $, there is ${s_0}$ such that for all $s \leqslant {s_0}$, $\mathbb{P}\left( {X \geqslant y/s} \right) \leqslant {\delta _\varepsilon }\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt y/s} \right)$. All in all, because $X + \mathbb{E}\left[ X \right]{K_A}$ is regularly varying with index $\alpha \in \left( {n,n + 1} \right)$, this yields as an upper bound

\begin{align*}\left| {{I_2}} \right| & \leqslant {s^{ - \left( {n + 1} \right)}}\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{\delta _\varepsilon }\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt y/s} \right){\textrm{d}}y\\& \leqslant {s^{ - \left( {n + 1} \right)}}\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{\delta _\varepsilon }{(y/s)^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y\\ & \leqslant {s^{\alpha - \left( {n + 1} \right)}}{\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y.\end{align*}

Because $\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right){y^{ - \alpha }}$ is integrable over $\left[ {0,\infty } \right)$ , it follows from Proposition 4.1.2 (b) in [Reference Bingham, Goldie and Teugels7] that as $s \to {0^ + }$ ,

\begin{align*}&{s^{\alpha - \left( {n + 1} \right)}}{\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y\\&\qquad\qquad\qquad\sim {s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right){\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{\textrm{d}}y.\end{align*}

For each fixed value of $\varepsilon \gt 0$ and as $s \to {0^ + }$ , it is possible to take ${\delta _\varepsilon } \gt 0$ as small as needed in order to guarantee that

$${\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left| \left({\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}}\right) \right|{y^{ - \alpha }}{\textrm{d}}y = o\!\left( 1 \right){\textrm{ as\;}}s \to {0^ + }.$$

This implies that, for a fixed $\varepsilon \gt 0$, as $s \to {0^ + }$,

\begin{align*}&\frac{{\left| {{I_2}} \right|}}{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}}\\&\qquad \leqslant \frac{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right){\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left| {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right|{y^{ - \alpha }}{\textrm{d}}y}}{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}}\\&\qquad \leqslant o\!\left( 1 \right)\!.\end{align*}

Now consider integral ${I_1}$ . Because $X$ is stochastically dominated by $X + \mathbb{E}\left[ X \right]{K_A}$ and using the regular variation of the latter quantity, this yields

\begin{align*}\left| {{I_1}} \right|& \leqslant {s^{ - \left( {n + 1} \right)}}\mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt y/s} \right){\textrm{d}}y\\& \leqslant {s^{\alpha - \left( {n + 1} \right)}}\mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y.\end{align*}

Because the function $\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right){y^{ - \alpha }}$ is integrable over $\left[ {0,\varepsilon } \right)$ , it follows by Proposition 4.1.2 (a) in [Reference Bingham, Goldie and Teugels7] that as $s \to {0^ + }$ ,

\begin{align*} & {s^{\alpha - \left( {n + 1} \right)}}\mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y\\&\qquad\qquad\qquad\qquad\sim {s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)\mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{\textrm{d}}y.\end{align*}

It follows that

\begin{align*}\frac{{\left| {{I_1}} \right|}}{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}} &\leqslant \frac{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)\mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{\textrm{d}}y}}{{{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}},\\& \leqslant \mathop \int \nolimits_0^\varepsilon \left| {\left( {\left( {n + 1} \right){y^n}{e^{ - y}} - {y^{n + 1}}{e^{ - y}}} \right)} \right|{y^{ - \alpha }}{\textrm{d}}y\end{align*}

and because one can take $\varepsilon \gt 0$ as small as needed, this shows that

$${I_1} + {I_2} = o\!\left( {{s^{\alpha - \left( {n + 1} \right)}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right)} \right),{\textrm{ as }}s \to {0^ + }.$$

Finally, because $n + 1 = \left\lceil \alpha \right\rceil $ and using Karamata’s Tauberian Theorem (1), which implies that $\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s )\sim {C_\alpha }{s^{\alpha - \lceil\alpha\rceil }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right){\textrm{ as }}s \to {0^ + }$, this shows that

$$\varphi _X^{\left( {n + 1} \right)}( s ) = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( {n + 1} \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$
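
As a numerical aside (not part of the proof), the Tauberian rate just invoked can be visualised for a standard Pareto tail with $\alpha \in \left( {1,2} \right)$, for which the slowly varying function is constant: rescaling $\mathbb{E}\left[ {{X^2}{e^{ - sX}}} \right]$ by ${s^{2 - \alpha }}$ produces values of constant order as $s \to {0^ + }$ (the parameter choices below are arbitrary).

```python
# Numerical aside (not part of the proof): for X standard Pareto(alpha) with
# alpha in (1, 2), the quantity E[X^2 e^{-sX}] grows like s^(alpha - 2) as
# s -> 0+, in line with the Tauberian rate used above (the slowly varying
# function is constant here). The parameter values are arbitrary.
import numpy as np
from scipy.integrate import quad

alpha = 1.5

def damped_second_moment(s: float) -> float:
    """E[X^2 e^{-sX}] for the standard Pareto(alpha) law on [1, inf)."""
    val, _ = quad(lambda x: x ** 2 * np.exp(-s * x) * alpha * x ** (-alpha - 1),
                  1.0, np.inf, limit=200)
    return val

for s in [1e-1, 1e-2, 1e-3]:
    print(f"s = {s:7.1e}   E[X^2 e^(-sX)] * s^(2-alpha) = "
          f"{damped_second_moment(s) * s ** (2.0 - alpha):.3f}")
# The rescaled values stay of constant order and slowly approach
# alpha * Gamma(2 - alpha), roughly 2.66 for this particular example.
```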

Lemma 2. Suppose $\left( {X,{K_A}} \right)$ is regularly varying with index $\alpha \in \left( {1,2} \right)$ and slowly varying function ${L_{X + \mathbb{E}\left[ X \right]{K_A}}}({\cdot})$ and that $X$ has a negligible tail compared to the modulus $X + \mathbb{E}\left[ X \right]{K_A}$ , i.e. $\mathbb{P}\,( {X \gt x} ) = o\!\left( {\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt x} \right)} \right){\textrm{ as }}x \to \infty $ . Then,

$$\frac{{\varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s} = o\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )} \right){\textrm{ as }}s \to {0^ + }.$$

Proof Let $\alpha \in \left( {1,2} \right)$. We evaluate

\begin{align*}\frac{{\varphi _X^{\left( 1 \right)}( s ) + \mathbb{E}\left[ X \right]}}{s} & = \mathbb{E}\left[ {\frac{{X\!\left( {1 - {e^{ - sX}}} \right)}}{s}} \right]\\ & = \mathop \int \nolimits_0^\infty \frac{{x\!\left( {1 - {e^{ - sx}}} \right)}}{s}{\textrm{d}}(\!-\!\mathbb{P}\left( {X \geqslant x)} \right)\\ & = \Bigg[ - \frac{{x\!\left( {1 - {e^{ - sx}}} \right)}}{s}\mathbb{P}\left( {X \geqslant x} \right)\Bigg]_0^\infty\! + \mathop \int \nolimits_0^\infty \!\!\left( {\frac{{\left( {1 - {e^{ - sx}}} \right)}}{s} + x{e^{ - sx}}} \right)\!\mathbb{P}\left( {X \geqslant x} \right){\textrm{d}}x.\end{align*}

Since $X$ is integrable, one has that $x\mathbb{P}\left( {X \geqslant x} \right) = o\!\left( 1 \right)$ as $x \to \infty $, so the first expression on the right-hand side above vanishes; for the second integral, fix a small $\varepsilon \gt 0$ and write

\begin{align*}\mathop \int \nolimits_0^\infty \left( {\frac{{\left( {1 - {e^{ - sx}}} \right)}}{s} + x{e^{ - sx}}} \right)\mathbb{P}\left( {X \geqslant x} \right){\textrm{d}}x & = \mathop \int \nolimits_0^\infty {s^{ - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right)\!{\textrm{d}}y,\\& = \mathop \int \nolimits_0^\varepsilon {s^{ - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right)\!{\textrm{d}}y\\&\quad + \mathop \int \nolimits_\varepsilon ^\infty {s^{ - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right)\mathbb{P}\left( {X \geqslant y/s} \right)\!{\textrm{d}}y\\& =\! :\left( {{I_1} + {I_2}} \right)\!.\end{align*}

First consider integral ${I_2}$ . A similar argument as in the proof of Lemma 1 for integral ${I_2}$ yields the following upper bound:

\begin{align*} {I_2} & \leqslant \mathop \int \nolimits_\varepsilon ^\infty {s^{ - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){\delta _\varepsilon }\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \gt y/s} \right){\textrm{d}}y\\& \leqslant {s^{\alpha - 2}}{\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y.\end{align*}

As $\varepsilon \to 0$ , the above integral diverges. But for a (small) fixed value of $\varepsilon \gt 0$ , by using Proposition 4.1.2 (b) in [Reference Bingham, Goldie and Teugels7], as $s \to {0^ + }$ ,

\begin{align*} & {s^{\alpha - 2}}\mathop \int \nolimits_\varepsilon ^\infty \left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{\delta _\varepsilon }{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y\\&\qquad\qquad\sim {s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right){\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y.\end{align*}

As $s \to {0^ + }$ and as in the proof of Lemma 1, it is possible to take ${\delta _\varepsilon } \gt 0$ as small as needed in order to ensure that

$${\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y = o\!\left( 1 \right){\textrm{ as\;}}s \to {0^ + }.$$

This implies that as $s \to {0^ + }$ ,

$$\frac{{{I_2}}}{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}} \leqslant \frac{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right){\delta _\varepsilon }\mathop \int \nolimits_\varepsilon ^\infty \left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y}}{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}} \leqslant o\!\left( 1 \right)\!.$$

Consider now integral ${I_1}$ . Because $X$ is stochastically dominated by $X + \mathbb{E}\left[ X \right]{K_A}$ , for any fixed $\varepsilon \gt 0$ , we have

\begin{align*}{I_1} & \leqslant \mathop \int \nolimits_0^\varepsilon {s^{ - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right)\mathbb{P}\left( {X + \mathbb{E}\left[ X \right]{K_A} \geqslant y/s} \right){\textrm{d}}y\\& = \mathop \int \nolimits_0^\varepsilon {s^{\alpha - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y.\end{align*}

A Taylor expansion as $y \to 0$ yields $1 - {e^{ - y}} + y{e^{ - y}} = 2y{e^{ - y}} - {y^2}{e^{ - y}} + o\!\left( y \right)$, and we get that

\begin{align*}& \mathop \int \nolimits_0^\varepsilon {s^{\alpha - 2}}\left( {1 - {e^{ - y}} + y{e^{ - y}}} \right){y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y\\&\qquad \approx {s^{\alpha - 2}}\mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{{\textrm{e}}^{ - y}}} \right){y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {y/s} \right){\textrm{d}}y.\end{align*}
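
This expansion is easy to confirm symbolically; the following snippet (a side check only, not part of the argument) shows that the difference between the two sides is of order ${y^2}$, hence $o\!\left( y \right)$:

```python
# Symbolic side check (not part of the argument) of the expansion used above:
# the difference between 1 - e^{-y} + y e^{-y} and 2 y e^{-y} - y^2 e^{-y}
# is of order y^2 as y -> 0, hence o(y).
import sympy as sp

y = sp.symbols('y', positive=True)
difference = (1 - sp.exp(-y) + y * sp.exp(-y)) - (2 * y * sp.exp(-y) - y ** 2 * sp.exp(-y))
print(sp.series(difference, y, 0, 3))   # the leading term is 3*y**2/2
```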

Because the integral $\mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y \lt \infty $ for $\alpha \in \left( {1, 2} \right)$ when $\varepsilon \gt 0$ is small, even if it is potentially large for values of $\alpha $ close to 2, it follows from Proposition 4.1.2. (a) in [Reference Bingham, Goldie and Teugels7] that as $s \to {0^ + }$ ,

\begin{align*}& {s^{{{\alpha }} - 2}}\mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{e^{ - y}}} \right){y^{ - \alpha }}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {\frac{y}{s}} \right){\textrm{d}}y \\&\qquad\sim {s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)\mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y.\end{align*}

Hence, as $s \to {0^ + }$

\begin{align*}\frac{{{I_1}}}{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}} & \leqslant \frac{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)\mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y}}{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}}\\& \leqslant \mathop \int \nolimits_0^\varepsilon \left( {2y{e^{ - y}} - {y^2}{e^{ - y}}} \right){y^{ - \alpha }}{\textrm{d}}y,\end{align*}

and because one can take $\varepsilon \gt 0$ as small as needed, it follows that

$$\left( {{I_1} + {I_2}} \right) = o\!\left( {{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)} \right){\textrm{ as }}s \to {0^ + }.$$

Because $X + \mathbb{E}\left[ X \right]{K_A}$ is regularly varying, by Karamata’s Tauberian Theorem (1),

$$\frac{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}}{{\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )}} \sim \frac{{{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right)}}{{{C_\alpha }{s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {1/s} \right) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )}}{\textrm{ as }}s \to {0^ + }.$$

Applying the result of Lemma 1, this yields

$${s^{\alpha - 2}}{L_{X + \mathbb{E}\left[ X \right]{K_A}}}\left( {{1}/{s}} \right) = \mathcal{O}\!\left( {\varphi _{X + \mathbb{E}\left[ X \right]{K_A}}^{\left( 2 \right)}( s ) + \mathbb{E}\left[ {{K_A}} \right]\varphi _X^{\left( 2 \right)}( s )} \right){\textrm{ as }}s \to {0^ + },$$

which yields the desired result.

Acknowledgements

The authors would like to thank the two anonymous referees for their suggestions, which helped to shorten the present article; for pointing out unexplored relevant references; and for the numerous helpful comments making this article more readable. The authors would also like to acknowledge the French Agence Nationale de la Recherche (ANR) and the project with reference ANR-20-CE40-0025-01 (T-REX project) and, more specifically, the members of the T-REX project for organising the VALPRED3 and VALPRED4 workshops at the CNRS Centre Paul Langevin in Aussois, during which fruitful discussions led to great improvement of this article.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests that arose during the preparation or publication process of this article to declare.

References

Asmussen, S. and Foss, S. (2018). Regular variation in a fixed-point problem for single- and multiclass branching processes and queues. Advances in Applied Probability. 50, 47–61.
Basrak, B., Conroy, M., Olvera-Cravioto, M. and Palmowski, Z. (2022). Importance sampling for maxima on trees. Stochastic Processes and Their Applications. 148, 139–179.
Basrak, B., Davis, R. A. and Mikosch, T. (2002). A characterization of multivariate regular variation. Annals of Applied Probability. 908–920.
Basrak, B., Kulik, R. and Palmowski, Z. (2013). Heavy-tailed branching process with immigration. Stochastic Models. 29, 413–434.
Basrak, B., Milinčević, N. and Žugec, P. (2023). On extremes of random clusters and marked renewal cluster processes. Journal of Applied Probability. 60, 367–381.
Basrak, B., Wintenberger, O. and Žugec, P. (2019). On the total claim amount for marked Poisson cluster models. Advances in Applied Probability. 51, 541–569.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation, Vol. 27. Cambridge University Press.
Brémaud, P. (2020). Point Process Calculus in Time and Space: An Introduction with Applications, Vol. 98. Springer Nature.
Chavez-Demoulin, V., Davison, A. C. and McNeil, A. J. (2005). Estimating value-at-risk: a point process approach. Quantitative Finance. 5, 227–234.
Chen, N., Litvak, N. and Olvera-Cravioto, M. Pagerank in scale-free random graphs. In Algorithms and Models for the Web Graph: 11th International Workshop, WAW 2014, Beijing, China, December 17–18, 2014, Proceedings 11, pages 120–131. Springer.
Chen, N., Litvak, N. and Olvera-Cravioto, M. (2017). Generalized pagerank on directed configuration networks. Random Structures & Algorithms. 51, 237–274.
Cline, D. B. H. and Hsing, T. (1991). Large deviation probabilities for sums and maxima of random variables with heavy or subexponential tails. Preprint, Texas A&M University, 501.
Daley, D. J. and Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes, Vol. I, Probability and Its Applications. Springer-Verlag, New York.
Daley, D. J. and Vere-Jones, D. (2008). An Introduction to the Theory of Point Processes, Vol. II, General Theory and Structure. Springer, New York.
De Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction, Vol. 21. Springer.
Denisov, D., Foss, S. and Korshunov, D. (2010). Asymptotics of randomly stopped sums in the presence of heavy tails. Bernoulli, 971–994.
Embrechts, P., Klüppelberg, C. and Mikosch, T. (2013). Modelling Extremal Events: for Insurance and Finance, Vol. 33. Springer Science & Business Media.
Ernst, P. A., Asmussen, S. and Hasenbein, J. J. (2018). Stability and busy periods in a multiclass queue with state-dependent arrival rates. Queueing Systems. 90, 207–224.
Faÿ, G., González-Arévalo, B., Mikosch, T. and Samorodnitsky, G. (2006). Modeling teletraffic arrivals by a Poisson cluster process. Queueing Systems. 54, 121–140.
Foufoula-Georgiou, E. and Lettenmaier, D. P. (1987). A Markov renewal model for rainfall occurrences. Water Resources Research. 23, 875–884.
Hawkes, A. G. (1971). Spectra of some self-exciting and mutually exciting point processes. Biometrika. 58, 83–90.
Hawkes, A. G. (2018). Hawkes processes and their applications to finance: a review. Quantitative Finance. 18, 193–198.
Hawkes, A. G. and Oakes, D. (1974). A cluster process representation of a self-exciting process. Journal of Applied Probability. 11, 493–503.
Heyde, C. C. (1967). On large deviation problems for sums of random variables which are not attracted to the normal law. The Annals of Mathematical Statistics. 38, 1575–1578.
Hult, H. and Lindskog, F. (2006). Regular variation for measures on metric spaces. Publications de l’Institut Mathématique. 80, 121–140.
Hult, H. and Samorodnitsky, G. (2008). Tail probabilities for infinite series of regularly varying random vectors. Bernoulli. 14, 838–864.
Jelenković, P. R. and Olvera-Cravioto, M. (2010). Information ranking and power laws on trees. Advances in Applied Probability. 42, 1057–1093.
Jessen, H. A. and Mikosch, T. (2006). Regularly varying functions. Publications de l’Institut Mathématique. 80, 171–192.
Karabash, D. and Zhu, L. (2015). Limit theorems for marked Hawkes processes with application to a risk model. Stochastic Models. 31, 433–451.
Karamata, J. (1933). Sur un mode de croissance régulière: théorèmes fondamentaux. Bulletin de la Société Mathématique de France. 61, 55–62.
Karpelevich, F., Kelbert, M. Y. and Suhov, Y. M. (1994). Higher-order Lindley equations. Stochastic Processes and their Applications. 53, 65–96.
Klüppelberg, C. and Mikosch, T. (1997). Large deviations of heavy-tailed random sums with applications in insurance and finance. Journal of Applied Probability. 34, 293–308.
Lindskog, F., Resnick, S. I. and Roy, J. (2014). Regularly varying measures on metric spaces: hidden regular variation and hidden jumps. Probability Surveys. 11, 270–314.
Litvak, N., Scheinhardt, W. R. W. and Volkovich, Y. (2007). In-degree and Pagerank: why do they follow similar power laws? Internet Mathematics. 4, 175–198.
Liu, L. (2009). Precise large deviations for dependent random variables with heavy tails. Statistics & Probability Letters. 79, 1290–1298.
Markovich, N. (2023). Extremal properties of evolving networks: local dependence and heavy tails. Annals of Operations Research. 132. https://doi.org/10.1007/s10479-023-05175-y
Markovich, N. M. and Rodionov, I. V. (2020). Maxima and sums of non-stationary random length sequences. Extremes. 23, 451–464.
Mikosch, T. (2009). Non-Life Insurance Mathematics: An Introduction with the Poisson Process. Springer Science & Business Media.
Mikosch, T. and Nagaev, A. V. (1998). Large deviations of heavy-tailed sums with applications in insurance. Extremes. 1, 81–110.
Mikosch, T. and Wintenberger, O. (2024). Extreme Value Theory for Time Series: Models with Power-Law Tails. Springer Series in Operations Research and Financial Engineering. Springer, Cham. https://link.springer.com/book/9783031591556
Musmeci, F. and Vere-Jones, D. (1992). A space-time clustering model for historical earthquakes. Annals of the Institute of Statistical Mathematics. 44, 1–11.
Nagaev, A. V. (1969). Integral limit theorems taking large deviations into account when Cramér’s condition does not hold. I. Theory of Probability & Its Applications. 14, 51–64.
Nagaev, A. V. (1969). Integral limit theorems taking large deviations into account when Cramér’s condition does not hold. II. Theory of Probability & Its Applications. 14, 193–208.
Nagaev, S. V. (1979). Large deviations of sums of independent random variables. The Annals of Probability. 7, 745–789.
Ng, K. W., Tang, Q., Yan, J.-A. and Yang, H. (2004). Precise large deviations for sums of random variables with consistently varying tails. Journal of Applied Probability. 41, 93–107.
Ogata, Y. (1988). Statistical models for earthquake occurrences and residual analysis for point processes. Journal of the American Statistical Association. 83, 9–27.
Olvera-Cravioto, M. (2021). PageRank’s behavior under degree correlations. The Annals of Applied Probability. 31, 1403–1442.
Onof, C. et al. (2000). Rainfall modelling using Poisson-cluster processes: a review of developments. Stochastic Environmental Research and Risk Assessment. 14, 384–411.
Resnick, S. I. (1987). Extreme Values, Regular Variation and Point Processes. Springer-Verlag, New York.
Resnick, S. I. (2007). Heavy-Tail Phenomena: Probabilistic and Statistical Modeling. Springer Science & Business Media.
Reynaud-Bouret, P. and Schbath, S. (2010). Adaptive estimation for Hawkes processes: application to genome analysis. The Annals of Statistics. 38, 2781–2822.
Robert, C. Y. and Segers, J. (2008). Tails of random sums of a heavy-tailed number of light-tailed terms. Insurance: Mathematics and Economics. 43, 85–92.
Segers, J., Zhao, Y. and Meinguet, T. (2016). Polar decomposition of regularly varying time series in star-shaped metric spaces. Preprint, arXiv:1604.00241.
Stabile, G. and Torrisi, G. L. (2010). Risk processes with non-stationary Hawkes claims arrivals. Methodology and Computing in Applied Probability. 12, 415–429.
Swishchuk, A. (2018). Risk model based on compound Hawkes process. Wilmott. 94, 50–57.
Tang, Q. (2006). Insensitivity to negative dependence of the asymptotic behavior of precise large deviations. Electronic Journal of Probability. 11, 107–120.
Tang, Q., Su, C., Jiang, T. and Zhang, J. (2001). Large deviations for heavy-tailed random sums in compound renewal model. Statistics & Probability Letters. 52, 91–100.
Tillier, C. and Wintenberger, O. (2018). Regular variation of a random length sequence of random variables and application to risk assessment. Extremes. 21, 27–56.
Vere-Jones, D. and Ozaki, T. (1982). Some examples of statistical estimation applied to earthquake data: I. Cyclic Poisson and self-exciting models. Annals of the Institute of Statistical Mathematics. 34, 189–207.
Volkovich, Y. and Litvak, N. Asymptotic analysis for personalized web search. Advances in Applied Probability. 42, 577–604.