
Renewal theory for iterated perturbed random walks on a general branching process tree: Early generations

Published online by Cambridge University Press:  02 September 2022

Alexander Iksanov*
Affiliation:
Taras Shevchenko National University of Kyiv
Bohdan Rashytov*
Affiliation:
Taras Shevchenko National University of Kyiv
Igor Samoilenko*
Affiliation:
Taras Shevchenko National University of Kyiv
*Postal address: Faculty of Computer Science and Cybernetics, Taras Shevchenko National University of Kyiv, Volodymyrska 64/13, Kyiv, Ukraine, 01601.

Abstract

Let $(\xi_k,\eta_k)_{k\in\mathbb{N}}$ be independent identically distributed random vectors with arbitrarily dependent positive components. The random sequence $T\,{:\!=}\, (T_k)_{k\in\mathbb{N}}$ defined by $T_k\,{:\!=}\, \xi_1+\cdots+\xi_{k-1}+\eta_k$ for $k\in\mathbb{N}$ is called a (globally) perturbed random walk. Consider a general branching process generated by T and let $N_j(t)$ denote the number of the jth generation individuals with birth times $\leq t$. We treat early generations, that is, fixed generations j which do not depend on t. In this setting we prove counterparts for $\mathbb{E}N_j$ of the Blackwell theorem and the key renewal theorem, prove a strong law of large numbers for $N_j$, and find the first-order asymptotics for the variance of $N_j$. Also, we prove a functional limit theorem for the vector-valued process $(N_1(ut),\ldots, N_j(ut))_{u\geq0}$, properly normalized and centered, as $t\to\infty$. The limit is a vector-valued Gaussian process whose components are integrated Brownian motions.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $(\xi_i, \eta_i)_{i\in\mathbb{N}}$ be independent copies of an $\mathbb{R}^2$ -valued random vector $(\xi, \eta)$ with arbitrarily dependent components. Let $(S_i)_{i\in\mathbb{N}_0}$ ( $\mathbb{N}_0\,{:\!=}\, \mathbb{N}\cup\{0\}$ ) denote the standard random walk with increments $\xi_i$ for $i\in\mathbb{N}$ , i.e. $S_0\,{:\!=}\, 0$ and $S_i\,{:\!=}\, \xi_1+\cdots+\xi_i$ for $i\in\mathbb{N}$ . Define

\begin{equation*}T_i\,{:\!=}\, S_{i-1}+\eta_i,\quad i\in \mathbb{N}.\end{equation*}

The sequence $T\,{:\!=}\, (T_i)_{i\in\mathbb{N}}$ is called a perturbed random walk.
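For concreteness, the definition can be sketched in a few lines of code (an illustrative helper of ours, not part of the paper; the function name and input format are our own choices):

```python
def perturbed_random_walk(pairs):
    """Given pairs (xi_k, eta_k), k = 1, 2, ..., return the points
    T_k = xi_1 + ... + xi_{k-1} + eta_k of the perturbed random walk."""
    T, s = [], 0.0           # s holds S_{k-1} = xi_1 + ... + xi_{k-1}
    for xi, eta in pairs:
        T.append(s + eta)    # T_k = S_{k-1} + eta_k
        s += xi              # advance the underlying walk to S_k
    return T
```

With $\eta_k=\xi_k$ the output reduces to the standard random walk $(S_k)_{k\in\mathbb{N}}$, the special case discussed in (ii) below.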

In the following we assume that $\xi$ and $\eta$ are almost surely (a.s.) positive. Now we define a general branching process generated by T. At time 0 there is one individual, the ancestor. The ancestor produces offspring (the first generation) with birth times given by the points of T. The first generation produces the second generation. The shifts of birth times of the second generation individuals with respect to their mothers’ birth times are distributed according to copies of T, and for different mothers these copies are independent. The second generation produces the third one, and so on. All individuals act independently of each other. For $t\geq 0$ and $j\in\mathbb{N}$ , let $N_j(t)$ denote the number of the jth generation individuals with birth times $\leq t$ and put $V_j(t)\,{:\!=}\, \mathbb{E} [N_j(t)]$ , and $V_j(t)\,{:\!=}\, 0$ for $t<0$ . Then $N_1(t)=\sum_{r\geq 1}\unicode{x1D7D9}_{\{T_r\leq t\}}$ and

(1) \begin{equation}N_j(t)=\sum_{r\geq 1}N^{(r)}_{j-1}(t-T_r)\unicode{x1D7D9}_{\{T_r\leq t\}}=\sum_{k\geq1}N^{(k)}_{1,\,j}\bigl(t-T^{(\,j-1)}_k\bigr)\unicode{x1D7D9}_{\{T^{(\,j-1)}_k\leq t\}} ,\quad j\geq 2,\ t\geq 0,\end{equation}

where $N_{j-1}^{(r)}(t)$ is the number of the jth generation individuals who are descendants of the first-generation individual with birth time $T_r$ , and whose birth times fall in $[T_r, T_r+t]$ ; $T^{(\,j-1)}\,{:\!=}\, \bigl(T^{(\,j-1)}_k\bigr)_{k\geq 1}$ is some enumeration of the birth times in the $(\,j-1)$ th generation; $N_{1,j}^{(k)}(t)$ is the number of the jth generation individuals that are children of the $(\,j-1)$ th generation individual with birth time $T^{(\,j-1)}_k$ , and whose birth times fall in

\begin{equation*}\bigl[T_k^{(\,j-1)}, T_k^{(\,j-1)}+t\bigr].\end{equation*}

By the branching property,

\begin{equation*}\bigl(N_{j-1}^{(1)}(t)\bigr)_{t\geq 0},\quad \bigl(N_{j-1}^{(2)}(t)\bigr)_{t\geq 0},\ldots\end{equation*}

are independent copies of $N_{j-1}$ which are also independent of T, and

\begin{equation*}\bigl(N_{1,\,j}^{(1)}(t)\bigr)_{t\geq 0},\quad\bigl(N_{1,j}^{(2)}(t)\bigr)_{t\geq 0},\ldots\end{equation*}

are independent copies of $(N_1(t))_{t\geq 0}$ which are also independent of $T^{(\,j-1)}$ . In what follows we write N for $N_1$ and V for $V_1$ . Passing in (1) to expectations we infer, for $j\geq 2$ and $t\geq 0$ ,

(2) \begin{equation}V_j(t)=V^{\ast(\,j)}(t)=(V_{j-1}\ast V)(t)=\int_{[0,\,t]} V_{j-1}(t-y) \,{\textrm{d}} V(y)=\int_{[0,\,t]} V(t-y) \,{\textrm{d}} V_{j-1}(y),\end{equation}

where $V^{\ast(\,j)}$ is the j-fold convolution of V with itself. We call the sequence $\mathcal{T}\,{:\!=}\, (T^{(\,j)})_{j\in\mathbb{N}}$ an iterated perturbed random walk on a general branching process tree.

The motivation behind the study of $\mathcal{T}$ is at least threefold.

(i) Originally, iterated perturbed random walks were introduced in Section 3 of [Reference Buraczewski, Dovgay and Iksanov4] as an auxiliary tool for investigating the nested occupancy scheme in a random environment generated by stick-breaking. This scheme is a hierarchical extension of the Karlin [Reference Karlin9] infinite balls-in-boxes occupancy scheme, in which, for $k\in\mathbb{N}$, the (random) hitting probability of box k is equal to ${\textrm{e}}^{-T^\ast_k}$. Here $(T_k^\ast)_{k\in\mathbb{N}}$ is a particular perturbed random walk defined by

    \begin{equation*}T_k^\ast\,{:\!=}\, |\log W_1|+\cdots+|\log W_{k-1}|+|{\log}\,(1-W_k)|,\quad k\in\mathbb{N},\end{equation*}
    and $(W_k)_{k\in\mathbb{N}}$ are independent identically distributed random variables taking values in (0, 1). The sequence $({\textrm{e}}^{-T_k^\ast})_{k\in\mathbb{N}}$ is called a stick-breaking sequence or residual allocation model.
(ii) Assuming that j is fixed, the sequence $T^{(\,j)}$ and the process $N_j$ are natural generalizations of the perturbed random walk T and the counting process N. A reader who is interested in standard random walks rather than perturbed random walks may put $\eta=\xi$. Then T and N become a standard random walk and the corresponding renewal process, respectively. Thus any results obtained for $T^{(\,j)}$ and $N_j$ can be thought of as a contribution to an interesting extension of renewal theory.

(iii) The sequence $\mathcal{T}$ is a particular yet non-trivial instance of a branching random walk; see [Reference Shi13] for a survey. As a consequence, the analysis of $\mathcal{T}$ contributes to the theory of branching random walks and the theory of general branching processes.
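Returning to (i): the identity ${\textrm{e}}^{-T_k^\ast}=W_1\cdots W_{k-1}(1-W_k)$ behind the stick-breaking sequence is easy to check numerically (an illustrative sketch of ours, not taken from [Reference Buraczewski, Dovgay and Iksanov4]):

```python
import math

def stick_breaking_probs(W):
    """Box probabilities e^{-T*_k} = W_1 ... W_{k-1} (1 - W_k) for W_k in (0,1)."""
    probs, stick = [], 1.0
    for w in W:
        probs.append(stick * (1.0 - w))  # piece broken off at step k
        stick *= w                       # length W_1 ... W_k still unbroken
    return probs

W = [0.5, 0.5, 0.5]
probs = stick_breaking_probs(W)
# direct evaluation of T*_k = |log W_1| + ... + |log W_{k-1}| + |log(1 - W_k)|
direct, s = [], 0.0
for w in W:
    direct.append(math.exp(-(s + abs(math.log(1.0 - w)))))
    s += abs(math.log(w))
```

The two computations agree term by term, and the probabilities sum to $1-W_1\cdots W_n$ after n steps, as expected of a residual allocation model.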

Following [Reference Bohun, Iksanov, Marynych and Rashytov3], we call the jth generation early, intermediate, or late depending on whether j is fixed; $j=j(t)\to\infty$ with $j(t)={\textrm{o}}(t)$ as $t\to\infty$; or $j=j(t)$ is of order t. In [Reference Bohun, Iksanov, Marynych and Rashytov3], Bohun, Iksanov, Marynych, and Rashytov prove counterparts of the elementary renewal theorem, the Blackwell theorem, and the key renewal theorem for some intermediate generations. In the present work we investigate early generations. Although the analysis of early generations is simpler than that of intermediate generations, we solve here a larger collection of problems. More precisely, we prove a strong law of large numbers (Theorem 5) for $N_j(t)$ and a functional limit theorem (Theorem 6) for the vector-valued process $(N_1(ut),N_2(ut),\ldots,N_k(ut))_{u\geq 0}$ for each $k\in\mathbb{N}$, properly normalized and centered as $t\to\infty$; investigate the rate of convergence (Theorem 1) in a counterpart for $V_j$ of the elementary renewal theorem; and find the asymptotics of the variance $\textrm{Var} [N_j(t)]$ (Theorem 4). Also, counterparts for $V_j$ of the key renewal theorem (Theorem 2) and the Blackwell theorem (Theorem 3) are given. These two results, which follow from a modified version of the proof of Theorem 2.7(a) of [Reference Bohun, Iksanov, Marynych and Rashytov3], are included for completeness. Theorem 1 is an early-generations analogue of Proposition 2.3 of [Reference Bohun, Iksanov, Marynych and Rashytov3], which treats intermediate generations. In the latter result the asymptotic relation (4) is stated under rather strong assumptions: the distribution of $\xi$ has an absolutely continuous component, the random variables $\xi$ and $\eta$ have some finite exponential moments, and $\mathbb{E} [\xi^2]>2\mathbb{E} [\xi]\mathbb{E}[\eta]$. In contrast, the assumptions of Theorem 1 are optimal.
No intermediate generation versions of Theorems 4, 5, and 6 are available. The reason is that here we exploit rather precise asymptotics of various involved functions. Such asymptotics are not known in the setting of intermediate generations. To be more specific, we only mention that Theorems 4 and 6 rely heavily upon Lemma 3 and Assertion 3, respectively. At the moment, intermediate generation counterparts of these are beyond our reach.

The remainder of the paper is structured as follows. In Section 2 we state our main results. Some auxiliary statements are discussed in Section 3. The proofs of the main results are given in Section 4. Finally, the Appendix collects a couple of assertions borrowed from other articles.

2. Results

In Sections 2 and 3 we work under the assumption $\texttt{m}=\mathbb{E}[\xi]<\infty$ . In view of the standing assumption $\xi>0$ a.s. (see Section 1), we actually have $\texttt{m}\in (0,\infty)$ .

2.1. A counterpart of the elementary renewal theorem and the rate of convergence result

Assertion 1 is a counterpart for $V_j=\mathbb{E} [N_j]$ of the elementary renewal theorem.

Assertion 1. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ . Then, for fixed $j\in\mathbb{N}$ ,

(3) \begin{equation}\lim_{t\to\infty}\dfrac{V_j(t)}{t^{\,j}}=\dfrac{1}{j!\texttt{m}^{\,j}}.\end{equation}

The standard version of the elementary renewal theorem can be found, for instance, in Theorem 3.3.3 of [Reference Resnick11].
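Relation (3) lends itself to a quick Monte Carlo check for $j=2$ (an illustrative sketch with Exp(1) steps and $\eta=\xi$, a choice made here only for speed; Assertion 1 requires no such assumption). For $\texttt{m}=1$ the limit predicts $V_2(t)\approx t^2/2$:

```python
import random

def n2(t, rng):
    """One realization of N_2(t) when eta = xi ~ Exp(1), so that T_k = S_k."""
    count = 0
    s = rng.expovariate(1.0)              # birth time of a 1st-generation child
    while s <= t:
        u = s + rng.expovariate(1.0)      # that child's own daughter process
        while u <= t:
            count += 1                    # 2nd-generation birth before t
            u += rng.expovariate(1.0)
        s += rng.expovariate(1.0)
    return count

rng = random.Random(1)
t, runs = 10.0, 2000
v2_hat = sum(n2(t, rng) for _ in range(runs)) / runs  # estimates V_2(10)
```

In this special case $V_2(t)=t^2/2$ exactly, so the sample mean should land near 50.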

Theorem 1 quantifies the rate of convergence in (3) under the assumptions $\mathbb{E}[\xi^2]<\infty$ and $\mathbb{E}[\eta]<\infty$ . As usual, $f(t)\sim g(t)$ as $t\to\infty$ means that $\lim_{t\to\infty}\!(\,f(t)/g(t))=1$ . Recall that the distribution of a positive random variable is called nonlattice if it is not concentrated on any lattice $(nd)_{n\in\mathbb{N}_0}$ , $d>0$ .

Theorem 1. Assume that the distribution of $\xi$ is nonlattice, and that $\mathbb{E}[\xi^2]<\infty$ and $\mathbb{E}[\eta]<\infty$ . Then, for any fixed $j\in\mathbb{N}$ ,

(4) \begin{equation}V_j(t)-\dfrac{t^{\,j}}{j!\texttt{m}^{\,j}} \sim \dfrac{b_V\ jt^{j-1}}{(\,j-1)!\texttt{m}^{j-1}},\quad t\to\infty,\end{equation}

where $\texttt{m}=\mathbb{E}[\xi]<\infty$ and $b_V\,{:\!=}\, \texttt{m}^{-1}(\mathbb{E} [\xi^2]/(2\texttt{m})-\mathbb{E} [\eta])\in\mathbb{R}$ .

Theorem 1 is an extension of the previously known result concerning renewal functions, which can be obtained, for instance, as a specialization of Theorem 9.2 of [Reference Gut6].
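For orientation, consider the illustrative special case $\eta=\xi$ with $\xi$ exponentially distributed with mean 1 (our choice, not an assumption of the theorem). Then $V_j(t)=t^{\,j}/j!$ for all $t\geq 0$, while

\begin{equation*}b_V=\texttt{m}^{-1}\biggl(\dfrac{\mathbb{E} [\xi^2]}{2\texttt{m}}-\mathbb{E} [\eta]\biggr)=1\cdot\biggl(\dfrac{2}{2}-1\biggr)=0,\end{equation*}

so the left-hand side of (4) vanishes identically, in agreement with $b_V=0$ on the right.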

2.2. Counterparts of the key renewal theorem and the Blackwell theorem

Theorem 2 is a counterpart for $V_j$ of the key renewal theorem; see Theorem 1.12 of [Reference Mitov and Omey10].

Theorem 2. Let $f\colon [0,\infty)\to [0,\infty)$ be a directly Riemann integrable function on $[0,\infty)$ . Assume that the distribution of $\xi$ is nonlattice and that $\texttt{m}<\infty$ . Then, for fixed $j\in\mathbb{N}$ ,

\begin{equation*} \int_{[0,\,t]}f(t-y) \,{\textrm{d}} V_j(y) \sim \biggl(\dfrac{1}{\texttt{m}}\int_0^\infty f(y) \,{\textrm{d}} y\biggr) V_{j-1}(t) \sim \int_0^\infty f(y)\,{\textrm{d}} y\dfrac{t^{j-1}}{(\,j-1)!\texttt{m}^{\,j}},\quad t\to\infty.\end{equation*}

Theorem 3, a counterpart for $V_j$ of the Blackwell theorem, is just a specialization of Theorem 2 with $f(y)=\unicode{x1D7D9}_{[0,\,h)}(y)$ for $y\geq 0$. Nevertheless, we find it instructive to provide an alternative proof. The reason is that the proof given in Section 4.2 nicely illustrates basic concepts of renewal theory and may be adapted to other settings.

Theorem 3. Assume that the distribution of $\xi$ is nonlattice and $\texttt{m}=\mathbb{E}[\xi]<\infty$ . Then, for fixed $j\in\mathbb{N}$ and fixed $h>0$ ,

(5) \begin{equation}V_j(t+h)-V_j(t) \sim \dfrac{ht^{j-1}}{(\,j-1)!\texttt{m}^{\,j}},\quad t\to\infty.\end{equation}

The standard version of the Blackwell theorem is given, for instance, in Theorem 1.10 of [Reference Mitov and Omey10]. In the case $\eta=0$ a.s., limit relation (5) can be found in Theorem 1.16 of [Reference Mitov and Omey10].

Remark 1. Assume that the distributions of $\xi$ and $\eta$ are d-arithmetic for some $d>0$, that is, both are concentrated on the lattice $(nd)_{n\in\mathbb{N}_0}$ and neither is concentrated on $(nd_1)_{n\in\mathbb{N}_0}$ for any $d_1>d$. Then one may expect that Theorems 1, 2, and 3 admit natural counterparts in which the limit is taken along (nd) rather than continuously. Further, the constant $b_V$ in Theorem 1 should take a slightly different form, the integral in Theorem 2 should be replaced by a sum, and the parameter h in Theorem 3 should be restricted to the lattice (nd). The case when the distribution of $\xi$ is d-arithmetic, whereas the distribution of $\eta$ is not, looks more challenging, and we refrain from discussing it here.

2.3. Asymptotics of the variance

In this section we find, for fixed $j\in\mathbb{N}$ , the asymptotics of $\textrm{Var} [N_j(t)]$ as $t\to\infty$ under the assumption $\eta=\xi$ a.s., so that $T_k=S_k$ for $k\in\mathbb{N}$ . In other words, below we treat iterated standard random walks. Theorem 4 is a strengthening of Lemma 4.2 of [Reference Iksanov and Kabluchko8] in which an asymptotic upper bound for $\textrm{Var}[N_j(t)]$ was obtained. Although we do not know the asymptotic behavior of $\textrm{Var} [N_j(t)]$ for (genuine) iterated perturbed random walks, Remark 2 contains a partial result in that direction.

Theorem 4. Assume that $\eta=\xi$ a.s., that the distribution of $\xi$ is nonlattice, and $\texttt{s}^2\,{:\!=}\, \textrm{Var} [\xi]\in (0, \infty)$ . Then, for any $j\in\mathbb{N}$ ,

(6) \begin{equation}\lim_{t\to\infty} \dfrac{\textrm{Var} [N_j(t)]}{t^{2j-1}}=\dfrac{\texttt{s}^2}{(2j-1)((\,j-1)!)^2 \texttt{m}^{2j+1}},\end{equation}

where $\texttt{m}=\mathbb{E}[\xi]<\infty$ .

Theorem 4 is an extension of the previously known result for $\textrm{Var}\ N_1$ ; see e.g. Theorem 9.1(ii) of [Reference Gut6].

2.4. Strong law of large numbers

Theorem 5 is a strong law of large numbers for $N_j$ .

Theorem 5. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ . Then, for fixed $j\in\mathbb{N}$ ,

(7) \begin{equation}\lim_{t\to\infty}\dfrac{N_j(t)}{t^{\,j}}=\dfrac{1}{\texttt{m}^{\,j} j!}\quad {a.s.}\end{equation}

The strong law of large numbers for renewal processes is given, for instance, in Theorem 5.1 of [Reference Gut6].

2.5. A functional limit theorem

Let $B\,{:\!=}\, (B(s))_{s\geq 0}$ be a standard Brownian motion and, for $q\geq 0$ , let

\begin{equation*}B_q(s)\,{:\!=}\, \int_{[0,\,s]}(s-y)^q \,{\textrm{d}} B(y),\quad s\geq 0.\end{equation*}

The process $B_q\,{:\!=}\, (B_q(s))_{s\geq 0}$ is a centered Gaussian process called the fractionally integrated Brownian motion or the Riemann–Liouville process. Plainly $B=B_0$ , $B_1(s)=\int_0^s B(y)\,{\textrm{d}} y$ , $s\geq 0$ and, for integer $q\geq 2$ ,

\begin{equation*}B_q(s)=q!\int_0^s\int_0^{s_2}\cdots\int_0^{s_q} B(y) \,{\textrm{d}} y \,{\textrm{d}} s_q\cdots {\textrm{d}} s_2,\quad s\geq 0.\end{equation*}
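The marginal variance $\textrm{Var} [B_q(s)]=\int_0^s(s-y)^{2q}\,{\textrm{d}} y=s^{2q+1}/(2q+1)$ can be verified by direct discretization of the stochastic integral (a simulation sketch; the grid size, path count, and seed below are arbitrary choices of ours):

```python
import math, random

def rl_sample(q, n, rng):
    """One draw of B_q(1) = int_0^1 (1-y)^q dB(y), discretized on n grid points."""
    dt = 1.0 / n
    return sum((1.0 - i * dt) ** q * rng.gauss(0.0, math.sqrt(dt))
               for i in range(n))

rng = random.Random(7)
q, paths = 1, 20000
var_hat = sum(rl_sample(q, 200, rng) ** 2 for _ in range(paths)) / paths
# theory: Var[B_1(1)] = 1/3, matching the identity B_{j-1}(1) =_d (2j-1)^{-1/2} B(1)
# (with j = 2) that is used below to derive Corollary 1 from Theorem 6
```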

In the following we write $\Rightarrow$ and ${\overset{ {\textrm{d}} }\longrightarrow}$ to denote weak convergence in a function space and weak convergence of one-dimensional distributions, respectively. As usual, we let D denote the Skorokhod space of right-continuous functions defined on $[0,\infty)$ with finite limits from the left at positive points. We prefer to use $(X_t(u))_{u\geq 0} \Rightarrow (X(u))_{u\geq 0}$ in place of the formally correct notation $X_t({\cdot})\Rightarrow X({\cdot})$ .

Given next is a functional limit theorem for $(N_1(ut), N_2(ut),\ldots)_{u\geq0}$ , properly normalized and centered, as $t\to\infty$ .

Theorem 6. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ , $\texttt{s}^2=\textrm{Var} [\xi]\in(0,\infty)$ , and $\mathbb{E}[\eta^a]<\infty$ for some $a>0$ . Then

(8) \begin{equation} \biggl((\,j-1)! \biggl(\dfrac{N_j(ut)-V_j(ut)}{\sqrt{\texttt{m}^{-2j-1}\texttt{s}^2 t^{2j-1}}}\biggr)_{u\geq 0} \biggr)_{j\in\mathbb{N}} \Rightarrow((B_{j-1}(u))_{u\geq 0})_{j\in\mathbb{N}},\quad t\to\infty\end{equation}

in the product $J_1$ -topology on $D^\mathbb{N}$ .

If $\mathbb{E} [\eta^{1/2}]<\infty$ , then the centering $V_j(ut)$ can be replaced by $(ut)^{\,j}/(\,j! \texttt{m}^{\,j})$ . If $\mathbb{E}[\eta^{1/2}]=\infty$ , the centering $V_j(ut)$ can be replaced by

(9) \begin{align}&\dfrac{\mathbb{E} [(ut-(\eta_1+\cdots+\eta_j))^{\,j}\unicode{x1D7D9}_{\{\eta_1+\cdots+\eta_j\leq ut\}}]}{j! \texttt{m}^{\,j}} \notag \\ &\quad =\dfrac{\int_0^{t_1}\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{\eta_1+\cdots+\eta_j\leq y\} \,{\textrm{d}} y \,{\textrm{d}} t_j\cdots {\textrm{d}} t_2}{\texttt{m}^{\,j}},\end{align}

where $t_1=ut$ .

In the renewal-theoretic context (the cases $\eta=0$ or $\eta=\xi$ a.s.), under the assumption that the distribution of $\xi$ belongs to the domain of attraction of a stable distribution, various functional limit theorems for $N_1$ can be found, for instance, in Section 5 of [Reference Gut6], on page 115 of [Reference Iksanov7], and in Section 7.3.2 of [Reference Whitt14].

Now we derive a one-dimensional central limit theorem for $N_j$ . To this end, it is enough to restrict attention to just one coordinate in (8), put $u=1$ there, and note that $B_{j-1}(1)$ has the same distribution as $(2j-1)^{-1/2}B(1)$ .

Corollary 1. Under the assumptions of Theorem 6, for fixed $j\in\mathbb{N}$ ,

(10) \begin{equation}\dfrac{(\,j-1)!(2j-1)^{1/2}\texttt{m}^{j+1/2}}{\texttt{s}t^{j-1/2}}\bigl(N_j(t)-V_j(t)\bigr)\ {\overset{ {\textrm{d}} }\longrightarrow}\ B(1),\quad t\to\infty.\end{equation}

Remark 2. In view of Corollary 1 the result of Theorem 4 is expected, yet the complete proof requires some effort. Of course, the relation

\begin{equation*}\liminf_{t\to\infty} \dfrac{\textrm{Var} [N_j(t)]}{t^{2j-1}}\geq\dfrac{\texttt{s}^2}{(2j-1)((\,j-1)!)^2\texttt{m}^{2j+1}}\end{equation*}

is an immediate consequence of (10) and Fatou’s lemma.

3. Auxiliary tools

For $t\geq 0$ , put $U(t)\,{:\!=}\, \sum_{n\geq 0}\mathbb{P}\{S_n\leq t\}$ , so that U is the renewal function of $(S_n)_{n\in \mathbb{N}_0}$ . Whenever $\mathbb{E}\xi^2<\infty$ , we have

(11) \begin{equation}0\leq U(t)-\texttt{m}^{-1}t \leq c_U,\quad t\geq 0,\end{equation}

where $c_U\,{:\!=}\, \texttt{m}^{-2} \mathbb{E}[\xi^2]$. The right-hand inequality in (11) is called Lorden's inequality. Perhaps it is not commonly known that Lorden's inequality takes the same form for nonlattice and lattice distributions of $\xi$, and we refer to Section 3 of [Reference Bohun, Iksanov, Marynych and Rashytov3] for an explanation of this fact. The left-hand inequality in (11) is a consequence of Wald's identity $t\leq \mathbb{E} [S_{\nu(t)}]=\texttt{m} U(t)$, where $\nu(t)\,{:\!=}\, \inf\{k\in\mathbb{N}\colon S_k>t\}$ for $t\geq 0$.
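Inequality (11) is easy to probe by simulation (an illustrative sketch with Exp(1) steps, for which $U(t)=1+t$, $\texttt{m}=1$, and $c_U=2$; the lemma itself makes no such distributional assumption):

```python
import random

def U_estimate(t, runs, rng):
    """Monte Carlo estimate of U(t) = E[nu(t)] = sum_{n>=0} P(S_n <= t)."""
    total = 0
    for _ in range(runs):
        s, n = 0.0, 0
        while s <= t:                  # counts indices k >= 0 with S_k <= t
            n += 1
            s += rng.expovariate(1.0)  # xi ~ Exp(1): m = 1, E[xi^2] = 2
        total += n
    return total / runs

rng = random.Random(3)
t = 20.0
U_hat = U_estimate(t, 5000, rng)       # true value: U(20) = 21
# Lorden's inequality then reads 0 <= U(t) - t/m <= c_U = 2
```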

Let us show that the left-hand inequality in (11) extends to the convolution powers $U_j=U^{\ast(\,j)}$ (see (2) for the definition) in the following sense.

Lemma 1. Let $j\in\mathbb{N}$ and $\texttt{m}=\mathbb{E}[\xi]<\infty$ . Then

(12) \begin{equation}U_j(t)\geq \dfrac{t^{\,j}}{j!\texttt{m}^{\,j}},\quad t\geq 0.\end{equation}

Proof. We use mathematical induction. When $j=1$ , (12) reduces to the left-hand inequality in (11). Assuming that (12) holds for $j\leq k$ , we infer

\begin{align*} U_{k+1}(t)-\dfrac{t^{k+1}}{(k+1)!\texttt{m}^{k+1}} & =\int_{[0,\,t]}\biggl(U(t-y)-\dfrac{t-y}{\texttt{m}}\biggr)\,{\textrm{d}} U_k(y)+\dfrac{1}{\texttt{m}}\int_0^t \biggl(U_k(y)-\dfrac{y^k}{k!\texttt{m}^k}\biggr)\,{\textrm{d}} y\\ &\geq 0,\end{align*}

that is, (12) holds with $j=k+1$ .

Put $G(t)\,{:\!=}\, \mathbb{P}\{\eta\leq t\}$ for $t\in\mathbb{R}$ . Observe that $G(t)=0$ for $t<0$ . Since $V(t)\leq U(t)$ for $t\geq 0$ , we conclude that

\begin{equation*} V(t)-\texttt{m}^{-1}t \leq c_U,\quad t\geq 0\end{equation*}

for any distribution of $\eta$ . On the other hand, assuming that $\mathbb{E}[\eta]<\infty$ ,

\begin{align*}V(t)-\texttt{m}^{-1}t&=\int_{[0,\,t]}(U(t-y)-\texttt{m}^{-1}(t-y)) \,{\textrm{d}} G(y)-\texttt{m}^{-1} \int_0^t (1-G(y)) \,{\textrm{d}} y\\ &\geq -\texttt{m}^{-1}\int_0^t (1-G(y))\,{\textrm{d}} y\\ & \geq-\texttt{m}^{-1}\mathbb{E}[\eta],\end{align*}

having utilized $U(t)\geq \texttt{m}^{-1}t$ for $t\geq 0$ . Assuming that $\mathbb{E}[\eta^a]<\infty$ for some $a\in (0,1)$ , which in particular implies that $\lim_{t\to\infty}t^a\mathbb{P}\{\eta>t\}=0$ , we infer

\begin{equation*}V(t)-\texttt{m}^{-1}t\geq -\texttt{m}^{-1}\int_0^t (1-G(y)) \,{\textrm{d}} y\geq-c_1-c_2t^{1-a},\quad t\geq 0.\end{equation*}

Thus we have proved the following.

Lemma 2. Assume that $\mathbb{E} [\xi^2]<\infty$ . If $\mathbb{E}[\eta]<\infty$ , then

(13) \begin{equation}|V(t)-\texttt{m}^{-1}t|\leq c_V,\quad t\geq 0 ,\end{equation}

where $c_V\,{:\!=}\, \max\!(c_U, \texttt{m}^{-1}\mathbb{E}[\eta])$ and $c_U=\texttt{m}^{-2} \mathbb{E}[\xi^2]$ . If $\mathbb{E}[\eta^a]<\infty$ for some $a\in (0,1)$ , then

(14) \begin{equation}-c_1-\!c_2t^{1-a}\leq V(t)-\texttt{m}^{-1}t\leq c_V,\quad t\geq 0\end{equation}

for appropriate positive constants $c_1$ and $c_2$ .

Lemma 3 is needed for the proof of Theorem 4. The convolution integral in (15), with $2j-1$ replacing j, is one of the summands appearing in a representation of $\textrm{Var}\ N_j$ . It is important that (15) is a two-term expansion of the integral, because the coefficient $b_U-1$ contributes to the leading coefficient in the asymptotics of $\textrm{Var}\ N_j$ . Put

\begin{equation*}\tilde U(t)\,{:\!=}\, \sum_{r\geq 1}\mathbb{P}\{S_r\leq t\}=U(t)-1\quad \text{for $t\geq 0$.}\end{equation*}

Lemma 3. Assume that the distribution of $\xi$ is nonlattice and $\mathbb{E}[\xi^2]<\infty$ . Then, for fixed $j\in\mathbb{N}$ ,

(15) \begin{equation}\int_{[0,\,t]}(t-y)^{\,j} \,{\textrm{d}} \tilde U(y)=\dfrac{t^{j+1}}{(\,j+1)\texttt{m}}+ (b_U-1)t^{\,j}+{\textrm{o}}(t^{\,j}),\quad t\to\infty,\end{equation}

where $b_U\,{:\!=}\, \mathbb{E} [\xi^2]/(2\texttt{m}^2)$ .

Proof. Using the easily checked formula

\begin{equation*}\int_{[0,\,t]}(t-y)^{\,j} \,{\textrm{d}} \tilde{U}(y)\,=\,j\int_0^t\int_{[0,\,s]}(s-y)^{j-1} \,{\textrm{d}} \tilde U(y) \,{\textrm{d}} s,\quad j\in\mathbb{N},\ t\geq 0\end{equation*}

and mathematical induction, one can show that

\begin{equation*}\int_{[0,\,t]}(t-y)^{\,j} \,{\textrm{d}} \tilde{U}(y)=j!\int_0^t\int_0^{y_j}\cdots\int_0^{y_2}\tilde U(y_1) \,{\textrm{d}} y_1\cdots {\textrm{d}} y_j,\quad j\in\mathbb{N},\ t\geq 0.\end{equation*}

Here the right-hand side reads $\int_0^t\tilde U(y) \,{\textrm{d}} y$ when $j=1$ . Now (15) follows from the latter equality and the relation $\tilde U(t)=\texttt{m}^{-1}t+b_U-1+{\textrm{o}}(1)$ as $t\to\infty$ , which is simply formula (4) with $j=1$ and $\eta=\xi$ a.s. To obtain the asymptotic relation

\begin{equation*}\int_0^t\int_0^{y_j}\cdots\int_0^{y_2}{\textrm{o}}(1) \,{\textrm{d}} y_1\cdots {\textrm{d}} y_j={\textrm{o}}(t^{\,j}),\quad t\to\infty,\end{equation*}

we have used L’Hôpital’s rule $(\,j-1)$ times in the situation that the ${\textrm{o}}(1)$ -term is integrable and j times in the situation that it is not.
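As a consistency check of (15), consider the illustrative case of Exp(1) increments (not assumed above). Then $\tilde U(t)=t$ for $t\geq 0$ and $b_U=\mathbb{E} [\xi^2]/(2\texttt{m}^2)=1$, so

\begin{equation*}\int_{[0,\,t]}(t-y)^{\,j} \,{\textrm{d}} \tilde U(y)=\int_0^t(t-y)^{\,j} \,{\textrm{d}} y=\dfrac{t^{\,j+1}}{j+1},\end{equation*}

and (15) holds exactly, with the second-order term $(b_U-1)t^{\,j}$ vanishing.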

Lemma 4 will be essential to the proof of Theorem 5 when showing that one of the two summands in a representation of $N_j$ is asymptotically negligible.

Lemma 4. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ . Then

\begin{equation*}\lim_{t\to\infty}\dfrac{\mathbb{E}[(N(t))^2]}{t^2}=\dfrac{1}{\texttt{m}^2}.\end{equation*}

Proof. The relation

\begin{equation*}\liminf_{t\to\infty}\dfrac{\mathbb{E} [(N(t))^2]}{t^2}\geq\dfrac{1}{\texttt{m}^2}\end{equation*}

follows from $\mathbb{E} [(N(t))^2]\geq (V(t))^2$ and Assertion 1. The converse inequality for the limit superior is implied by the inequality

\begin{equation*}N(t)\leq \nu(t)=\sum_{i\geq 0}\unicode{x1D7D9}_{\{S_i\leq t\}},\quad t\geq0\quad\text{a.s.}\end{equation*}

and $\lim_{t\to\infty}t^{-2}\mathbb{E} [(\nu(t))^2]=\texttt{m}^{-2}$ (see Theorem 5.1(ii) of [Reference Gut6]).

Assertions 2 and 3 are important ingredients in the proof of Theorem 6, which is based on an application of Theorem 7 given in the Appendix. While the former provides a functional limit theorem for the counting process in the first generation, thereby ensuring that condition (37) of Theorem 7 is satisfied, the latter is responsible for checking condition (36) of the same theorem.

Assertion 2. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ , $\texttt{s}^2=\textrm{Var} [\xi]\in(0,\infty)$ , and $\mathbb{E}[\eta^a]<\infty$ for some $a>0$ . Then

\begin{equation*} \biggl(\dfrac{N(ut)-V(ut)}{\sqrt{\texttt{m}^{-3}\texttt{s}^2 t}}\biggr)_{u\geq 0}\ \Rightarrow\ (B(u))_{u\geq 0},\quad t\to\infty\end{equation*}

in the $J_1$ -topology on D, where $(B(u))_{u\geq 0}$ is a standard Brownian motion.

Proof. According to part (B1) of Theorem 3.2 of [Reference Alsmeyer, Iksanov and Marynych1],

\begin{equation*}\biggl(\dfrac{N(ut)-\texttt{m}^{-1}\int_0^{ut}G(y) \,{\textrm{d}} y}{\sqrt{\texttt{m}^{-3}\texttt{s}^2 t}}\biggr)_{u\geq 0}\ \Rightarrow\ (B(u))_{u\geq 0},\quad t\to\infty\end{equation*}

in the $J_1$ -topology on D, where, as before, G is the distribution function of $\eta$ . Thus it is enough to show that, for all $T>0$ ,

(16) \begin{equation}\lim_{t\to\infty} t^{-1/2}\sup_{u\in[0,\,T]}\biggl|V(ut)-\texttt{m}^{-1}\int_0^{ut}G(y) \,{\textrm{d}} y\biggr|=0.\end{equation}

According to (11), for $u\in[0,T]$ and $t\geq 0$ ,

\begin{equation*}0\leq V(ut)-\texttt{m}^{-1}\int_0^{ut} G(y) \,{\textrm{d}} y= \int_{[0,\,ut]}(U(tu-y)-\texttt{m}^{-1}(tu-y)) \,{\textrm{d}} G(y)\leq c_U,\end{equation*}

and (16) follows.

Assertion 3. Assume that $\texttt{m}=\mathbb{E}[\xi]<\infty$ , $\texttt{s}^2=\textrm{Var} [\xi]\in(0,\infty)$ , and $\mathbb{E}[\eta^a]<\infty$ for some $a>0$ . Then

\begin{equation*}\mathbb{E} \biggl[\sup_{s\in [0,\,t]}(N(s)-V(s))^2\biggr]={\textrm{O}}(t),\quad t\to\infty.\end{equation*}

Proof. In the case $\mathbb{E}[\eta]<\infty$ , this limit relation is proved in Lemma 4.2(b) of [Reference Gnedin and Iksanov5].

From now on we assume that $\mathbb{E}[\eta^a]<\infty$ for some $a\in (0,1)$ and $\mathbb{E}[\eta]=\infty$ . As in the proof of Lemma 4.2(b) of [Reference Gnedin and Iksanov5], we shall use a decomposition

\begin{equation*}N(t)-V(t)=\sum_{k\geq 0}(\unicode{x1D7D9}_{\{S_k+\eta_{k+1}\leq t\}}-G(t-S_k))+\sum_{k\geq 0}G(t-S_k)-V(t),\end{equation*}

where G is the distribution function of $\eta$ . It suffices to prove that

(17) \begin{equation}\mathbb{E}\Biggl[\sup_{s\in [0,\,t]}\Biggl(\sum_{k\geq0}(\unicode{x1D7D9}_{\{S_k+\eta_{k+1}\leq s\}}-\!G(s-S_k))\Biggr)^2\Biggr]={\textrm{O}}(t),\quad t\to\infty\end{equation}

and

(18) \begin{equation}\mathbb{E}\Biggl[\sup_{s\in [0,\,t]}\Biggl(\sum_{k\geq0}G(s-S_k)-V(s)\Biggr)^2\Biggr]={\textrm{O}}(t),\quad t\to\infty.\end{equation}

The proof of (18) given in [Reference Gnedin and Iksanov5] goes through for any distribution of $\eta$ and as such applies without changes under the present assumptions. On the other hand, the proof of (17) given in [Reference Gnedin and Iksanov5] depends crucially on the assumption $\mathbb{E}[\eta]<\infty$ imposed in [Reference Gnedin and Iksanov5, Lemma 4.2(b)]. Thus another argument has to be found.

For $u,t\geq 0$ , put

\begin{equation*}Z_t(u)\,{:\!=}\, \sum_{k\geq 0}\bigl(\unicode{x1D7D9}_{\{S_k+\eta_{k+1}\leq ut\}}-\!G(ut-S_k)\bigr)\unicode{x1D7D9}_{\{S_k\leq ut\}},\end{equation*}

so that

\begin{equation*}\mathbb{E}\Biggl[\sup_{s\in[0,\,t]}\Biggl(\sum_{k\geq 0}(\unicode{x1D7D9}_{\{S_k+\eta_{k+1}\leq s\}}-\!G(s-S_k))\Biggr)^2\Biggr]=\mathbb{E} \biggl[\sup_{u\in [0,\,1]}(Z_t(u))^2\biggr].\end{equation*}

In what follows we write $\sup_{u\in K}$ when the supremum is taken over an uncountable set K and $\max_{m\leq k\leq n}$ when the maximum is taken over the discrete set $\{m,m+1,\ldots, n\}$ . We start by observing that, for positive integer $I=I(t)$ to be chosen later in (25),

\begin{align*}\sup_{u\in [0,\,1]}|Z_t(u)| & =\max_{0\leq k\leq 2^I-1}\sup_{u\in [0,\,2^{-I}]}\bigl|Z_t(k2^{-I}+u)-Z_t(k2^{-I})+Z_t(k2^{-I})\bigr|\\[3pt] &\leq\max_{0\leq k\leq 2^I}|Z_t(k2^{-I})|+\max_{0\leq k\leq 2^I-1}\sup_{u\in[0,\,2^{-I}]}|Z_t(k2^{-I}+u)-Z_t(k2^{-I})|.\end{align*}

We have used subadditivity of the supremum for the last inequality. We proceed as on page 764 of [Reference Resnick and Rootzén12]. Put $F_j\,{:\!=}\, \{k2^{-j}\colon 0\leq k\leq2^{\,j}\}$ for $j\in\mathbb{N}_0$ and fix $u\in F_I$ . Now define $u_j\,{:\!=}\, \max\{w\in F_j\colon w\leq u\}$ for non-negative integer $j\leq I$ . Then $u_{j-1}=u_j$ or $u_{j-1}=u_j-2^{-j}$ . With this at hand,

\begin{equation*}|Z_t(u)|=\Biggl|\sum_{j=1}^I(Z_t(u_j)-Z_t(u_{j-1}))+Z_t(u_0)\Biggr|\leq \sum_{j=0}^I \max_{1\leq k\leq 2^{\,j}}|Z_t(k2^{-j})-Z_t((k-1)2^{-j})|.\end{equation*}

Combining these fragments, we arrive at the inequality that is the starting point of our subsequent work:

\begin{align*} & \sup_{u\in [0,\,1]}|Z_t(u)| \\[3pt] & \quad \leq \sum_{j=0}^I \max_{1\leq k\leq 2^{\,j}}|Z_t(k2^{-j})-Z_t((k-1)2^{-j})| +\max_{0\leq k\leq 2^I-1}\sup_{u\in[0,\,2^{-I}]}|Z_t(k2^{-I}+u)-Z_t(k2^{-I})|.\end{align*}

Thus (17) follows if we can show that

(19) \begin{equation}\mathbb{E}\Biggl[\Biggl(\sum_{j=0}^I \max_{1\leq k\leq2^{\,j}}|Z_t(k2^{-j})-Z_t((k-1)2^{-j})|\Biggr)^2\Biggr]={\textrm{O}}(t),\quad t\to\infty\end{equation}

and that

(20) \begin{equation}\mathbb{E} \biggl[\max_{0\leq k\leq 2^{I}-1}\sup_{u\in[0,\,2^{-I}]}(Z_t(k2^{-I}+u)-Z_t(k2^{-I}))^2\biggr]={\textrm{O}}(t),\quad t\to\infty.\end{equation}

We intend to prove that, for $u,v\geq 0$ , $u>v$ , and $t\geq 0$ ,

(21) \begin{equation}\mathbb{E} [(Z_t(u)-Z_t(v))^2]\leq 2\mathbb{E} [\nu(1)] a((u-v)t),\end{equation}

where, for $t\geq 0$ ,

\begin{equation*}a(t)\,{:\!=}\, \sum_{k=0}^{[t]+1}(1-G(k))\end{equation*}

and

\begin{equation*}\nu(t)=\inf\{k\in\mathbb{N}\colon S_k>t\}=\sum_{k\geq 0}\unicode{x1D7D9}_{\{S_k\leq t\}}.\end{equation*}

Indeed,

\begin{align*} & \mathbb{E} [(Z_t(u)-Z_t(v))^2]\\ &\quad =\mathbb{E} \biggl[\int_{(vt,\,ut]}G(ut-y)(1-G(ut-y))\,{\textrm{d}} \nu(y)\biggr]\\ &\quad\quad +\mathbb{E} \biggl[\int_{[0,\,vt]}(G(ut-y)-G(vt-y))(1-G(ut-y)+G(vt-y))\,{\textrm{d}} \nu(y)\biggr]\\ &\quad \leq \mathbb{E} \biggl[\int_{(vt,\,ut]}(1-G(ut-y))\,{\textrm{d}} \nu(y)\biggr]+\mathbb{E}\biggl[\int_{[0,\,vt]}(G(ut-y)-G(vt-y)) \,{\textrm{d}} \nu(y)\biggr].\end{align*}

Using Lemma 5 with $f(y)=(1-G(y))\unicode{x1D7D9}_{[0,\,(u-v)t)}(y)$ and $f(y)=G((u-v)t+y)-G(y)$ , respectively, we obtain

(22) \begin{align}\mathbb{E} \biggl[ \int_{(vt,\,ut]}(1-G(ut-y)) \,{\textrm{d}} \nu(y)\biggr]&=\mathbb{E}\biggl[\int_{[0,\,ut]}(1-G(ut-y))\unicode{x1D7D9}_{[0,\,(u-v)t)}(ut-y) \,{\textrm{d}} \nu(y)\biggr]\notag \\ &\leq\mathbb{E} [\nu(1)] \sum_{n=0}^{[ut]}\sup_{y\in [n,\,n+1)}((1-G(y))\unicode{x1D7D9}_{[0,\,(u-v)t)}(y))\notag \\ &\leq \mathbb{E} [\nu(1)]\sum_{n=0}^{[(u-v)t]}(1-G(n))\leq \mathbb{E} [\nu(1)] a((u-v)t) \end{align}

and

(23) \begin{align} & \mathbb{E} \biggl[\int_{[0,\,vt]}(G(ut-y)-G(vt-y)) \,{\textrm{d}} \nu(y)\biggr]\notag \\ &\quad \leq \mathbb{E} [\nu(1)]\sum_{n=0}^{[vt]}\sup_{y\in [n,\,n+1)}(G((u-v)t+y)-G(y))\notag \\ &\quad \leq \mathbb{E} [\nu(1)] \Biggl(\sum_{n=0}^{[vt]}(1-G(n))-\sum_{n=0}^{[vt]}(1-G((u-v)t+n+1))\Biggr)\notag \\ &\quad \leq \mathbb{E} [\nu(1)]\Biggl(\sum_{n=0}^{[vt]}(1-G(n))-\sum_{n=0}^{[ut]+1}(1-G(n))+\sum_{n=0}^{[(u-v)t] +1}(1-G(n))\Biggr)\notag \\ &\quad \leq \mathbb{E} [\nu(1)] a((u-v)t).\end{align}

Combining (22) and (23) yields (21).

Proof of (19). The assumption $\mathbb{E} [\eta^a]<\infty$ entails $\lim_{t\to\infty}t^a(1-G(t))=0$ and thereupon, given $C>0$ , there exists $t_1>0$ such that $a(t)\leq Ct^{1-a}$ whenever $t\geq t_1$ . Using this in combination with (21) yields

(24) \begin{equation}\mathbb{E} [(Z_t(k2^{-j})-Z_t((k-1)2^{-j}))^2] \leq 2C\mathbb{E} [\nu(1)] t^{1-a}2^{-j(1-a)}=: C_1t^{1-a}2^{-j(1-a)}\end{equation}

whenever $2^{-j}t\geq t_1$. Let $I=I(t)$ denote the integer satisfying

(25) \begin{equation}2^{-I}t\geq t_1>2^{-I-1}t.\end{equation}

Then the inequalities (24) and

\begin{align*}\mathbb{E} \biggl[\max_{1\leq k\leq 2^{\,j}}(Z_t(k2^{-j})-Z_t((k-1)2^{-j}))^2\biggr]&\leq\sum_{k=1}^{2^{\,j}}\mathbb{E} [(Z_t(k2^{-j})-Z_t((k-1)2^{-j}))^2] \\ &\leq C_1 t^{1-a}2^{aj}\end{align*}

hold whenever $j\leq I$ . Invoking the triangle inequality for the $L_2$ -norm yields

\begin{align*}&\mathbb{E} \Biggl[ \Biggl(\sum_{j=0}^I \max_{1\leq k\leq 2^{\,j}}|Z_t(k2^{-j})-Z_t((k-1)2^{-j})|\Biggr)^2\Biggr] \\ &\quad \leq \Biggl(\sum_{j=0}^I \biggl(\mathbb{E}\biggl[\max_{1\leq k\leq 2^{\,j}}(Z_t(k2^{-j})-Z_t((k-1)2^{-j}))^2\biggr]\biggr)^{1/2}\Biggr)^2 \\&\quad \leq C_1t^{1-a}\Biggl(\sum_{j=0}^I 2^{aj/2}\Biggr)^2\\&\quad =\,t^{1-a}\bigl({\textrm{O}}\bigl(2^{aI/2}\bigr)\bigr)^2\\ &\quad =\,{\textrm{O}}(t),\quad t\to\infty.\end{align*}

Here the last equality is ensured by the choice of I.

Proof of (20). We shall use the decomposition

\begin{align*} & Z_t(k2^{-I}+u)-Z_t(k2^{-I})\\ &\quad =\sum_{j\geq 0}\bigl(\unicode{x1D7D9}_{\{S_j+\eta_{j+1}\leq(k2^{-I}+u)t\}}-\!G((k2^{-I}+u)t-S_j)\bigr)\unicode{x1D7D9}_{\{k2^{-I}t<S_j\leq (k2^{-I}+u)t\}}\\ &\quad\quad + \sum_{j\geq 0}\bigl(\unicode{x1D7D9}_{\{k2^{-I}t<S_j+\eta_{j+1}\leq(k2^{-I}+u)t\}}-\!(G((k2^{-I}+u)t-S_j)-G(k2^{-I}t-S_j))\bigr)\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\\ &\quad =: J_1(t,k,u)+J_2(t,k,u).\end{align*}

For $i=1,2$ we have

(26) \begin{equation}\mathbb{E} \biggl[\max_{0\leq k\leq 2^I-1}\sup_{u\in [0,\,2^{-I}]}(J_i(t,k,u))^2\biggr]={\textrm{O}}(t),\quad t\to\infty,\end{equation}

completing the proof.

Proof of (26) for $i\,{=}\,1$ . Since $|J_1(t,k,u)|\leq \nu((k2^{-I}+u)t)-\nu(k2^{-I}t)$ and $t\mapsto \nu(t)$ is a.s. non-decreasing, we infer $\sup_{u\in [0,\, 2^{-I}]} |J_1(t,k,u)|\leq \nu((k+1)2^{-I}t)-\nu(k2^{-I}t)$ a.s. Hence

\begin{align*}& \mathbb{E} \biggl[\max_{0\leq k\leq 2^I-1}\sup_{u\in [0,\,2^{-I}]}(J_1(t,k,u))^2\biggr]\\ &\quad \leq \mathbb{E}\biggl[\max_{0\leq k\leq 2^I-1}(\nu((k+1)2^{-I}t)-\nu(k2^{-I}t))^2\biggr]\\ &\quad \leq\sum_{k=0}^{2^I-1} \mathbb{E} [(\nu((k+1)2^{-I}t)-\nu(k2^{-I}t))^2]\leq 2^I \mathbb{E} [(\nu(2^{-I}t))^2] \\ &\quad \leq (t/t_1)\mathbb{E} [(\nu(2t_1))^2]={\textrm{O}}(t),\quad t\to\infty.\end{align*}

Here the second inequality follows from distributional subadditivity of $\nu(t)$ (see e.g. formula (5.7) of [Reference Gut6]), and the third inequality is secured by the choice of I.

Proof of (26) for $i\,{=}\,2$ . We have

\begin{align*} &\sup_{u\in [0,\, 2^{-I}]} |J_2(t,k,u)|\\ &\quad \leq \sup_{u\in [0,\, 2^{-I}]}\biggl(\sum_{j\geq 0}\unicode{x1D7D9}_{\{k2^{-I}t<S_j+\eta_{j+1}\leq (k2^{-I}+u)t\}}\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\\ &\quad\quad +\sum_{j\geq0}\bigl(G((k2^{-I}+u)t-S_j)-G(k2^{-I}t-S_j)\bigr)\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\biggr)\\&\quad \leq \sum_{j\geq0}\unicode{x1D7D9}_{\{k2^{-I}t<S_j+\eta_{j+1}\leq (k+1)2^{-I}t\}}\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\\&\quad\quad +\sum_{j\geq 0}\bigl(G(((k+1)2^{-I})t-S_j)-G(k2^{-I}t-S_j)\bigr)\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\\&\quad \leq \Biggl|\sum_{j\geq 0}\bigl(\unicode{x1D7D9}_{\{k2^{-I}t<S_j+\eta_{j+1}\leq(k+1)2^{-I}t\}}-\bigl(G(((k+1)2^{-I})t-S_j)-G(k2^{-I}t-S_j)\bigr)\bigr)\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\Biggr|\\&\quad\quad + 2\sum_{j\geq0}\bigl(G(((k+1)2^{-I})t-S_j)-G(k2^{-I}t-S_j)\bigr)\unicode{x1D7D9}_{\{S_j\leq k2^{-I}t\}}\\ &\quad =: J_{21}(t,k)+2J_{22}(t,k).\end{align*}

Using (23) with $u=(k+1)2^{-I}$ and $v=k2^{-I}$ , we obtain

\begin{equation*}\mathbb{E}[(J_{22}(t,k))^2] \leq \mathbb{E} [(\nu(1))^2] (a(2^{-I}t))^2 \leq \mathbb{E}[(\nu(1))^2] (a(2t_1))^2,\end{equation*}

which implies

\begin{align*}\mathbb{E} \biggl[\biggl(\max_{0\leq k\leq 2^{I}-1}J_{22}(t,k)\biggr)^2\biggr] &\leq 2^I \max_{0\leq k\leq2^I-1}\mathbb{E} [(J_{22}(t,k))^2] \\ &\leq (t/t_1) \mathbb{E} [(\nu(1))^2] (a(2t_1))^2\\ &=\,{\textrm{O}}(t),\quad t\to\infty.\end{align*}

Further, by (23),

\begin{equation*}\mathbb{E} [(J_{21}(t,k))^2]\leq \mathbb{E} \biggl[\int_{[0,\,k2^{-I}t]}\bigl(G((k+1)2^{-I}t-y)-G(k2^{-I}t-y)\bigr) \,{\textrm{d}} \nu(y)\biggr]\leq \mathbb{E}[\nu(1)] a(2^{-I}t).\end{equation*}

Hence $\mathbb{E} [(\!\max_{0\leq k\leq 2^{I}-1}J_{21}(t,k))^2]={\textrm{O}}(t)$ by the same reasoning as above, and (26) for $i=2$ follows. The proof of Assertion 3 is complete.

4. Proofs of the main results

4.1. Proofs of Assertion 1 and Theorem 1

Proof of Assertion 1. The simplest way to prove this is to use Laplace transforms. Indeed, for fixed $j\in\mathbb{N}$ ,

\begin{equation*}\int_{[0,\,\infty)}\,{\textrm{e}}^{-st} \,{\textrm{d}} V_j(t)=\biggl(\dfrac{\mathbb{E}[{\textrm{e}}^{-s\eta}]}{1-\mathbb{E}[{\textrm{e}}^{-s\xi}]}\biggr)^{\,j} \sim \dfrac{1}{\texttt{m}^{\,j}s^{\,j}},\quad s\to0+.\end{equation*}

By Karamata’s Tauberian theorem (Theorem 1.7.1 of [Reference Bingham, Goldie and Teugels2]), (3) holds.
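The Laplace transform above becomes fully explicit when $\xi$ and $\eta$ are independent exponential random variables of rate $\lambda$ (an illustrative special case, not an assumption of the paper): then $\mathbb{E}[{\textrm{e}}^{-s\eta}]/(1-\mathbb{E}[{\textrm{e}}^{-s\xi}])=\lambda/s$ , whence $V_j(t)=(\lambda t)^{\,j}/j!$ exactly and (3) holds with equality. The Python sketch below, a Monte Carlo sanity check rather than part of the argument, simulates the first two generations of the general branching process generated by T and compares the empirical mean of $N_2(t)$ with this closed form; all function names and parameter values are illustrative choices.

```python
import math
import random

def birth_times(horizon, rng, rate=1.0):
    """First-generation birth times T_k = xi_1+...+xi_{k-1}+eta_k lying in
    [0, horizon]; xi and eta are independent Exp(rate) (illustrative case)."""
    births, s = [], 0.0
    while s <= horizon:                  # T_k >= S_{k-1}, so stop once S_{k-1} > horizon
        t = s + rng.expovariate(rate)    # T_k = S_{k-1} + eta_k
        if t <= horizon:
            births.append(t)
        s += rng.expovariate(rate)       # S_k = S_{k-1} + xi_k
    return births

def n_j(j, horizon, rng):
    """N_j(horizon): each individual begets offspring through an independent
    copy of T shifted by its own birth time."""
    if j == 1:
        return len(birth_times(horizon, rng))
    return sum(n_j(j - 1, horizon - b, rng) for b in birth_times(horizon, rng))

if __name__ == "__main__":
    rng, t, trials = random.Random(1), 3.0, 4000
    estimate = sum(n_j(2, t, rng) for _ in range(trials)) / trials
    exact = t ** 2 / math.factorial(2)   # V_2(t) = (lambda t)^2/2! with lambda = 1
    print(estimate, exact)               # the two values should be close
```

With rate 1 and $t=3$ the exact value is $V_2(3)=4.5$ ; the empirical mean agrees with it up to Monte Carlo error.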

Proof of Theorem 1. We use induction on j. Let $j=1$ . Write

(27) \begin{equation}V(t)-\texttt{m}^{-1}t=\int_{[0,\,t]}(U(t-y)-\texttt{m}^{-1}(t-y)) \,{\textrm{d}} G(y)-\texttt{m}^{-1}\int_0^t (1-G(y)) \,{\textrm{d}} y.\end{equation}

Plainly, the second term converges to $-\texttt{m}^{-1}\mathbb{E} [\eta]$ as $t\to\infty$ . It is a simple consequence of the Blackwell theorem that

\begin{equation*}\lim_{t\to\infty}(U(t)-\texttt{m}^{-1}t)= (2 \texttt{m}^2)^{-1}\mathbb{E}[\xi^2]=b_U.\end{equation*}

Using this in combination with (11), we invoke Lebesgue’s dominated convergence theorem to infer that the first term in (27) converges to $b_U$ as $t\to\infty$ . Thus we have shown that (4) with $j=1$ holds true.
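The constant $b_U$ can also be observed numerically. With the convention $U(t)=\sum_{k\geq 0}\mathbb{P}\{S_k\leq t\}=\mathbb{E}[\nu(t)]$ , which is the one consistent with the stated limit, and $\xi$ uniformly distributed on $(0,1)$ (an illustrative choice, for which $\texttt{m}=1/2$ and $\mathbb{E}[\xi^2]=1/3$ ), the limit predicts $U(t)-2t\to 2/3$ . The sketch below, a sanity check only, estimates $U(t)$ by Monte Carlo.

```python
import random

def nu(t, rng):
    """nu(t) = inf{k in N : S_k > t} = #{k >= 0 : S_k <= t} for the random
    walk with steps xi uniform on (0, 1) (an illustrative choice)."""
    k, s = 0, 0.0
    while s <= t:
        k += 1
        s += rng.random()
    return k

if __name__ == "__main__":
    rng, t, trials = random.Random(7), 50.0, 20000
    u_est = sum(nu(t, rng) for _ in range(trials)) / trials
    # For uniform xi: m = 1/2 and E[xi^2] = 1/3, so U(t) - t/m is expected
    # to be near E[xi^2]/(2 m^2) = 2/3 for large t.
    print(u_est - t / 0.5)
```

Already at $t=50$ the printed value should be close to $2/3$ , up to simulation error.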

Assume that (4) holds for $j=k$ . In view of (4) with $j=1$ , given $\varepsilon>0$ there exists $t_0>0$ such that

(28) \begin{equation}|V(t)-\texttt{m}^{-1}t-b_V|\leq \varepsilon\end{equation}

whenever $t\geq t_0$ . Write, for $t\geq t_0$ ,

\begin{align*} &V_{k+1}(t)-\dfrac{t^{k+1}}{(k+1)!\texttt{m}^{k+1}}\\ &\quad =\int_{[0,\,t-t_0]}(V(t-y)-\texttt{m}^{-1}(t-y)) \,{\textrm{d}} V_k(y)\\ &\quad \quad +\int_{(t-t_0,\,t]}(V(t-y)-\texttt{m}^{-1}(t-y))\,{\textrm{d}} V_k(y)+\texttt{m}^{-1}\int_0^t\biggl(V_k(y)-\dfrac{y^k}{k!\texttt{m}^k}\biggr)\,{\textrm{d}} y\\ &\quad =I_1(t)+I_2(t)+I_3(t).\end{align*}

In view of (28),

\begin{equation*}(b_V-\varepsilon)V_k(t-t_0)\leq I_1(t)\leq(b_V+\varepsilon)V_k(t-t_0),\end{equation*}

whence

\begin{equation*}\dfrac{b_V-\varepsilon}{k!\texttt{m}^k}\leq\liminf_{t\to\infty}\dfrac{I_1(t)}{t^k}\leq\limsup_{t\to\infty}\dfrac{I_1(t)}{t^k}\leq \dfrac{b_V+\varepsilon}{k!\texttt{m}^k}\end{equation*}

by Assertion 1.

Using (13) we obtain $|I_2(t)|\leq c_V (V_k(t)-V_k(t-t_0))$ for all $t\geq t_0$, whence $\lim_{t\to\infty}t^{-k}I_2(t)=0$ by Theorem 3. Combining this with the last centered formula and letting $\varepsilon\to 0+$, we infer

\begin{equation*}I_1(t)+I_2(t) \sim \dfrac{b_V t^k}{k!\texttt{m}^k},\quad t\to\infty.\end{equation*}

Finally, by the induction assumption and L’Hôpital’s rule,

\begin{equation*}I_3(t)\sim \dfrac{b_V t^k}{(k-1)!\texttt{m}^k},\quad t\to\infty.\end{equation*}

Combining the pieces, we arrive at (4) with $j=k+1$ .

4.2. Proofs of Theorems 2 and 3

Proof of Theorem 2. The proof of Theorem 2.7(a) of [Reference Bohun, Iksanov, Marynych and Rashytov3] applies, with obvious simplifications. Note that for early generations (j is fixed), asymptotic relation (3) holds under the sole assumption $\texttt{m}<\infty$ . This is not the case for intermediate generations ( $j=j(t)\to\infty$ , $j(t)={\textrm{o}}(t)$ as $t\to\infty$ ) treated in [Reference Bohun, Iksanov, Marynych and Rashytov3], which explains the appearance of the additional assumption $\mathbb{E}[\xi^r]<\infty$ for some $r\in (1,2]$ in [Reference Bohun, Iksanov, Marynych and Rashytov3, Theorem 2.7].

Proof of Theorem 3. When $j=1$ , relation (5) holds by Lemma 4.2(a) of [Reference Bohun, Iksanov, Marynych and Rashytov3]. Write

\begin{align*} V_j(t+h)-V_j(t)& =\int_{[0,\,t]}(V(t+h-y)-V(t-y))\,{\textrm{d}} V_{j-1}(y)+\int_{(t,\,t+h]}V(t+h-y) \,{\textrm{d}} V_{j-1}(y)\\ & =: A_j(t)+B_j(t).\end{align*}

We first show that the contribution of $B_j(t)$ is negligible. Indeed, using monotonicity of V and $\lim_{t\to\infty}(\!V_j(t+h)/V_j(t))=1$ (see Assertion 1), we obtain

\begin{equation*}B_j(t)\leq V(h)(V_{j-1}(t+h)-V_{j-1}(t))={\textrm{o}}(V_{j-1}(t)),\quad t\to\infty.\end{equation*}

In view of (5) with $j=1$ , given $\varepsilon>0$ there exists $t_0>0$ such that

\begin{equation*}|V(t+h)-V(t)-\texttt{m}^{-1}h|\leq \varepsilon\end{equation*}

whenever $t\geq t_0$ . Thus we have, for $t\geq t_0$ ,

\begin{align*} A_j(t)&=\int_{[0,\,t-t_0]}(V(t+h-y)-V(t-y))\,{\textrm{d}} V_{j-1}(y) +\int_{(t-t_0,\,t]}V(t+h-y)\,{\textrm{d}} V_{j-1}(y)\\ & =: A_{j,1}(t)+A_{j,2}(t).\end{align*}

By the argument used for $B_j(t)$ , we infer $A_{j,2}(t)={\textrm{o}}(V_{j-1}(t))$ . Further,

\begin{equation*}A_{j,1}(t)\leq (\texttt{m}^{-1}h+\varepsilon)V_{j-1}(t-t_0),\end{equation*}

whence

\begin{equation*}\limsup_{t\to\infty}(A_j(t)/V_{j-1}(t))\leq \texttt{m}^{-1}h.\end{equation*}

A symmetric argument proves the converse inequality for the limit inferior. Invoking Assertion 1 completes the proof of Theorem 3.
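The asymptotics just established can be checked against an explicit special case (an illustration, not used in the proof): if $\xi$ and $\eta$ are independent exponential random variables of rate $\lambda$ , then the Laplace transform computed in the proof of Assertion 1 equals $(\lambda/s)^{\,j}$ , so that $V_j(t)=(\lambda t)^{\,j}/j!$ exactly. Hence, for fixed $h>0$ ,

\begin{equation*}V_j(t+h)-V_j(t)=\dfrac{\lambda^{\,j}((t+h)^{\,j}-t^{\,j})}{j!}=\dfrac{h\lambda^{\,j}t^{\,j-1}}{(\,j-1)!}+{\textrm{O}}(t^{\,j-2})\sim \texttt{m}^{-1}hV_{j-1}(t),\quad t\to\infty,\end{equation*}

in agreement with the asymptotics obtained for $A_j(t)$ and $B_j(t)$ , since $\texttt{m}=\lambda^{-1}$ .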

4.3. Proof of Theorem 4

For $j\in\mathbb{N}$ and $t\geq 0$ , put $D_j(t)\,{:\!=}\, \textrm{Var} [N_j(t)]$ . Recall that, as a consequence of the assumption $\eta=\xi$ a.s., $T_r=S_r$ for $r\in\mathbb{N}$ and $V(t)=\tilde U(t)=\sum_{r\geq 1}\mathbb{P}\{S_r\leq t\}$ for $t\geq 0$ . However, we prefer to write $V_j$ rather than $\tilde U_j$ .

Let $j\in\mathbb{N}$ , $j\geq 2$ . We obtain with the help of (1)

\begin{align*}N_j(t)-V_j(t)&=\sum_{r\geq 1}\bigl(N^{(r)}_{j-1}(t-S_r)-V_{j-1}(t-S_r)\bigr)\unicode{x1D7D9}_{\{S_r\leq t\}} +\Biggl(\sum_{r\geq 1}V_{j-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}-V_j(t)\Biggr) \notag \\ & =: N_{j,\,1}(t)+N_{j,\,2}(t),\quad t\geq 0. \end{align*}

Since $N_{j-1}^{(1)}$ , $N_{j-1}^{(2)},\ldots$ are independent of $(S_r)_{r\in\mathbb{N}}$ and $\mathbb{E} N_{j-1}^{(r)}(t)=V_{j-1}(t)$ , $r\in\mathbb{N}$ , $t\geq 0$ , we infer $\mathbb{E} (N_{j,\,1}(t)N_{j,\,2}(t)\mid (S_r)_{r\in\mathbb{N}})=0$ a.s., whence

(29) \begin{equation}D_j(t)=\mathbb{E} [(N_{j,\,1}(t))^2]+\mathbb{E} [(N_{j,\,2}(t))^2].\end{equation}

We start by showing that the following asymptotic relations hold, for $j\geq 2$ , as $t\to\infty$ :

(30) \begin{align} &\mathbb{E} [(N_{j,\,2}(t))^2]=\textrm{Var}\Biggl[\sum_{r\geq 1}V_{j-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}\Biggr] \notag \\ &\quad =\mathbb{E} \Biggl[\Biggl(\sum_{r\geq 1}V_{j-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}\Biggr)^2\Biggr]-V_j^2(t) \sim \dfrac{\texttt{s}^2}{(2j-1)((\,j-1)!)^2 \texttt{m}^{2j+1}}t^{2j-1}, \quad t\to\infty.\end{align}

Proof of (30). We shall use the equality (see formula (4.9) of [Reference Iksanov and Kabluchko8])

(31) \begin{align} &\mathbb{E}\Biggl[\Biggl(\sum_{r\geq 1}V_{j-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}\Biggr)^2\Biggr] \notag \\ &\quad =2\int_{[0,\,t]}V_{j-1}(t-y)V_j(t-y)\,{\textrm{d}} \tilde U(y) +\int_{[0,\,t]}V_{j-1}^2(t-y) \,{\textrm{d}} \tilde U(y).\end{align}

In view of (4),

\begin{align*}&\int_{[0,\,t]}V_{j-1}(t-y)V_j(t-y) \,{\textrm{d}} \tilde U(y)\\[3pt] &\quad=\int_{[0,\,t]}\biggl(\dfrac{(t-y)^{j-1}}{(\,j-1)!\texttt{m}^{j-1}}+\dfrac{b_V(\,j-1)(t-y)^{j-2}}{(\,j-2)!\texttt{m}^{j-2}}+{\textrm{o}}((t-y)^{j-2})\biggr)\\[3pt] &\quad\quad \times \biggl(\dfrac{(t-y)^{\,j}}{j!\texttt{m}^{\,j}}+\dfrac{b_V\ j(t-y)^{j-1}}{(\,j-1)!\texttt{m}^{j-1}}+{\textrm{o}}((t-y)^{j-1})\biggr)\,{\textrm{d}} \tilde U(y)\\[3pt] &\quad=\dfrac{1}{(\,j-1)!j!\texttt{m}^{2j-1}}\int_{[0,\,t]}(t-y)^{2j-1}\,{\textrm{d}} \tilde U(y)\\[3pt] &\quad\quad +\dfrac{b_V(2j^2-2j+1)}{(\,j-1)!j!\texttt{m}^{2j-2}}\int_{[0,\,t]}(t-y)^{2j-2} \,{\textrm{d}} \tilde{U}(y)+\int_{[0,\,t]}{\textrm{o}}((t-y)^{2j-2}) \,{\textrm{d}} \tilde U(y),\end{align*}

where $b_V=\mathbb{E} [\xi^2]/(2\texttt{m}^2)-1$ because under the present assumption $\mathbb{E}[\eta]=\texttt{m}$ . According to (15),

\begin{align*}\int_{[0,\,t]}(t-y)^{2j-1} \,{\textrm{d}} \tilde U(y)&=\dfrac{t^{2j}}{2j\texttt{m}}+b_V\ t^{2j-1}+{\textrm{o}}(t^{2j-1}),\quad t\to\infty,\\ \int_{[0,\,t]}(t-y)^{2j-2} \,{\textrm{d}} \tilde U(y)&=\dfrac{t^{2j-1}}{(2j-1)\texttt{m}}+{\textrm{o}}(t^{2j-1}),\quad t\to\infty.\end{align*}

Also, it can be checked that

\begin{equation*}\int_{[0,\,t]}{\textrm{o}}((t-y)^{2j-2}) \,{\textrm{d}} \tilde U(y)={\textrm{o}}(t^{2j-1}),\quad t\to\infty.\end{equation*}

By Assertion 1,

\begin{equation*}V_{j-1}^2(t) \sim \dfrac{t^{2j-2}}{((\,j-1)!)^2 \texttt{m}^{2j-2}}\quad\text{and}\quad V(t)=\tilde U(t) \sim \dfrac{t}{\texttt{m}},\quad t\to\infty.\end{equation*}

As in the proof of Assertion 1 we now invoke Karamata’s Tauberian theorem (Theorem 1.7.1 of [Reference Bingham, Goldie and Teugels2]) to obtain

\begin{equation*}\int_{[0,\,t]}V_{j-1}^2(t-y) \,{\textrm{d}} \tilde{U}(y) \sim \dfrac{t^{2j-1}}{(2j-1)((\,j-1)!)^2\texttt{m}^{2j-1}},\quad t\to\infty.\end{equation*}

Using the aforementioned asymptotic relations and recalling (31), we infer that

\begin{align*}\mathbb{E}\Biggl[\Biggl(\sum_{r\geq 1}V_{j-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}\Biggr)^2\Biggr]&=\dfrac{t^{2j}}{(\,j!)^2\texttt{m}^{2j}}+\dfrac{2b_V\ t^{2j-1}}{(\,j-1)!j!\texttt{m}^{2j-1}}+\dfrac{2b_V(2j^2-2j+1)t^{2j-1}}{(2j-1)(\,j-1)!j!\texttt{m}^{2j-1}}\\ &\quad +\dfrac{t^{2j-1}}{(2j-1)((\,j-1)!)^2\texttt{m}^{2j-1}}+{\textrm{o}}(t^{2j-1}),\quad t\to\infty.\end{align*}

Further, as $t\to\infty$ ,

\begin{align*}V_j^2(t)&=\dfrac{t^{2j}}{(\,j!)^2\texttt{m}^{2j}}+\dfrac{2t^{\,j}}{j!\texttt{m}^{\,j}}\bigg(V_j(t)-\dfrac{t^{\,j}}{j!\texttt{m}^{\,j}}\bigg)+\bigg(V_j(t)-\dfrac{t^{\,j}}{j!\texttt{m}^{\,j}}\bigg)^2\\ &= \dfrac{t^{2j}}{(\,j!)^2\texttt{m}^{2j}}+\dfrac{2b_V\ t^{2j-1}}{((\,j-1)!)^2 \texttt{m}^{2j-1}}+ {\textrm{o}}(t^{2j-1}) ,\end{align*}

having utilized (4). The last two asymptotic relations entail (30).
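The bookkeeping behind (30) — collecting the three $t^{2j-1}$ terms from the penultimate display and subtracting the $t^{2j-1}$ term of $V_j^2(t)$ — can be verified with exact rational arithmetic. The sketch below checks the algebra only; the rational values of $\texttt{m}$ and $\mathbb{E}[\xi^2]$ fed to it are arbitrary, and it confirms that the combined coefficient equals $\texttt{s}^2/((2j-1)((\,j-1)!)^2\texttt{m}^{2j+1})$ with $b_V=\mathbb{E}[\xi^2]/(2\texttt{m}^2)-1$ and $\texttt{s}^2=\mathbb{E}[\xi^2]-\texttt{m}^2$ .

```python
from fractions import Fraction
from math import factorial

def coefficient_identity(j, m, second_moment):
    """Check that the t^(2j-1) coefficient of E[(sum_r V_{j-1}(t-S_r) 1)^2],
    minus that of V_j(t)^2, equals s^2/((2j-1)((j-1)!)^2 m^(2j+1))."""
    m, mu2 = Fraction(m), Fraction(second_moment)
    b_v = mu2 / (2 * m ** 2) - 1                          # b_V = E[xi^2]/(2m^2) - 1
    c = Fraction(1, factorial(j - 1)) / m ** (2 * j - 1)  # common factor 1/((j-1)! m^(2j-1))
    lhs = (2 * b_v * c / factorial(j)
           + 2 * b_v * (2 * j * j - 2 * j + 1) * c / ((2 * j - 1) * factorial(j))
           + c / ((2 * j - 1) * factorial(j - 1))
           - 2 * b_v * c / factorial(j - 1))              # subtracted V_j(t)^2 term
    s2 = mu2 - m ** 2                                      # s^2 = Var[xi]
    rhs = s2 / ((2 * j - 1) * factorial(j - 1) ** 2 * m ** (2 * j + 1))
    return lhs == rhs

if __name__ == "__main__":
    cases = [(j, m, mu2) for j in range(2, 7)
             for (m, mu2) in [(1, 3), (Fraction(1, 2), 1), (2, 5)]]
    print(all(coefficient_identity(*c) for c in cases))
```

The identity is purely algebraic, so it holds for any rational inputs, which is why a check at a handful of values of j is a meaningful test of the displayed coefficients.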

With (30) at hand we are ready to prove (6). To this end, we shall use mathematical induction. If $j=1$ , (6) takes the form $D_1(t)=\mathbb{E} [(N(t)-\tilde U(t))^2]\sim \texttt{s}^2 \texttt{m}^{-3} t$ as $t\to\infty$ . This can be checked along the lines of the proof of (30). Alternatively, this relation follows from Theorem 3.8.4 of [Reference Gut6], which does not require the distribution of $\xi$ to be nonlattice. Assume that (6) holds for $j=k-1\geq 1$ , that is,

\begin{equation*}D_{k-1}(t) \sim \dfrac{\texttt{s}^2}{(2k-3)((k-2)!)^2 \texttt{m}^{2k-1}}t^{2k-3},\quad t\to\infty.\end{equation*}

Using this and the equality

\begin{equation*}\mathbb{E} [(N_{k,1}(t))^2]=\mathbb{E} \Biggl[\sum_{r\geq 1}D_{k-1}(t-S_r)\unicode{x1D7D9}_{\{S_r\leq t\}}\Biggr]=\int_{[0,\,t]}D_{k-1}(t-y) \,{\textrm{d}} \tilde{U}(y)\end{equation*}

in combination with Karamata’s Tauberian theorem (Theorem 1.7.1 of [Reference Bingham, Goldie and Teugels2]) or, more simply, a bare hands calculation, we infer

\begin{equation*}\mathbb{E} [(N_{k,\,1}(t))^2] \sim \dfrac{\texttt{s}^2}{(2k-3)(2k-2)((k-2)!)^2 \texttt{m}^{2k}} t^{2k-2},\quad t\to\infty.\end{equation*}

By virtue of (29) and (30) we conclude that (6) holds for $j=k$ . The proof of Theorem 4 is complete.
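For $j=1$ , relation (6) can also be confronted with a completely explicit example (an illustration only): if $\xi=\eta$ has an exponential distribution of rate $\lambda$ , then $(S_r)_{r\in\mathbb{N}}$ are the arrival times of a Poisson process, $N(t)$ has a Poisson distribution with mean $\lambda t$ , and

\begin{equation*}D_1(t)=\textrm{Var} [N(t)]=\lambda t=\texttt{s}^2\texttt{m}^{-3}t,\quad t\geq 0,\end{equation*}

because $\texttt{m}=\lambda^{-1}$ and $\texttt{s}^2=\lambda^{-2}$ ; in this case the asymptotic relation (6) holds with equality for all t.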

4.4. Proof of Theorem 5

We use mathematical induction. When $j=1$ , (7) holds true by formula (24) of [Reference Alsmeyer, Iksanov and Marynych1]. Assuming that it holds for $j=k$ , we intend to show that (7) also holds for $j=k+1$ . To this end, we write with the help of (1) for $j=k+1$

\begin{equation*} N_{k+1}(t)=\sum_{r\geq1}\bigl(N^{(r)}_{1,\,k+1}\bigl(t-T_r^{(k)}\bigr)-V\bigl(t-T_r^{(k)}\bigr)\bigr)\unicode{x1D7D9}_{\{T_r^{(k)}\leq t\}}+\int_{[0,\,t]}V(t-y) \,{\textrm{d}} N_k(y),\quad t\geq 0.\end{equation*}

By Assertion 1, given $\varepsilon>0$ there exists $t_0>0$ such that $|t^{-1} V(t)-\texttt{m}^{-1}|\leq \varepsilon$ whenever $t\geq t_0$ . We have

\begin{equation*}\int_{(t-t_0,\,t]}V(t-y) \,{\textrm{d}} N_k(y)\leq V(t_0)(N_k(t)-N_k(t-t_0))={\textrm{o}}(t^k)\quad \text{a.s.}\quad \text{as}\ t\to\infty\end{equation*}

by the induction assumption. Analogously,

\begin{equation*}\int_{(t-t_0,\,t]}(t-y) \,{\textrm{d}} N_k(y)={\textrm{o}}(t^k)\quad \text{a.s.}\quad \text{as $ t\to\infty$.}\end{equation*}

Further,

\begin{align*}\int_{[0,\,t-t_0]}V(t-y) \,{\textrm{d}} N_k(y)&\geq (\texttt{m}^{-1}-\varepsilon)\int_{[0,\,t-t_0]}(t-y) \,{\textrm{d}} N_k(y)\\ &=(\texttt{m}^{-1}-\varepsilon)\biggl(\int_{[0,\,t]}(t-y)\,{\textrm{d}} N_k(y)-\int_{(t-t_0,\,t]}(t-y) \,{\textrm{d}} N_k(y)\biggr).\end{align*}

Using

\begin{equation*}\int_{[0,\,t]}(t-y) \,{\textrm{d}} N_k(y)=\int_0^t N_k(y) \,{\textrm{d}} y\end{equation*}

and applying L’Hôpital’s rule together with the induction assumption, we infer

\begin{equation*}\lim_{t\to\infty}\dfrac{\int_{[0,\,t]}(t-y)\,{\textrm{d}} N_k(y)}{t^{k+1}}=\dfrac{1}{\texttt{m}^k(k+1)!}\quad\text{a.s.}\end{equation*}

Combining the pieces, we arrive at

\begin{equation*}\liminf_{t\to\infty}\dfrac{\int_{[0,\,t]}V(t-y)\,{\textrm{d}} N_k(y)}{t^{k+1}}\geq \dfrac{1}{\texttt{m}^{k+1}(k+1)!}\quad\text{a.s.}\end{equation*}

The converse inequality for the limit superior follows similarly, whence

(32) \begin{equation}\lim_{t\to\infty}\dfrac{\int_{[0,\,t]}V(t-y) \,{\textrm{d}} N_k(y)}{t^{k+1}}=\dfrac{1}{\texttt{m}^{k+1}(k+1)!}\quad\text{a.s.}\end{equation}

Further,

\begin{align*}&\mathbb{E} \Biggl[\Biggl(\sum_{r\geq1}\bigl(N^{(r)}_{1,\,k+1}\bigl(t-T_r^{(k)}\bigr)-V\bigl(t-T_r^{(k)}\bigr)\bigr)\unicode{x1D7D9}_{\{T_r^{(k)}\leq t\}}\Biggr)^2\Biggr]\\ &\quad =\sum_{r\geq 1}\mathbb{E}\bigl[\bigl(N^{(r)}_{1,\,k+1}\bigl(t-T_r^{(k)}\bigr)-V\bigl(t-T_r^{(k)}\bigr)\bigr)^2\unicode{x1D7D9}_{\{T_r^{(k)} \leq t\}}\big]\\ &\quad \leq \int_{[0,\,t]}\mathbb{E} [(N(t-y))^2] \,{\textrm{d}} V_k(y)\\&\quad \leq \mathbb{E} [(N(t))^2] V_k(t)\\ &\quad ={\textrm{O}}(t^{k+2}),\quad t\to\infty ,\end{align*}

having utilized monotonicity of $t\mapsto \mathbb{E} [(N(t))^2]$ for the last inequality and Lemmas 1 and 4 for the last equality. Now invoking the Markov inequality and the Borel–Cantelli lemma, we conclude that

\begin{equation*} \lim_{n\to\infty}\dfrac{\sum_{r\geq1}\bigl(N^{(r)}_{1,\,k+1}\bigl(n^2-T_r^{(k)}\bigr)-V\bigl(n^2-T_r^{(k)}\bigr)\bigr)\unicode{x1D7D9}_{\{T_r^{(k)}\leq n^2\}}}{n^{2(k+1)}}=0\quad\text{a.s.}\end{equation*}

(n approaches $\infty$ along integers). This together with (32) yields

\begin{equation*}\lim_{n\to\infty}\dfrac{N_{k+1}(n^2)}{n^{2(k+1)}}=\dfrac{1}{\texttt{m}^{k+1}(k+1)!}\quad\text{a.s.}\end{equation*}

Thus it remains to show that we may pass to the limit continuously. To this end, note that for each $t\ge 0$ there exists $n\in\mathbb{N}$ such that $t\in [(n-1)^2, n^2)$ , and use a.s. monotonicity of $N_{k+1}$ to obtain

\begin{equation*}\dfrac{(n-1)^{2(k+1)}}{n^{2(k+1)}}\dfrac{N_{k+1}((n-1)^2)}{(n-1)^{2(k+1)}}\leq\dfrac{N_{k+1}(t)}{t^{k+1}}\leq\dfrac{N_{k+1}(n^2)}{n^{2(k+1)}}\dfrac{n^{2(k+1)}}{(n-1)^{2(k+1)}}\quad\text{a.s.}\end{equation*}

Letting t tend to $\infty$ , we arrive at

\begin{equation*}\lim_{t\to\infty}\dfrac{N_{k+1}(t)}{t^{k+1}}=\dfrac{1}{\texttt{m}^{k+1}(k+1)!}\quad\text{a.s.},\end{equation*}

thereby completing the induction step. The proof of Theorem 5 is complete.

4.5. Proof of Theorem 6

In the case $\mathbb{E} [\eta]<\infty$ this result follows from Theorem 3.2 and Lemma 4.2 of [Reference Gnedin and Iksanov5]. Thus we concentrate on the case $\mathbb{E}[\eta^a]<\infty$ for $a\in (0,1)$ and $\mathbb{E}[\eta]=\infty$ .

We are going to apply Theorem 7, stated in the Appendix, with $N^\ast_j=N_j$ for $j\in\mathbb{N}$ . According to (14),

\begin{equation*}-c_1-\!c_2 t^{1-a}\leq V(t)-\texttt{m}^{-1}t\leq c_U,\quad t\geq 0\end{equation*}

for some positive constants $c_1$ and $c_2$ and $c_U=\texttt{m}^{-2} \mathbb{E}[\xi^2]$ , that is, condition (35) holds with $c=\texttt{m}^{-1}$ , $\omega=1$ , $\varepsilon_1=a$ , $\varepsilon_2=1$ , $a_0+a_1=c_U$ , $b_0=-c_1$ , $b_1=-c_2$ . By Assertion 3,

\begin{equation*}\mathbb{E} \biggl[\sup_{s\in [0,\,t]}(N(s)-V(s))^2\biggr]={\textrm{O}}(t),\quad t\to\infty,\end{equation*}

that is, condition (36) holds with $\gamma=1/2$ . By Assertion 2,

\begin{equation*}\biggl(\dfrac{N(ut)-V(ut)}{\sqrt{\texttt{m}^{-3}\texttt{s}^2 t}}\biggr)_{u\geq 0}\ \Rightarrow\ (B(u))_{u\geq 0},\quad t\to\infty\end{equation*}

in the $J_1$ -topology on D. This means that condition (37) holds with $\gamma=1/2$ , $b=\texttt{m}^{-3/2}\texttt{s}$ and $W=B$ , a Brownian motion. Recall that the process B is locally Hölder-continuous with exponent $\beta$ for any $\beta\in (0,1/2)$ . Thus, by Theorem 7, relation (8) is a specialization of (38) with $\gamma=1/2$ , $\omega=1$ , $R^{(1)}_j=B_{j-1}$ , $j\in\mathbb{N}$ , and $\rho_j=1/(\texttt{m}^{\,j} j!)$ , $j\in\mathbb{N}_0$ .

Now we prove the claim that the centering $V_j(ut)$ can be replaced by that given in (9). We first note that the equality in (9) follows with the help of mathematical induction in k from the representation

\begin{align*}\mathbb{E} [(t-R_i)^k\unicode{x1D7D9}_{\{R_i\leq t\}}]& =\int_{[0,\,t]}(t-y)^k \,{\textrm{d}} \mathbb{P}\{R_i\leq y\}\\ &=\,k\int_0^t \int_{[0,\,s]}(s-y)^{k-1} \,{\textrm{d}} \mathbb{P}\{R_i\leq y\} \,{\textrm{d}} s,\quad i,k\in\mathbb{N},\ t\geq 0,\end{align*}

where $R_i\,{:\!=}\, \eta_1+\cdots+\eta_i$ . Here the first step of induction is justified by the equality

\begin{equation*}\int_{[0,\,t]}(t-y) \,{\textrm{d}} \mathbb{P}\{R_i\leq y\}=\int_0^t \mathbb{P}\{R_i\leq s\} \,{\textrm{d}} s,\quad i\in\mathbb{N},\ t\geq 0.\end{equation*}
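For instance, if $R_i$ has an exponential distribution of rate 1 (an illustrative choice), both sides of the last equality equal $t-1+{\textrm{e}}^{-t}$ . The short quadrature sketch below, a numerical sanity check only, confirms this; the function names and the value of t are arbitrary.

```python
import math

def lhs(t, n=100_000):
    """int_{[0,t]} (t - y) dP{R <= y} for R ~ Exp(1), computed by a midpoint
    rule against the density e^{-y}."""
    h = t / n
    return sum((t - (i + 0.5) * h) * math.exp(-(i + 0.5) * h) * h for i in range(n))

def rhs(t, n=100_000):
    """int_0^t P{R <= s} ds = int_0^t (1 - e^{-s}) ds, same quadrature."""
    h = t / n
    return sum((1.0 - math.exp(-(i + 0.5) * h)) * h for i in range(n))

if __name__ == "__main__":
    t = 2.5
    closed_form = t - 1.0 + math.exp(-t)   # common value of both integrals
    print(lhs(t), rhs(t), closed_form)
```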

Further, we show that whenever $\mathbb{E}[\xi^2]<\infty$ , irrespective of the distribution of $\eta$ , for all $T>0$ ,

(33) \begin{equation}\lim_{t\to\infty}t^{-(\,j-1/2)}\sup_{u\in [0,\,T]}|V_j(ut)-(\,j!\texttt{m}^{\,j})^{-1}\mathbb{E}[(ut-R_j)^{\,j}\unicode{x1D7D9}_{\{R_j\leq ut\}}]|=0.\end{equation}

To this end, we recall that, according to formula (4.4) of [Reference Buraczewski, Dovgay and Iksanov4] (we use the formula with $\eta=0$ ),

\begin{equation*}U_j(t)-\dfrac{t^{\,j}}{j!\texttt{m}^{\,j}}\leq\sum_{i=0}^{j-1}\binom{j}{i}\dfrac{c_U^{j-i}t^i}{i!\texttt{m}^i},\quad t\geq 0,\end{equation*}

where $c_U=\texttt{m}^{-2} \mathbb{E} [\xi^2]$ . Using this and Lemma 1, we obtain, for $u\in [0,T]$ and $t\geq 0$ ,

\begin{align*}\biggl|V_j(ut)-\dfrac{\mathbb{E} [(ut-R_j)^{\,j}\unicode{x1D7D9}_{\{R_j\leq ut\}}]}{j!\texttt{m}^{\,j}}\biggr|&=\int_{[0,\,ut]}\biggl(U_j(ut-y)-\dfrac{(ut-y)^{\,j}}{j!\texttt{m}^{\,j}}\biggr)\,{\textrm{d}} \mathbb{P}\{R_j\leq y\}\\ &\leq\int_{[0,\,ut]}\sum_{i=0}^{j-1}\binom{j}{i}\dfrac{c_U^{j-i}(ut-y)^i}{i!\texttt{m}^i} \,{\textrm{d}} \mathbb{P}\{R_j\leq y\}\\ &\leq\sum_{i=0}^{j-1}\binom{j}{i}\dfrac{c_U^{j-i}(Tt)^i}{i!\texttt{m}^i}\\ &={\textrm{o}}(t^{j-1/2}),\quad t\to\infty ,\end{align*}

which proves (33).

It remains to show that if $\mathbb{E} [\eta^{1/2}]<\infty$ , then the centering $V_j(ut)$ can be replaced by $(ut)^{\,j}/(\,j! \texttt{m}^{\,j})$ . To justify this, it suffices to check that

(34) \begin{equation}\lim_{t\to\infty}\dfrac{\sup_{u\in [0,\,T]}\bigl((ut)^{\,j}- j!\int_0^{ut}\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j\leq y\} \,{\textrm{d}} y\,{\textrm{d}} t_j\cdots \,{\textrm{d}} t_2\bigr)}{t^{j-1/2}}=0.\end{equation}

The numerator of the ratio under the limit on the left-hand side of (34) is equal to

\begin{align*} & \sup_{u\in[0,\,T]} \int_0^{tu}\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j>y\}\,{\textrm{d}} y \,{\textrm{d}} t_j\cdots \,{\textrm{d}} t_2\\ &\quad = \int_0^{tT}\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j>y\} \,{\textrm{d}} y\,{\textrm{d}} t_j\cdots {\textrm{d}} t_2.\end{align*}

Hence we are left with showing that

\begin{equation*}\lim_{t\to\infty}t^{-(\,j-1/2)}\int_0^t\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j>y\} \,{\textrm{d}} y\ {\textrm{d}} t_j\cdots {\textrm{d}} t_2=0.\end{equation*}

Assume that $\mathbb{E}[\eta]<\infty$ , so $\mathbb{E} [R_j]<\infty$ and thus

\begin{equation*}\lim_{t\to\infty} \int_0^t\mathbb{P}\{R_j>y\} \,{\textrm{d}} y=\mathbb{E} [R_j]<\infty.\end{equation*}

Then, using L’Hôpital’s rule $(\,j-1)$ times, we obtain

\begin{equation*}\lim_{t\to\infty}\dfrac{\int_0^t\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j>y\}\,{\textrm{d}} y \,{\textrm{d}} t_j\cdots {\textrm{d}} t_2}{t^{j-1/2}}=\dfrac{2^{j-1}}{1\cdot3\cdot\ldots\cdot(2j-1)} \lim_{t\to\infty}\dfrac{\int_0^t\mathbb{P}\{R_j>y\}\,{\textrm{d}} y}{t^{1/2}}=0.\end{equation*}

Assume that $\mathbb{E}[\eta]=\infty$ . Since $\mathbb{E}[\eta^{1/2}]<\infty$ is equivalent to $\mathbb{E} \bigl[R_j^{1/2}\bigr]<\infty$ , we infer

\begin{equation*}\lim_{t\to\infty}t^{1/2}\mathbb{P}\{R_j>t\}=0.\end{equation*}

With this at hand, using L’Hôpital’s rule j times, we infer

\begin{equation*}\lim_{t\to\infty}\dfrac{\int_0^t\int_0^{t_2}\cdots\int_0^{t_j}\mathbb{P}\{R_j>y\}\,{\textrm{d}} y \,{\textrm{d}} t_j\cdots {\textrm{d}} t_2}{t^{j-1/2}}=\dfrac{2^{\,j}}{1\cdot3\cdot\ldots\cdot(2j-1)} \lim_{t\to\infty}t^{1/2}\mathbb{P}\{R_j>t\}=0.\end{equation*}

The proof of Theorem 6 is complete.

Appendix A. Auxiliary results

In this section we state several results borrowed from other sources. The first of these can be found in the proof of Lemma 7.3 of [Reference Alsmeyer, Iksanov and Marynych1].

Lemma 5. Let $f\colon [0,\infty)\to [0,\infty)$ be a locally bounded function. Then, for any $l\in\mathbb{N}$ ,

\begin{equation*} \mathbb{E} \Biggl[\Biggl(\sum_{k\geq 0}f(t-S_k)\unicode{x1D7D9}_{\{S_k\leq t\}}\Biggr)^l\Biggr]\leq\Biggl(\sum_{j=0}^{[t]}\sup_{y\in[j,\,j+1)}f(y)\Biggr)^l\mathbb{E} [(\nu(1))^l],\quad t\geq0,\end{equation*}

where $\nu(t)=\inf\{k\in\mathbb{N}\colon S_k>t\}$ .

For $j\in\mathbb{N}$ and $t\geq 0$ , let $N^\ast_j(t)$ denote the number of the jth generation individuals with birth times $\leq t$ in a general branching process generated by an arbitrary locally finite point process $T^\ast$ , and put $V^\ast_j(t)\,{:\!=}\, \mathbb{E} [N^\ast_j(t)]$ . In particular, $N^\ast_j=N_j$ for $j\in\mathbb{N}$ when $T^\ast=T$ . For notational simplicity, put $N^\ast\,{:\!=}\, N^\ast_1$ and $V^\ast\,{:\!=}\, V_1^\ast$ .

Let $W\,{:\!=}\, (W(s))_{s\geq 0}$ denote a centered Gaussian process which is a.s. locally Hölder-continuous and satisfies $W(0)=0$ . For each $u>0$ , put

\begin{equation*}R^{(u)}_1(s)\,{:\!=}\, W(s),\quad R^{(u)}_j(s)\,{:\!=}\, \int_{[0,\,s]}(s-y)^{u(\,j-1)}\,{\textrm{d}} W(y),\quad s\geq 0,\ j\geq 2.\end{equation*}
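In particular, for $u=1$ and $j=2$ one has $R^{(1)}_2(s)=\int_{[0,\,s]}(s-y)\,{\textrm{d}} W(y)$ , a centered Gaussian random variable with variance $\int_0^s(s-y)^2\,{\textrm{d}} y=s^3/3$ when W is a standard Brownian motion. The discretized simulation below (an illustration only; the grid and sample sizes are arbitrary) reproduces this variance.

```python
import random

def r2_sample(s, steps, rng):
    """One draw of R_2^{(1)}(s) = int_0^s (s - y) dW(y), with W a standard
    Brownian motion, via a left-endpoint sum over Gaussian increments."""
    h = s / steps
    return sum((s - k * h) * rng.gauss(0.0, h ** 0.5) for k in range(steps))

if __name__ == "__main__":
    rng, s, trials = random.Random(3), 1.0, 20000
    var_est = sum(r2_sample(s, 200, rng) ** 2 for _ in range(trials)) / trials
    print(var_est)   # should be close to s**3 / 3
```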

The next result follows from Theorem 3.2 of [Reference Gnedin and Iksanov5] and its proof.

Theorem 7. Assume the following conditions hold:

  (i)

    (35) \begin{equation}b_0+b_1 t^{\omega-\varepsilon_1}\leq V^\ast(t)-c t^\omega\leq a_0+a_1t^{\omega-\varepsilon_2}\end{equation}
    for all $t\geq 0$ and some constants $c,\omega, a_0, a_1>0$ , $0<\varepsilon_1,\varepsilon_2\leq \omega$ and $b_0, b_1\in\mathbb{R}$ ,
  (ii)

    (36) \begin{equation}\mathbb{E} \biggl[\sup_{s\in [0,\,t]}(N^\ast(s)-V^\ast(s))^2\biggr]={\textrm{O}}(t^{2\gamma}),\quad t\to\infty\end{equation}
    for some $\gamma\in (0, \omega)$ .
  (iii)

    (37) \begin{equation}\biggl(\dfrac{N^\ast(ut)-V^\ast(ut)}{bt^\gamma}\biggr)_{u\geq 0}\ \Rightarrow\ (W(u))_{u\geq 0},\quad t\to\infty\end{equation}
    in the $J_1$ -topology on D for some $b>0$ and the same $\gamma$ as in (36).

Then

(38) \begin{equation}\bigg(\biggl(\dfrac{N^\ast_j(ut)-V^\ast_j(ut)}{b\rho_{j-1}t^{\gamma+\omega(\,j-1)}}\biggr)_{u\geq0}\bigg)_{j\in\mathbb{N}}\ \Rightarrow\ \big(\big(R^{(\omega)}_j(u)\big)_{u\geq0}\big)_{j\in\mathbb{N}}\end{equation}

in the $J_1$ -topology on $D^\mathbb{N}$ , where

\begin{equation*}\rho_j\,{:\!=}\, \dfrac{(c\Gamma(\omega+1))^{\,j}}{\Gamma(\omega j+1)},\quad j\in\mathbb{N}_0,\end{equation*}

with $\Gamma({\cdot})$ denoting the gamma function.

Acknowledgements

The authors thank the Associate Editor for a number of detailed suggestions concerning the presentation and the two anonymous referees for several useful comments.

Funding information

The present work was supported by the National Research Foundation of Ukraine (project 2020.02/0014 ‘Asymptotic regimes of perturbed random walks: On the edge of modern and classical probability’).

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Alsmeyer, G., Iksanov, A. and Marynych, A. (2017). Functional limit theorems for the number of occupied boxes in the Bernoulli sieve. Stoch. Process. Appl. 127, 995–1017.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation. Cambridge University Press.
Bohun, V., Iksanov, A., Marynych, A. and Rashytov, B. (2022). Renewal theory for iterated perturbed random walks on a general branching process tree: Intermediate generations. J. Appl. Prob. 59, 421–446.
Buraczewski, D., Dovgay, B. and Iksanov, A. (2020). On intermediate levels of nested occupancy scheme in random environment generated by stick-breaking I. Electron. J. Prob. 25, 123.
Gnedin, A. and Iksanov, A. (2020). On nested infinite occupancy scheme in random environment. Prob. Theory Related Fields 177, 855–890.
Gut, A. (2009). Stopped Random Walks: Limit Theorems and Applications, 2nd edn. Springer.
Iksanov, A. (2016). Renewal Theory for Perturbed Random Walks and Similar Processes. Birkhäuser.
Iksanov, A. and Kabluchko, Z. (2018). A functional limit theorem for the profile of random recursive trees. Electron. Commun. Prob. 23, 87.
Karlin, S. (1967). Central limit theorems for certain infinite urn schemes. J. Math. Mech. 17, 373–401.
Mitov, K. V. and Omey, E. (2014). Renewal Processes. Springer.
Resnick, S. I. (2002). Adventures in Stochastic Processes, 3rd printing. Birkhäuser.
Resnick, S. and Rootzén, H. (2000). Self-similar communication models and very heavy tails. Ann. Appl. Prob. 10, 753–778.
Shi, Z. (2015). Branching Random Walks. Springer.
Whitt, W. (2002). Stochastic-Process Limits: An Introduction to Stochastic-Process Limits and their Application to Queues. Springer.