1. Introduction
Consider a self-replicating system evolving in the discrete-time setting according to the following rules:
- Rule 1: The system is founded by a single individual, the founder, born at time 0.
- Rule 2: The founder dies at a random age L and gives a random number N of births at random ages $\tau_j$ satisfying $1\le\tau_1\le \ldots\le\tau_N\le L$.
- Rule 3: Each new individual lives independently of the others according to the same life law as the founder.
An individual that is born at time $t_1$ and dies at time $t_2$ is considered to be alive during the time interval $[t_1,t_2-1]$. Letting Z(t) stand for the number of individuals alive at time t, we study the random dynamics of the sequence
which is a natural extension of the well-known Galton–Watson process, or GW process for short; see Watson and Galton [13]. The process $Z({\cdot})$ is the discrete-time version of what is usually called the Crump–Mode–Jagers process or the general branching process; see Jagers [5]. To emphasise the discrete-time setting, we call it a GW process with overlapping generations, or GWO process for short.
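To make Rules 1–3 concrete, here is a minimal simulation sketch of a GWO process in Python. The sampler `sample_life` is a hypothetical life law chosen purely for illustration (it is not taken from this paper); any sampler producing $(L,N,\tau_1,\ldots,\tau_N)$ with $1\le\tau_1\le \ldots\le\tau_N\le L$ could be substituted. The convention that an individual born at $t_1$ and dying at $t_2$ is alive on $[t_1,t_2-1]$ is implemented in the function `z`.

```python
import random

def sample_life():
    """Hypothetical life law (illustration only): returns (L, birth_ages)
    with 1 <= tau_1 <= ... <= tau_N <= L and E(N) = 1, i.e. a critical regime."""
    L = random.randint(1, 6)                      # random life length L >= 1
    N = random.choice([0, 0, 1, 1, 1, 3])         # random number of births, mean 1
    return L, sorted(random.randint(1, L) for _ in range(N))

def simulate_gwo(horizon, life_law=sample_life):
    """Simulate a GWO process founded by a single individual born at time 0 (Rule 1).
    Returns a list of (birth_time, death_time) pairs, one per individual."""
    individuals, pending = [], [0]
    while pending:
        b = pending.pop()
        L, birth_ages = life_law()                # Rule 2: life length and birth ages
        individuals.append((b, b + L))
        for tau in birth_ages:                    # Rule 3: each child evolves independently
            if b + tau <= horizon:                # births after the horizon cannot affect Z(t), t <= horizon
                pending.append(b + tau)
    return individuals

def z(individuals, t):
    """Z(t): an individual born at t1 and dying at t2 is alive on [t1, t2 - 1]."""
    return sum(1 for (t1, t2) in individuals if t1 <= t <= t2 - 1)

if __name__ == "__main__":
    random.seed(1)
    pop = simulate_gwo(horizon=20)
    print([z(pop, t) for t in range(21)])         # one trajectory (Z(0), ..., Z(20))
```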
Put $b\,:\!=\,\frac{1}{2}\mathrm{var}(N)$ . This paper deals with the GWO processes satisfying
The condition $\mathrm{E}(N)=1$ says that the reproduction regime is critical, implying $\mathrm{E}(Z(t))\equiv1$ and making extinction inevitable, provided $b>0$. According to Athreya and Ney [1, Chapter I.9], given (1), the survival probability
of a GW process satisfies the asymptotic formula $tQ(t)\to b^{-1}$ as $t\to\infty$ (this was first proven by Kolmogorov [6] under a third moment assumption). A direct extension of this classical result for the GWO processes,
was obtained by Durham [3] and Holte [4] under the conditions (1), $a<\infty$,
plus an additional condition. (Notice that by our definition, $a\ge1$, and $a=1$ if and only if $L\equiv1$, that is, when the GWO process in question is a GW process.) Treating a as the mean generation length (see Jagers [5] and Sagitov [8]), we may conclude that the asymptotic behaviour of the critical GWO process with short-living individuals (see the condition (2)) is similar to that of the critical GW process, provided time is counted generation-wise.
New asymptotic patterns for the critical GWO processes are found under the assumption
which, compared to (2), allows the existence of long-living individuals given $d>0$. The condition (3) was first introduced in the pioneering paper of Vatutin [12] dealing with the Bellman–Harris processes. In the current discrete-time setting, the Bellman–Harris process is a GWO process subject to two restrictions: (a) $\mathrm{P}(\tau_1=\ldots=\tau_N= L)=1$, so that all births occur at the moment of an individual’s death, and (b) the random variables L and N are independent. For the Bellman–Harris process, the conditions (1) and (3) imply $a=\mathrm{E}(L)$, $a<\infty$, and according to [12, Theorem 3], we get
As was shown by Topchii [11, Corollary B] (see also Sagitov [7, Lemma 3.2] for an adaptation to the discrete-time setting), the relation (4) holds even for the GWO processes satisfying the conditions (1), (3), and $a<\infty$.
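For later orientation, recall (see Section 3.4 below) that the mean generation length can be written as $a=\mathrm{E}\big(\sum_{j=1}^{N}\tau_j\big)$. The values of a quoted above for the two special cases then follow by a one-line computation (ours, using $\mathrm{E}(N)=1$ from (1)):
\[
a=\mathrm{E}\Big(\sum_{j=1}^{N}\tau_j\Big)=
\begin{cases}
\mathrm{E}(N)=1 & \text{for the GW process, where } L\equiv1,\\[2pt]
\mathrm{E}(NL)=\mathrm{E}(N)\,\mathrm{E}(L)=\mathrm{E}(L) & \text{for the Bellman–Harris process, where } \tau_1=\ldots=\tau_N=L \text{ and } L,\,N \text{ are independent.}
\end{cases}
\]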
The main result of this paper, Theorem 1 of Section 2, considers a critical GWO process under the above-mentioned set of assumptions (1), (3), and $a<\infty$, and establishes the convergence of the finite-dimensional distributions conditioned on survival at a remote time of observation. A remarkable feature of this result is that the limit process is fully described by a single parameter $c\,:\!=\,4bda^{-2}$, regardless of the possibly complicated mutual dependence between the random variables $\tau_j$, N, and L.
Our proof of Theorem 1, requiring an intricate asymptotic analysis of multi-dimensional probability generating functions, is split into two sections for the sake of readability. Section 3 presents a new proof of (4) inspired by the proof of [12]. The crucial aspect of this approach, compared to the proof of [7, Lemma 3.2], is that certain essential steps do not rely on the monotonicity of the function Q(t). In Section 4, the technique of Section 3 is further developed to finish the proof of Theorem 1.
We conclude this section by mentioning the illuminating family of GWO processes called the Sevastyanov processes [9]. The Sevastyanov process is a generalised version of the Bellman–Harris process, with possibly dependent L and N. In the critical case, the mean generation length of the Sevastyanov process, $a=\mathrm{E}(L N)$, can be represented as
Thus, if L and N are positively correlated, the average generation length a exceeds the average life length $\mathrm{E}(L)$ .
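Spelling out the representation referred to above (a one-line computation, using $\mathrm{E}(N)=1$ from (1)):
\[
a=\mathrm{E}(LN)=\mathrm{E}(L)\,\mathrm{E}(N)+\mathrm{Cov}(L,N)=\mathrm{E}(L)+\mathrm{Cov}(L,N),
\]
so that $a>\mathrm{E}(L)$ exactly when $\mathrm{Cov}(L,N)>0$.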
Turning to a specific example of the Sevastyanov process, take
where $n_t\,:\!=\,\lfloor t(\!\ln t)^{-1}\rfloor$ and $(p_1,p_2)$ are such that
In this case, for some positive constant $c_1$ ,
implying that the condition (1) is satisfied. Clearly, the condition (3) holds with $d=0$ . At the same time,
where $c_2$ is a positive constant. This example demonstrates that for the GWO process, unlike for the Bellman–Harris process, the conditions (1) and (3) do not automatically imply the condition $a<\infty$ .
2. The main result
Theorem 1. For a GWO process satisfying (1), (3), and $a<\infty$, there holds the following weak convergence of the finite-dimensional distributions:
The limiting process is a continuous-time pure death process $(\eta(y),0\le y<\infty)$ , whose evolution law is determined by a single compound parameter $c=4bda^{-2}$ , as specified next.
The finite-dimensional distributions of the limiting process $\eta({\cdot})$ are given below in terms of the k-dimensional probability generating functions $\mathrm{E}\Big(z_1^{\eta(y_1)}\cdots z_k^{\eta(y_k)}\Big)$ , $k\ge1$ , assuming
Here the index j highlights the pivotal value 1 corresponding to the time of observation t of the underlying GWO process.
As will be shown in Section 4.2, if $j=0$ , then
and if $j\ge1$ ,
In particular, for $k=1$ , we have
It follows that $\mathrm{P}(\eta(y)\ge0)=1$ for $y>0$ , and moreover, putting here first $z=1$ and then $z=0$ yields
implying that $\mathrm{P}(\eta(y)=\infty)>0$ for all $y>0$ . In fact, letting $y\to0$ , we may set $\mathrm{P}(\eta(0)= \infty)=1.$
To demonstrate that the process $\eta({\cdot})$ is indeed a pure death process, consider the function
determined by
This function is given by two expressions:
where $\gamma_i\,:\!=\,\Gamma_i-\Gamma_{i+1}$ and $\Gamma_{k+1}=0$ . Setting $k=2$ , $z_1=z$ , and $z_2=1$ , we deduce that the function
is given by one of the following three expressions, depending on whether $j=2$ , $j=1$ , or $j=0$ :
Since the generating function (6) is finite at $z=0$ , we conclude that
This implies
meaning that unless the process $\eta({\cdot})$ is sitting at the infinity state, it evolves by negative integer-valued jumps until it gets absorbed at zero.
Consider now the conditional probability generating function
In accordance with the three expressions given above for (6), the generating function (7) is specified by the following three expressions:
In particular, setting $z=0$ here, we obtain
Notice that given $0<y_1\le1$ ,
which is expected, since $\eta(y_1)\ge\eta(1)\ge1$ and $\eta(y_2)\to0$ as $y_2\to\infty$.
The random times
are major characteristics of a trajectory of the limit pure death process. Since
in accordance with the above-mentioned formulas for $\mathrm{E}\big(z^{\eta(y)}\big)$ , we get the following marginal distributions:
The distribution of $T_0$ does not depend on the parameter c and has the Pareto probability density function
In the special case (2), that is, when (3) holds with $d=0$ , we have $c=0$ and $\mathrm{P}(T=T_0)=1$ . If $d>0$ , then $T\le T_0$ , and the distribution of T has the following probability density function:
which has a positive jump at $y=1$ of size $f(1)-f(1-)=(1+c)^{-1/2}$ ; see Figure 1. Observe that $\frac{f(1-)}{f(1)}\to\frac{1}{2}$ as $c\to\infty$ .
Intuitively, the limiting pure death process counts the long-living individuals in the GWO process, that is, those individuals whose life length is of order t. These long-living individuals may have descendants; however, none of these descendants lives long enough to be detected by the finite-dimensional distributions at the relevant time scale; see Lemma 2 below. Theorem 1 suggests a new perspective on Vatutin’s dichotomy (see [12]), which asserts that the long-term survival of a critical age-dependent branching process is due to either a large number of short-living individuals or a small number of long-living individuals. In terms of the random times $T\le T_0$, Vatutin’s dichotomy discriminates between two possibilities: if $T>1$, then $\eta(1)=\infty$, meaning that the GWO process has survived thanks to a large number of individuals, while if $T\le 1<T_0$, then $1\le \eta(1)<\infty$, meaning that the GWO process has survived thanks to a small number of individuals.
3. Proof that $\boldsymbol{tQ(t)}\to \boldsymbol{h}$
This section deals with the survival probability of the critical GWO process
By its definition, the GWO process can be represented as the sum
involving N independent daughter processes $Z_j({\cdot})$ generated by the founder individual at the birth times $\tau_j$ , $j=1,\ldots,N$ (here it is assumed that $Z_j(t)=0$ for all negative t). The branching property (8) implies the relation
which says that the GWO process goes extinct by time t if, on the one hand, the founder is dead at time t and, on the other hand, all daughter processes are extinct by time t. After taking expectations of both sides, we can write
As shown next, this nonlinear equation for $P({\cdot})$ implies the asymptotic formula (4) under the conditions (1), (3), and $a<\infty$ .
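As a sanity check in the simplest special case (a sketch of ours, not part of the paper’s argument): for a GW process we have $L\equiv1$ and all births occur at age 1, so the above relation reduces to the classical iteration $P(t)=\mathrm{E}\big(P(t-1)^{N}\big)=f(P(t-1))$ with $f(s)\,:\!=\,\mathrm{E}(s^{N})$ and $P(0)=0$. The short script below iterates this recursion for a hypothetical critical offspring law and confirms numerically that $tQ(t)=t(1-P(t))$ approaches $b^{-1}$, in line with the Kolmogorov asymptotics quoted in Section 1.

```python
def f(s, law):
    """Offspring probability generating function f(s) = E(s^N) for a law {n: P(N = n)}."""
    return sum(p * s ** n for n, p in law.items())

# Hypothetical critical offspring law (not from the paper): P(N=0) = P(N=2) = 1/2,
# so E(N) = 1, var(N) = 1 and b = var(N)/2 = 1/2.
law = {0: 0.5, 2: 0.5}
b = 0.5

# GW special case of the extinction-probability equation: P(t) = f(P(t-1)), P(0) = 0.
P = 0.0
for t in range(1, 20001):
    P = f(P, law)

Q = 1.0 - P                      # survival probability Q(t) at t = 20000
print(t * Q, "should be close to", 1.0 / b)
```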
3.1. Outline of the proof of (4)
We start by stating four lemmas and two propositions. Let
where $0\le z\le 1$ , $u>0$ , $t\ge h$ , and X is an arbitrary random variable.
Lemma 1. Given (10), (11), (12), and (13), assume that $0< u\le t$ and $t\ge h$ . Then
Lemma 2. If (1) and (3) hold, then $\mathrm{E}(N;\,L>ty)=o\big(t^{-1}\big)$ as $t\to\infty$ for any fixed $y>0$ .
Lemma 3. If (1), (3), and $a<\infty$ hold, then for any fixed $0<y<1$ ,
Lemma 4. Let $k\ge1$ . If $0\le f_j,g_j\le 1$ for $ j=1,\ldots,k$ , then
where $0\le r_j\le1$ and
for some $R_j\ge0$ . If moreover $f_j\le q$ and $g_j\le q$ for some $q>0$ , then
According to these two propositions, there exists a triplet of positive numbers $(q_1,q_2,t_0)$ such that
The claim $tQ(t)\to h$ is derived using (14) by carefully removing asymptotically negligible terms from the relation for $Q({\cdot})$ stated in Lemma 1, after setting $u=ty$ with a fixed $0<y<1$, and then choosing a sufficiently small y. In particular, as an intermediate step, we will show that
Then, restating our goal as $\phi(t)\to 0$ in terms of the function $\phi(t)$ , defined by
we rewrite (15) as
It turns out that the three terms involving h, outside W(t), effectively cancel each other, yielding
Treating W(t) in terms of Lemma 4 yields
where $r_j(t)$ is a counterpart of $r_j$ in Lemma 4. To derive from here the desired convergence $\phi(t)\to0$, we will adapt a clever trick from Chapter 9.1 of [10], which was further developed in [12] for the Bellman–Harris process with possibly infinite $\mathrm{var}(N)$. Define a non-negative function m(t) by
Multiplying (19) by $\ln t$ and using the triangle inequality, we obtain
where $v(t)\ge 0$ and $v(t)=o(t^{-1}\ln t)$ as $t\to\infty$ . It will be shown that this leads to $m(t)=o(\!\ln t)$ , thereby concluding the proof of (4).
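Although the displayed formula for the limit h is not reproduced above, its value can be anticipated. The following is an inference, not a formula quoted from the paper: the relations used later ((29) yields $bh\ge a$, and the GW case $a=1$, $d=0$ must recover Kolmogorov’s limit $b^{-1}$) are consistent with h being the positive root of a quadratic of the form $bh^{2}=ah+d$, that is,
\[
h=\frac{a+\sqrt{a^{2}+4bd}}{2b}=\frac{a}{2b}\Big(1+\sqrt{1+c}\Big),\qquad c=4bda^{-2},
\]
which indeed satisfies $bh\ge a$ and reduces to $b^{-1}$ when $a=1$ and $d=0$.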
3.2. Proof of lemmas and propositions
Proof of Lemma 1. For $0<u\le t$, the relations (9) and (13) give
On the other hand, for $t\ge h$ ,
Adding the latter relation to
and subtracting (22) from the sum, we get
with D(u, t) defined by (12). After a rearrangement, we obtain the statement of the lemma.
Proof of Lemma 2. For any fixed $\epsilon>0$,
and the assertion follows as $\epsilon\to0$ .
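For the reader’s convenience, here is one way to obtain such a bound (a sketch, which presumes that (3) controls the tail via $t^{2}\mathrm{P}(L>t)\to d$ and uses $\mathrm{E}(N^{2})<\infty$, as guaranteed by $b<\infty$): splitting the expectation according to whether $N>\epsilon t$ or $N\le\epsilon t$ gives
\[
t\,\mathrm{E}(N;\,L>ty)\le t\,\mathrm{E}(N;\,N>\epsilon t)+\epsilon\, t^{2}\,\mathrm{P}(L>ty)\le \epsilon^{-1}\mathrm{E}\big(N^{2};\,N>\epsilon t\big)+\epsilon\, t^{2}\,\mathrm{P}(L>ty),
\]
where the first term vanishes as $t\to\infty$, while the second stays bounded by a constant (depending on y) multiple of $\epsilon$ for all large t.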
Proof of Lemma 3. For $t=1,2,\ldots$ and $y>0$, put
For any $0<u<ty$ , using
we get
For the first term on the right-hand side, we have $\tau_j\le L\le ty$ , so that
For the second term, $\tau_j\le L\le u$ and therefore
This yields
implying
Since $A_u\to0$ as $u\to\infty$ , we conclude that $B_t(y)\to 0$ as $t\to\infty$ .
Proof of Lemma 4. Let
Then $0\le r_j\le1$ , and the first stated equality is obtained by telescopic summation of
The second stated equality is obtained with
by performing telescopic summation of
By the above definition of $R_j$ , we have $R_j\ge0$ . Furthermore, given $f_j\le q$ and $g_j\le q$ , we get
It remains to observe that
and from the definition of $R_j$ ,
Proof of Proposition 1. By the definition of $\Phi({\cdot})$, we have
for any $0<u<t$ . This and (22) yield
We therefore obtain the upper bound
which together with Lemma 4 and the monotonicity of $Q({\cdot})$ implies
Borrowing an idea from [11], suppose to the contrary that
is finite for any natural n. It follows that
Putting $t=t_n$ into (24) and using the monotonicity of $\Phi({\cdot})$ , we find
Setting $u=t_n/2$ here and applying Lemma 3 together with (3), we arrive at the relation
Observe that under the condition (1), L’Hospital’s rule gives
The resulting contradiction, $n^{2}t_n^{-2}=O\big(nt_n^{-2}\big)$ as $n\to\infty$ , finishes the proof of the proposition.
Proof of Proposition 2. The relation (23) implies
By Lemma 4,
where $0\le r_j^*(t)\le 1$ is a counterpart of the term $r_j$ in Lemma 4. By the monotonicity of $P({\cdot})$ , we have, again referring to Lemma 4,
Thus, for $0<y<1$ ,
The assertion $\liminf_{t\to\infty} tQ(t)>0$ is proven by contradiction. Assume that $\liminf_{t\to\infty} tQ(t)=0$ , so that
is finite for any natural n. Plugging $t=t_n$ into (26) and using
we get
Given $L\le ty$ , we have
where the second inequality is based on the already proven part of (14). Therefore,
and we derive
Sending $n\to\infty$ and applying (25), Lemma 2, and Lemma 3, we arrive at the inequality
which is false for sufficiently small y.
3.3. Proof of (18) and (19)
Fix an arbitrary $0<y<1$ . Lemma 1 with $u=ty$ gives
Let us show that
Using Lemma 2 and (14), we find that for an arbitrarily small $\epsilon>0$ ,
On the other hand,
so that in view of (3),
This, (12), and Lemma 2 imply (28).
Observe that
we derive (15), which in turn gives (17). The latter implies (18) since by Lemmas 2 and 4,
Turning to the proof of (19), observe that the random variable
can be represented in terms of Lemma 4 as
by assigning
Here $0\le r_j(t)\le 1$ , and for sufficiently large t,
After plugging into (18) the expression
we get
The latter expectation is non-negative, and for an arbitrary $\epsilon>0$ , it has the following upper bound:
Thus, in view of Lemma 3,
Multiplying this relation by t, we arrive at (19).
3.4. Proof of $\phi(t)\to 0$
Recall (20). If the non-decreasing function
is bounded from above, then $\phi(t)=O\big(\frac{1}{\ln t}\big)$ , proving that $\phi(t)\to 0$ as $t\to\infty$ . If $M(t)\to\infty$ as $t\to\infty$ , then there is an integer-valued sequence $0<t_1<t_2<\ldots,$ such that the sequence $M_n\,:\!=\,M(t_n)$ is strictly increasing and converges to infinity. In this case,
Since $|\phi(t)|\le \frac{M_{n}}{\ln t_{n}}$ for $t_n\le t<t_{n+1}$ , to finish the proof of $\phi(t)\to 0$ , it remains to verify that
Fix an arbitrary $y\in(0,1)$ . Putting $t=t_n$ in (21) and using (32), we find
Here and elsewhere, $o_n$ stands for a non-negative sequence such that $o_n\to0$ as $n\to\infty$; in different formulas, the symbol $o_n$ may represent different such sequences. Since
and $r_j(t_n)\in[0,1]$ , it follows that
Recalling that $a=\mathrm{E}(\!\sum_{j=1}^{N}\tau_j)$ , observe that
Combining the last two relations, we conclude
Now it is time to unpack the term $r_j(t)$ . By Lemma 4 with (30),
where, provided $\tau_j\le ty$ ,
for a sufficiently large $t^*$ . This allows us to rewrite (34) in the form
To estimate the last expectation, observe that if $\tau_j\le ty$ , then for any $\epsilon>0$ ,
implying that for sufficiently large n,
so that
Since
we obtain
By (16) and (14), we have $\phi(t)\ge q_1-h$ for $t\ge t_0$ . Thus, for $\tau_j\le L\le t_ny$ and sufficiently large n,
This gives
which, after multiplying by $t_nM_n$ and taking expectations, yields
Finally, since
we derive that for any $0<\epsilon<y<1$ , there is a finite $n_\epsilon$ such that for all $n>n_\epsilon$ ,
By (29), we have $bh\ge a$ , and therefore
Thus, choosing $y=y_0$ such that $bq_1-2bhy_0-y_0=\frac{bq_1}{2}$ , we see that
which implies (33) as $\epsilon\to0$ , concluding the proof of $\phi(t)\to 0$ .
4. Proof of Theorem 1
We will use the following notational conventions for the k-dimensional probability generating function
with $0< t_1\le \ldots\le t_k$ and $z_1,\ldots,z_k\in[0,1]$ . We define
and write, for $t\ge0$ ,
Moreover, for $0< y_1<\ldots<y_k$ , we write
and assuming $0< y_1<\ldots<y_k<1$ ,
These conventions will be similarly applied to the functions
Our special interest is in the function
to be viewed as a counterpart of the function Q(t) treated by Theorem 2. Recalling the compound parameters
and $c=4bda^{-2}$ , put
The key step of the proof of Theorem 1 is to show that for any given $1=y_1<y_2<\ldots<y_k$ ,
This is done following the steps of our proof of $tQ(t)\to h$ given in Section 3.
Unlike Q(t), the function $Q_k(t)$ is not monotone over t. However, monotonicity of Q(t) was used in the proof of Theorem 2 only for the proof of (14). The corresponding statement
follows from the bounds $(1-z_1)Q(t)\le Q_k(t)\le Q(t)$, which hold by the monotonicity of the underlying generating functions over $z_1,\ldots,z_{k}$. Indeed,
and on the other hand,
where
4.1. Proof of $\boldsymbol{tQ_k(t)}\to \boldsymbol{h_k}$
The branching property (8) of the GWO process gives
Given $0< t_1<\ldots<t_k< t_{k+1}=\infty$ , we use
to deduce the following counterpart of (9):
This implies
Using this relation we establish the following counterpart of Lemma 1.
Lemma 5. Consider the function (36) and put $P_k(t)\,:\!=\,1-Q_k(t)=P_k\big(t+\bar t,\bar z\big)$ . For $0<u<t$ , the relation
holds with $t_{k+1}=\infty$ ,
and
Proof. According to (39),
By the definition of $\Phi({\cdot})$ ,
and after subtracting the two last equations, we get
with $D_k(u,t)$ satisfying (42). After a rearrangement, the relation (40) follows together with (41).
With Lemma 5 in hand, the convergence (38) is proven by applying almost exactly the same argument as used in the proof of $tQ(t)\to h$ . An important new feature emerges because of the additional term in the asymptotic relation defining the limit $h_k$ . Let $1=y_1<y_2<\ldots<y_k<y_{k+1}=\infty$ . Since
we see that
where $g_k$ is defined by (37). Assuming $0\le z_1,\ldots,z_k<1$ , we ensure that $g_k>0$ , and as a result, we arrive at a counterpart of the quadratic equation (29),
which gives
justifying our definition (37). We conclude that for $k\ge1$ ,
4.2. Conditioned generating functions
To finish the proof of Theorem 1, consider the generating functions conditioned on the survival of the GWO process. Given (5) with $j\ge1$ , we have
and therefore,
Similarly, if (5) holds with $j=0$ , then
Letting $t^{\prime}=ty_1$ , we get
and applying the relation (43), we have
where $\Gamma_i=c({y_1}/{y_i} )^2$ . On the other hand, since
we also get
We conclude that as stated in Section 2,
Acknowledgements
The author is grateful to two anonymous referees for their valuable comments, corrections, and suggestions, which helped enhance the readability of the paper.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.