
Convergence of the derivative martingale for the branching random walk in time-inhomogeneous random environment

Published online by Cambridge University Press:  02 December 2024

Wenming Hong*
Affiliation:
Beijing Normal University
Shengli Liang*
Affiliation:
Southern University of Science and Technology
*Postal address: School of Mathematical Sciences and Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing, 100875, China. Email address: wmhong@bnu.edu.cn
**Postal address: Shenzhen International Center for Mathematics, Southern University of Science and Technology, Shenzhen, 518055, Guangdong, China. Email address: liangsl@sustech.edu.cn

Abstract

Consider a branching random walk on the real line with a random environment in time (BRWRE). A necessary and sufficient condition for the non-triviality of the limit of the derivative martingale is formulated. To this end, we investigate the random walk in a time-inhomogeneous random environment (RWRE), which is related to the BRWRE by the many-to-one formula. The key step is to figure out Tanaka’s decomposition for the RWRE conditioned to stay non-negative (or above a line), which is interesting in itself.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and main result

Consider a discrete-time branching random walk on $\mathbb{R}$ in random environment (BRWRE). The random environment is represented by a sequence of random variables $\xi=\left(\xi_{n}, n\geq 1\right)$ which is defined on some probability space $\left(\Omega, \mathcal{A}, \mathrm{P}\right)$ . We assume throughout that $\left(\xi_{n}, n\geq 1\right)$ are independent and identically distributed (i.i.d.) random variables with values in the space of point process laws (i.e. probability distributions on $\cup_{k=1}^{\infty}\mathbb{R}^k$ ). Each realization of $\xi_{n}$ corresponds to a point process law $\mathcal{L}_{n}$ . Given the environment, the time-inhomogeneous branching random walk is described as follows. It starts at time 0 with an initial particle (denoted by $\varnothing$ ) positioned at the origin. This particle dies at time 1 and gives birth to a random number of children which form the first generation and whose positions are given by a point process $L_{1}$ with law $\mathcal{L}_{1}$ . For any integer $n\geq1$ , each particle alive at generation n dies at time $n+1$ and gives birth, independently of all others, to its own children, which are in the $(n+1)$ th generation and are positioned (with respect to the position of their parent) according to the point process $L_{n+1}$ with law $\mathcal{L}_{n+1}$ . All particles behave independently conditioned on the environment $\xi$ . The process goes on as described above if there are particles alive. We denote by $\mathbb{T}$ the genealogical tree of the process. For a given vertex $u\in \mathbb{T}$ , we denote by $V\left(u\right)\in\mathbb{R}$ its position and by $|u|$ its generation. We write $u_{i}$ $\left(0\leq i\leq|u|\right)$ for its ancestor in the ith generation (with the convention that $u_{0}\,:\!=\,\varnothing$ and $u_{|u|}=u$ ). Given a realization of $\xi$ , we write $\mathbb{P}_{\xi}$ for the conditional (or quenched) probability and $\mathbb{E}_{\xi}$ for the corresponding expectation. The joint (or annealed) probability of the environment and the branching random walk is defined as $\mathbb{P}\,:\!=\,\mathbb{P}_{\xi}\otimes\mathrm{P}$ , that is,

$$\mathbb{P}({\cdot})=\int_{\Omega}\mathbb{P}_{\xi}({\cdot})\, \mathrm{d}\mathrm{P},$$

with the corresponding expectation $\mathbb{E}$ .
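To make the quenched and annealed laws concrete, the following purely illustrative Python sketch (not part of the paper's argument) simulates a toy BRWRE in which the environment $\xi_{n}$ is encoded by a variance $\sigma_{n}^{2}$ drawn uniformly from $[0.5,1.5]$, each particle of generation $n-1$ has $1+\mathrm{Poisson}\big(\mathrm{e}^{\sigma_{n}^{2}/2}-1\big)$ children, and each child's displacement is $N(\sigma_{n}^{2},\sigma_{n}^{2})$; these modelling choices are assumptions made only for the example (they are arranged so that the boundary case (1.2) introduced below holds for every realization of the environment).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_environment(n_gen):
    """One realization of the i.i.d. environment: xi_n is encoded by a variance sigma_n^2."""
    return rng.uniform(0.5, 1.5, size=n_gen)

def sample_brwre(env):
    """Simulate the toy BRWRE in a given environment.

    Returns a list: positions[n] holds the positions V(u) of all particles with |u| = n.
    """
    positions = [np.array([0.0])]                           # generation 0: one particle at the origin
    for sigma2 in env:
        parents = positions[-1]
        # each parent has 1 + Poisson(e^{sigma2/2} - 1) children (at least one child, as in (1.1))
        counts = 1 + rng.poisson(np.exp(sigma2 / 2.0) - 1.0, size=parents.size)
        # each child is displaced from its parent by an independent N(sigma2, sigma2) amount
        displacements = rng.normal(sigma2, np.sqrt(sigma2), size=counts.sum())
        positions.append(np.repeat(parents, counts) + displacements)
    return positions

env = sample_environment(8)                                 # quenched: fix one environment
positions = sample_brwre(env)                               # one branching random walk in it
for n, pos in enumerate(positions):
    print(f"generation {n}: {pos.size} particles, leftmost at {pos.min():.2f}")
```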

This model was first introduced by Biggins and Kyprianou [Reference Biggins and Kyprianou7]. Recently, some results for the homogeneous branching random walk have been extended to the BRWRE. Huang and Liu [Reference Huang and Liu17] proved a law of large numbers for the maximal position and large deviations principles for the counting measure of the process. Gao, Liu and Wang [Reference Gao, Liu and Wang12] obtained the central limit theorem. Wang and Huang [Reference Wang and Huang31] considered the $L^{p}$ convergence rate of the additive martingale and a moderate deviations principle for the counting measure. Mallein and Miłoś [Reference Mallein and Miłoś26] investigated the second-order asymptotic behavior of maximal displacement. Also, many authors have focused on other kinds of random environments. For example, Greven and den Hollander [Reference Greven and den Hollander13] considered the branching random walk with the reproduction law of the particles depending on their location. Yoshida [Reference Yoshida33] and Hu and Yoshida [Reference Hu and Yoshida16] investigated the branching random walk with space–time i.i.d. offspring distributions.

In this paper we consider the limit of the derivative martingale for the BRWRE, which has been proved to play an important role in the convergence of both the minimal position and the additive martingale for the classical branching random walk; see Aïdékon [Reference Aïdékon3] and Aïdékon and Shi [Reference Aïdékon and Shi4], respectively. The use of the derivative martingale in the present paper is related to the study of the Seneta–Heyde norming of the additive martingale for BRWRE. The convergence of the derivative martingale first sparked interest because of its association with the F-KPP equation and branching Brownian motion, as described by McKean [Reference McKean28]. Impressively, this martingale provides a link to the critical speed travelling wave of the F-KPP equation, as investigated by Lalley and Sellke [Reference Lalley and Sellke20] and Harris [Reference Harris14]. From these results, questions naturally arise regarding the convergence of the martingale and its implications for the branching random walk.

For each $n\geq1$ , $t\in\mathbb{R}$ , we introduce the $\log$ -Laplace transform of the point process $L_{n}$ as follows:

$$\Psi_{n}\left(t\right)\,:\!=\,\log\mathbb{E}_{\xi}\left[\int_{\mathbb{R}}\mathrm{e}^{-tx}L_{n}\left(\mathrm{d}x\right)\right]=\log\mathbb{E}_{\xi}\left[\sum_{x\in L_{n}}\mathrm{e}^{-tx}\right].$$

The additive martingale is defined as

$$W_{n}(t)\,:\!=\,\sum_{|u|=n}\mathrm{e}^{-tV(u)-\sum_{i=1}^{n}\Psi_{i}(t)}.$$

Let $\mathcal{F}_{n}\,:\!=\,\sigma\big(\xi_{1},\xi_{2},\cdots,(u, V(u)), |u|\leq n\big)$ . It is well known that for each fixed t, $\left(W_{n}(t),n\geq0\right)$ forms a non-negative martingale with respect to the filtration $\left(\mathcal{F}_{n},n\geq0\right)$ under both laws $\mathbb{P}_{\xi}$ and $\mathbb{P}$ . By the martingale convergence theorem, $W_{n}(t)$ converges almost surely (a.s.) to a non-negative limit. In the deterministic-environment case, Biggins [Reference Biggins5] gave a necessary and sufficient condition for the $L^{1}$ -convergence of $W_{n}(t)$ ; we refer to Lyons [Reference Lyons23] for a simple probabilistic proof based on the spinal decomposition. Later, Biggins and Kyprianou [Reference Biggins and Kyprianou7] extended this to the random-environment case.
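In the toy environment of the sketch above (an assumption made only for illustration), a direct computation gives $\Psi_{n}(t)=\sigma_{n}^{2}(t-1)^{2}/2$, so $\Psi_{n}(1)=0$ and $W_{n}(1)=\sum_{|u|=n}\mathrm{e}^{-V(u)}$. The following minimal self-contained sketch checks numerically that the empirical quenched mean stays near $\mathbb{E}_{\xi}[W_{n}(1)]=W_{0}(1)=1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gen, n_rep = 6, 5_000
sigma2 = rng.uniform(0.5, 1.5, size=n_gen)                  # one fixed (quenched) environment

def W_n():
    """W_n(1) for one tree; Psi_i(1) = 0 in this toy environment, so no normalization term appears."""
    pos = np.array([0.0])
    for s2 in sigma2:
        counts = 1 + rng.poisson(np.exp(s2 / 2.0) - 1.0, size=pos.size)
        pos = np.repeat(pos, counts) + rng.normal(s2, np.sqrt(s2), size=counts.sum())
    return np.exp(-pos).sum()

print("empirical quenched mean of W_n(1):", np.mean([W_n() for _ in range(n_rep)]))   # close to 1
```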

To ensure the non-extinction and non-triviality of the BRWRE, we assume that

(1.1) \begin{equation} \begin{aligned} &\mathbb{P}_{\xi}\bigg(\sum_{|u|=1}1\geq1\bigg)=1, \quad \mathrm{P}\text{-a.s.}, \quad\quad \mathrm{P}\Bigg(\mathbb{P}_{\xi}\bigg(\sum_{|u|=1}1 \gt 1\bigg) \gt 0\Bigg) \gt 0,\\ &\mathbb{E}_{\xi}\bigg(\sum_{|u|=1}\mathbf{1}_{\left\{V(u) \gt 0\right\}}\mathrm{e}^{-V(u)}\bigg) \gt 0, \quad \mathrm{P}\text{-a.s.} \end{aligned}\end{equation}

We consider the boundary case (in the quenched sense) in this paper, that is,

(1.2) \begin{equation} \log\mathbb{E}_{\xi}\bigg(\sum_{|u|=1}\mathrm{e}^{-V(u)}\bigg)=\mathbb{E}_{\xi}\bigg(\sum_{|u|=1}V(u)\mathrm{e}^{-V(u)}\bigg)=0, \quad \mathrm{P}\text{-a.s.}\end{equation}

In fact, if we assume that there exists $t^{*} \gt 0$ such that, P-a.s., $\Psi_{1}$ is differentiable at the point $t^{*}$ and $\Psi_{1}(t^*)=t^*\Psi^{\prime}_{1}(t^*)$ , then without loss of generality we can assume that $t^{*}=1$ and $\Psi_{1}(1)=\Psi^{\prime}_{1}(1)=0$ , P-a.s. In the general case, we can construct a new BRWRE with position replaced by $\tilde{V}(u)\,:\!=\,t^*V(u)+\sum_{i=1}^{|u|}\Psi_{i}(t^*)$ , $u\in\mathbb{T}$ ; the $\log$ -Laplace transform of this new process satisfies $\tilde{\Psi}_{1}(1)=\tilde{\Psi}^{\prime}_{1}(1)=0$ , P-a.s.
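For completeness, here is the short computation behind this reduction. Since the atoms of the new point process in generation n are $t^{*}x+\Psi_{n}(t^{*})$ for $x\in L_{n}$ , its $\log$ -Laplace transform is

$$\tilde{\Psi}_{n}(t)=\log\mathbb{E}_{\xi}\bigg[\sum_{x\in L_{n}}\mathrm{e}^{-t\left(t^{*}x+\Psi_{n}(t^{*})\right)}\bigg]=\Psi_{n}(tt^{*})-t\Psi_{n}(t^{*}),$$

so that $\tilde{\Psi}_{n}(1)=0$ and $\tilde{\Psi}^{\prime}_{n}(1)=t^{*}\Psi^{\prime}_{n}(t^{*})-\Psi_{n}(t^{*})=0$ , P-a.s., by the assumed relation $\Psi_{1}(t^{*})=t^{*}\Psi^{\prime}_{1}(t^{*})$ .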

We are interested in the derivative martingale, defined by

$$D_{n}\,:\!=\,\sum_{|u|=n}V(u)\mathrm{e}^{-V(u)},\quad n\geq 0.$$
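In the toy environment used in the earlier sketches (again an assumption made only for illustration), $D_{n}$ is computed directly from the particle positions; a minimal self-contained sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = rng.uniform(0.5, 1.5, size=10)                     # one fixed environment
pos = np.array([0.0])
D = [0.0]                                                   # D_0 = V(root) e^{-V(root)} = 0
for s2 in sigma2:
    counts = 1 + rng.poisson(np.exp(s2 / 2.0) - 1.0, size=pos.size)
    pos = np.repeat(pos, counts) + rng.normal(s2, np.sqrt(s2), size=counts.sum())
    D.append(float(np.sum(pos * np.exp(-pos))))             # D_n = sum_{|u|=n} V(u) e^{-V(u)}
print([round(d, 3) for d in D])                             # one sample path of the signed martingale
```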

It is easy to show that, in the boundary case, $\left(D_{n},n\geq0\right)$ is a signed martingale with respect to the filtration $\left(\mathcal{F}_{n},n\geq0\right)$ under both laws $\mathbb{P}_{\xi}$ and $\mathbb{P}$ . For the branching random walk with constant environment, the derivative martingale has been studied in many contexts. From the perspective of the smoothing transformation in the sense of Durrett and Liggett [Reference Durrett and Liggett10] and Liu [Reference Liu21], the limit of the derivative martingale serves as a fixed point of the smoothing transformation; the existence, uniqueness, and asymptotic behavior of such a fixed point have been investigated in [Reference Biggins and Kyprianou8, Reference Kyprianou19, Reference Liu22]. In [Reference Biggins and Kyprianou7], Biggins and Kyprianou derived a sufficient condition for the non-triviality (and triviality) of the limit of the derivative martingale. Later, Aïdékon [Reference Aïdékon3] gave the optimal condition for the non-triviality, which was proved to be necessary by Chen [Reference Chen9]. For branching Brownian motion, a necessary and sufficient condition for the non-degeneracy of the limit of the derivative martingale was given by Yang and Ren [Reference Yang and Ren32]. Recently, Mallein and Shi [Reference Mallein and Shi27] obtained a necessary and sufficient condition for branching Lévy processes.

In addition, we assume that there exists $\delta \gt 0$ such that

(1.3) \begin{equation} \mathbb{E}\bigg(\sum_{|u|=1}V(u)^{2+\delta}\mathrm{e}^{-V(u)}\bigg) \lt \infty.\end{equation}

The assumption (1.3) allows us to prove the existence and some useful asymptotic behaviors of the quenched harmonic function in Section 2.

The main result of this paper establishes the existence of the limit of the derivative martingale for the BRWRE and gives a necessary and sufficient condition for the non-degeneracy of this limit. It is stated as follows.

Theorem 1. Under the assumptions (1.1), (1.2), and (1.3), we have the following:

  1. (i) The derivative martingale $\left(D_{n},n\geq0\right)$ converges almost surely to a non-negative finite limit, which we denote by $D_{\infty}$ , i.e.

    $$\lim_{n\to\infty}D_{n}=D_{\infty}\geq0, \quad \mathbb{P}\text{-a.s.}$$
  2. (ii) For almost all $\xi$ , $D_{\infty}$ is non-trivial if and only if

    (1.4) \begin{equation} \mathbb{E}\left[Y\log^2_{+}Y+Z\log_{+}Z\right] \lt \infty. \end{equation}
    More precisely,
    \begin{equation*} \begin{aligned} &\mathbb{P}_{\xi}\left(D_{\infty} \gt 0\right) \gt 0, \quad \mathrm{P}\text{-a.s.}, \quad \Longleftrightarrow \quad \mathbb{E}\left[Y\log^2_{+}Y+Z\log_{+}Z\right] \lt \infty,\\ &\mathbb{P}_{\xi}\left(D_{\infty}=0\right)=1, \quad \mathrm{P}\text{-a.s.}, \quad \Longleftrightarrow \quad \mathbb{E}\left[Y\log^2_{+}Y+Z\log_{+}Z\right]=\infty. \end{aligned} \end{equation*}
    Here $\log_{+}x\,:\!=\,\max\left\{0, \log x\right\}$ and $\log^2_{+}x\,:\!=\,\left(\log_{+}x\right)^2$ for any $x\geq0$ , and
    $$Y\,:\!=\,\sum_{|u|=1}\mathrm{e}^{-V(u)}, \quad\quad Z\,:\!=\,\sum_{|u|=1}V(u)\mathrm{e}^{-V(u)}\mathbf{1}_{\left\{V(u)\geq 0\right\}}.$$
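For a concrete environment, the moment in (1.4) can be estimated by simulation. The sketch below does this for the toy environment used in the earlier illustrations (Poisson-type offspring numbers with Gaussian displacements, an assumption of the example only); there Y and Z have finite moments of every order larger than 1, so (1.4) holds and the Monte Carlo estimate stabilizes at a finite value.

```python
import numpy as np

rng = np.random.default_rng(1)

def one_generation():
    """One sample of (Y, Z) under the annealed law of the toy model."""
    sigma2 = rng.uniform(0.5, 1.5)                          # environment of the first generation
    k = 1 + rng.poisson(np.exp(sigma2 / 2.0) - 1.0)         # number of children
    v = rng.normal(sigma2, np.sqrt(sigma2), size=k)         # their positions V(u), |u| = 1
    return np.exp(-v).sum(), np.sum(v * np.exp(-v) * (v >= 0.0))

def log_plus(x):
    return np.log(np.maximum(x, 1.0))                       # log_+ x

YZ = np.array([one_generation() for _ in range(100_000)])
Y, Z = YZ[:, 0], YZ[:, 1]
print("estimate of E[Y log_+^2 Y + Z log_+ Z]:", np.mean(Y * log_plus(Y) ** 2 + Z * log_plus(Z)))
```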

The idea of the proof of this theorem is to follow the general argument of Biggins and Kyprianou [Reference Biggins and Kyprianou7] and Chen [Reference Chen9]. Two significant challenges arise because of the random environment. Firstly, a harmonic function, formulated in our previous work [Reference Hong and Liang15], is required to construct a non-negative martingale. Secondly, the random walk in a time-inhomogeneous random environment (RWRE) is hard to handle. Note that for the constant-environment situation (see [Reference Chen9]), a basic tool is Tanaka’s decomposition for the random walk conditioned to stay non-negative. In our scenario, we are dealing with a random environment. To address this, we examine the RWRE, which is connected to the BRWRE via the many-to-one formula. Based on the quenched harmonic function (refer to [Reference Hong and Liang15]) for the RWRE, we figure out Tanaka’s decomposition for the RWRE conditioned to stay non-negative (Propositions 2 and 3), which is a novelty of this paper and is also interesting in itself.

Let us briefly describe the proof of Theorem 1. To demonstrate the convergence of the derivative martingale $D_n$ , we introduce the truncated martingale $D^{(\beta)}_n$ , which is formulated via a quenched harmonic function of the associated random walk, and use it to approach $D_n$ . To establish the necessary and sufficient condition for the non-degeneracy of the limit $D_{\infty}$ , we utilize the general argument of Biggins and Kyprianou [Reference Biggins and Kyprianou7] with certain adaptations. A new probability measure is defined using the truncated martingale, which clearly characterizes the branching random walk by a spinal decomposition. Propositions 5 and 6 provide additional details. Using the spinal decomposition, we can determine the conditions for $D^{(\beta)}_n$ to converge in $L^1$ or for the limit $D^{(\beta)}_{\infty}$ to become degenerate (Proposition 7). These conditions are equivalent to the triviality or non-triviality of the limit $D_{\infty}$ of the derivative martingale $D_n$ , as stated in Lemma 4. Therefore, verification of the conditions outlined in Proposition 7 is crucial. For the almost sure convergence of the random series in Proposition 7(i), it is demonstrated that its expectation is finite under (1.4). To establish the almost sure divergence of the random series in Proposition 7(ii), an equivalent integral condition (Proposition 4) is proven, which holds true when (1.4) is invalid.

The rest of this paper is organized as follows. In Section 2, we introduce a quenched harmonic function which is used to construct the random walk conditioned to stay above a line. Then we give a version of Tanaka’s decomposition for the random walk conditioned to stay non-negative in our setting, using which we prove an equivalent integral condition for the almost sure divergence of the random series associated with the conditioned random walk. In Section 3, we use a truncated martingale to make a change of measure and give a spinal decomposition of the time-inhomogeneous branching random walk; the proofs are provided in Appendix A. Finally, in Section 4, we derive a necessary and sufficient condition for the non-triviality of the limit of the derivative martingale.

Throughout the paper, we denote by $\left(c_{i},i\geq0\right)$ positive constants and by $c_{i}(\beta)$ a positive constant depending on $\beta$ . The indicator function is written as $\mathbf{1}_{\{{\cdot}\}}$ . We use $x_{n}\sim y_{n}\ (n\to\infty)$ to denote $\lim_{n\to\infty}\frac{x_{n}}{y_{n}}=1$ , and when $x_{n}$ and $y_{n}$ are random variables, the limit holds in the sense of almost sure convergence. For $x\in\mathbb{R}\cup\{\infty\}\cup\{-\infty\}$ , we write $x_{+}\,:\!=\,\max\{x,0\}$ . We also adopt the notation $\sum_{\emptyset}\left(\cdots\right)\,:\!=\,0$ and $\prod_{\emptyset}\left(\cdots\right)\,:\!=\,1$ .

2. Quenched harmonic function and conditioned random walk

In this section we present the many-to-one lemma that connects BRWRE and RWRE. Then, based on the quenched harmonic function ([Reference Hong and Liang15]) for the RWRE, we define the law of the random walk conditioned to stay above a line. After exploring the relationship between the quenched probability $\mathbb{P}^{+,(\beta)}_{\xi}(x;\mathrm{d}y)$ and the annealed renewal measure $\mathcal{R}(\mathrm{d}y)$ , we give a version of Tanaka’s decomposition. As a result, an equivalent integral condition for the almost sure divergence of the random series about the conditioned random walk is proved.

2.1. The many-to-one lemma

The well-known many-to-one lemma is a powerful tool in the study of branching random walks; see Shi [Reference Shi29] and the references therein. In this paper, we need a time-inhomogeneous version of this lemma. For all $n\geq1$ , we define the probability measure $\mu_{n}$ on $\mathbb{R}$ by

$$\mu_{n}\left(B\right)\,:\!=\,\mathbb{E}_{\xi}\left[\sum_{x\in L_{n}}\mathbf{1}_{\left\{x\in B\right\}}\mathrm{e}^{-x}\right], \quad \text{for all}\,\ B\in\mathcal{B}(\mathbb{R}).$$

Note that $\mu_{n}\left(B\right)$ is a random variable depending on $\xi$ . Up to a possible enlargement of the probability space, we define a sequence $\left(X_{n}, n\geq1\right)$ of independent random variables, where $X_{n}$ has law $\mu_{n}$ . Let $S_{n}\,:\!=\,S_{0}+\sum_{i=1}^{n}X_{i}$ . The process $\left(S_{n},n\geq0\right)$ is a random walk in a time-dependent random environment. For convenience, $\mathbb{P}_{\xi}$ also stands for the joint law of the BRWRE and the RWRE, given the environment $\xi$ . If we emphasize that the process starts from $a\in\mathbb{R}$ , this law will be denoted by $\mathbb{P}_{\xi,a}$ and $\mathbb{P}_{\xi}\,:\!=\,\mathbb{P}_{\xi,0}$ . The following time-inhomogeneous many-to-one lemma can be found in Lemma 2.2 of Mallein [Reference Mallein25].

Lemma 1. (Many-to-one) For all $n\geq1$ and any measurable function $f:\mathbb{R}^{n}\to\mathbb{R}_{+}$ , we have

(2.1) \begin{equation} \mathbb{E}_{\xi, a}\left[\sum_{|u|=n}f\left(V(u_{1}), \cdots, V(u_{n})\right)\right]=\mathbb{E}_{\xi, a}\left[\mathrm{e}^{S_{n}-a}f\left(S_{1}, \cdots, S_{n}\right)\right], \quad \mathrm{P}\text{-}\mathit{a.s.,} \end{equation}

with $\mathbb{P}_{\xi, a}\left(S_{0}=a\right)=1$ , P-a.s.
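The identity (2.1) is easy to test numerically. In the toy environment used for the earlier sketches, a short computation (an assumption of the illustration, not part of the paper) shows that $\mu_{n}$ is the $N(0,\sigma_{n}^{2})$ law, so both sides of (2.1) can be estimated by Monte Carlo with $a=0$ and a test function f depending only on the final position.

```python
import numpy as np

rng = np.random.default_rng(2)
n_gen, n_rep = 5, 20_000
sigma2 = rng.uniform(0.5, 1.5, size=n_gen)                  # one fixed (quenched) environment

def brw_final_positions():
    """Positions of the toy BRWRE at generation n_gen, given the environment."""
    pos = np.array([0.0])
    for s2 in sigma2:
        counts = 1 + rng.poisson(np.exp(s2 / 2.0) - 1.0, size=pos.size)
        pos = np.repeat(pos, counts) + rng.normal(s2, np.sqrt(s2), size=counts.sum())
    return pos

def f(x):                                                   # test function of the final position
    return (x <= 1.0).astype(float)

# left-hand side of (2.1): quenched expectation over the branching random walk
lhs = np.mean([f(brw_final_positions()).sum() for _ in range(n_rep)])

# right-hand side of (2.1): here the associated walk has steps N(0, sigma_i^2) and starts at a = 0
S = rng.normal(0.0, np.sqrt(sigma2), size=(n_rep, n_gen)).cumsum(axis=1)
rhs = np.mean(np.exp(S[:, -1]) * f(S[:, -1]))

print("LHS of (2.1):", round(lhs, 3), "  RHS of (2.1):", round(rhs, 3))   # the two estimates agree
```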

2.2. Quenched harmonic function

In this subsection, we introduce the quenched harmonic function which will be used to construct the random walk conditioned to stay in a given interval.

It follows from (1.1), (1.2), (1.3) and (2.1) that

(2.2) \begin{equation} \begin{aligned} &\mathbb{P}_{\xi}\left(S_{1} \gt 0\right)=\mathbb{E}_{\xi}\left[\sum_{|u|=1}\mathbf{1}_{\left\{V(u) \gt 0\right\}}\mathrm{e}^{-V(u)}\right] \gt 0, \quad \mathrm{P}\text{-a.s.},\\ &\mathbb{E}_{\xi}(S_{1})=\mathbb{E}_{\xi}\left[\sum_{|u|=1}V(u)\mathrm{e}^{-V(u)}\right]=0,\quad \mathrm{P}\text{-a.s.},\\ &\mathbb{E}(S^{2+\delta}_{1})=\mathbb{E}\left[\sum_{|u|=1}V(u)^{2+\delta}\mathrm{e}^{-V(u)}\right] \lt \infty. \end{aligned}\end{equation}

Under (2.2), we can formulate the quenched harmonic function as follows.

Let $y\geq0$ , and denote by $\tau_{y}$ the first time when $\{S_n\}$ enters the interval $({-}\infty, -y)$ :

$$\tau_{y}\,:\!=\,\inf\left\{n\geq1: y+S_n \lt 0\right\}.$$

Define

$$U_n(\xi,y)\,:\!=\,\mathbb{E}_\xi\left((y+S_n)\mathbf{1}_{\left\{\tau_{y} \gt n\right\}}\right).$$

Let $\theta$ be the shift operator, i.e. $\theta\xi\,:\!=\,(\xi_2,\xi_3,\cdots)$ . For $n\geq1$ , $\theta^n\xi\,:\!=\,\theta(\theta^{n-1}\xi)$ , with the convention that $\theta^0\xi\,:\!=\,\xi$ .
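The quantities $U_{n}(\xi,y)$ can be approximated by simulation. The sketch below (an illustration only) does this for the toy environment of the earlier examples, in which the associated walk has independent $N(0,\sigma_{i}^{2})$ steps given $\xi$ ; the estimates for increasing n should stabilize near the limit $U(\xi,y)$ of Proposition 1 below, although the Monte Carlo error grows with n.

```python
import numpy as np

rng = np.random.default_rng(3)
n_max, n_paths, y = 100, 50_000, 1.0
sigma2 = rng.uniform(0.5, 1.5, size=n_max)                  # one fixed environment

# paths of the associated walk under P_xi (steps N(0, sigma_i^2) in the toy model)
S = rng.normal(0.0, np.sqrt(sigma2), size=(n_paths, n_max)).cumsum(axis=1)
alive = np.logical_and.accumulate(y + S >= 0.0, axis=1)     # indicator of the event {tau_y > n}

for n in (10, 30, 100):
    U_n = np.mean((y + S[:, n - 1]) * alive[:, n - 1])      # Monte Carlo estimate of U_n(xi, y)
    print(f"U_{n}(xi, {y}) ~ {U_n:.3f}")
```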

The following proposition (see Hong and Liang [Reference Hong and Liang15]) proves the existence and asymptotic behavior of a positive quenched harmonic function.

Proposition 1. For almost all $\xi$ , we have the following statements:

  1. (i) There exists a random variable $U(\xi,y)$ such that

    $$\lim\limits_{n\to \infty}U_{n}(\xi,y)=U(\xi,y)\,:\!=\,-\mathbb{E}_\xi(S_{\tau_y}) \lt \infty.$$
  2. (ii) The random variable $U(\xi,y)$ satisfies the quenched harmonic property:

    $$U(\xi,y)=\mathbb{E}_{\xi}\left[U\left(\theta \xi,y+S_{1}\right)\mathbf{1}_{\left\{\tau_{y} \gt 1\right\}}\right].$$
  3. (iii) The sequence $\big(U(\theta^n\xi,y+S_n)\mathbf{1}_{\{\tau_{y} \gt n\}},n\geq1\big)$ is a martingale under $\mathbb{P}_\xi.$

  4. (iv) The random variable $U(\xi,y)$ is positive and non-decreasing in y, with $U(\xi,y)\geq y$ and $\lim\limits_{y\to\infty}U(\xi,y)/y=1$ .

  5. (v) For any $y_n\geq0$ with $y_n\to \infty$ as $n\to \infty$ ,

    (2.3) \begin{equation} \lim\limits_{n\to\infty}\frac{U(\theta^n\xi,y_n)}{y_n}=1. \end{equation}

Remark 1. For classical random walks, one important tool for analyzing the behavior of this process conditioned to stay non-negative is the well-known Wiener–Hopf factorization; we refer to the standard book of Feller [Reference Feller11]. For any oscillating random walk, the renewal function associated with the ladder height process is harmonic; see (2.5). These techniques essentially rely on the so-called duality principle, which unfortunately fails in our setting because the random walk is time-inhomogeneous given the environment. In [Reference Hong and Liang15], the quenched harmonic function is obtained by strong approximation. To put it simply, by approximating a random walk with Brownian motion, we can demonstrate the existence of the limit $\lim\limits_{n\to\infty}U_{n}(\xi,y)=U(\xi,y)\,:\!=\,-\mathbb{E}_\xi(S_{\tau_y})$ for almost every realization of $\xi$ .

2.3. Random walk conditioned to stay above a line

2.3.1. Quenched probability $\mathbb{P}^{+,(\beta)}_{\xi}\left(x;\mathrm{d}y\right)$ and annealed renewal measure $\mathcal{R}(\mathrm{d}y)$

For any fixed $\beta\geq0$ , we introduce the quenched random walk conditioned to stay above $-\beta$ for almost all $\xi$ , denoted by $\zeta^{(\beta)}_{n}$ , in the sense of Doob’s h-transform. By Proposition 1(iii), for any $n\geq1$ and $B\in\mathcal{B}(\mathbb{R})$ , we can define the law of $\zeta^{(\beta)}_{n}$ by

(2.4) \begin{equation} \mathbb{P}_{\xi,a}(\zeta^{(\beta)}_{0}=a)\,:\!=\,1, \quad \mathbb{P}_{\xi,a}(\zeta^{(\beta)}_{n}\in B)\,:\!=\,\frac{\mathbb{E}_{\xi,a}\left(U(\theta^n \xi, S_n+\beta)\mathbf{1}_{\{\tau_{\beta} \gt n\}}\mathbf{1}_{\left\{S_{n}\in B\right\}}\right)}{U(\xi,a+\beta)}.\end{equation}

The process $\big(\zeta^{(\beta)}_{n},n\geq0\big)$ is called a random walk in time-inhomogeneous random environment conditioned to stay above $-\beta$ . In fact, $\big(\zeta^{(\beta)}_{n},n\geq0\big)$ is a Markov chain with state space $[{-}\beta,\infty)$ and a transition kernel that is given by

$$\mathbb{P}^{+,(\beta)}_{\xi}\left(x;\mathrm{d}y\right)\,:\!=\,\frac{U(\theta\xi,y+\beta)\mathbf{1}_{\left\{y\geq -\beta\right\}}}{U(\xi,x+\beta)}\mathbb{P}_{\xi,x}\left(S_{1}\in \mathrm{d}y\right).$$
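Definition (2.4) also suggests a simple way to approximate the law of $\zeta^{(\beta)}_{n}$ numerically: sample paths of S under $\mathbb{P}_{\xi}$ , keep the surviving ones, and reweight by the harmonic function evaluated at the endpoint. In the sketch below (an illustration only, for the toy environment of the earlier examples), the unknown $U(\theta^{n}\xi,\cdot)$ is replaced by its linear asymptote from Proposition 1(iv) and (v), which is a crude stand-in; a faithful version would plug in a Monte Carlo estimate of U instead.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta, n_paths = 30, 1.0, 200_000
sigma2 = rng.uniform(0.5, 1.5, size=n)                      # one fixed environment

# paths of the associated walk under P_xi (steps N(0, sigma_i^2) in the toy model), started at 0
S = rng.normal(0.0, np.sqrt(sigma2), size=(n_paths, n)).cumsum(axis=1)
alive = np.all(beta + S >= 0.0, axis=1)                     # the event {tau_beta > n}

# U(theta^n xi, y) replaced by its linear asymptote y (a rough approximation)
weights = np.where(alive, beta + S[:, -1], 0.0)
weights /= weights.sum()                                    # self-normalized: U(xi, a+beta) cancels

mean_zeta = np.sum(weights * S[:, -1])                      # approximates E_xi[zeta_n^{(beta)}]
print("surviving fraction:", alive.mean(), "  estimated E_xi[zeta_n^(beta)]:", round(mean_zeta, 3))
```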

On the other hand, $\left(S_{n},n\geq0\right)$ is simply a random walk under the annealed law $\mathbb{P}$ , thanks to the i.i.d. nature of the environment. Recall that given the environment $\xi$ , $X_{n}$ has the law $\mu_{n}$ . Let $\mathrm{E}\left(\mu_{n}\right)$ be the annealed probability measure corresponding to averaging $\mu_{n}$ over $\xi$ , and let $\mu^{\infty}\,:\!=\,\prod_{n=1}^{\infty}\mathrm{E}\left(\mu_{n}\right)$ be the product probability measure; denote by $\mathbb{E}_{\mu^{\infty}}$ the corresponding expectation. Then $\left(S_{n},n\geq0\right)$ is a usual random walk under $\mu^{\infty}$ . When we are considering the annealed random walk (that is, $\left(S_{n},n\geq0\right)$ under $\mathbb{P}_{a}$ ), we shall identify the law $\mathbb{P}_{a}$ with the law $\mu^{\infty}_{a}$ .

In what follows, we state the usual construction of a classical random walk conditioned to stay above a given value, which is indicated in Remark 1. Define the strict descending ladder epochs of the random walk $(S_{n},n\geq0)$ as

$$\gamma_{0}\,:\!=\,0,\quad\quad \gamma_{k+1}\,:\!=\,\inf\left\{n \gt \gamma_{k}:S_{n}\lt S_{\gamma_{k}}\right\},\quad k\geq0.$$

Let $R^{-}$ be the function associated with $(S_{n},n\geq0)$ that is defined by

$$R^{-}(0)\,:\!=\,1, \quad\quad R^{-}(x)\,:\!=\,\sum_{k=0}^{\infty}\mu^{\infty}(S_{\gamma_{k}}\geq -x),\quad x \gt 0.$$

Then $R^{-}(x)$ is a renewal function of the ladder heights $({-}S_{\gamma_{k}})$ . Let $\mathcal{R}^{-}(\mathrm{d}x)$ be the corresponding renewal measure. By the renewal theorem (cf. [Reference Feller11, Chapter XI, Section 1]), in our setting we have

\begin{equation*} \lim_{x\to \infty}\frac{R^{-}(x)}{x}=c_{0}\in(0,\infty).\end{equation*}

The function $R^{-}(x)$ satisfies (cf. [Reference Tanaka30, Lemma 1])

(2.5) \begin{equation} \mu^{\infty}\left[R^{-}(x+X_{1})\mathbf{1}_{\left\{x+X_{1}\geq 0\right\}}\right]=R^{-}(x),\quad \text{for}\ x\geq0.\end{equation}

From (2.5), it follows that $(R^{-}(S_{n}+\beta)\mathbf{1}_{\left\{\underline{S}_{n}\geq -\beta\right\}},n\geq1)$ is a martingale under $\mu^{\infty}$ , where $\underline{S}_{n}\,:\!=\,\min\left\{S_{0},{\cdots},S_{n}\right\}$ . Thus, we can construct the random walk conditioned to stay above $-\beta$ , denoted by $\eta^{(\beta)}_{n}$ ; that is, for any $B\in\mathcal{B}(\mathbb{R})$ ,

\begin{equation*} \mu^{\infty}_{a}(\eta^{(\beta)}_{n}\in B)\,:\!=\,\frac{\mu^{\infty}_{a}[R^{-}(S_{n}+\beta)\mathbf{1}_{\left\{\underline{S}_{n}\geq -\beta\right\}}\mathbf{1}_{\left\{S_{n}\in B\right\}}]}{R^{-}(a+\beta)}.\end{equation*}

Similarly, define the weak ascending ladder epochs of the random walk $(S_{n},n\geq0)$ as

$$\Gamma_{0}\,:\!=\,0, \quad \Gamma_{k+1}\,:\!=\,\inf\left\{n \gt \Gamma_{k}\,:\,S_{n}\geq S_{\Gamma_{k}}\right\},\quad k\geq0.$$

Let R(x) be the renewal function associated with the weak ascending ladder height process $\left(S_{\Gamma_{n}},n\geq1\right)$ , i.e.

$$R(0)\,:\!=\,1, \quad R(x)\,:\!=\,\sum_{n=1}^{\infty}\mu^{\infty}(S_{\Gamma_{n}}\lt x), \quad x \gt 0,$$

and denote by $\mathcal{R}(\mathrm{d}x)$ the corresponding renewal measure. Define $R^{(\beta)}(x)\,:\!=\,R(x+\beta)$ with the corresponding measure $\mathcal{R}^{(\beta)}(\mathrm{d}x)$ . By the renewal theorem again, there exist $c_{1},c_{2} \gt 0$ such that for any non-negative measurable function f,

(2.6) \begin{equation} c_{1}\int_{0}^{\infty}f(x-\beta)\,\mathrm{d}x\leq \int_{-\beta}^{\infty}f(x)\mathcal{R}^{(\beta)}(\mathrm{d}x)\leq c_{2}\int_{0}^{\infty}f(x-\beta)\,\mathrm{d}x.\end{equation}
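The renewal function $R^{-}$ can likewise be estimated by simulation. The sketch below (an illustration under the toy-model assumptions, in which the annealed step is $N(0,\sigma^{2})$ with $\sigma^{2}$ uniform on $[0.5,1.5]$ ) records the strict descending ladder heights of long annealed walks and counts those lying in [0, x]; the ratio $R^{-}(x)/x$ should stabilize, in line with the renewal theorem, and the finite horizon introduces only a small truncation bias.

```python
import numpy as np

rng = np.random.default_rng(5)

def ladder_count(x, horizon=10_000):
    """Number of strict descending ladder heights -S_{gamma_k} in [0, x] for one annealed walk."""
    sigma2 = rng.uniform(0.5, 1.5, size=horizon)
    S = np.concatenate(([0.0], rng.normal(0.0, np.sqrt(sigma2)).cumsum()))
    running_min = np.minimum.accumulate(S)
    new_min = running_min[1:] < running_min[:-1]            # strict descending ladder epochs
    heights = -S[1:][new_min]                               # the ladder heights -S_{gamma_k}, k >= 1
    return 1 + np.sum(heights <= x)                         # "+1" accounts for gamma_0 = 0

for x in (1.0, 2.0, 4.0, 8.0):
    R = np.mean([ladder_count(x) for _ in range(2_000)])
    print(f"R^-({x}) ~ {R:.2f},   R^-({x})/{x} ~ {R / x:.2f}")   # the ratio should stabilize
```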

We give the following lemma, which allows us to express the expectation, under the annealed probability $\mathbb{P}$ , of series of the form $\sum_{n=1}^{\infty}\frac{G(\theta^n\xi,\zeta^{(\beta)}_{n})}{U(\theta^n\xi,\zeta^{(\beta)}_{n}+\beta)}$ (where $G(\theta^n\xi,x)$ is a non-negative measurable function depending on the n-step shifted environment $\theta^n\xi$ ) as an integral with respect to $\mathcal{R}^{(\beta)}(\mathrm{d}x)$ .

Lemma 2. Let $\zeta^{(\beta)}_{n}$ be defined as in (2.4), and for almost all $\xi$ , let $G(\xi,{\cdot}):[{-}\beta,\infty)\to[0,\infty)$ be a measurable function, with $G\left(x\right)\,:\!=\,\mathrm{E}[G(\xi,x)] \lt \infty$ for all $x\geq -\beta$ . Then

\begin{equation*} \mathbb{E}\left[\sum_{n=1}^{\infty}\frac{G\big(\theta^n\xi,\zeta^{(\beta)}_{n}\big)}{U(\theta^n\xi,\zeta^{(\beta)}_{n}+\beta)}U(\xi,\beta)\right]=\int_{-\beta}^{\infty}G(x)\mathcal{R}^{(\beta)}(\mathrm{d}x). \end{equation*}

Proof. By the duality principle for classical random walks, following the arguments of Sections 2 and 6 of [Reference Biggins6], for any non-negative measurable function f we get

(2.7) \begin{equation} \sum_{n=1}^{\infty}\mathbb{E}_{\mu^{\infty}}\left(f\left(S_{n}\right)\mathbf{1}_{\left\{\underline{S}_{n}\geq -\beta\right\}}\right)=\int_{-\beta}^{\infty}f(x)\mathcal{R}^{(\beta)}(\mathrm{d}x). \end{equation}

Then, by the definition of $\zeta^{(\beta)}_{n}$ , we obtain

\begin{equation*} \begin{aligned} \mathbb{E}\left[\sum_{n=1}^{\infty}\frac{G\big(\theta^n\xi,\zeta^{(\beta)}_{n}\big)}{U(\theta^n\xi,\zeta^{(\beta)}_{n}+\beta)}U(\xi,\beta)\right]=&\mathrm{E}\left[\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left(\frac{G\big(\theta^n\xi,\zeta^{(\beta)}_{n}\big)}{U\big(\theta^n\xi,\zeta^{(\beta)}_{n}+\beta\big)}\right)U(\xi,\beta)\right]\\ =&\mathrm{E}\left[\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\!\left(\!\frac{U(\theta^n\xi,S_n+\beta)\mathbf{1}_{\{\tau_{\beta} \gt n\}}G(\theta^n\xi,S_{n})}{U(\theta^n\xi,S_{n}+\beta)U(\xi,\beta)}\!\right)\!U(\xi,\beta)\right]\\ =&\mathrm{E}\left[\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left(\mathbf{1}_{\{\tau_{\beta} \gt n\}}G\left(\theta^n\xi,S_{n}\right)\right)\right]\\ =&\mathrm{E}\left[\sum_{n=1}^{\infty}\int_{-\beta}^{\infty}G\left(\theta^n\xi,x\right)\mathbb{P}_{\xi}\left(S_{n}\in \mathrm{d}x,\tau_{\beta} \gt n\right)\right]. \end{aligned} \end{equation*}

Because the random environment is i.i.d., for each fixed x the sequence $G(\theta^n\xi,x)$ is stationary and ergodic (see e.g. [Reference Kallenberg18, Lemmas 10.1 and 10.5]) with $\mathrm{E}\left[G(\theta^n\xi,x)\right]=G(x)$ . Moreover, $G(\theta^n\xi,x)$ depends only on $(\xi_{n+1},\xi_{n+2},\cdots)$ , whereas $\mathbb{P}_{\xi}\left(S_{n}\in \mathrm{d}x,\tau_{\beta} \gt n\right)$ depends only on $(\xi_{1},\cdots,\xi_{n})$ , so the two factors are independent. We deduce

\begin{equation*} \begin{aligned} \mathrm{E}\left[\sum_{n=1}^{\infty}\int_{-\beta}^{\infty}G\left(\theta^n\xi,x\right)\mathbb{P}_{\xi}\left(S_{n}\in\mathrm{d}x,\tau_{\beta} \gt n\right)\right]=&\sum_{n=1}^{\infty}\int_{-\beta}^{\infty}\mathrm{E}\left[G\left(\theta^n\xi,x\right)\mathbb{P}_{\xi}\left(S_{n}\in \mathrm{d}x,\tau_{\beta} \gt n\right)\right]\\ =&\int_{-\beta}^{\infty}G(x)\sum_{n=1}^{\infty}\mathbb{P}\left(S_{n}\in \mathrm{d}x,\tau_{\beta} \gt n\right)\\ =&\int_{-\beta}^{\infty}G(x)\sum_{n=1}^{\infty}\mu^{\infty}\left(S_{n}\in \mathrm{d}x,\underline{S}_{n}\geq -\beta\right)\\ =&\int_{-\beta}^{\infty}G(x)\mathcal{R}^{(\beta)}(\mathrm{d}x), \end{aligned} \end{equation*}

where the last equality follows from (2.7). This yields the lemma.

2.3.2. Quenched Tanaka’s decomposition

Tanaka’s decomposition is a fundamental tool for investigating the behavior of the random walk conditioned to stay non-negative; see [Reference Afanasyev, Geiger, Kersting and Vatutin1, Reference Biggins6, Reference Tanaka30], for example. With the above preparations in hand, we can now specify a quenched version of Tanaka’s decomposition for the RWRE conditioned to stay non-negative. We will then proceed to discuss the relationship between two kinds of probability measures. For simplicity, we will restrict our analysis to the case where $\beta=0$ and write $\zeta_{n}\,:\!=\,\zeta^{(0)}_{n}$ . Let $\nu$ denote the time at which the first prospective minimal value of the process $(\zeta_{n},n\geqslant0)$ occurs, i.e.

(2.8) \begin{equation} \nu\,:\!=\,\inf\left\{m\geq1\,:\,\zeta_{m+n}\geq \zeta_{m},\ \text{for all}\,\ n\geq0\right\}.\end{equation}

Write $\zeta^{\nu}_{k}\,:\!=\,\zeta_{\nu+k}-\zeta_{\nu}$ , $k\geq1$ .

Proposition 2. (Quenched Tanaka’s decomposition.) For almost all $\xi$ , we have the following:

  1. (i) $\zeta_{n}\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ ;

  2. (ii) $\left(\nu,\zeta_{1},\cdots,\zeta_{\nu}\right)$ and $\left(\zeta^{\nu}_{1},\zeta^{\nu}_{2},\cdots\right)$ are independent with respect to $\mathbb{P}_{\xi}$ ;

  3. (iii) $U(\xi,0)\mathbb{P}_{\xi}\left(\nu=k,\zeta_{\nu}\in \mathrm{d}x\right)=U\left(\theta^k\xi,0\right)\mathbb{P}_{\xi}\left(S_{k} \lt S_{k-1},\cdots,S_{k} \lt S_{1},S_{k}\in \mathrm{d}x\right)$ for all $k\geq1$ .

Proof. (i) We claim that $U(y)\,:\!=\,\mathrm{E}\left[U(\xi,y)\right] \lt \infty$ for any $y\geq0$ . In fact, by the definition of $U(\xi,y)$ , we have

\begin{equation*} \begin{aligned} U(0)=\mathrm{E}\left[\mathbb{E}_{\xi}\left({-}S_{\tau_{0}}\right)\right]=&\mathrm{E}\left[\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left({-}S_{n},\tau_{0}=n\right)\right]\\ =&\mathrm{E}\left[\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left({-}S_{n},S_{1}\geq 0,\cdots,S_{n-1}\geq 0,S_{n} \lt 0\right)\right]\\ =&\sum_{n=1}^{\infty}\mathbb{E}_{\mu^{\infty}}\left({-}S_{n},S_{1}\geq 0,\cdots,S_{n-1}\geq 0,S_{n} \lt 0\right)\\ =&\sum_{n=1}^{\infty}\mathbb{E}_{\mu^{\infty}}\left({-}S_{n},\gamma_{1}=n\right)\\ =&-\mathbb{E}_{\mu^{\infty}}\left(S_{\gamma_{1}}\right) \lt \infty, \end{aligned} \end{equation*}

where the validity of the finiteness of $\mathbb{E}_{\mu^{\infty}}\left(S_{\gamma_{1}}\right)$ is proved by Theorem 1 in Chapter XVIII.5 of [Reference Feller11]. By Proposition 1(ii), U(y) satisfies the annealed harmonic property:

(2.9) \begin{equation} \mathbb{E}\left[U\left(y+S_{1}\right)\mathbf{1}_{\left\{y+S_{1}\geq 0\right\}}\right]=U(y),\quad y\geq0. \end{equation}

Since $\mathbb{P}\left(S_{1} \gt y_{0}\right) \gt 0$ for some $y_{0} \gt 0$ , by (2.9) with $y=0$ we have $U(y_{1}) \lt \infty$ for some $y_{1} \gt y_{0}$ . Again applying (2.9) with $y=y_{1}$ , we have $U(y_{2}) \lt \infty$ for some $y_{2} \gt y_{1}+y_{0}$ . Repeating this argument, we deduce that there exists a sequence $\left(y_{n},n\geq1\right)$ such that $U\left(y_{n}\right) \lt \infty$ for all n. By the monotonicity of U(y), we conclude that $U(y) \lt \infty$ for all $y\geq0$ .

Since $U(\xi,y)$ is positive and non-decreasing in y, for any $y \gt 0$ , by the definition of $\zeta_{n}$ , we have

\begin{equation*} \begin{aligned} \mathrm{E}\left[U\left(\xi,0\right)\sum_{n=1}^{\infty}\mathbb{P}_{\xi}\left(\zeta_{n} \lt y\right)\right]=&\mathrm{E}\left[\sum_{n=1}^{\infty}\frac{\mathbb{E}_{\xi}\left(U(\theta^n\xi,S_n)\mathbf{1}_{\{\tau_{0} \gt n\}}\mathbf{1}_{\left\{S_{n} \lt y\right\}}\right)}{U(\xi,0)}U\left(\xi,0\right)\right]\\ \leq&\sum_{n=1}^{\infty}\mathrm{E}\left[U(\theta^n\xi,y)\mathbb{E}_{\xi}\left(\mathbf{1}_{\{\tau_{0} \gt n\}}\mathbf{1}_{\left\{S_{n} \lt y\right\}}\right)\right]. \end{aligned} \end{equation*}

Following the same argument as in the proof of Lemma 2, we have

\begin{equation*} \begin{aligned} \sum_{n=1}^{\infty}\mathrm{E}\left[U(\theta^n\xi,y)\mathbb{E}_{\xi}\left(\mathbf{1}_{\{\tau_{0} \gt n\}}\mathbf{1}_{\left\{S_{n} \lt y\right\}}\right)\right]=&\sum_{n=1}^{\infty}U(y)\mathbb{P}\left(S_{n} \lt y,\tau_{0} \gt n\right)\\ =&U(y)\sum_{n=1}^{\infty}\mu^{\infty}\left(S_{n} \lt y,\underline{S}_{n}\geq0\right)\\ =&U(y)\int_{0}^{y}\mathcal{R}^{(0)}(\mathrm{d}x) \lt \infty. \end{aligned} \end{equation*}

Thus, for almost all $\xi$ , $\sum_{n=1}^{\infty}\mathbb{P}_{\xi}\left(\zeta_{n} \lt y\right) \lt \infty$ . The Borel–Cantelli lemma yields that, for almost all $\xi$ , $\zeta_{n}\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ .

(ii) Let

$$H\left(\xi,x,z\right)\,:\!=\,\frac{U\left(\xi,x-z\right)}{U\left(\xi,x\right)}, \quad x\geq z\geq0, \quad\quad H\left(\xi,x,z\right)\,:\!=\,0, \quad x \lt z,$$

and

\begin{align*} \mathbb{P}^+_{\theta^{j-1}\xi}\left(y_{j-1};\mathrm{d}y_{j}\right)\,:\!=\,&\mathbb{P}_{\xi}\left.\left(\zeta_{j}\in \mathrm{d}y_{j}\,\right| \zeta_{j-1}=y_{j-1}\right)\\ =&\frac{U(\theta^j\xi,y_{j})\mathbf{1}_{\left\{y_{j}\geq 0\right\}}}{U(\theta^{j-1}\xi,y_{j-1})}\mathbb{P}_{\theta^{j-1}\xi,y_{j-1}}\left(S_{1}\in \mathrm{d}y_{j}\right), \quad j\geq 1 .\end{align*}

Then, by Proposition 1(ii), $H\left(\xi,{\cdot},z\right)$ is quenched harmonic with respect to the transition kernel $\mathbb{P}^+_{\xi}$ in the following sense:

(2.10) \begin{equation} \int H\left(\theta\xi,y,z\right)\mathbb{P}^+_{\xi}\left(x;\mathrm{d}y\right)=H\left(\xi,x,z\right), \quad x\geq z\geq0. \end{equation}

Define $\hat{\tau}_{z}\,:\!=\,\inf\left\{n \gt 0:\zeta_{n} \lt z\right\}$ . Since $\hat{\tau}_{z}$ is a stopping time, it follows from (2.10) that the process $\left(H\left(\theta^{n\wedge\hat{\tau}_{z}}\xi,\zeta_{n\wedge\hat{\tau}_{z}},z\right),n\geq 0\right)$ is a martingale under $\mathbb{P}_{\xi}$ . Thus, for all $n\geq 0$ , we have

$$\mathbb{E}_{\xi,x}\left[H\left(\theta^{n\wedge\hat{\tau}_{z}}\xi,\zeta_{n\wedge\hat{\tau}_{z}},z\right) \right]=H\left(\xi,x,z\right).$$

Note that for almost all $\xi$ , $0\leq H\left(\xi,x,z\right)\leq1$ , $H\left(\xi,x,z\right)\,:\!=\,0$ for $x \lt z$ , and $H\left(\theta^n\xi,\zeta_{n},z\right)\to1$ as $n\to\infty$ by (2.3) and Part (i). It follows that

$$\lim_{n\to\infty}H\left(\theta^{n\wedge\hat{\tau}_{z}}\xi,\zeta_{n\wedge\hat{\tau}_{z}},z\right)=\lim_{n\to\infty}H\left(\theta^{n}\xi,\zeta_{n},z\right)\mathbf{1}_{\left\{\hat{\tau}_{z} \gt n\right\}}=\mathbf{1}_{\left\{\zeta_{n}\geq z\ \text{for all}\ n \gt 0\right\}}.$$

By the dominated convergence theorem, we get

$$\mathbb{P}_{\xi,x}\left(\zeta_{n}\geq z\,\ \text{for all}\,\ n \gt 0\right)=\mathbb{E}_{\xi,x}\left[\lim_{n\to\infty}H\left(\theta^{n\wedge\hat{\tau}_{z}}\xi,\zeta_{n\wedge\hat{\tau}_{z}},z\right) \right]=H\left(\xi,x,z\right).$$

This tells us that $H\left(\xi,x,z\right)$ is the probability that, starting at x, the process $\left(\zeta_{n},n\geq0\right)$ never hits $\left({-}\infty,z\right)$ .

For any $x_{1},\cdots,x_{k},y_{1},\cdots,y_{m}\geq 0$ with $x_{0}=y_{0}=0$ , we have

\begin{equation*} \begin{aligned} &\prod_{j=1}^{m}\mathbb{P}^+_{\theta^{k+j-1}\xi}\left(y_{j-1}+x_{k};\mathrm{d}y_{j}+x_{k}\right)H\left(\theta^{k+m}\xi,y_{m}+x_{k},x_{k}\right)\\ =&\prod_{j=1}^{m}\mathbb{P}^+_{\theta^{k+j-1}\xi}\left(y_{j-1}+x_{k};\mathrm{d}y_{j}+x_{k}\right)\frac{U\left(\theta^{k+m}\xi,y_{m}\right)}{U\left(\theta^{k+m}\xi,y_{m}+x_{k}\right)}\\ =&\prod_{j=1}^{m}\mathbb{P}_{\theta^{k+j-1}\xi}\left(y_{j-1}+x_{k};\mathrm{d}y_{j}+x_{k}\right)\frac{U\left(\theta^{k+j}\xi,y_{j}+x_{k}\right)}{U\left(\theta^{k+j-1}\xi,y_{j-1}+x_{k}\right)}\frac{U\left(\theta^{k+m}\xi,y_{m}\right)}{U\left(\theta^{k+m}\xi,y_{m}+x_{k}\right)}\\ =&\prod_{j=1}^{m}\mathbb{P}_{\theta^{k+j-1}\xi}\left(y_{j-1};\mathrm{d}y_{j}\right)\frac{U\left(\theta^{k+m}\xi,y_{m}\right)}{U\left(\theta^{k}\xi,x_{k}\right)}\\ =&\prod_{j=1}^{m}\mathbb{P}_{\theta^{k+j-1}\xi}\left(y_{j-1};\mathrm{d}y_{j}\right)\frac{U\left(\theta^{k+j}\xi,y_{j}\right)}{U\left(\theta^{k+j-1}\xi,y_{j-1}\right)}\frac{U\left(\theta^{k}\xi,0\right)}{U\left(\theta^{k}\xi,x_{k}\right)}\\ =&\prod_{j=1}^{m}\mathbb{P}^+_{\theta^{k+j-1}\xi}\left(y_{j-1};\mathrm{d}y_{j}\right)H\left(\theta^{k}\xi,x_{k},x_{k}\right). \end{aligned} \end{equation*}

As a result,

\begin{align*} & \mathbb{P}_{\xi}\left(\nu=k,\zeta_{1}\in \mathrm{d}x_{1},\cdots,\zeta_{k}\in \mathrm{d}x_{k},\zeta^{\nu}_{1}\in \mathrm{d}y_{1},\cdots,\zeta^{\nu}_{m}\in \mathrm{d}y_{m}\right)\\ = & \mathbf{1}_{\left\{x_{1},\cdots,x_{k-1} \gt x_{k}\right\}}\mathbf{1}_{\left\{y_{1},\cdots,y_{m}\geq0\right\}}\prod_{i=1}^{k}\mathbb{P}^+_{\theta^i\xi}\left(x_{i-1};\mathrm{d}x_{i}\right)\\ & \times\prod_{j=1}^{m}\mathbb{P}^+_{\theta^{k+j-1}\xi}\left(y_{j-1}+x_{k};\mathrm{d}y_{j}+x_{k}\right)H\left(\theta^{k+m}\xi,y_{m}+x_{k},x_{k}\right)\\ =& \mathbf{1}_{\left\{x_{1},\cdots,x_{k-1} \gt x_{k}\right\}}\prod_{i=1}^{k}\mathbb{P}^+_{\theta^i\xi}\left(x_{i-1};\mathrm{d}x_{i}\right)H\left(\theta^{k}\xi,x_{k},x_{k}\right)\prod_{j=1}^{m}\mathbb{P}^+_{\theta^{k+j-1}\xi}\left(y_{j-1};\mathrm{d}y_{j}\right)\\ =& \mathbb{P}_{\xi}\left(\nu=k,\zeta_{1}\in \mathrm{d}x_{1},\cdots,\zeta_{k}\in \mathrm{d}x_{k}\right)\mathbb{P}_{\xi}\left(\zeta^{\nu}_{1}\in \mathrm{d}y_{1},\cdots,\zeta^{\nu}_{m}\in \mathrm{d}y_{m}\right), \end{align*}

which proves that $\left(\nu,\zeta_{1},\cdots,\zeta_{\nu}\right)$ and $\left(\zeta^{\nu}_{1},\zeta^{\nu}_{2},\cdots\right)$ are independent with respect to $\mathbb{P}_{\xi}$ .

(iii) For all $k\geq1$ and $x\geq 0$ , we have

\begin{equation*} \begin{aligned} \mathbb{P}_{\xi}\left(\nu=k,\zeta_{\nu}\in \mathrm{d}x\right)=&\mathbb{P}_{\xi}\left(\zeta_{k} \lt \zeta_{k-1},\cdots,\zeta_{k} \lt \zeta_{1},\zeta_{k}\in \mathrm{d}x\right)H\left(\theta^{k}\xi,x,x\right)\\ =&\mathbb{P}_{\xi}\left(S_{k} \lt S_{k-1},\cdots,S_{k} \lt S_{1},S_{k}\in \mathrm{d}x\right)\frac{U\left(\theta^k\xi,x\right)}{U(\xi,0)}\frac{U\left(\theta^k\xi,0\right)}{U(\theta^k\xi,x)}\\ =&\mathbb{P}_{\xi}\left(S_{k} \lt S_{k-1},\cdots,S_{k}\lt S_{1},S_{k}\in \mathrm{d}x\right)\frac{U\left(\theta^k\xi,0\right)}{U(\xi,0)}, \end{aligned} \end{equation*}

as desired.

Define $\tilde{\mathbb{P}}({\cdot})\,:\!=\,\int_{\Omega}\frac{U(\xi,0)}{U(0)}\mathbb{P}_{\xi}({\cdot})\,\mathrm{d}\mathrm{P}$ and denote by $\tilde{\mathbb{E}}$ the corresponding expectation. We show that the excursion $\left(\zeta_{1},\cdots,\zeta_{\nu}\right)$ under the annealed probability $\tilde{\mathbb{P}}$ is distributed as $\left(S_{\Gamma_1}-S_{\Gamma_1-1},\cdots,S_{\Gamma_1}\right)$ under $\mu^{\infty}$ .

Proposition 3. (Annealed excursion distribution.) If $\nu$ is the time of the first prospective minimal value of the process $(\zeta_{n},n\geqslant0)$ defined as in (2.8), then we have the following:

  1. (i) $\tilde{\mathbb{P}}\left(\zeta_{\nu}\in \mathrm{d}x\right)=\mu^{\infty}\left(S_{\Gamma_1}\in \mathrm{d}x\right)$ ;

  2. (ii) $\tilde{\mathbb{E}}\left[f\left(\nu,\zeta_{1},\cdots,\zeta_{\nu}\right)\right]=\mathbb{E}_{\mu^{\infty}}\left[f\left(\Gamma_1,S_{\Gamma_1}-S_{\Gamma_1-1},\cdots,S_{\Gamma_1}\right)\right]$ for any measurable function f.

Proof. (i) Proposition 2(iii) yields

\begin{align*} \mathrm{E}\left[U(\xi,0)\mathbb{P}_{\xi}\left(\nu=k,\zeta_{\nu}\in \mathrm{d}x\right)\right] = & \mathrm{E}\left[\mathbb{P}_{\xi}\left(S_{k} \lt S_{k-1},\cdots,S_{k} \lt S_{1},S_{k}\in \mathrm{d}x\right)U\left(\theta^k\xi,0\right)\right]\\ =&U\left(0\right)\mu^{\infty}\left(S_{k}-S_{k-1} \lt 0,\cdots,S_{k}-S_{1} \lt 0,S_{k}\in \mathrm{d}x\right)\\ =&U\left(0\right)\mu^{\infty}\left(S_{1} \lt 0,\cdots,S_{k-1} \lt 0,S_{k}\in \mathrm{d}x\right)\\ =&U\left(0\right)\mu^{\infty}\left(\Gamma_1=k,S_{\Gamma_1}\in \mathrm{d}x\right). \end{align*}

Dividing by U(0) and summing over k, we have $\tilde{\mathbb{P}}\left(\zeta_{\nu}\in \mathrm{d}x\right)=\mu^{\infty}\left(S_{\Gamma_1}\in \mathrm{d}x\right)$ .

(ii) Similarly to the proof of Proposition 2(iii), we get

\begin{equation*} \begin{aligned} &\tilde{\mathbb{E}}\left[f\left(\nu,\zeta_{1},\cdots,\zeta_{\nu}\right)\right]\\ =&\,\mathrm{E}\left[\int f\left(k,x_{1},\cdots,x_{k}\right)\frac{U(\xi,0)}{U(0)}\mathbb{P}_{\xi}\left(\nu=k,\zeta_{1}\in \mathrm{d}x_{1},\cdots,\zeta_{k}\in \mathrm{d}x_{k}\right)\right]\\ =&\,\mathrm{E}\left[\int f\left(k,x_{1},\cdots,x_{k}\right)\frac{U\left(\theta^k\xi,0\right)}{U(0)}\mathbb{P}_{\xi}\left(S_{k} \lt S_{k-1},\cdots,S_{k} \lt S_{1}, S_{1}\in \mathrm{d}x_{1},\cdots,S_{k}\in \mathrm{d}x_{k}\right)\right]\\ =&\int f\left(k,x_{1},\cdots,x_{k}\right)\mu^{\infty}\left(S_{k} \lt S_{k-1},\cdots,S_{k} \lt S_{1}, S_{1}\in \mathrm{d}x_{1},\cdots,S_{k}\in \mathrm{d}x_{k}\right)\\ =&\int f\left(k,x_{1},\cdots,x_{k}\right)\mu^{\infty}\left(\tilde{S}_{1} \lt 0,\cdots,\tilde{S}_{k-1} \lt 0, \tilde{S}_{k}-\tilde{S}_{k-1}\in \mathrm{d}x_{1},\cdots,\tilde{S}_{k}\in \mathrm{d}x_{k}\right)\\ =&\, \mathbb{E}_{\mu^{\infty}}\left[f\left(\Gamma_1,S_{\Gamma_1}-S_{\Gamma_1-1},\cdots,S_{\Gamma_1}\right)\right], \end{aligned} \end{equation*}

where $\tilde{S}_{j}\,:\!=\,S_{k}-S_{k-j}$ , $j\leq k$ , and the last equality follows from the duality property for the random walk under $\mu^{\infty}$ .
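The reconstruction suggested by Propositions 2 and 3, and used in the proof of Proposition 4 below, is easy to visualize by simulation: sample i.i.d. copies of the excursion $\left(S_{\Gamma_1}-S_{\Gamma_1-1},\cdots,S_{\Gamma_1}\right)$ under $\mu^{\infty}$ and glue them together. The following sketch does this for the annealed walk of the toy model used in the earlier illustrations (an assumption of the example); the glued path stays non-negative and drifts to infinity, in line with Proposition 2(i).

```python
import numpy as np

rng = np.random.default_rng(6)

def excursion(block=4096):
    """One excursion (S_{Gamma_1}-S_{Gamma_1-1}, ..., S_{Gamma_1}) under mu^infty,
    simulated in blocks for speed (annealed steps: N(0, sigma^2), sigma^2 ~ U(0.5, 1.5))."""
    S = np.array([0.0])
    while True:
        s2 = rng.uniform(0.5, 1.5, size=block)
        S = np.concatenate([S, S[-1] + rng.normal(0.0, np.sqrt(s2)).cumsum()])
        hit = np.nonzero(S[1:] >= 0.0)[0]
        if hit.size:                                        # Gamma_1 = hit[0] + 1 has been reached
            S = S[: hit[0] + 2]                             # keep S_0, ..., S_{Gamma_1}
            return S[-1] - S[::-1][1:]                      # (S_G - S_{G-1}, ..., S_G - S_0)

def glued_walk(n_excursions):
    """Rebuild (zeta_n) by concatenating i.i.d. excursions, as in Tanaka's decomposition."""
    zeta, shift = [], 0.0
    for _ in range(n_excursions):
        omega = excursion()
        zeta.extend(shift + omega)
        shift += omega[-1]                                  # add omega(nu) = S_{Gamma_1} of this excursion
    return np.array(zeta)

zeta = glued_walk(500)
print("length:", zeta.size, " minimum:", round(zeta.min(), 3), " last value:", round(zeta[-1], 2))
```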

2.3.3. Application of Tanaka’s decomposition

The following proposition gives an integral criterion for the almost sure divergence of the infinite series $\sum_{n=1}^{\infty}U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)$ .

Proposition 4. Let $\zeta^{(\beta)}_{n}$ be defined as in (2.4), and let $F:[{-}\beta,\infty)\to[0,\infty)$ be a non-increasing measurable function. Then

\begin{equation*} \begin{aligned} \int_{-\beta}^{\infty}F(x)(x+\beta)\, \mathrm{d}x=\infty \quad &\Longleftrightarrow \quad \sum_{n=1}^{\infty}\frac{U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)\big(\zeta^{(\beta)}_{n}+\beta\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}=\infty,\quad \mathbb{P}\text{-a.s.}\\ \quad &\Longleftrightarrow \quad \sum_{n=1}^{\infty}U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)=\infty,\quad \mathbb{P}\text{-a.s.} \end{aligned} \end{equation*}

Proof. For the second equivalence, note that for almost all $\xi$ , $\zeta^{(\beta)}_{n}+\beta\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ (by Proposition 2(i)), and by (2.3), we have $U\big(\theta^n\xi,\zeta^{(\beta)}_{n}+\beta\big)\sim\zeta^{(\beta)}_{n}+\beta$ as $n\to\infty$ ; hence,

$$\sum_{n=1}^{\infty}\frac{U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)\big(\zeta^{(\beta)}_{n}+\beta\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}=\infty\quad \Longleftrightarrow\quad \sum_{n=1}^{\infty}U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)=\infty, \quad \mathbb{P}\text{-a.s.}$$

By Lemma 2, we have

$$\mathbb{E}\left[\sum_{n=1}^{\infty}\frac{U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)\big(\zeta^{(\beta)}_{n}+\beta\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\right]=\int_{-\beta}^{\infty}F(x)(x+\beta)\mathcal{R}^{(\beta)}(\mathrm{d}x).$$

It follows from (2.6) that

$$\sum_{n=1}^{\infty}\frac{U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)\big(\zeta^{(\beta)}_{n}+\beta\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}=\infty,\quad \mathbb{P}\text{-a.s.} \quad \Longrightarrow \quad \int_{-\beta}^{\infty}F(x)(x+\beta)\, \mathrm{d}x=\infty.$$

Thus we need only to prove that

$$\int_{-\beta}^{\infty}F(x)(x+\beta)\, \mathrm{d}x=\infty \quad \Longrightarrow \quad \sum_{n=1}^{\infty}U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)=\infty,\quad \mathbb{P}\text{-a.s.}$$

For simplicity, we only consider $\beta=0$ and write $\zeta_{n}\,:\!=\,\zeta^{(0)}_{n}$ , since the case of $\beta \gt 0$ is similar to this case. Note that

$$\sum_{n=1}^{\infty}U\left(\xi,0\right)F\left(\zeta_{n}\right)=\infty,\quad \mathbb{P}\text{-a.s.}\quad \Longleftrightarrow\quad \sum_{n=1}^{\infty}F\left(\zeta_{n}\right)=\infty,\quad \tilde{\mathbb{P}}\text{-a.s.}$$

Therefore it remains to show that

(2.11) \begin{equation} \int_{0}^{\infty}F(x)x\, \mathrm{d}x=\infty \quad \Longrightarrow\quad \sum_{n=1}^{\infty}F\left(\zeta_{n}\right)=\infty,\quad \tilde{\mathbb{P}}\text{-a.s.} \end{equation}

To prove (2.11), it suffices to check that

$$\tilde{\mathbb{P}}\left(\sum_{n=1}^{\infty}F\left(\zeta_n\right)=\infty\right) \lt 1 \quad \Longrightarrow\quad \int_0^{\infty} F(x)x \,\mathrm{d}x \lt \infty.$$

We assume that $\tilde{\mathbb{P}}\left(\sum_{n=1}^{\infty}F\left(\zeta_n\right)=\infty\right) \lt 1$ , that is, $\tilde{\mathbb{P}}\left(\sum_{n=1}^{\infty}F\left(\zeta_n\right) \lt \infty\right) \gt 0$ .

We first use Tanaka’s decomposition (Propositions 2 and 3) to reconstruct the process $\left(\zeta_{n},n\geq0\right)$ . Recall that

$$\nu\,:\!=\,\inf\left\{m\geq1:\zeta_{m+n}\geq \zeta_{m}, \,\text{for all}\,\ n\geq0\right\}.$$

We have an excursion $\left(\zeta_j, 0\leq j \leq \nu\right)$ , which is denoted by $\omega=(\omega(j), 0\leq j \leq \nu)$ . Let $\left\{\omega_k=\left(\omega_k(j), 0 \leq j \leq \nu_k\right), k \geq 1\right\}$ be a sequence of independent copies of $\omega$ under $\tilde{\mathbb{P}}$ . Define

$$V_0\,:\!=\,0,\quad\quad V_k\,:\!=\,\nu_1+\cdots+\nu_k,\quad \text{for all}\,\ k \geq 1.$$

The process

$$\zeta_0=0,\quad\quad \zeta_n=\sum_{i=1}^{k}\omega_i\left(\nu_i\right)+\omega_{k+1}\left(n-V_{k}\right), \quad \text{for}\ V_k \lt n \leq V_{k+1},$$

is what we need. Then,

\begin{equation*} \begin{aligned} \sum_{n=1}^{\infty}F\left(\zeta_{n}\right)&=\sum_{k=1}^{\infty}\sum_{n=V_{k-1}+1}^{V_k}F\left(\sum_{i=1}^{k-1}\omega_i\left(\nu_i\right)+\omega_{k}\left(n-V_{k-1}\right)\right)\\ &=\sum_{k=1}^{\infty}\sum_{j=1}^{\nu_k}F\left(\sum_{i=1}^{k}\omega_i\left(\nu_i\right)-\left(\omega_{k}\left(\nu_k\right)-\omega_{k}\left(j\right)\right)\right). \end{aligned} \end{equation*}

Hence, by hypothesis,

$$\tilde{\mathbb{P}}\left(\sum_{n=1}^{\infty}F\left(\zeta_{n}\right) \lt \infty\right)=\tilde{\mathbb{P}}\left(\sum_{k=1}^{\infty}\sum_{j=1}^{\nu_k}F\left(\sum_{i=1}^{k}\omega_i\left(\nu_i\right)-\left(\omega_{k}\left(\nu_k\right)-\omega_{k}\left(j\right)\right)\right) \lt \infty\right) \gt 0.$$

On the other hand, by Proposition 3(i), it follows from the strong law of large numbers that

$$\lim _{k \rightarrow \infty} \frac{\sum_{i=1}^{k}\omega_i\left(\nu_i\right)}{k}=C,\quad \tilde{\mathbb{P}}\text{-a.s.,}$$

where $C\,:\!=\,\mathbb{E}_{\mu^\infty}\left(S_{\Gamma_1}\right) \lt \infty$ is due to [Reference Feller11, Chapter XVIII, Section 5, Theorem 1]. Let $\epsilon \gt 0$ and $A\,:\!=\,(C+\epsilon)\vee 1$ ; then, for all sufficiently large k, $\sum_{i=1}^{k}\omega_i\left(\nu_i\right)\leq Ak$ . Since F is non-increasing, we obtain

\begin{equation*} \begin{aligned} &\tilde{\mathbb{P}}\left(\sum_{k=1}^{\infty}\sum_{j=1}^{\nu_k}F\left(Ak-\left(\omega_{k}\left(\nu_k\right)-\omega_{k}\left(j\right)\right)\right) \lt \infty\right)\\ \geq\ &\tilde{\mathbb{P}}\left(\sum_{k=1}^{\infty}\sum_{j=1}^{\nu_k}F\left(\sum_{i=1}^{k}\omega_i\left(\nu_i\right)-\left(\omega_{k}\left(\nu_k\right)-\omega_{k}\left(j\right)\right)\right) \lt \infty\right) \gt 0. \end{aligned} \end{equation*}

Let

$$\chi_k(\nu,\omega,F)\,:\!=\,\sum_{j=1}^{\nu_k} F\left(Ak-\left(\omega_k(\nu_{k})-\omega_k(j)\right)\right);$$

then $\tilde{\mathbb{P}}\left(\sum_{k=1}^{\infty}\chi_k(\nu,\omega,F) \lt \infty\right) \gt 0$ . Note that the independence of the sequence $\left\{\omega_k, k \geq 1\right\}$ yields the independence of the sequence $\left\{\chi_k\left(\nu,\omega,F\right), k \geq 1\right\}$ . By Kolmogorov’s 0–1 law, it follows that

(2.12) \begin{equation} \tilde{\mathbb{P}}\left(\sum_{k=1}^{\infty}\chi_k\left(\nu,\omega,F\right) \lt \infty\right)=1. \end{equation}

From now on, we proceed in the same way as [Reference Chen9]. Let $E_M\,:\!=\,\left\{\sum_{k=1}^{\infty}\chi_k\left(\nu,\omega,F\right) \lt M\right\}$ for any $M \gt 0$ . Either $\tilde{\mathbb{P}}\left(E_{M_0}\right)=1$ for some $M_0 \lt \infty$ , or $\tilde{\mathbb{P}}\left(E_M\right) \lt 1$ for all $M \in(0, \infty)$ . In the first case, that is, if there exists some $M_0 \lt \infty$ such that $\tilde{\mathbb{P}}\left(E_{M_0}\right)=1$ , we have

$$ \begin{aligned} M_0 &\geq \tilde{\mathbb{E}}\left(\sum_{k=1}^{\infty}\chi_k\left(\nu,\omega,F\right)\right)\\ &=\tilde{\mathbb{E}}\left(\sum_{k=1}^{\infty}\sum_{j=1}^{\nu_k}F\left(Ak-\left(\omega_k(\nu_{k})-\omega_k(j)\right)\right)\right)\\ &=\sum_{k=1}^{\infty}\tilde{\mathbb{E}}\left(\sum_{j=1}^{\nu} F\left(Ak-\left(\omega(\nu)-\omega(j)\right)\right)\right)\\ &=\sum_{k=1}^{\infty}\mathbb{E}_{\mu^{\infty}}\left(\sum_{j=0}^{\Gamma_1-1} F\left(Ak-S_{j}\right)\right), \end{aligned} $$

where the last equality follows from Proposition 3(ii). Similarly to (2.7), we have

$$ \begin{aligned} \mathbb{E}_{\mu^{\infty}}\left(\sum_{j=0}^{\Gamma_1-1} F\left(Ak-S_{j}\right)\right)=&\,\mathbb{E}_{\mu^{\infty}}\left(\sum_{n=0}^{\infty}F\left(Ak-S_{n}\right)\mathbf{1}_{\left\{n \lt \Gamma_1\right\}}\right)\\ =&\,\sum_{n=0}^{\infty}\mathbb{E}_{\mu^{\infty}}\left(F\left(Ak-S_{n}\right)\mathbf{1}_{\left\{\max_{1\leq j\leq n}S_{j} \lt 0\right\}}\right)\\ =&\int_0^{\infty}F(Ak+x)\mathcal{R}^{-}(\mathrm{d}x), \end{aligned} $$

where $\mathcal{R}^{-}(\mathrm{d}x)$ is the renewal measure of $R^{-}(x)$ , i.e. the renewal measure associated with the strict descending ladder height process. Thus, $\sum_{k=1}^{\infty}\int_0^{\infty}F(Ak+x)\mathcal{R}^{-}(\mathrm{d}x) \lt \infty$ , which implies that

$$\int_0^{\infty}F(x)x \,\mathrm{d}x \lt \infty.$$

For the second case, $\tilde{\mathbb{P}}\left(E_M\right) \lt 1$ for all $M \in(0, \infty)$ , so $\lim _{M \to \infty} \tilde{\mathbb{P}}\left(E_M\right)=1$ by (2.12). Let

$$\Lambda_{k,l}(\nu,\omega)\,:\!=\,\sum_{j=1}^{\nu_k}\mathbf{1}_{\left\{A(l-1)\leq-\left(\omega_k(\nu_{k})-\omega_k(j)\right) \lt Al\right\}}, \quad \text{for all}\,\ k\geq1,\ l\geq1.$$

Note that, for any $k \geq 1$ ,

$$ \begin{aligned} \chi_k\left(\nu,\omega,F\right)&=\sum_{j=1}^{\nu_k} F\left(Ak-\left(\omega_k(\nu_{k})-\omega_k(j)\right)\right) \sum_{l=1}^{\infty} \mathbf{1}_{\left\{A(l-1) \leq-\left(\omega_k(\nu_{k})-\omega_k(j)\right) \lt Al\right\}}\\ &=\sum_{l=1}^{\infty} \sum_{j=1}^{\nu_k} F\left(Ak-\left(\omega_k(\nu_{k})-\omega_k(j)\right)\right) \mathbf{1}_{\left\{A(l-1) \leq-\left(\omega_k(\nu_{k})-\omega_k(j)\right) \lt Al\right\}}\\ &\geq \sum_{l=1}^{\infty} F(Ak+Al) \Lambda_{k,l}(\nu,\omega), \end{aligned} $$

where the last inequality holds because F is non-increasing. Thus, we have

$$ \begin{aligned} \sum_{k=1}^{\infty} \chi_k\left(\nu,\omega,F\right) &\geq \sum_{k=1}^{\infty}\sum_{l=1}^{\infty}F(Ak+Al) \Lambda_{k,l}(\nu,\omega)\\ &=\sum_{m=1}^{\infty} F(Am+A) \sum_{k=1}^{m} \Lambda_{k,m+1-k}(\nu,\omega)\\ &=\sum_{m=1}^{\infty} F(Am+A) m Y_m, \end{aligned} $$

where $Y_m\,:\!=\,\sum_{k=1}^m \Lambda_{k,m+1-k}(\nu,\omega) / m$ for all $m \geq 1$ . Note that $\left(\Lambda_{k,{\cdot}}(\nu,\omega), k\geq1\right)$ are i.i.d. under $\tilde{\mathbb{P}}$ , and for all $l\geq1$ , $\Lambda_{1,l}(\nu,\omega)$ has the same law as $\sum_{j=0}^{\Gamma_{1}-1}\mathbf{1}_{\left\{A(l-1)\leq -S_j \lt Al\right\}}$ under $\mu^{\infty}$ . Following the same first- and second-moment argument for $Y_m$ as in [Reference Chen9], we obtain that there exists a sufficiently large number $M \gt 0$ so that, for any $m \geq 1$ ,

$$ C_2 \geq \tilde{\mathbb{E}}\left(Y_m \mathbf{1}_{E_M}\right) \geq C_1 \gt 0, $$

where $C_1, C_2$ are positive constants. Therefore, we have

$$ \begin{aligned} M & \geq \tilde{\mathbb{E}}\left(\sum_{k=1}^{\infty} \chi_k\left(\nu,\omega,F\right) \mathbf{1}_{E_M}\right)\\ & \geq \tilde{\mathbb{E}}\left(\sum_{m=1}^{\infty} F(Am+A) m Y_m \mathbf{1}_{E_M}\right)\\ & = \sum_{m=1}^{\infty} F(Am+A) m \tilde{\mathbb{E}}\left(Y_m \mathbf{1}_{E_M}\right)\\ & \geq \sum_{m=1}^{\infty} F(Am+A) m C_1. \end{aligned} $$

This yields

$$\sum_{m=1}^{\infty} F(A m+A) m \leq \frac{M}{C_1} \lt \infty,$$

which implies that $\int_0^{\infty} F(y) y \, \mathrm{d} y \lt \infty$ and completes the proof of (2.11).
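As an illustration of how Proposition 4 is applied (an example supplied here for orientation, not used later), take $F(x)=(1+x+\beta)^{-p}$ on $[{-}\beta,\infty)$ , which is non-increasing. Then

$$\int_{-\beta}^{\infty}F(x)(x+\beta)\, \mathrm{d}x=\int_{0}^{\infty}\frac{u}{(1+u)^{p}}\, \mathrm{d}u,$$

which is infinite exactly when $p\leq2$ . Hence for $p\leq2$ the series $\sum_{n=1}^{\infty}U(\xi,\beta)F\big(\zeta^{(\beta)}_{n}\big)$ diverges $\mathbb{P}$ -a.s., while for $p \gt 2$ it is $\mathbb{P}$ -a.s. finite, since by Lemma 2 and (2.6) the weighted series in Proposition 4 then has finite expectation.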

3. Change of measure and spinal decomposition

In this section, we introduce the truncated martingale via the quenched harmonic function of the associated random walk. Subsequently, we demonstrate the existence of the limit of the derivative martingale and the equivalence between the non-triviality of the limit and the mean convergence of the truncated martingales. Finally, we provide the spinal decomposition for the time-inhomogeneous branching random walk. The main idea is similar to that of the constant-environment situation. All proofs in this section are postponed to Appendix A.

3.1. Truncated martingales and change of probabilities

To investigate the limit of the derivative martingale, we introduce a non-negative process with a barrier.

Let $\beta\geq0$ and $V\left(\varnothing\right)=a\geq0$ . We define

$$D^{(\beta)}_{n}\,:\!=\,\sum_{|u|=n}U\left(\theta^{n}\xi,V(u)+\beta\right)\mathrm{e}^{-V(u)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}}, \quad n\geq1,$$

and $D^{(\beta)}_{0}\,:\!=\,U\left(\xi,a+\beta\right)\mathrm{e}^{-a}$ .

Lemma 3. (Truncated martingale.) For any $\beta\geq0$ and $a\geq0$ , the process $\big(D^{(\beta)}_{n},n\geq0\big)$ is a non-negative martingale with respect to the filtration $\left(\mathcal{F}_{n},n\geq0\right)$ under both laws $\mathbb{P}_{\xi,a}$ and $\mathbb{P}_{a}$ . Therefore, for almost all $\xi$ , $D^{(\beta)}_{n}$ converges $\mathbb{P}_{\xi,a}\text{-a.s.}$ to a non-negative finite limit, which we denote by $D^{(\beta)}_{\infty}$ .

The lemma presented below establishes a link between the limits of the truncated martingales and the derivative martingale. Therefore, we can examine the non-triviality of the limit of the derivative martingale by the mean convergence of the truncated martingales.

Lemma 4.

  1. (i) Assume that (1.1), (1.2) and (1.3) hold; then $\lim_{n\to\infty}D_{n}=D_{\infty}\geq0$ , $\mathbb{P}\text{-a.s.}$

  2. (ii) If there exists $\beta\geq0$ such that, for almost all $\xi$ , $D^{(\beta)}_{n}$ converges in $L^{1}\left(\mathbb{P}_{\xi}\right)$ , then $D_{\infty}$ is non-degenerate for almost all $\xi$ , i.e. $\mathbb{P}_{\xi}\left(D_{\infty} \gt 0\right) \gt 0$ , P-a.s.

  3. (iii) If, for almost all $\xi$ , $D^{(\beta)}_{\infty}=0$ $\mathbb{P}_{\xi}$ -a.s. for all $\beta\geq0$ , then $D_{\infty}$ is degenerate for almost all $\xi$ , i.e. $\mathbb{P}_{\xi}\left(D_{\infty}=0\right)=1$ , P-a.s.

Remark 2. In proving Theorem 1, we also show that the following two statements are equivalent:

  1. (i) There exists $\beta\geq0$ such that $D^{(\beta)}_{n}$ is $L^{1}\left(\mathbb{P}_{\xi}\right)$ -convergent for almost all $\xi$ .

  2. (ii) For any $\beta\geq0$ , $D^{(\beta)}_{n}$ is $L^{1}\left(\mathbb{P}_{\xi}\right)$ -convergent for almost all $\xi$ .

Since $\big(D^{(\beta)}_{n},n\geq0\big)$ is a non-negative martingale with $\mathbb{E}_{\xi,a}\big(D^{(\beta)}_{n}\big)=U(\xi,a+\beta)\mathrm{e}^{-a}$ , it follows from Kolmogorov’s extension theorem that there exists a unique probability measure $\mathbb{Q}^{(\beta)}_{\xi,a}$ on $\mathcal{F}_{\infty}\,:\!=\,\vee_{n\geq0}\mathcal{F}_{n}$ such that, for all $n\geq1$ ,

(3.1) \begin{equation} \frac{\mathrm{d}\mathbb{Q}^{(\beta)}_{\xi,a}}{\mathrm{d}\mathbb{P}_{\xi,a}}\bigg|_{\mathcal{F}_{n}}\,:\!=\,\frac{D^{(\beta)}_{n}}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}.\end{equation}

An intuitive description of the new probability measure is presented in the next subsection.
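The change of measure (3.1) can be read as an importance-sampling identity: the $\mathbb{Q}^{(\beta)}_{\xi,a}$ -expectation of any $\mathcal{F}_{n}$ -measurable quantity is its $\mathbb{P}_{\xi,a}$ -expectation weighted by $D^{(\beta)}_{n}/\big(U(\xi,a+\beta)\mathrm{e}^{-a}\big)$ . The sketch below illustrates this with self-normalized weights for the toy model used in the earlier illustrations (an assumption of the example), again replacing U by its linear asymptote, which is only a rough stand-in for the true harmonic function.

```python
import numpy as np

rng = np.random.default_rng(7)
n_gen, beta, n_rep = 8, 1.0, 20_000
sigma2 = rng.uniform(0.5, 1.5, size=n_gen)                  # one fixed environment

def tree_with_running_min():
    """Generation-n positions together with the running minimum along each line of descent."""
    pos, run_min = np.array([0.0]), np.array([0.0])
    for s2 in sigma2:
        counts = 1 + rng.poisson(np.exp(s2 / 2.0) - 1.0, size=pos.size)
        child = np.repeat(pos, counts) + rng.normal(s2, np.sqrt(s2), size=counts.sum())
        run_min = np.minimum(np.repeat(run_min, counts), child)
        pos = child
    return pos, run_min

weights, values = [], []
for _ in range(n_rep):
    pos, run_min = tree_with_running_min()
    survived = run_min >= -beta
    # truncated martingale D_n^{(beta)} with U(theta^n xi, y) replaced by its linear asymptote y
    weights.append(np.sum((pos + beta) * np.exp(-pos) * survived))
    values.append(pos.min())                                # example functional: min_{|u|=n} V(u)
weights, values = np.array(weights), np.array(values)

print("P-expectation of the minimum:", round(values.mean(), 3))
print("Q-expectation of the minimum:", round(np.sum(weights * values) / weights.sum(), 3))
```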

3.2. Spinal decomposition of the time-inhomogeneous branching random walk

This subsection is devoted to giving a time-inhomogeneous version of the spinal decomposition of the branching random walk. The spinal decomposition method was first introduced by Lyons, Pemantle and Peres [Reference Lyons, Pemantle and Peres24] to investigate Galton–Watson processes. Lyons [Reference Lyons23] subsequently applied this approach to study the additive martingale for the branching random walk. Later, Biggins and Kyprianou [Reference Biggins and Kyprianou7] expanded on this work, adapting it to general martingales based on additive functionals of multitype branching processes.

The spinal decomposition describes the distribution of a branching random walk biased by a non-negative martingale as that of a branching random walk with a distinguished infinite ray, called the ‘spine’. For clarity, we first outline the main steps and then give a detailed description below. Firstly, we define the branching random walk with a random infinite ray $w^{(\beta)}=\big(w^{(\beta)}_{n},n\geq0\big)$: $w^{(\beta)}_{0}\,:\!=\,\varnothing$, and $w^{(\beta)}_{n}$ is a child of $w^{(\beta)}_{n-1}$ with $|w^{(\beta)}_{n}|=n$ for each $n\geq1$. Secondly, we construct a new probability measure via a non-negative martingale; the special individual (the spine) reproduces according to the new probability measure, while all other (normal) particles behave as before. Lastly, we identify this new process as the branching random walk under the new probability measure. The spine approach helps us tackle otherwise difficult moment calculations for branching random walks.

Now we introduce the time-inhomogeneous branching random walk with a spine. The process starts with a single particle $w^{(\beta)}_{0}$ at position $V(w^{(\beta)}_{0})=a$. It dies at time 1 and gives birth to children distributed as $\hat{L}^{(\beta)}_{1,a}$, whose distribution is the law of $L_{1}$ under $\mathbb{Q}^{(\beta)}_{\xi,a}$. The particle u is chosen as the spine element $w^{(\beta)}_{1}$ among the children of $w^{(\beta)}_{0}$ with probability proportional to $U\big(\theta\xi,V(u)+\beta\big)\mathrm{e}^{-V(u)}\mathbf{1}_{\left\{V(u)\geq-\beta\right\}}$, while all other children are normal particles. For any $n\geq1$, each particle alive at generation n dies at time $n+1$ and gives birth independently to children. The children of the normal particle z are distributed as $L_{n+1,V(z)}$ (i.e. $L_{n+1}$ under $\mathbb{P}_{\xi,V(z)}$), while the spine element $w^{(\beta)}_{n}$ reproduces according to the point process $\hat{L}^{(\beta)}_{n+1,V(w^{(\beta)}_{n})}$, which is distributed as $L_{n+1}$ under $\mathbb{Q}^{(\beta)}_{\xi,V(w^{(\beta)}_{n})}$, and the particle v is chosen as the spine element $w^{(\beta)}_{n+1}$ among the children of $w^{(\beta)}_{n}$ with probability proportional to $U\big(\theta^{n+1}\xi,V(v)+\beta\big)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n+1}V(v_{k})\geq-\beta\right\}}$; all other children are normal particles. The process continues as described above. We continue to use $\mathbb{T}$ to denote the genealogical tree. Let $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ denote the law of the new process, which is a probability measure on the product of the space of all marked trees and the space of all infinite spines.
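To make the selection rule for the spine concrete, the following minimal sketch (in Python; the toy offspring law, the proxy for $U$, and all identifiers are illustrative assumptions, not part of the model above) implements only the step in which the next spine particle is chosen among the children of the current one. Since the ancestors of the spine already lie above $-\beta$ by construction, only the child's own position enters the indicator; the sketch does not implement the tilted offspring law $\hat{L}^{(\beta)}$ of the spine.

```python
import math
import random

# Minimal sketch of the spine-selection step described above (toy setting).
# Assumptions (ours, not the paper's): offspring displacements are drawn from an
# untilted toy point process, and U_proxy stands in for the quenched harmonic
# function U(theta^{n+1} xi, .), which is not explicitly computable in general.

def toy_offspring(rng):
    """Toy point process: 1 + Binomial(4, 1/2) children with N(0, 1) displacements."""
    k = 1 + sum(rng.random() < 0.5 for _ in range(4))
    return [rng.gauss(0.0, 1.0) for _ in range(k)]

def choose_spine_child(spine_pos, displacements, U, beta, n, rng):
    """Choose the next spine particle among the children of the current one with
    probability proportional to U(n+1, V(v)+beta) * exp(-V(v)) * 1{V(v) >= -beta},
    where V(v) = spine_pos + displacement (children below -beta get weight zero)."""
    positions = [spine_pos + d for d in displacements]
    weights = [U(n + 1, x + beta) * math.exp(-x) if x >= -beta else 0.0
               for x in positions]
    total = sum(weights)
    if total == 0.0:          # no admissible child: every child fell below -beta
        return None
    r, acc = rng.random() * total, 0.0
    for x, w in zip(positions, weights):
        acc += w
        if r <= acc:
            return x
    return positions[-1]

rng = random.Random(1)
U_proxy = lambda n, x: max(x, 0.0) + 1.0   # crude assumed stand-in: U(theta^n xi, x) ~ x + O(1)
pos, beta = 0.0, 1.0
for n in range(5):
    pos = choose_spine_child(pos, toy_offspring(rng), U_proxy, beta, n, rng)
    if pos is None:
        break
```

In particular, `U_proxy` is only an assumed stand-in; nothing here is a faithful simulation of the measure $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$, which would also require sampling the spine's offspring from the tilted law.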

The following spinal decomposition consists of an alternative construction of the law $\mathbb{Q}^{(\beta)}_{\xi,a}$ as the projection of the law $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ on the space of all marked trees. By an abuse of notation, the marginal law of $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ on the space of marked trees is also denoted by $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ . This alternative construction allows us to study the mean convergence of the corresponding martingale in Section 4.

Proposition 5. (Spinal decomposition.) The branching random walk under $\mathbb{Q}^{(\beta)}_{\xi,a}$ is distributed as $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$.

Using Proposition 5, we will identify the branching random walk under $\mathbb{Q}^{(\beta)}_{\xi,a}$ with $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ in the following.

Proposition 6. (Law of the spine.) Let the spine $w^{(\beta)}=\big(w^{(\beta)}_{n}\big)$ and probability measure $\mathbb{Q}^{(\beta)}_{\xi,a}$ be defined as above. We have the following:

  1. (i) For any n and any vertex $v\in\mathbb{T}$ with $|v|=n$ , we have

    $$\mathbb{Q}^{(\beta)}_{\xi,a}\big(w^{(\beta)}_{n}=v\mid \mathcal{F}_{n}\big)=\frac{U\left(\theta^{n}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(v_{k})\geq-\beta\right\}}}{D^{(\beta)}_{n}}.$$
  2. (ii) The process $\big(V(w^{(\beta)}_{n}),n\geq0\big)$ under $\mathbb{Q}^{(\beta)}_{\xi,a}$ is distributed as the random walk $\left(S_{n},n\geq0\right)$ under $\mathbb{P}_{\xi,a}$ conditioned to stay in $[{-}\beta,\infty)$ . Equivalently, for all n and any measurable function $f:\mathbb{R}^{n+1}\to\mathbb{R}_{+}$ , we have P-a.s.

(3.2) \begin{equation} \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[f\big(V(w^{(\beta)}_{0}), \cdots, V(w^{(\beta)}_{n})\big)\right]=\mathbb{E}_{\xi, a}\left[f\left(S_{0}, \cdots, S_{n}\right)\frac{U(\theta^{n}\xi,S_{n}+\beta)}{U(\xi,a+\beta)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}S_{k}\geq-\beta\right\}}\right].\qquad\quad \end{equation}

Note that $\mathbb{Q}^{(\beta)}_{\xi,a}\big(D^{(\beta)}_{n} \gt 0\big)=\mathbb{E}_{\xi,a}\Big(\frac{D^{(\beta)}_{n}}{U(\xi,a+\beta)\mathrm{e}^{-a}}\Big)=1$, so the right-hand side of the identity in Proposition 6(i) is $\mathbb{Q}^{(\beta)}_{\xi,a}\text{-a.s.}$ well-defined. For (ii), by (2.4) and (3.2), we obtain the following identity:

\begin{equation*} \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[f\big(V(w^{(\beta)}_{0}), \cdots, V(w^{(\beta)}_{n})\big)\right]=\mathbb{E}_{\xi, a}\left[f\big(\zeta^{(\beta)}_{0}, \cdots, \zeta^{(\beta)}_{n}\big)\right].\end{equation*}
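As a numerical sanity check on identities of the shape (3.2), the following sketch (in Python; a toy homogeneous example, not the RWRE of this paper, and all names are ours) verifies that the right-hand side has total mass one when $f\equiv1$. It uses the simple symmetric random walk on $\mathbb{Z}$ with $\beta=0$ and starting point $a\geq0$, for which $h(x)=x+1$ is harmonic for the walk killed upon entering the negative half-line and plays the role of $U$.

```python
import random

def h(x):
    """Harmonic function for the simple symmetric walk killed on entering {-1,-2,...}."""
    return x + 1.0

def total_mass(a=2, n=10, trials=200_000, seed=0):
    """Monte Carlo estimate of E_a[ (h(S_n)/h(a)) * 1{min_{0<=k<=n} S_k >= 0} ],
    i.e. the right-hand side of an identity like (3.2) with f = 1 and beta = 0."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        s = m = a
        for _ in range(n):
            s += 1 if rng.random() < 0.5 else -1
            m = min(m, s)
        if m >= 0:
            acc += h(s) / h(a)
    return acc / trials

print(total_mass())   # should be close to 1.0
```

The printed value should be close to $1$, reflecting that the $h$-transform defines a probability measure; in the time-inhomogeneous random environment this role is played by the quenched harmonic function $U$.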

4. Proof of Theorem 1

Lemma 4(i) yields Theorem 1(i). In this section we prove that (1.4), under the assumptions (1.1), (1.2), and (1.3), is a necessary and sufficient condition for the truncated martingale $\big(D^{(\beta)}_{n},n\geq0\big)$ to converge in $L^{1}\left(\mathbb{P}_{\xi}\right)$ for almost all $\xi$ . Then, by Lemma 4(ii)–(iii), this condition is equivalent to the non-degeneracy of the limit $D_{\infty}$ of the derivative martingale, which establishes Theorem 1(ii).

Building on the general approach of Biggins and Kyprianou [Reference Biggins and Kyprianou7] for multitype branching processes, and on its application to the mean convergence of martingales built from mean-harmonic functions, we establish the following result (Proposition 7) for the mean convergence of the truncated martingales.

Recalling that $\overleftarrow{u}$ is the parent of u, for any $u\in\mathbb{T} \backslash \left\{\varnothing\right\}$ , we define its relative position by

$$\Delta V(u)\,:\!=\,V(u)-V(\overleftarrow{u}).$$

Under $\mathbb{P}_{\xi,\zeta^{(\beta)}_{n}}$ , we define

(4.1) \begin{equation} \begin{aligned} \tilde{X}\,:\!=\,&\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\zeta^{(\beta)}_{n}-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq-\beta\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)\mathrm{e}^{-\zeta^{(\beta)}_{n}}}\\ =&\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq-\beta\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}, \end{aligned}\end{equation}

where $\left(\Delta V(u),|u|=1\right)$ is independent of $\zeta^{(\beta)}_{n}$ under $\mathbb{P}_{\xi}$ .
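For later interpretation (this reading is only meant as guidance), observe from (4.1) that

$$U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{X}=\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\zeta^{(\beta)}_{n}-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq-\beta\right\}},$$

which has the form of the total weight that the children of a particle located at $\zeta^{(\beta)}_{n}$ (whose ancestral path stays above $-\beta$) contribute to $D^{(\beta)}_{n+1}$; the conditions in Proposition 7 below are summability conditions on these contributions evaluated along the conditioned walk, in the spirit of [Reference Biggins and Kyprianou7].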

Proposition 7. Let $\zeta^{(\beta)}_{n}$ be defined as in (2.4). For all $\beta\geq0$ , we have the following:

  1. (i) If, for almost all $\xi$ ,

    \begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\tilde{X}\left(\big(U(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta)\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{X}\big) \wedge 1\right)\right]U(\xi,\beta) \lt \infty, \quad \mathbb{P}_{\xi}\text{-a.s.}, \end{equation*}
    then $\mathbb{E}_{\xi}\big(D^{(\beta)}_{\infty}\big)=U(\xi,\beta)$ , P-a.s.
  2. (ii) If, for any $c\geq1$ ,

    \begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}_{\zeta^{(\beta)}_{n}}\left[\tilde{X}\mathbf{1}_{\left\{U(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta)\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{X}\geq c\right\}}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation*}
    then $\mathbb{E}\big(D^{(\beta)}_{\infty}\big)=0$ .

Proof. (i) Thanks to the harmonic function and the spinal decomposition outlined in previous sections, this result follows by the same argument as used in the proof of Theorem 2.1 in [Reference Biggins and Kyprianou7].

(ii) For the degenerate case, the proof differs slightly from that of (i), as the condition is expressed in terms of the annealed probability instead of the quenched one. Since the main idea is the same as in [Reference Biggins and Kyprianou7], we only indicate the necessary alterations. Let

$$\mathbb{Q}^{(\beta)}_{a}({\cdot})\,:\!=\,\mathrm{E}\left[\mathbb{Q}^{(\beta)}_{\xi,a}({\cdot})\right];$$

then we have

\begin{equation*} \frac{\mathrm{d}\mathbb{Q}^{(\beta)}_{a}}{\mathrm{d}\mathbb{P}_{a}}\bigg|_{\mathcal{F}_{n}}=\frac{D^{(\beta)}_{n}}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}. \end{equation*}

It follows from Corollary 1 of Athreya [Reference Athreya2] that

$$\mathbb{E}\big(D^{(\beta)}_{\infty}\big)=0 \quad \Longleftrightarrow \quad \mathbb{Q}\big(D^{(\beta)}_{\infty}=\infty\big)=1.$$

Note that, by the spinal decomposition,

$$\mathbb{Q}\big(D^{(\beta)}_{\infty}=\infty\big)\geq \mathbb{Q}\Big(\limsup_{n\to\infty}U\big(\theta^{n}\xi,V(w^{(\beta)}_{n})+\beta\big)\mathrm{e}^{-V(w^{(\beta)}_{n})}\tilde{X}\big(V(w^{(\beta)}_{n})\big)=\infty\Big).$$

Thus it suffices to prove that, for any $c\geq1$ ,

$$\limsup_{n\to\infty}U\big(\theta^{n}\xi,V(w^{(\beta)}_{n})+\beta\big)\mathrm{e}^{-V(w^{(\beta)}_{n})}\tilde{X}\big(V(w^{(\beta)}_{n})\big)\geq c, \quad \mathbb{Q}\text{-a.s.}$$

By the conditional Borel–Cantelli lemma, this is equivalent to showing that, for any $c\geq1$ ,

$$\sum_{n=1}^{\infty}\mathbb{Q}\Big(U\big(\theta^{n}\xi,V(w^{(\beta)}_{n})+\beta\big)\mathrm{e}^{-V(w^{(\beta)}_{n})}\tilde{X}\big(V(w^{(\beta)}_{n})\big)\geq c\ \big|\ \mathcal{G}^{(\beta)}_{n}\Big)=\infty,\quad \mathbb{Q}\text{-a.s.},$$

where $\left\{\mathcal{G}^{(\beta)}_{n}\right\}$ is the filtration containing all the information of the spine and its siblings. Applying the spinal decomposition and the definition of $\mathbb{Q}$, we obtain the desired result.

4.1. The sufficient condition

In this subsection, we show that for all $\beta\geq0$ , $D^{(\beta)}_{n}$ converges in $L^{1}\left(\mathbb{P}_{\xi}\right)$ to $D^{(\beta)}_{\infty}$ for almost all $\xi$ under the assumption (1.4).

Lemma 5. If (1.4) holds, then for all $\beta\geq0$ , $\mathbb{E}_{\xi}\big(D^{(\beta)}_{\infty}\big)=U(\xi,\beta),\quad \mathrm{P}$ -a.s.

Proof. According to Proposition 7(i), it suffices to prove that

(4.2) \begin{equation} \sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\tilde{X}\left(\big(U(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta)\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{X}\big) \wedge 1\right)\right]U(\xi,\beta) \lt \infty, \quad \mathbb{P}\text{-a.s.} \end{equation}

By (2.3) and the fact that, for almost all $\xi$ , $\zeta^{(\beta)}_{n}+\beta\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ , we have, as $n\to\infty$ ,

\begin{equation*} \begin{aligned} &\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq -\beta\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \sim&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq -\beta\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}. \end{aligned} \end{equation*}

Since $\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq -\beta\right\}}\leq \zeta^{(\beta)}_{n}+\beta+\Delta V(u)\mathbf{1}_{\left\{\Delta V(u)\geq 0\right\}}$ , we obtain

\begin{align*} &\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)\geq -\beta\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \leq&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\beta\big)\mathrm{e}^{-\Delta V(u)}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}+\frac{\sum_{|u|=1}\Delta V(u)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ =&:\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}+\frac{\tilde{Z}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \leq&2\max\left\{\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)},\frac{\tilde{Z}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\right\}, \end{align*}

where $\big(\tilde{Y}(\xi_{n+1}),\tilde{Z}(\xi_{n+1})\big)$ is independent of $\zeta^{(\beta)}_{n}$ under $\mathbb{P}_{\xi}$ .

Therefore, we only need to show that

(4.3) \begin{equation} \begin{cases} \mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}}\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})\big) \wedge 1\right)\right]U(\xi,\beta)\right\} \lt \infty,\\ \\ \mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\tilde{Z}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{Z}(\xi_{n+1})\big) \wedge 1\right)\right]U(\xi,\beta)\right\} \lt \infty, \end{cases} \end{equation}

which implies (4.2).

To prove the first term of (4.3), by the inequality $\mathrm{e}^{x/2}\geq x$ for all $x \gt 0$ , we have

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}}\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})\big) \wedge1\right)\right]U(\xi,\beta)\right\}\\ \leq&\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}/2+\beta/2}\tilde{Y}(\xi_{n+1})\big) \wedge1\right)\right]U(\xi,\beta)\right\}\\ \leq&\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\mathrm{e}^{-\zeta^{(\beta)}_{n}/2+\beta/2}\tilde{Y}^{2}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}\geq2\log\tilde{Y}(\xi_{n+1})+\beta\right\}}\bigg | \zeta^{(\beta)}_{n}\right]U(\xi,\beta)\right\}\\ &+\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n} \lt 2\log\tilde{Y}(\xi_{n+1})+\beta\right\}}\bigg | \zeta^{(\beta)}_{n}\right]U(\xi,\beta)\right\}. \end{aligned} \end{equation*}

Hence, it follows by Lemma 2 and (2.6) that

\begin{align*} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}}\big(\zeta^{(\beta)}_{n}+\beta\big)\tilde{Y}(\xi_{n+1})\big) \wedge1\right)\right]U(\xi,\beta)\right\}\\ \leq&\int_{-\beta}^{\infty}\mathbb{E}\left[\left(x+\beta\right)\mathrm{e}^{-x/2+\beta/2}\tilde{Y}^{2}(\xi_{n+1})\mathbf{1}_{\left\{x\geq2\log\tilde{Y}(\xi_{n+1})+\beta\right\}}\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ &+\int_{-\beta}^{\infty}\mathbb{E}\left[\left(x+\beta\right)\tilde{Y}(\xi_{n+1})\mathbf{1}_{\left\{x \lt 2\log\tilde{Y}(\xi_{n+1})+\beta\right\}}\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ \leq&c_{2}\int_{0}^{\infty}\mathbb{E}\left[x\mathrm{e}^{-x/2+\beta}\tilde{Y}^{2}(\xi_{n+1})\mathbf{1}_{\left\{x\geq2\log\tilde{Y}(\xi_{n+1})+2\beta\right\}}\right]\mathrm{d}x\\ &+c_{2}\int_{0}^{\infty}\mathbb{E}\left[x\tilde{Y}(\xi_{n+1})\mathbf{1}_{\left\{x \lt 2\log\tilde{Y}(\xi_{n+1})+2\beta\right\}}\right]\mathrm{d}x\\ =&c_{2}\mathrm{e}^{\beta}\mathbb{E}\left[\tilde{Y}^{2}(\xi_{n+1})\int_{2\left(\log\tilde{Y}(\xi_{n+1})+\beta\right)_{+}}^{\infty}x\mathrm{e}^{-x/2}\,\mathrm{d}x\right]+c_{2}\mathbb{E}\left[\tilde{Y}(\xi_{n+1})\int_{0}^{2\left(\log\tilde{Y}(\xi_{n+1})+\beta\right)_{+}}x\,\mathrm{d}x\right]\\ \leq&c_{1}(\beta)\mathbb{E}\big(\tilde{Y}(\xi_{n+1})\log_{+}\tilde{Y}(\xi_{n+1})\big)+c_{2}(\beta)\mathbb{E}\big(\tilde{Y}(\xi_{n+1})\log^2_{+}\tilde{Y}(\xi_{n+1})\big)\\ =&c_{1}(\beta)\mathbb{E}(Y\log_{+}Y)+c_{2}(\beta)\mathbb{E}(Y\log^2_{+}Y) \lt \infty. \end{align*}

For the second term of (4.3), by the same argument, we have

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\mathbb{E}_{\xi,\zeta^{(\beta)}_{n}}\left[\frac{\tilde{Z}(\xi_{n+1})}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left(\big(\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{Z}(\xi_{n+1})\big) \wedge 1\right)\right]U(\xi,\beta)\right\}\\ \leq&c_{3}(\beta)\mathbb{E}\big(\tilde{Z}(\xi_{n+1})\log_{+}\tilde{Z}(\xi_{n+1})\big)\\ =&c_{3}(\beta)\mathbb{E}(Z\log_{+}Z) \lt \infty. \end{aligned} \end{equation*}

This completes the proof of (4.3), and so Lemma 5 is now proved.

Proof of the sufficient condition of Theorem 1(ii). Assume that (1.4) holds. By Lemma 5, for all $\beta\geq0$, $D^{(\beta)}_{n}$ is $L^{1}\left(\mathbb{P}_{\xi}\right)$-convergent for almost all $\xi$. Therefore, in view of Lemma 4(ii), $D_{\infty}$ is non-degenerate for almost all $\xi$, which completes the proof of sufficiency.

4.2. The necessary condition

In this subsection, we show that $D^{(\beta)}_{\infty}=0$ $\mathbb{P}\text{-a.s.}$ for all $\beta\geq0$ when (1.4) does not hold.

Lemma 6. If (1.4) does not hold, then for all $\beta\geq0$ , $\mathbb{E}\big(D^{(\beta)}_{\infty}\big)=0$ , which is equivalent to $D^{(\beta)}_{\infty}=0$ $\mathbb{P}_{\xi}$ -a.s. for almost all $\xi$ .

Proof. According to Proposition 7(ii), it suffices to prove that, for any $c\geq1$ ,

(4.4) \begin{align} & \mathbb{E}\left[Y\log^2_{+}Y+Z\log_{+}Z\right]=\infty\nonumber\\ \Longrightarrow\ & \sum_{n=1}^{\infty}\mathbb{E}_{\zeta^{(\beta)}_{n}}\left[\tilde{X}\mathbf{1}_{\left\{U(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta)\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{X}\geq c\right\}}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.} \end{align}

Following the idea of Chen [Reference Chen9], we divide the assumption on the left-hand side of (4.4) into three cases as follows:

(4.5) \begin{equation} \begin{cases} (a)\,\, \mathbb{E}\left[Y\log^2_{+}Y\right]=\infty,\quad \mathbb{E}\left[Y\log_{+}Y\right] \lt \infty,\\ (b)\,\ \mathbb{E}\left[Y\log_{+}Y\right]=\infty,\\ (c)\,\ \mathbb{E}\left[Z\log_{+}Z\right]=\infty. \end{cases} \end{equation}

Note that under $\mathbb{P}_{\xi,\zeta^{(\beta)}_{n}}$ , $\left(\Delta V(u)\,:\,|u|=1\right)$ is distributed as $L_{n+1}$ . For any $x\in\mathbb{R}$ , we define

$$Y_{+}\left(\xi_{n+1},x\right)\,:\!=\,\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq -x\right\}},$$
$$Y_{-}\left(\xi_{n+1},x\right)\,:\!=\,\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u) \lt -x\right\}}.$$

Observe that $\tilde{Y}\left(\xi_{n+1}\right)=Y_{+}\left(\xi_{n+1},x\right)+Y_{-}\left(\xi_{n+1},x\right)$ , and $\left(Y_{+}\left(\xi_{n+1},x\right),Y_{-}\left(\xi_{n+1},x\right),x\in\mathbb{R}\right)$ is independent of $\zeta^{(\beta)}_{n}$ under $\mathbb{P}_{\xi}$ .

Firstly, we give the proof of (4.4) under the assumption (a) of (4.5). By (4.1) and Proposition 1(iv), under $\mathbb{P}_{\xi,\zeta^{(\beta)}_{n}}$ , we have

\begin{equation*} \begin{aligned} \tilde{X}=&\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq -(\zeta^{(\beta)}_{n}+\beta)/2\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq -(\zeta^{(\beta)}_{n}+\beta)/2\right\}}}{2U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ =&\frac{\zeta^{(\beta)}_{n}+\beta}{2U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}Y_{+}\bigg(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\bigg). \end{aligned} \end{equation*}

Thus we only need to show that, for any fixed $c\geq1$ , $\mathbb{P}\text{-a.s.}$ ,

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}_{\zeta^{(\beta)}_{n}}\left[\frac{\big(\zeta^{(\beta)}_{n}+\beta\big)Y_{+}\big(\xi_{n+1},(\zeta^{(\beta)}_{n}+\beta)/2\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathbf{1}_{\left\{\mathrm{e}^{-\zeta^{(\beta)}_{n}}(\zeta^{(\beta)}_{n}+\beta)Y_{+}\big(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\big)\geq c\right\}}\right]U(\xi,\beta)=\infty. \end{equation*}

Since, for almost all $\xi$ , $\zeta^{(\beta)}_{n}+\beta\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ , and by (2.3), it suffices to prove that

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}\left[Y_{+}\bigg(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\bigg)\mathbf{1}_{\left\{\log Y_{+}\big(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\big)\geq\zeta^{(\beta)}_{n}\right\}}\ \Big|\ \zeta^{(\beta)}_{n}\right]U(\xi,\beta)=\infty,\quad \mathbb{P}\text{-a.s.}, \end{equation*}

which we can write as

(4.6) \begin{equation} \sum_{n=1}^{\infty}F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation}

where $F\left(x,y\right)\,:\!=\,\mathrm{E}\left[F\left(\xi_{n+1},x,y\right)\right]$ and $F\left(\xi_{n+1},x,y\right)\,:\!=\,\mathbb{E}_{\xi}\left[Y_{+}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]$, $x,y\in\mathbb{R}$.

Let

$$F_{1}\left(\xi_{n+1},y\right)\,:\!=\,\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y\right\}}\right], \quad y\in\mathbb{R},$$

and $F_{1}\left(y\right)\,:\!=\,\mathrm{E}\left[F_{1}\left(\xi_{n+1},y\right)\right]$ . Note that, by the assumption (1.2),

$$0\leq F\left(\xi_{n+1},x,y\right)\leq F_{1}\left(\xi_{n+1},y\right)\leq \mathbb{E}_{\xi}\big(\tilde{Y}(\xi_{n+1})\big)=1, \quad \mathrm{P}\text{-a.s.}$$

It follows that $F_{1}\left(y\right)$ is a non-negative, non-increasing function. By the assumption (a) of (4.5), we obtain

\begin{align*} \int_{-\beta}^{\infty}F_{1}\left(y\right)(y+\beta)\,\mathrm{d}y&=\int_{-\beta}^{\infty}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y\right\}}\right](y+\beta)\,\mathrm{d}y\\ &=\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\int_{-\beta}^{\log_{+}\tilde{Y}\left(\xi_{n+1}\right)}(y+\beta)\,\mathrm{d}y\right]\\ &=\frac{\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\left(\log_{+}\tilde{Y}\left(\xi_{n+1}\right)+\beta\right)^2\right]}{2}\\ &=\frac{\mathbb{E}\left[Y\left(\log_{+}Y+\beta\right)^2\right]}{2}=\infty.\end{align*}

By Proposition 4, we have

(4.7) \begin{equation} \sum_{n=1}^{\infty}F_{1}\left(\zeta^{(\beta)}_{n}\right)U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.} \end{equation}

It remains to prove that

\begin{equation*} \sum_{n=1}^{\infty}\left[F_{1}\left(\zeta^{(\beta)}_{n}\right)-F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)\right]U(\xi,\beta) \lt \infty, \quad \mathbb{P}\text{-a.s.}, \end{equation*}

which, together with (4.7), implies (4.6). Equivalently, we only need to prove that

(4.8) \begin{equation} \sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left[F_{1}\left(\zeta^{(\beta)}_{n}\right)-F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)\right]U(\xi,\beta) \lt \infty, \quad \mathbb{P}\text{-a.s.} \end{equation}

We begin our proof by giving an upper bound for $F_{1}\left(\xi_{n+1},y\right)-F\left(\xi_{n+1},x,y\right)$ . For any $x,y\in\mathbb{R}$ , we obtain

\begin{equation*} \begin{aligned} &F_{1}\left(\xi_{n+1},y\right)-F\left(\xi_{n+1},x,y\right)\\ =&\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y\right\}}\right]-\mathbb{E}_{\xi}\left[Y_{+}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]\\ =&\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log Y_{+}\left(\xi_{n+1},x\right)\right\}}\right]+\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]\\ &-\mathbb{E}_{\xi}\left[Y_{+}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]\\ =&\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log Y_{+}\left(\xi_{n+1},x\right)\right\}}\right]+\mathbb{E}_{\xi}\left[Y_{-}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]\\ \leq&\mathbb{E}_{\xi}\left[2Y_{-}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log Y_{+}\left(\xi_{n+1},x\right),\ Y_{-}\left(\xi_{n+1},x\right)\geq Y_{+}\left(\xi_{n+1},x\right)\right\}}\right]\\ &+\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log Y_{+}\left(\xi_{n+1},x\right),\ Y_{-}\left(\xi_{n+1},x\right) \lt Y_{+}\left(\xi_{n+1},x\right)\right\}}\right]\\ &+\mathbb{E}_{\xi}\left[Y_{-}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq y\right\}}\right]\\ \leq&3\mathbb{E}_{\xi}\left[Y_{-}\left(\xi_{n+1},x\right)\right]+\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log Y_{+}\left(\xi_{n+1},x\right),\ Y_{-}\left(\xi_{n+1},x\right) \lt Y_{+}\left(\xi_{n+1},x\right)\right\}}\right]\\ \leq&3\mathbb{E}_{\xi}\left[Y_{-}\left(\xi_{n+1},x\right)\right]+\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq y \gt \log \left(\tilde{Y}\left(\xi_{n+1}\right)/2\right)\right\}}\right]\\ =&:3A_{1}\left(\xi_{n+1},x\right)+A_{2}\left(\xi_{n+1},y\right), \end{aligned} \end{equation*}

where the first and last inequalities follow from the fact that $\tilde{Y}\left(\xi_{n+1}\right)\leq2\max\left\{Y_{+}\left(\xi_{n+1},x\right),Y_{-}\left(\xi_{n+1},x\right)\right\}$ . Therefore,

\begin{equation*} \begin{aligned} &\sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left[F_{1}\big(\zeta^{(\beta)}_{n}\big)-F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)\right]U(\xi,\beta)\\ \leq&\sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathrm{E}\left[3A_{1}\bigg(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\bigg)+A_{2}\big(\xi_{n+1},\zeta^{(\beta)}_{n}\big)\right]U(\xi,\beta). \end{aligned} \end{equation*}

Taking the expectation on both sides, we have

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left[F_{1}\big(\zeta^{(\beta)}_{n}\big)-F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)\right]U(\xi,\beta)\right\}\\ \leq&\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathrm{E}\left[3A_{1}\bigg(\xi_{n+1},\frac{\zeta^{(\beta)}_{n}+\beta}{2}\bigg)+A_{2}\big(\xi_{n+1},\zeta^{(\beta)}_{n}\big)\right]U(\xi,\beta)\right\}. \end{aligned} \end{equation*}

Then, using Lemma 2, we deduce that

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{\zeta^{(\beta)}_{n}+\beta}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\left[F_{1}\big(\zeta^{(\beta)}_{n}\big)-F\bigg(\frac{\zeta^{(\beta)}_{n}+\beta}{2},\zeta^{(\beta)}_{n}\bigg)\right]U(\xi,\beta)\right\}\\ \leq&\int_{-\beta}^{\infty}3(x+\beta)\mathrm{E}\left[A_{1}\bigg(\xi_{n+1},\frac{x+\beta}{2}\bigg)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)+\int_{-\beta}^{\infty}(x+\beta)\mathrm{E}\left[A_{2}\left(\xi_{n+1},x\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x). \end{aligned} \end{equation*}

Our aim is to prove that the two integrals in the last display are finite, which will complete the proof of (4.8). For the first term, by the many-to-one lemma, we have

(4.9) \begin{equation} \mathrm{E}\left[A_{1}\left(\xi_{n+1},x\right)\right]=\mathbb{E}\left[Y_{-}\left(\xi_{n+1},x\right)\right]=\mathbb{E}\left[\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u) \lt -x\right\}}\right]=\mathbb{P}\left(S_{1} \lt -x\right). \end{equation}

Thus, from (4.9), (2.6), and the assumption (1.3), we obtain

\begin{equation*} \begin{aligned} \int_{-\beta}^{\infty}(x+\beta)\mathrm{E}\left[A_{1}\left(\xi_{n+1},\frac{x+\beta}{2}\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)=&\int_{-\beta}^{\infty}(x+\beta)\mathbb{P}\left(S_{1} \lt -\frac{x+\beta}{2}\right)\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ \leq&c_{2}\int_{0}^{\infty}x\mathbb{P}\left(S_{1} \lt -\frac{x}{2}\right)\mathrm{d}x\\ =&c_{2}\mathbb{E}\left[\int_{0}^{2({-}S_{1})_{+}}x\,\mathrm{d}x\right]\\ =&2c_{2}\mathbb{E}\left[\left(({-}S_{1})_{+}\right)^{2}\right] \lt \infty. \end{aligned} \end{equation*}

For the second term, by (2.6) and the assumption (a) of (4.5), we have

\begin{equation*} \begin{aligned} &\int_{-\beta}^{\infty}(x+\beta)\mathrm{E}\left[A_{2}\left(\xi_{n+1},x\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ =&\int_{-\beta}^{\infty}(x+\beta)\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq x \gt \log \left(\tilde{Y}\left(\xi_{n+1}\right)/2\right)\right\}}\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ \leq&c_{2}\int_{0}^{\infty}x\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)+\beta \geq x \gt \log \left(\tilde{Y}\left(\xi_{n+1}\right)/2\right)+\beta\right\}}\right]\mathrm{d}x\\ =&c_{2}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\int_{\left(\log\left(\tilde{Y}(\xi_{n+1})/2\right)+\beta\right)_{+}}^{\left(\log\tilde{Y}(\xi_{n+1})+\beta\right)_{+}}x\,\mathrm{d}x\right]\\ \leq&c_{4}(\beta)\mathbb{E}\left[Y\log_{+}Y\right] \lt \infty. \end{aligned} \end{equation*}

We conclude that (4.4) holds for the first case in (4.5).

Secondly, we give the proof of (4.4) under the assumption (b) of (4.5). By (4.1) and Proposition 1 (iv), under $\mathbb{P}_{\xi,\zeta^{(\beta)}_{n}}$ , we get

\begin{equation*} \begin{aligned} \tilde{X}=&\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\geq 1\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq -\left(\zeta^{(\beta)}_{n}+\beta-1\right)\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ =&\frac{Y_{+}\big(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}. \end{aligned} \end{equation*}

Hence, it suffices to prove that, for any fixed $c\geq1$ ,

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}_{\zeta^{(\beta)}_{n}}\left[\frac{Y_{+}\big(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathbf{1}_{\left\{\mathrm{e}^{-\zeta^{(\beta)}_{n}}Y_{+}(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1)\geq c\right\}}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.} \end{equation*}

Since, for almost all $\xi$ , $\zeta^{(\beta)}_{n}+\beta\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ , and by (2.3), this is equivalent to

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}\left[\frac{Y_{+}\big(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1\big)}{\zeta^{(\beta)}_{n}+\beta+1}\mathbf{1}_{\left\{\log Y_{+}(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1)\geq\log c+\zeta^{(\beta)}_{n}\right\}}\ \Big|\ \zeta^{(\beta)}_{n}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation*}

which we can write as

(4.10) \begin{equation} \sum_{n=1}^{\infty}\frac{F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{\zeta^{(\beta)}_{n}+\beta+1}U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation}

recalling that $F\left(x,\log c+y\right)=\mathbb{E}\left[Y_{+}\left(\xi_{n+1},x\right)\mathbf{1}_{\left\{\log Y_{+}\left(\xi_{n+1},x\right)\geq \log c+y\right\}}\right]$ , $x,y\in\mathbb{R}$ .

Let

$$F_{2}\left(\xi_{n+1},y\right)\,:\!=\,\frac{\mathbb{E}_{\xi}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq \log c+y\right\}}\right]}{y+\beta+1}=\frac{F_{1}\left(\xi_{n+1},\log c+y\right)}{y+\beta+1}, \quad y\geq -\beta,$$

and let $F_{2}\left(y\right)\,:\!=\,\mathrm{E}\left[F_{2}\left(\xi_{n+1},y\right)\right]$ . Clearly $F_{2}\left(y\right)$ is non-increasing and

$$0\leq F_{2}\left(y\right)=\mathrm{E}\left[\frac{F_{1}\left(\xi_{n+1},\log c+y\right)}{y+\beta+1}\right]=\frac{F_{1}\left(\log c+y\right)}{y+\beta+1}\leq1.$$

By the assumption (b) of (4.5), we have

\begin{align*} \int_{-\beta}^{\infty}F_{2}\left(y\right)(y+\beta)\,\mathrm{d}y&=\int_{-\beta}^{\infty}\frac{F_{1}\left(\log c+y\right)}{y+\beta+1}(y+\beta)\,\mathrm{d}y\\ &\geq \int_{1}^{\infty}\frac{F_{1}\left(\log c+y\right)}{y+\beta+1}(y+\beta)\,\mathrm{d}y\\ &\geq \frac{1}{2}\int_{1}^{\infty}F_{1}\left(\log c+y\right)\,\mathrm{d}y\\ &=\frac{1}{2}\int_{1}^{\infty}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq \log c+y\right\}}\right]\,\mathrm{d}y\\ &=\frac{1}{2}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\big(\log(\tilde{Y}\left(\xi_{n+1}\right)/c)-1\big)_{+}\right]=\infty, \end{align*}

which implies $\int_{-\beta}^{\infty}F_{2}\left(y\right)(y+\beta)\,\mathrm{d}y=\infty$ . By Proposition 4, we get

(4.11) \begin{equation} \sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)}{\zeta^{(\beta)}_{n}+\beta+1}U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.} \end{equation}

It remains to prove that

\begin{equation*} \sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)-F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{\zeta^{(\beta)}_{n}+\beta+1}U(\xi,\beta) \lt \infty, \quad \mathbb{P}\text{-a.s.}, \end{equation*}

which, combined with (4.11), implies (4.10). Equivalently, we only need to prove that

(4.12) \begin{equation} \sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)-F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta) \lt \infty, \quad \mathbb{P}\text{-a.s.} \end{equation}

By the same argument as in the proof of the first part, we have

\begin{equation*} \begin{aligned} &\sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)-F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta)\\ \leq&\sum_{n=1}^{\infty}\frac{\mathrm{E}\left[3A_{1}\big(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1\big)+A_{2}\big(\xi_{n+1},\log c+\zeta^{(\beta)}_{n}\big)\right]}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta). \end{aligned} \end{equation*}

Taking the expectation on both sides, we get

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)-F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta)\right\}\\ \leq&\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{\mathrm{E}\left[3A_{1}\big(\xi_{n+1},\zeta^{(\beta)}_{n}+\beta-1\big)+A_{2}\big(\xi_{n+1},\log c+\zeta^{(\beta)}_{n}\big)\right]}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta)\right\}. \end{aligned} \end{equation*}

Then, by Lemma 2, we have

\begin{equation*} \begin{aligned} &\mathbb{E}\left\{\sum_{n=1}^{\infty}\frac{F_{1}\big(\log c+\zeta^{(\beta)}_{n}\big)-F\big(\zeta^{(\beta)}_{n}+\beta-1,\log c+\zeta^{(\beta)}_{n}\big)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}U(\xi,\beta)\right\}\\ \leq&\int_{-\beta}^{\infty}\left[3\mathrm{E}\left(A_{1}\left(\xi_{n+1},x+\beta-1\right)\right)+\mathrm{E}\left(A_{2}\left(\xi_{n+1},\log c+x\right)\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x). \end{aligned} \end{equation*}

We now turn to proving the finiteness of the above two integrals, which completes the proof of (4.12). For the first integral, by (4.9) and (2.6), we obtain

\begin{align*} \int_{-\beta}^{\infty}\mathrm{E}\left[A_{1}\left(\xi_{n+1},x+\beta-1\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)=&\int_{-\beta}^{\infty}\mathbb{P}\left(S_{1} \lt -(x+\beta-1)\right)\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ \leq&c_{2}\int_{0}^{\infty}\mathbb{P}\left(S_{1} \lt -(x-1)\right)\,\mathrm{d}x\\ =&c_{2}\mathbb{E}\left[\int_{0}^{({-}S_{1}+1)_{+}}\,\mathrm{d}x\right]\\ =&c_{2}\mathbb{E}\left[({-}S_{1}+1)_{+}\right] \lt \infty. \end{align*}

For the second integral, by (2.6), we get

\begin{equation*} \begin{aligned} &\int_{-\beta}^{\infty}\mathrm{E}\left[A_{2}\left(\xi_{n+1},\log c+x\right)\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ =&\int_{-\beta}^{\infty}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)\geq \log c+x \gt \log \left(\tilde{Y}\left(\xi_{n+1}\right)/2\right)\right\}}\right]\mathcal{R}^{(\beta)}(\mathrm{d}x)\\ \leq&c_{2}\int_{0}^{\infty}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Y}\left(\xi_{n+1}\right)+\beta\geq \log c+x \gt \log \left(\tilde{Y}\left(\xi_{n+1}\right)/2\right)+\beta\right\}}\right]\mathrm{d}x\\ =&c_{2}\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\int_{\left(\log\left(\tilde{Y}(\xi_{n+1})/2c\right)+\beta\right)_{+}}^{\left(\log\left(\tilde{Y}(\xi_{n+1})/c\right)+\beta\right)_{+}}\,\mathrm{d}x\right]\\ \leq&c_{5}(\beta)\mathbb{E}\left[\tilde{Y}\left(\xi_{n+1}\right)\right] \lt \infty. \end{aligned} \end{equation*}

We conclude that (4.4) holds for the second case in (4.5).

Finally, we give the proof of (4.4) under the assumption (c) of (4.5). By (4.1) and Proposition 1(iv), under $\mathbb{P}_{\xi,\zeta^{(\beta)}_{n}}$ , we get

\begin{equation*} \begin{aligned} \tilde{X}=&\frac{\sum_{|u|=1}U\big(\theta^{n+1}\xi,\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\big(\zeta^{(\beta)}_{n}+\Delta V(u)+\beta\big)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ \geq&\frac{\sum_{|u|=1}\Delta V(u)\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\left\{\Delta V(u)\geq 0\right\}}}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\\ =&\frac{\tilde{Z}\left(\xi_{n+1}\right)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}. \end{aligned} \end{equation*}

Hence we just need to prove that, for any fixed $c\geq1$ ,

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}\left[\frac{\tilde{Z}\left(\xi_{n+1}\right)}{U\big(\theta^{n}\xi,\zeta^{(\beta)}_{n}+\beta\big)}\mathbf{1}_{\left\{\mathrm{e}^{-\zeta^{(\beta)}_{n}}\tilde{Z}\left(\xi_{n+1}\right)\geq c\right\}}\ \bigg|\ \zeta^{(\beta)}_{n}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.} \end{equation*}

Since, for almost all $\xi$ , $\zeta^{(\beta)}_{n}+\beta\to\infty$ $\mathbb{P}_{\xi}$ -a.s. as $n\to\infty$ , and by (2.3), this is equivalent to

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{E}\left[\frac{\tilde{Z}\left(\xi_{n+1}\right)}{\zeta^{(\beta)}_{n}+\beta+1}\mathbf{1}_{\left\{\log \tilde{Z}\left(\xi_{n+1}\right)\geq \log c+\zeta^{(\beta)}_{n}\right\}}\ \bigg|\ \zeta^{(\beta)}_{n}\right]U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation*}

which can be written as

(4.13) \begin{equation} \sum_{n=1}^{\infty}F_{3}\big(\zeta^{(\beta)}_{n}\big)U(\xi,\beta)=\infty, \quad \mathbb{P}\text{-a.s.}, \end{equation}

where

$$F_{3}\left(x\right)\,:\!=\,\frac{\mathbb{E}\left[\tilde{Z}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Z}\left(\xi_{n+1}\right)\geq \log c+x\right\}}\right]}{x+\beta+1}, \quad x\geq -\beta.$$

It is obvious that $F_{3}(x)$ is non-increasing and

$$0\leq F_{3}\left(x\right)=\frac{\mathbb{E}\left[\tilde{Z}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Z}\left(\xi_{n+1}\right)\geq \log c+x\right\}}\right]}{x+\beta+1}\leq\mathbb{E}(Z)=\mathbb{E}\left[(S_{1})_{+}\right] \lt \infty.$$

Observe that, by the assumption (c) of (4.5),

\begin{equation*} \begin{aligned} \int_{-\beta}^{\infty}F_{3}\left(x\right)(x+\beta)\,\mathrm{d}x&\geq\int_{1}^{\infty}\frac{\mathbb{E}\left[\tilde{Z}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Z}\left(\xi_{n+1}\right)\geq\log c+x\right\}}\right]}{x+\beta+1}(x+\beta)\,\mathrm{d}x\\ &\geq\frac{1}{2}\int_{1}^{\infty}\mathbb{E}\left[\tilde{Z}\left(\xi_{n+1}\right)\mathbf{1}_{\left\{\log \tilde{Z}\left(\xi_{n+1}\right)\geq \log c+x\right\}}\right]\,\mathrm{d}x\\ &=\frac{1}{2}\mathbb{E}\left[\tilde{Z}\left(\xi_{n+1}\right)\big(\log(\tilde{Z}\left(\xi_{n+1}\right)/c)-1\big)_{+}\right]=\infty. \end{aligned} \end{equation*}

By Proposition 4, it follows that (4.13) is valid. Therefore, we conclude that (4.4) holds for the third case in (4.5).

Proof of the necessary condition of Theorem 1(ii). By Lemma 6, for almost all $\xi$ , $D^{(\beta)}_{\infty}=0$ $\mathbb{P}_{\xi}$ -a.s. for all $\beta\geq0$ when (1.4) does not hold. Therefore, using Lemma 4(iii), we obtain that $D_{\infty}$ is degenerate for almost all $\xi$ , which completes the proof of necessity.

Appendix A. Proof of the results in Section 3

Proof of Lemma 3. For any $v\in\mathbb{T} \backslash \left\{\varnothing\right\}$, we denote by $\overleftarrow{v}$ its parent. By the branching property, the many-to-one lemma, and the quenched harmonic property of $U$, we have

\begin{equation*} \begin{aligned} &\mathbb{E}_{\xi,a}\left[D^{(\beta)}_{n+1}\mid\mathcal{F}_{n}\right]\\ =&\mathbb{E}_{\xi,a}\left[\sum_{|u|=n}\ \sum_{|v|=n+1:\overleftarrow{v}=u}U\left(\theta^{n+1}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}}\mathbf{1}_{\left\{V(v)\geq-\beta\right\}}\ \bigg|\ \mathcal{F}_{n}\right]\\ =&\sum_{|u|=n}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}} \mathbb{E}_{\xi,V(u)}\left[\sum_{|v|=1}U\left(\theta^{n+1}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{V(v)\geq-\beta\right\}}\right]\\ =&\sum_{|u|=n}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}}\mathrm{e}^{-V(u)} \mathbb{E}_{\xi,V(u)}\left[U\left(\theta^{n+1}\xi,S_{1}+\beta\right)\mathbf{1}_{\left\{S_{1}\geq-\beta\right\}}\right]\\ =&\sum_{|u|=n}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}}\mathrm{e}^{-V(u)}U\left(\theta^{n}\xi,V(u)+\beta\right)\\ =&D^{(\beta)}_{n}. \end{aligned} \end{equation*}

It follows that $\left(D^{(\beta)}_{n},n\geq0\right)$ is a non-negative martingale under $\mathbb{P}_{\xi,a}$ and $\mathbb{P}_{a}$ . By the martingale convergence theorem, we have the almost sure convergence of $D^{(\beta)}_{n}$ .
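As an aside, the one-step many-to-one identity used above (and again in (4.9)) can be checked numerically in a toy homogeneous example. The sketch below (in Python; the offspring law and all names are our assumptions, unrelated to the BRWRE of the paper) uses exactly two children with i.i.d. displacements $\log 2+\mathcal{N}(1/2,1)$, so that $\mathbb{E}\big[\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\big]=1$ and the associated step law works out to $\mathcal{N}\big(\log 2-\tfrac12,1\big)$; the Monte Carlo estimate of $\mathbb{E}\big[\sum_{|u|=1}\mathrm{e}^{-\Delta V(u)}\mathbf{1}_{\{\Delta V(u)\lt-x\}}\big]$ should then match $\mathbb{P}(S_{1}\lt-x)$.

```python
import math
import random

MU = math.log(2.0) + 0.5          # mean displacement of each child (toy assumption)
STEP_MEAN = math.log(2.0) - 0.5   # mean of the associated walk step for this toy model

def lhs_mc(x, trials=200_000, seed=0):
    """Monte Carlo estimate of E[ sum_{|u|=1} e^{-Delta V(u)} 1{Delta V(u) < -x} ]
    for the toy model: two children, displacements i.i.d. N(MU, 1)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        for _ in range(2):
            d = rng.gauss(MU, 1.0)
            if d < -x:
                acc += math.exp(-d)
    return acc / trials

def rhs_exact(x):
    """P(S_1 < -x) with S_1 ~ N(log 2 - 1/2, 1), via the Gaussian CDF."""
    z = -x - STEP_MEAN
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for x in (0.0, 0.5, 1.0):
    print(x, round(lhs_mc(x), 4), round(rhs_exact(x), 4))   # the two estimates should approximately agree
```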

Proof of Lemma 4.

  1. (i) By Theorem 7.1 of Biggins and Kyprianou [Reference Biggins and Kyprianou7], we have $W_{n}\to0$ $\mathbb{P}\text{-a.s.}$ as $n\to\infty$ . Since $\mathrm{e}^{-\inf_{|u|=n}V(u)}\leq W_{n}$ , it follows that

    $$\inf_{|u|=n}V(u)\to\infty,\quad\quad \inf_{u\in\mathbb{T}}V(u) \gt -\infty,\quad \mathbb{P}\text{-a.s.}$$
    Hence, for any $\epsilon \gt 0$ , there exists $\beta\,:\!=\,\beta(\epsilon)$ such that
    $$\mathbb{P}\Big(\inf_{u\in\mathbb{T}}V(u)\geq-\beta\Big)\geq1-\epsilon.$$
    On the one hand, by Lemma 3,
    $$D^{(\beta)}_{n}=\sum_{|u|=n}U\left(\theta^{n}\xi,V(u)+\beta\right)\mathrm{e}^{-V(u)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(u_{k})\geq-\beta\right\}}\to D^{(\beta)}_{\infty},\quad \mathbb{P}\text{-a.s.}$$
    On the other hand, on the event $\left\{\inf_{u\in\mathbb{T}}V(u)\geq-\beta\right\}$, we have, by (2.3),
    $$D^{(\beta)}_{n}=\sum_{|u|=n}U\left(\theta^{n}\xi,V(u)+\beta\right)\mathrm{e}^{-V(u)}\sim\sum_{|u|=n}\left(V(u)+\beta\right)\mathrm{e}^{-V(u)}=D_{n}+\beta W_{n},\quad \mathbb{P}\text{-a.s.}$$
    Since $W_{n}\to0$ $\mathbb{P}\text{-a.s.}$, it follows that, with probability at least $1-\epsilon$, $D_{n}$ converges to a non-negative finite limit. Letting $\epsilon\to0$ (equivalently, $\beta\to\infty$) yields the $\mathbb{P}\text{-a.s.}$ convergence of $D_{n}$.
  2. (ii) If there exists $\beta\geq0$ such that $D^{(\beta)}_{n}$ converges in $L^{1}(\mathbb{P}_{\xi})$ for almost all $\xi$ , then we have $\mathbb{E}_{\xi}\big(D^{(\beta)}_{\infty}\big)=U(\xi,\beta) \gt 0$ , P-a.s.; in particular, $\mathbb{P}_{\xi}\big(D^{(\beta)}_{\infty} \gt 0\big) \gt 0$ , P-a.s. Since $D^{(\beta)}_{n}$ is non-decreasing in $\beta$ , we deduce by (i) that $\mathbb{P}_{\xi}(D_{\infty} \gt 0) \gt 0$ , P-a.s.

  3. (iii) If, for almost all $\xi$ , $D^{(\beta)}_{\infty}=0$ $\mathbb{P}_{\xi}$ -a.s. for all $\beta\geq0$ , then, by (i) again, we have $\mathbb{P}_{\xi}\left(D_{\infty}=0\right)=1$ P-a.s.

Proof of Proposition 5. To describe the probabilities $\mathbb{P}_{\xi,a}$ , $\mathbb{Q}^{(\beta)}_{\xi,a}$ , and $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ , we use the Ulam–Harris–Neveu notation to encode the genealogical tree $\mathbb{T}$ with $\mathcal{U}\,:\!=\,\cup_{k=1}^{\infty}\left(\mathbb{N}^*\right)^k\cup\left\{\varnothing\right\}$ , where $\mathbb{N}^*\,:\!=\,\left\{1,2,\cdots\right\}$ . The vertices of the tree are labeled by their line of descent. For example, the vertex $u=k_{1}\cdots k_{n}$ means the $k_{n}$ th child of $\cdots$ of the $k_{1}$ th child of the initial vertex $\varnothing$ . Given two strings u and v, we write uv for the concatenated string. We refer to Section 1.1 of Mallein [Reference Mallein25] for a rigorous presentation of the time-inhomogeneous branching random walk. Let $\left(g_{u},u\in\mathcal{U}\right)$ be a family of non-negative measurable functions. By the standard argument for the measure extension theorem, it suffices to prove that for any $n\geq0$ ,

$$\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]=\mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right],$$

where $\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}$ and $\mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}$ denote the corresponding expectations of $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ and $\mathbb{Q}^{(\beta)}_{\xi,a}$ , respectively. That is, by (3.1) (the definition of $\mathbb{Q}^{(\beta)}_{\xi,a}$ ),

(A.1) \begin{equation} \mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]=\mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]. \end{equation}

Let us define

$$D^{(\beta)}_{n}(v)\,:\!=\,U\left(\theta^{n}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(v_{k})\geq-\beta\right\}}, \quad n\geq1,$$

and $D^{(\beta)}_{0}(\varnothing)\,:\!=\,U\left(\xi,a+\beta\right)\mathrm{e}^{-a}$ if $V\left(\varnothing\right)=a$ . Clearly, $D^{(\beta)}_{n}=\sum_{|v|=n}D^{(\beta)}_{n}(v)$ , $n\geq0$ . We claim that for any $v\in\mathcal{U}$ with $|v|=n$ ,

(A.2) \begin{equation} \mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]=\mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}(v)}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right], \end{equation}

which implies (A.1) if we sum over $|v|=n$ .

We turn to the proof of (A.2). We introduce some notation for our statement. For any $u\in\mathcal{U}$ , we denote by $\underrightarrow{u}$ its children and by $\mathbb{T}_{u}$ the subtree rooted at u. We write $\left\{\varnothing\rightsquigarrow u\right\}\,:\!=\,\left\{\varnothing,u_{1},\cdots,u_{|u|}\right\}$ for the set of vertices in the unique shortest path connecting $\varnothing$ to u. Decomposing the product $\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)$ along the path $\left\{\varnothing\rightsquigarrow v\right\}$ , we can write (A.2) as

(A.3) \begin{equation} \begin{aligned} &\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}\prod_{i=0}^{n}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right]\\ =&\mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}(v)}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\prod_{i=0}^{n}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right], \end{aligned} \end{equation}

where $h_{u}(\xi,{\cdot})\,:\!=\,\mathbb{E}_{\theta^{|u|}\xi}\left[\prod_{z\in\mathbb{T}_{u}}g_{uz}\left(\xi,{\cdot}+V(z)\right)\mathbf{1}_{\left\{|z|\leq n-|u|\right\}}\right]$ and $\underrightarrow{v_{i-1}} \backslash v_{i}$ means the set of the siblings of $v_{i}$ .

Now we prove (A.3) by induction. The equation obviously holds for $n=0$ . Assume that it holds for $n-1$ ; we need to show that it is true for n. Let us introduce the filtration

$$\mathcal{G}^{(\beta)}_{n}\,:\!=\,\sigma\Big(w^{(\beta)}_{i},V\big(w^{(\beta)}_{i}\big),0\leq i\leq n\Big)\vee \sigma\Big(\underrightarrow{w^{(\beta)}_{i}},V\big(\underrightarrow{w^{(\beta)}_{i}}\big),0\leq i \lt n\Big),$$

which contains all the information of the spine and its siblings. By the construction of $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$, on the event $\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}$, the probability of the event $\left\{w^{(\beta)}_{n}=v\right\}$ is

$$\frac{D^{(\beta)}_{n}(v)}{D^{(\beta)}_{n}(v)+\sum_{u\in \underrightarrow{v_{n-1}} \backslash v}D^{(\beta)}_{n}(u)},$$

and the point process generated by $w^{(\beta)}_{n-1}=v_{n-1}$ under $\hat{\mathbb{P}}^{(\beta)}_{\xi,a}$ has Radon–Nikodym derivative

$$\frac{D^{(\beta)}_{n}(v)+\sum_{u\in \underrightarrow{v_{n-1}} \backslash v}D^{(\beta)}_{n}(u)}{D^{(\beta)}_{n-1}(v_{n-1})}$$

with respect to the point process generated by $v_{n-1}$ under $\mathbb{P}_{\xi,a}$ . As a result, for the nth term $\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}g_{v}\left(\xi,V(v)\right)\prod_{u\in \underrightarrow{v_{n-1}} \backslash v}h_{u}\left(\xi,V(u)\right)$ in the product inside the left-hand side of (A.3), conditioned on $\mathcal{G}^{(\beta)}_{n-1}$ , we have

\begin{equation*} \begin{aligned} &\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}g_{v}\left(\xi,V(v)\right)\prod_{u\in \underrightarrow{v_{n-1}} \backslash v}h_{u}\left(\xi,V(u)\right)\bigg| \mathcal{G}^{(\beta)}_{n-1}\right]\\ =&\mathbf{1}_{\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}}\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\frac{D^{(\beta)}_{n}(v)g_{v}\left(\xi,V(v)\right)}{D^{(\beta)}_{n}(v)+\sum_{u\in \underrightarrow{v_{n-1}} \backslash v}D^{(\beta)}_{n}(u)}\prod_{u\in \underrightarrow{v_{n-1}} \backslash v}h_{u}\left(\xi,V(u)\right)\ \bigg|\ \mathcal{G}^{(\beta)}_{n-1}\right]\\ =&\mathbf{1}_{\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}}\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\frac{D^{(\beta)}_{n}(v)g_{v}\left(\xi,V(v)\right)}{D^{(\beta)}_{n}(v)+\sum_{u\in \underrightarrow{v_{n-1}} \backslash v}D^{(\beta)}_{n}(u)}\prod_{u\in \underrightarrow{v_{n-1}} \backslash v}h_{u}\left(\xi,V(u)\right)\bigg| \big(w^{(\beta)}_{n-1},V(w^{(\beta)}_{n-1})\big)\right]\\ =&\mathbf{1}_{\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}}\mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}(v)g_{v}\left(\xi,V(v)\right)}{D^{(\beta)}_{n-1}(v_{n-1})}\prod_{u\in \underrightarrow{v_{n-1}} \backslash v}h_{u}\left(\xi,V(u)\right)\bigg| V\left(v_{n-1}\right)\right]\\ =&:\mathbf{1}_{\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}}f\left(\xi,V\left(v_{n-1}\right)\right). \end{aligned} \end{equation*}

It follows from the above expression and the inductive hypothesis that

\begin{align*} &\mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}\prod_{i=0}^{n}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right]\\ & = \mathbb{E}_{\hat{\mathbb{P}}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n-1}=v_{n-1}\right\}}f\left(\xi,V\left(v_{n-1}\right)\right)\prod_{i=0}^{n-1}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right]\\ & = \mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n-1}(v_{n-1})}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}f\left(\xi,V\left(v_{n-1}\right)\right)\prod_{i=0}^{n-1}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right]\\ & = \mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}(v)}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\prod_{i=0}^{n}g_{v_{i}}\left(\xi,V(v_{i})\right)\prod_{u\in \underrightarrow{v_{i-1}} \backslash v_{i}}h_{u}\left(\xi,V(u)\right)\right]. \end{align*}

This proves (A.3) and hence completes the proof of Proposition 5.

Proof of Proposition 6.

  1. (i) Recalling (A.2) in the proof of Proposition 5 and (3.1) (the definition of $\mathbb{Q}^{(\beta)}_{\xi,a}$ ), we have

    \begin{equation*} \begin{aligned} \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]=&\mathbb{E}_{\xi,a}\left[\frac{D^{(\beta)}_{n}(v)}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right]\\ =&\mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\frac{D^{(\beta)}_{n}(v)}{D^{(\beta)}_{n}}\prod_{|u|\leq n}g_{u}\left(\xi,V(u)\right)\right], \end{aligned} \end{equation*}
    which implies
    $$\mathbb{Q}^{(\beta)}_{\xi,a}\Big(w^{(\beta)}_{n}=v\ \big|\ \mathcal{F}_{n}\Big)=\frac{D^{(\beta)}_{n}(v)}{D^{(\beta)}_{n}}.$$
    This proves the first part of the proposition if we recall the definition of $D^{(\beta)}_{n}(v)$ .
  2. (ii) For all n and any measurable function $f:\mathbb{R}^{n+1}\to\mathbb{R}_{+}$ , by Part (i), we have

    \begin{align*} & \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[f\big(V(w^{(\beta)}_{0}), \cdots, V(w^{(\beta)}_{n})\big)\right]\\ = & \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\sum_{|v|=n}f\left(V(v_{0}), \cdots, V(v_{n})\right)\mathbf{1}_{\left\{w^{(\beta)}_{n}=v\right\}}\right]\\ = & \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\sum_{|v|=n}f\left(V(v_{0}), \cdots, V(v_{n})\right)\frac{U\left(\theta^{n}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(v_{k})\geq-\beta\right\}}}{D^{(\beta)}_{n}}\right]. \end{align*}
    Then, from the definition of $\mathbb{Q}^{(\beta)}_{\xi,a}$ and the many-to-one lemma, we obtain
    \begin{align*} & \mathbb{E}_{\mathbb{Q}^{(\beta)}_{\xi,a}}\left[\sum_{|v|=n}f\left(V(v_{0}), \cdots, V(v_{n})\right)\frac{U\left(\theta^{n}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(v_{k})\geq-\beta\right\}}}{D^{(\beta)}_{n}}\right]\\ = & \mathbb{E}_{\xi,a}\left[\sum_{|v|=n}f\left(V(v_{0}), \cdots, V(v_{n})\right)\frac{U\left(\theta^{n}\xi,V(v)+\beta\right)\mathrm{e}^{-V(v)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}V(v_{k})\geq-\beta\right\}}}{U\left(\xi,a+\beta\right)\mathrm{e}^{-a}}\right]\\ =&\mathbb{E}_{\xi, a}\left[f\left(S_{0}, \cdots, S_{n}\right)\frac{U(\theta^{n}\xi,S_{n}+\beta)}{U(\xi,a+\beta)}\mathbf{1}_{\left\{\min_{0\leq k\leq n}S_{k}\geq-\beta\right\}}\right]. \end{align*}
    This completes the second part of the proposition.
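In the simplest homogeneous case, the identity in part (ii) is the familiar Doob $h$-transform description of a random walk conditioned to stay non-negative: taking $\beta=0$ and letting $S$ be the symmetric simple random walk, the role of $U(\xi,\cdot)$ is played by the function $h(x)=x+1$, which is harmonic for the walk killed upon entering the negative half-line, and the right-hand side becomes $\mathbb{E}_{a}\big[f(S_{0},\dots,S_{n})\frac{h(S_{n})}{h(a)}\mathbf{1}_{\{\min_{0\leq k\leq n}S_{k}\geq0\}}\big]$. The sketch below is a minimal Monte Carlo check under these toy assumptions (all function names and parameters are ours, not the paper's): the first estimator weights unconditioned paths as above, the second samples the $h$-transform chain directly.

\begin{verbatim}
import random

# Toy numerical check of the identity in part (ii) for the symmetric simple
# random walk with beta = 0, where the renewal-type function is h(x) = x + 1.
# All function names and parameter values are illustrative.

def h(x):
    return x + 1.0 if x >= 0 else 0.0

def weighted_expectation(f, a, n, reps=200_000):
    """E_a[ f(S_0,...,S_n) * h(S_n)/h(a) * 1{min_k S_k >= 0} ] by Monte Carlo."""
    total = 0.0
    for _ in range(reps):
        path = [a]
        ok = a >= 0
        for _ in range(n):
            path.append(path[-1] + random.choice((-1, 1)))
            ok = ok and path[-1] >= 0
        if ok:
            total += f(path) * h(path[-1]) / h(a)
    return total / reps

def h_transform_expectation(f, a, n, reps=200_000):
    """E_Q[ f(w_0,...,w_n) ] by sampling the Doob h-transform walk, whose
    one-step law from x >= 0 is p(x, x+1) = h(x+1)/(2h(x)) and
    p(x, x-1) = h(x-1)/(2h(x)); the walk never enters the negative half-line."""
    total = 0.0
    for _ in range(reps):
        path = [a]
        for _ in range(n):
            x = path[-1]
            p_up = h(x + 1) / (2.0 * h(x))
            path.append(x + 1 if random.random() < p_up else x - 1)
        total += f(path)
    return total / reps

if __name__ == "__main__":
    f = lambda path: float(max(path) >= 3)     # a bounded path functional
    a, n = 1, 10
    print(weighted_expectation(f, a, n))       # the two estimates should
    print(h_transform_expectation(f, a, n))    # agree up to MC error
\end{verbatim}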

Acknowledgements

We would like to thank the anonymous referees for their valuable comments and suggestions, which improved the original manuscript.

Funding information

This work was supported in part by the National Natural Science Foundation of China (No. 11971062) and the National Key Research and Development Program of China (No. 2020YFA0712900).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.
