
Differences between Lyapunov exponents for the simple random walk in Bernoulli potentials

Published online by Cambridge University Press:  23 June 2023

Naoki Kubota*
Affiliation:
College of Science and Technology, Nihon University
*Postal address: 24-1, Narashinodai 7-chome, Funabashi-shi, Chiba 274-8501, Japan. Email address: kubota.naoki08@nihon-u.ac.jp

Abstract

We consider the simple random walk on the d-dimensional lattice $\mathbb{Z}^d$ ($d \geq 1$), traveling in a Bernoulli-distributed potential. The so-called Lyapunov exponent describes the cost of traveling for the simple random walk in the potential, and it is known that the Lyapunov exponent is strictly monotone in the parameter of the Bernoulli distribution. The aim of this paper is therefore to quantify the effect of the potential on the Lyapunov exponent more precisely: we derive Lipschitz-type estimates for the difference between the Lyapunov exponents at two parameters.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In this paper we consider the simple random walk in independent and identically distributed (i.i.d.) non-negative potentials on the d-dimensional lattice $\mathbb{Z}^d$ ( $d \geq 1$ ). A central object of study is the so-called Lyapunov exponent, which measures the cost paid by the simple random walk for traveling through a landscape of potentials. In [Reference Kubota10] we proved that the Lyapunov exponent is strictly increasing in the law of the potential with respect to a certain order. In view of this result, a natural question is how much a change in the law of the potential affects the Lyapunov exponent. The goal of this paper is to investigate this question in the case where the potential is Bernoulli-distributed.

1.1. The model

Let $\mathbb{S}=(S_k)_{k=0}^\infty$ be the simple random walk on $\mathbb{Z}^d$ . For $x \in \mathbb{Z}^d$ , we write $P^x$ and $E^x$ , respectively, for the law of the simple random walk starting at x and its associated expectation. Independently of $\mathbb{S}$ , let $\omega=(\omega(x))_{x \in \mathbb{Z}^d}$ be a family of i.i.d. random variables taking values in $[0,\infty)$ , and $\omega$ is called the potential. Let $\mathbb{P}$ and $\mathbb{E}$ , respectively, denote the law of the potential $\omega$ and its associated expectation.

Let us now introduce the Lyapunov exponent, which is the main object of study in the present article. For any $y \in \mathbb{Z}^d$ , H(y) stands for the hitting time of the simple random walk to y, that is,

\begin{align*}H(y)\;:\!=\; \inf\{ k \geq 0\;:\; S_k=y \}. \end{align*}

Moreover, for $x,y \in \mathbb{Z}^d$ define

\begin{align*}e(x,y,\omega)\;:\!=\; E^x\left[ \exp\!\left\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \right\} \mathbf{1}_{\{ H(y)<\infty \}} \right],\end{align*}

with the convention that $e(x,y,\omega)\;:\!=\; 1$ if $x=y$ . Then the following quantities $a(x,y,\omega)$ and b(x, y), respectively, are called the quenched and annealed travel costs from x to y:

\begin{align*}a(x,y,\omega)\;:\!=\; -\log e(x,y,\omega)\end{align*}

and

\begin{align*}b(x,y)\;:\!=\; -\log\mathbb{E}[e(x,y,\omega)].\end{align*}
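To make these definitions concrete, the quenched quantities can be estimated by direct simulation. The sketch below is an illustration only (not part of the paper): it takes $d=1$, $y=1$, and the constant potential $\omega \equiv 1$, for which a bounded-solution ansatz for the recursion $e(x)={\text{e}}^{-1}\{e(x-1)+e(x+1)\}/2$ gives the exact value $e(0,1,\omega)={\text{e}}-\sqrt{{\text{e}}^2-1}\approx 0.1906$. Paths truncated at the step cap carry weight at most ${\text{e}}^{-\text{cap}}$ and are dropped.

```python
import math
import random

def estimate_e(y=1, omega_val=1.0, n_paths=20000, step_cap=200, seed=0):
    """Monte Carlo estimate of e(0, y, omega) in d = 1 for the constant
    potential omega(x) = omega_val.  Paths not hitting y within step_cap
    steps carry weight <= exp(-step_cap) and are dropped."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, cost = 0, 0.0
        for _ in range(step_cap):
            if x == y:
                break
            cost += omega_val          # potential collected at S_k for k < H(y)
            x += rng.choice((-1, 1))   # simple random walk step
        if x == y:
            total += math.exp(-cost)
    return total / n_paths

est = estimate_e()
exact = math.e - math.sqrt(math.e ** 2 - 1)  # bounded-solution ansatz, d = 1
a_cost = -math.log(est)                      # quenched travel cost a(0, 1, omega)
```

Here `a_cost` approximates $a(0,1,\omega) = -\log e(0,1,\omega) \approx 1.66$.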

The asymptotic behavior of each travel cost induces a norm on $\mathbb{R}^d$ , called the Lyapunov exponent. More precisely, Flury [Reference Flury4, Theorem A], Mourrat [Reference Mourrat13, Theorem 1.1], and Zerner [Reference Zerner18, Proposition 4] obtained the following result: there exist (non-random) norms $\alpha(\!\cdot\!)$ and $\beta(\!\cdot\!)$ on $\mathbb{R}^d$ such that for all $x \in \mathbb{Z}^d$ ,

\begin{align*}\lim_{n \to \infty} \dfrac{1}{n}a(0,nx,\omega)&= \lim_{n \to \infty} \dfrac{1}{n}\mathbb{E}[a(0,nx,\omega)]\\[5pt] &= \inf_{n \in \mathbb{N}} \dfrac{1}{n}\mathbb{E}[a(0,nx,\omega)]= \alpha(x) \quad \text{$\mathbb{P}$-a.s. and in $L^1(\mathbb{P})$}\end{align*}

and

\begin{align*}\lim_{n \to \infty} \dfrac{1}{n}b(0,nx)= \inf_{n \in \mathbb{N}} \dfrac{1}{n} b(0,nx)= \beta(x).\end{align*}

We call $\alpha(\!\cdot\!)$ and $\beta(\!\cdot\!)$ , respectively, the quenched and annealed Lyapunov exponents.
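The identity between the limit and the infimum in the two displays above reflects subadditivity. Indeed, the travel costs satisfy a triangle-type inequality (established in the works cited above), and for any sequence $(u_n)_{n \geq 1}$ with $u_0\;:\!=\;0$ and $u_{m+n} \leq u_m+u_n$ for all $m,n \in \mathbb{N}$, Fekete's subadditive lemma gives

\begin{align*}\lim_{n \to \infty} \dfrac{u_n}{n}= \inf_{n \in \mathbb{N}} \dfrac{u_n}{n}.\end{align*}

To see this, fix m and write $n=km+s$ with $0 \leq s<m$; then $u_n \leq ku_m+u_s$, and hence $\limsup_{n \to \infty} u_n/n \leq u_m/m$ for every m.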

Remark 1.1. By definition, the annealed travel cost b(x, y) and the Lyapunov exponents $\alpha(\!\cdot\!)$ and $\beta(\!\cdot\!)$ depend on the distribution function of $\omega(0)$ , say $\phi$ . From now on, we basically put a subscript $\phi$ on the above notations: $b(x,y)=b_\phi(x,y)$ , $\alpha(\!\cdot\!)=\alpha_\phi(\!\cdot\!)$ , and $\beta(\!\cdot\!)=\beta_\phi(\!\cdot\!)$ .

1.2. Main results

First of all, we state the motivation for the present work. Let F and G be distribution functions on $[0,\infty)$ such that $F \leq G$ , $F(0)<G(0)$ , and $\int_0^\infty t\,{\text{d}} F(t)$ is finite. Then Theorems 1.4 and 1.5 of [Reference Kubota10] give the following strict comparisons for the quenched and annealed Lyapunov exponents. There exist constants $0<C,C'<\infty$ (which may depend on d, F, and G) such that for all $x \in \mathbb{R}^d \setminus \{0\}$ ,

(1.1) \begin{align}\alpha_F(x)-\alpha_G(x) \geq C\|x\|_1 \quad \text{and}\quad \beta_F(x)-\beta_G(x) \geq C'\|x\|_1,\end{align}

where $\|\!\cdot\!\|_1$ is the $\ell^1$ -norm on $\mathbb{R}^d$ . The above inequalities do not provide any information on how much the differences in (1.1) are affected by F and G. Hence the goal of this paper is to estimate the differences in (1.1) more precisely by focusing on Bernoulli-distributed potentials.

Let $0 \leq r \leq 1$ and let $\mathbb{P}_r$ be the law of the potential $\omega$ defined by

\begin{align*} \mathbb{P}_r(\omega(x)=0)=1-\mathbb{P}_r(\omega(x)=1)=r,\quad x \in \mathbb{Z}^d.\end{align*}

In this situation, we call $\omega$ the Bernoulli potential with parameter r. Moreover, write $F_r$ and $\mathbb{E}_r$ , respectively, for the distribution function and the expectation with respect to $\mathbb{P}_r$ . To shorten notation, set

\begin{align*}b_r(x,y)\;:\!=\; b_{F_r}(x,y),\quad \alpha_r(\!\cdot\!)\;:\!=\; \alpha_{F_r}(\!\cdot\!),\quad \beta_r(\!\cdot\!)\;:\!=\; \beta_{F_r}(\!\cdot\!).\end{align*}

Then the following two theorems are our main results; they estimate the differences between quenched Lyapunov exponents and between annealed Lyapunov exponents, respectively.

Theorem 1.1. We have the following lower and upper bounds for differences between quenched Lyapunov exponents.

(1) For all $0<p<q<1$ , we have

    \begin{align*}\inf_{x \in \mathbb{R}^d\setminus\{0\}}\dfrac{\alpha_p(x)-\alpha_q(x)}{\|x\|_1}\geq (1-{\text{e}}^{-1})(q-p).\end{align*}
(2) Let $0<q_0<1$ . Then there exists a constant $0<C_1<\infty$ (which may depend on d and $q_0$ ) such that for all $0<p<q \leq q_0$ ,

    \begin{align*}\sup_{x \in \mathbb{R}^d\setminus\{0\}} \dfrac{\alpha_p(x)-\alpha_q(x)}{\|x\|_1} \leq C_1(q-p).\end{align*}

Theorem 1.2. The following results hold for the annealed Lyapunov exponent.

(1) We obtain the statement of part (1) of Theorem 1.1 with $\alpha_p(\!\cdot\!)$ and $\alpha_q(\!\cdot\!)$ , respectively, replaced by $\beta_p(\!\cdot\!)$ and $\beta_q(\!\cdot\!)$ .

(2) Let $0<q_0<1$ . Then there exists a constant $0<C_2<\infty$ (which may depend on d and $q_0$ ) such that for all $0<p<q \leq q_0$ ,

    (1.2) \begin{align}\sup_{x \in \mathbb{R}^d \setminus \{0\}}\dfrac{\beta_p(x)-\beta_q(x)}{\|x\|_1} \leq C_2(\log q-\log p).\end{align}
    In particular, if $d \geq 3$ , then the right-hand side of (1.2) can be replaced by $C_2(q-p)$ .

Remark 1.2. Le [Reference Le12] obtained the (ordinary) continuity of the quenched and annealed Lyapunov exponents in the law of the potential. From this point of view, Theorems 1.1 and 1.2 provide stronger results than mere continuity in the case where the potential is Bernoulli-distributed. Moreover, Le [Reference Le12] sometimes requires the potential to be bounded away from zero in the low-dimensional case ( $d=1,2$ ). Since the Bernoulli potential is not bounded away from zero, Theorems 1.1 and 1.2 also establish continuity of the quenched and annealed Lyapunov exponents in cases not covered by [Reference Le12].

Remark 1.3. At first we believed that the right-hand side of (1.2) could easily be replaced by $C_2(q-p)$ in $d=1,2$ by the same argument as in the proof of part (2) of Theorem 1.1. Unfortunately, that argument does not work well, and this may suggest that in $d=1,2$ the upper and lower bounds stated in Theorem 1.2 are not of the same order. However, if the range of p and q is restricted to a closed interval $[p_0,q_0] \subset (0,1)$ , then the mean value theorem implies that there exists a constant $0<C_2'<\infty$ (which may depend on d, $p_0$ , and $q_0$ ) such that for all $p_0 \leq p<q \leq q_0$ , the right-hand side of (1.2) is bounded from above by $C_2'(q-p)$ in $d=1,2$ . Consequently, even for $d=1,2$ , in the case where $p_0 \leq p<q \leq q_0$ , the quantity $q-p$ governs both the upper and lower bounds for the difference between the annealed Lyapunov exponents.

Let us finally comment on earlier works related to the above results. Recently, the coincidence of the quenched and annealed Lyapunov exponents has been studied with affirmative results: Flury [Reference Flury5] and Zygouras [Reference Zygouras19] proved that the quenched and annealed Lyapunov exponents coincide for $d \geq 4$ in the low-disorder regime. Furthermore, Wang [Reference Wang15,Reference Wang16] and Kosygina et al. [Reference Kosygina, Mountford and Zerner9] studied the asymptotic behavior of the quenched and annealed Lyapunov exponents as the potential tends to zero.

On the other hand, this paper compares (quenched and annealed) Lyapunov exponents for different laws of the potential. As mentioned at the beginning of this subsection, a strict comparison between Lyapunov exponents was obtained in [Reference Kubota10], and the goal of this article is to estimate the differences between quenched and between annealed Lyapunov exponents more precisely when the law of the potential is restricted to the Bernoulli distribution. Even in the Bernoulli setting, it is not easy to analyze the Lyapunov exponent precisely; in fact, there is little research directly related to our topic. In [Reference Cerf and Dembin1], [Reference Dembin2], and [Reference Dembin3], Lipschitz continuity is obtained for the so-called time constant of first passage percolation on $\mathbb{Z}^d$ and for the isoperimetric constant of the supercritical percolation cluster (both counterparts of the Lyapunov exponent). The aim of [Reference Wu and Feng17] is to derive a (non-trivial) lower bound for the difference between time constants. This is a topic similar to part (1) of Theorem 1.1, and the key idea of our proof is in fact inspired by [Reference Wu and Feng17]. However, the counterpart of the travel cost used there takes only non-negative integer values, which is not always true of the travel cost considered here. Hence we have to take a different approach from that of [Reference Wu and Feng17]. Fortunately, this modified approach is also useful in the proof of part (2) of Theorem 1.1.

1.3. Organization of the paper

Let us describe how the present article is organized. In Section 2 we prove part (1) of Theorem 1.1, which gives a lower bound for the difference between quenched Lyapunov exponents. Note that the quenched Lyapunov exponent is described by the expectation of the quenched travel cost. Hence our main task is to estimate differences between expectations of quenched travel costs from below. For this purpose, we use Russo’s formula for the independent Bernoulli site percolation on $\mathbb{Z}^d$ . Russo’s formula enables us to differentiate the expectation for the quenched travel cost with respect to parameter r (see Lemma 2.1 below). Then a standard calculation shows that the obtained derivative is bounded from above uniformly in r. This implies our desired conclusion since the expectation of the quenched travel cost is decreasing in r.

Section 3 is devoted to the proof of part (2) of Theorem 1.1, which gives an upper bound for the difference between quenched Lyapunov exponents. Considering the proof of part (1) of Theorem 1.1, it suffices to estimate the derivative of the expectation for the quenched travel cost from below uniformly in parameter r. To this end, we divide the proof into two cases: Sections 3.1 and 3.2, respectively, treat the high-dimensional case ( $d \geq 3$ ) and the low-dimensional case ( $d=1,2$ ). In the high-dimensional case, the transience of the simple random walk works very effectively, and the derivative of the expectation for the quenched travel cost is bounded from below uniformly in r. On the other hand, the simple random walk is recurrent in the low-dimensional case. This is the reason why the proof is divided into two cases, and we need some more analysis to estimate the derivative from below uniformly in r. Actually, in the low-dimensional case, the derivative is related to the range of the simple random walk equipped with random connected subsets of $\mathbb{Z}^d$ . Hence our efforts are focused on estimating this range under the path measure defined as in (2.3). To this end, we rely on the analysis of lattice animals developed by Fontes and Newman [Reference Fontes and Newman6] and Scholler [Reference Scholler14]. Roughly speaking, this analysis guarantees that even if $d=1,2$ , the potential makes the simple random walk transient under the path measure. Therefore, similarly to the high-dimensional case, we can also estimate the derivative from below uniformly in r.

The aim of Section 4 is to prove Theorem 1.2, which gives both lower and upper bounds for the difference between annealed Lyapunov exponents. Similar to the above, our task here is to estimate the derivative of the annealed travel cost from above and below. By differentiating under the integral sign, the derivative can be written as the expectation of some random variable with respect to the path measure defined as in (4.1) (see Lemma 4.1 below). This random variable consists of the function $f(t)\;:\!=\; (1-t)/\{ r+t(1-r) \}$ , $0 \leq t \leq 1$ , and the property of f(t) implies that the derivative is bounded from above uniformly in r and has a lower bound of order $r^{-1}$ . In particular, in the high-dimensional case, the derivative can be bounded from below uniformly in r due to transience of the simple random walk (see Lemma 4.2 below).

In Section 5 we comment on the quenched and annealed large deviation principles for the simple random walk in the Bernoulli potential. It is known that their rate functions are described by the Lyapunov exponents for the potentials $\omega+\lambda=(\omega(x)+\lambda)_{x \in \mathbb{Z}^d}$ , $\lambda \geq 0$ . Furthermore, Theorems 1.1 and 1.2 are also established for these Lyapunov exponents since the same arguments as in Sections 2–4 work, and we can estimate differences between quenched and annealed rate functions.

We close this section with some general notation. Write $\|\!\cdot\!\|_1$ and $\|\!\cdot\!\|_\infty$ for the $\ell^1$ - and $\ell^\infty$ -norms on $\mathbb{R}^d$ . Throughout the paper, c, $c'$ , and $C_i$ , $i=1,2,\dots$ , denote some constants with $0<c,c',C_i<\infty$ .

2. Lower bound for the quenched Lyapunov exponent

The aim of this section is to prove part (1) of Theorem 1.1. The key tool here is Russo’s formula for the independent Bernoulli site percolation on $\mathbb{Z}^d$ . Roughly speaking, by applying Russo’s formula to some events with respect to the quenched travel cost, we derive a lower bound for the differences between quenched Lyapunov exponents. However, Russo’s formula is not directly applicable because the quenched travel cost depends on the states of infinitely many sites. To overcome this problem, we first introduce a modification of the quenched travel cost depending only on the states of finitely many sites.

Let V be a subset of $\mathbb{R}^d$ and let T(V) denote the exit time of the simple random walk from V, that is,

(2.1) \begin{align}T(V)\;:\!=\; \inf\{ k \geq 0\;:\; S_k \not\in V \}.\end{align}

Then, for any $x,y \in \mathbb{Z}^d$ and $N \in \mathbb{N}$ , we define the quenched travel cost $a_N(x,y,\omega)$ from x to y restricted to the simple random walk before exiting $[\!-\!N,N]^d$ as follows:

\begin{align*}a_N(x,y,\omega)\;:\!=\; -\log e_N(x,y,\omega),\end{align*}

where

\begin{align*}e_N(x,y,\omega)\;:\!=\; E^x\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\} \mathbf{1}_{\{ H(y)<T([-N,N]^d) \}} \Biggr]\end{align*}

if $x \not= y$ , and $e_N(x,y,\omega)\;:\!=\; 1$ otherwise. Note that $a_N(x,y,\omega)$ depends only on the states of finitely many sites, and the monotone convergence theorem shows that for each $0 \leq r \leq 1$ ,

(2.2) \begin{align}\lim_{N \to \infty} \mathbb{E}_r[a_N(x,y,\omega)]= \mathbb{E}_r[a(x,y,\omega)].\end{align}
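Since $a_N(x,y,\omega)$ depends on only finitely many sites, it can be computed exactly: for $x \neq y$ inside the box, $e_N$ satisfies the linear system $e_N(x) = {\text{e}}^{-\omega(x)} (2d)^{-1} \sum_{x' \sim x} e_N(x')$ with $e_N(y)=1$ and $e_N \equiv 0$ outside $[-N,N]^d$. The following sketch (illustrative only; the toy potential is an arbitrary choice, not from the paper) solves this system in $d=1$ and exhibits the monotonicity behind (2.2): $a_N$ is non-increasing in N.

```python
import math
import numpy as np

def a_N(N, y, omega):
    """Exact restricted quenched travel cost a_N(0, y, omega) in d = 1.
    omega maps site -> potential value; e_N solves the linear system
    e(x) = exp(-omega(x)) * (e(x-1) + e(x+1)) / 2 for x in [-N, N], x != y,
    with e(y) = 1 and e = 0 outside the box."""
    sites = [x for x in range(-N, N + 1) if x != y]
    idx = {x: i for i, x in enumerate(sites)}
    A = np.eye(len(sites))
    rhs = np.zeros(len(sites))
    for x in sites:
        w = math.exp(-omega(x)) / 2
        for nb in (x - 1, x + 1):
            if nb == y:
                rhs[idx[x]] += w          # neighbour is the target: e(y) = 1
            elif nb in idx:
                A[idx[x], idx[nb]] -= w   # interior neighbour
            # neighbours outside [-N, N] contribute 0 (the walk has exited)
    e_vals = np.linalg.solve(A, rhs)
    return -math.log(e_vals[idx[0]])

omega = lambda x: 1.0 if x % 3 == 0 else 0.0   # a toy deterministic potential
costs = [a_N(N, y=2, omega=omega) for N in (2, 4, 8, 16)]
# a_N decreases in N (e_N increases with the box), as used in (2.2)
```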

We need some preparation before applying Russo’s formula to the restricted quenched travel cost. For any $y \in \mathbb{Z}^d$ , the path measure $\widetilde{P}_{N,\omega}^{0,y}$ is defined by

(2.3) \begin{align}\dfrac{{\text{d}}\widetilde{P}_{N,\omega}^{0,y}}{{\text{d}} P^0}= e_N(0,y,\omega)^{-1} \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\}\mathbf{1}_{\{ H(y)<T([-N,N]^d) \}},\end{align}

and let $\widetilde{E}_{N,\omega}^{0,y}$ denote the expectation with respect to $\widetilde{P}_{N,\omega}^{0,y}$ . Next, for any $z \in \mathbb{Z}^d$ and $m \in \mathbb{N}$ , let $\ell_z(m)$ be the number of visits to z by the simple random walk up to time $m-1$ , that is,

\begin{align*}\ell_z(m)\;:\!=\; \#\{ 0 \leq k<m\;:\; S_k=z \}.\end{align*}

The following lemma is a consequence of Russo's formula for the restricted quenched travel cost, and it is also useful for deriving an upper bound for differences between quenched Lyapunov exponents.

Lemma 2.1. For all $0<r<1$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ , we have

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)]&= \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \bigl\{ \mathbb{E}_r \bigl[ \mathbf{1}_{\{ \omega(z)=1 \}} \log \widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{\ell_z(H(y))}\bigr] \bigr]\\&\quad +\mathbb{E}_r\bigl[ \mathbf{1}_{\{ \omega(z)=0 \}}\bigl( -\log \widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{-\ell_z(H(y))}\bigr] \bigr) \bigr] \bigr\}.\end{align*}

Proof. Fix $y \in \mathbb{Z}^d \setminus \{0\}$ and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ . Since the random variable $a_N(0,y,\omega)$ depends only on the states of sites in $[-N,N]^d$ , Russo’s formula (see e.g. [Reference Grimmett7, Theorem 2.32]) gives that for any $0<r<1$ ,

(2.4) \begin{align}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)]= \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbb{E}_r\bigl[ a_N\bigl(0,y,\omega_z^1\bigr)-a_N\bigl(0,y,\omega_z^0\bigr) \bigr],\end{align}

where

\begin{align*}\omega_z^1(x)\;:\!=\;\begin{cases}1 & \text{if $ x=z$,}\\\omega(x) & \text{if $ x \not=z$,}\end{cases}\quad\omega_z^0(x)\;:\!=\;\begin{cases}0 & \text{if $ x=z$,}\\\omega(x) & \text{if $ x \not=z$.}\end{cases}\end{align*}

Note that if $\omega(z)=1$ holds, then

\begin{align*}a_N\bigl(0,y,\omega_z^1\bigr)-a_N\bigl(0,y,\omega_z^0\bigr)= \log \widetilde{E}_{N,\omega}^{0,y}\bigl[ {\text{e}}^{\ell_z(H(y))} \bigr].\end{align*}

Furthermore, in the case where $\omega(z)=0$ ,

\begin{align*}a_N\bigl(0,y,\omega_z^1\bigr)-a_N\bigl(0,y,\omega_z^0\bigr)= -\log \widetilde{E}_{N,\omega}^{0,y}\bigl[ {\text{e}}^{-\ell_z(H(y))} \bigr].\end{align*}

This implies that the right-hand side of (2.4) is equal to

\begin{align*}&\sum_{z \in \mathbb{Z}^d \cap [-N,N]^d}\bigl\{ \mathbb{E}_r \bigl[ \mathbf{1}_{\{ \omega(z)=1 \}} \log \widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{\ell_z(H(y))}\bigr] \bigr] +\mathbb{E}_r\bigl[ \mathbf{1}_{\{ \omega(z)=0 \}}\bigl( -\log \widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{-\ell_z(H(y))}\bigr] \bigr) \bigr] \bigr\},\end{align*}

and the lemma follows.
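Russo's formula (2.4) can be checked numerically by exhaustive enumeration on a tiny box. The sketch below is a sanity check only (not part of the proof): in $d=1$ with $N=2$ and $y=1$, it compares the right-hand side of (2.4) with a central finite difference of $r \mapsto \mathbb{E}_r[a_N(0,y,\omega)]$, computing $a_N$ exactly from the linear system for $e_N$.

```python
import math
from itertools import product
import numpy as np

N, Y = 2, 1
BOX = list(range(-N, N + 1))

def a_N_exact(omega):
    """a_N(0, Y, omega) in d = 1: solve e(x) = exp(-omega[x])*(e(x-1)+e(x+1))/2
    for x in [-N, N], x != Y, with e(Y) = 1 and e = 0 outside the box."""
    sites = [x for x in BOX if x != Y]
    idx = {x: i for i, x in enumerate(sites)}
    A = np.eye(len(sites))
    b = np.zeros(len(sites))
    for x in sites:
        w = math.exp(-omega[x]) / 2
        for nb in (x - 1, x + 1):
            if nb == Y:
                b[idx[x]] += w
            elif nb in idx:
                A[idx[x], idx[nb]] -= w
    return -math.log(np.linalg.solve(A, b)[idx[0]])

def weight(vals, r):
    # under P_r each site is 0 with probability r and 1 with probability 1 - r
    return math.prod(r if v == 0 else 1 - r for v in vals)

def mean_a(r):
    """E_r[a_N(0, Y, omega)] by summing over all 2^5 configurations."""
    return sum(weight(vals, r) * a_N_exact(dict(zip(BOX, vals)))
               for vals in product((0, 1), repeat=len(BOX)))

def russo_rhs(r):
    """Sum over z of E_r[a_N(omega_z^1) - a_N(omega_z^0)], i.e. (2.4)."""
    total = 0.0
    for vals in product((0, 1), repeat=len(BOX)):
        w = weight(vals, r)
        for z in BOX:
            hi = dict(zip(BOX, vals)); hi[z] = 1
            lo = dict(zip(BOX, vals)); lo[z] = 0
            total += w * (a_N_exact(hi) - a_N_exact(lo))
    return total

r, h = 0.4, 1e-5
numeric = -(mean_a(r + h) - mean_a(r - h)) / (2 * h)   # -d/dr E_r[a_N]
rhs = russo_rhs(r)                                     # right-hand side of (2.4)
```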

We are now in a position to show part (1) of Theorem 1.1.

Proof of part (1) of Theorem 1.1. Let $y \in \mathbb{Z}^d \setminus \{0\}$ and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ . Jensen’s inequality and the fact that $-\log(1-t) \geq t$ holds for $0 \leq t<1$ imply that for any $z \in \mathbb{Z}^d \cap [-N,N]^d$ ,

\begin{align*}\log\widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{\ell_z(H(y))}\bigr]&\geq -\log\widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{-\ell_z(H(y))}\bigr]\\&\geq -\log\bigl\{ 1-(1-{\text{e}}^{-1})\widetilde{P}_{N,\omega}^{0,y}(\ell_z(H(y)) \geq 1) \bigr\}\\&\geq (1-{\text{e}}^{-1})\widetilde{P}_{N,\omega}^{0,y}(\ell_z(H(y)) \geq 1).\end{align*}

Hence Lemma 2.1 yields that for all $0<r<1$ ,

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)]&\geq\sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbb{E}_r \bigl[ (1-{\text{e}}^{-1})\widetilde{P}_{N,\omega}^{0,y}(\ell_z(H(y)) \geq 1) \bigr]\\&= (1-{\text{e}}^{-1})\, \mathbb{E}_r\Biggl[\widetilde{E}_{N,\omega}^{0,y}\Biggl[ \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbf{1}_{\{ \ell_z(H(y)) \geq 1 \}} \Biggr] \Biggr]\\&\geq (1-{\text{e}}^{-1})\|y\|_1.\end{align*}

Here the last inequality follows from the fact that, $\widetilde{P}_{N,\omega}^{0,y}$ -almost surely, the simple random walk must visit at least $\|y\|_1$ sites before hitting y. Thus, for all $0<r<1$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}\dfrac{{\text{d}}}{{\text{d}} r}\bigl\{ \mathbb{E}_r[a_N(0,y,\omega)]+(1-{\text{e}}^{-1})r\|y\|_1 \bigr\} \leq 0,\end{align*}

which implies that the function in the above braces is decreasing in r. Therefore, for all $0<p<q<1$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}\mathbb{E}_p[a_N(0,y,\omega)]-\mathbb{E}_q[a_N(0,y,\omega)]\geq (1-{\text{e}}^{-1})(q-p)\|y\|_1.\end{align*}

It follows by (2.2) and the definition of the quenched Lyapunov exponent that for all $0<p<q<1$ and $x \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}\dfrac{\alpha_p(x)-\alpha_q(x)}{\|x\|_1} \geq (1-{\text{e}}^{-1})(q-p).\end{align*}

Note that the right-hand side above does not depend on x, and $\alpha_p(\!\cdot\!)$ and $\alpha_q(\!\cdot\!)$ are norms on $\mathbb{R}^d$ . Consequently, the above inequality can be easily extended to all $x \in \mathbb{R}^d \setminus \{ 0 \}$ , and the proof is complete.

3. Upper bound for the quenched Lyapunov exponent

In this section, for a fixed $q_0 \in (0,1)$ , we prove part (2) of Theorem 1.1. The proof is short once a key proposition (Proposition 3.1 below) is established.

Proof of part (2) of Theorem 1.1. Considering the proof of part (1) of Theorem 1.1, our task is to show that there exists a constant c (which depends only on d and $q_0$ ) such that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

(3.1) \begin{align}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)] \leq c\|y\|_1.\end{align}

To this end, note that for each $z \in \mathbb{Z}^d$ and the potential $\omega$ ,

\begin{align*}\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)}=\begin{cases}\widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{\ell_z(H(y))}\bigr] & \text{if $ \omega(z)=1$,}\\[5pt] \widetilde{E}_{N,\omega}^{0,y}\bigl[{\text{e}}^{-\ell_z(H(y))}\bigr] & \text{if $ \omega(z)=0$,}\end{cases}\end{align*}

where $\omega_z$ is the potential which agrees with $\omega$ on all sites except for z, that is, for $x \in \mathbb{Z}^d$ ,

\begin{align*}\omega_z(x)\;:\!=\;\begin{cases}1-\omega(z) & \text{if $ x=z$,}\\\omega(x) & \text{if $ x \not=z$.}\end{cases}\end{align*}

Combining this and Lemma 2.1, we have

(3.2) \begin{align}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)]\leq \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d}\mathbb{E}_r \biggl[ \biggl| \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)} \biggr| \biggr],\end{align}

and the following proposition is the key to estimating the above sum.

Proposition 3.1. There exists a constant $C_3$ (which depends only on d and $q_0$ ) such that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}\sum_{z \in \mathbb{Z}^d \cap [-N,N]^d}\mathbb{E}_r \left[ \left| \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)} \right| \right]\leq C_3\,\mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr],\end{align*}

where $\mathcal{A}(0,y)\;:\!=\; \{ S_k\;:\; 0 \leq k<H(y) \}$ .

Since the proof of Proposition 3.1 is fairly long, we postpone it and first complete the proof of part (2) of Theorem 1.1. The same argument as in the proof of [Reference Zerner18, Lemma 3] implies that for any $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

(3.3) \begin{align}\mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr]\leq \dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})q_0 \}}\|y\|_1.\end{align}

From (3.2), (3.3), and Proposition 3.1, we conclude that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}\mathbb{E}_r[a_N(0,y,\omega)]\leq \dfrac{C_3(1+\log(2d))}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})q_0 \}}\|y\|_1,\end{align*}

and (3.1) follows.

It remains to prove Proposition 3.1. We divide the proof into two cases: Sections 3.1 and 3.2, respectively, treat the high-dimensional case ( $d \geq 3$ ) and the low-dimensional case ( $d=1,2$ ).

3.1. High-dimensional case

This subsection is devoted to the proof of Proposition 3.1 for $d \geq 3$ . First of all, we state the following lemma, which is useful to show Proposition 3.1 not only for $d \geq 3$ but also for $d=1,2$ .

Lemma 3.1. Let $d \geq 1$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ . Moreover, for any $z \in \mathbb{Z}^d$ , define

\begin{align*}\psi_d(z,\omega)\;:\!=\; \Biggl( 1-E^z\Biggl[ \exp\!\Biggl\{ -\sum_{k=1}^{H^+(z)-1}\omega(S_k) \Biggr\} \mathbf{1}_{\{ H^+(z)<\infty \}}\Biggr] \Biggr)^{-1},\end{align*}

where $H^+(z)\;:\!=\; \inf\{ k>0\;:\; S_k=z \}$ . Then, for all $z \in \mathbb{Z}^d \cap [-N,N]^d$ ,

(3.4) \begin{align}\bigg| \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)} \bigg|\leq 4\psi_d(z,\omega)\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)).\end{align}

Proof. We first treat the case where $z \in \mathbb{Z}^d \cap [\!-\!N,N]^d$ satisfies $\omega(z)=1$ . The definition of $\omega_z$ and the strong Markov property show that

\begin{align*}1 \leq \dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)}\leq 1+\dfrac{e_N(z,y,\omega_z)}{e_N(z,y,\omega)}\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)).\end{align*}

We use the strong Markov property again to obtain that

\begin{align*} e_N(z,y,\omega_z)& \leq \psi_d(z,\omega)E^z\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega_z(S_k) \Biggr\}\mathbf{1}_{\{ H(y) \leq H^+(z),\,H(y)<T([-N,N]^d) \}} \Biggr]\\& \leq e\psi_d(z,\omega) e_N(z,y,\omega).\end{align*}

With these observations,

\begin{align*}0 \leq \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)}\leq \log\bigl\{ 1+e\psi_d(z,\omega)\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)) \bigr\}.\end{align*}

Since $t \geq \log(1+t)$ holds for $t \geq 0$ , the right-hand side above is not greater than

\[e\psi_d(z,\omega)\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)),\]

and (3.4) follows.

We next consider the case where $z \in \mathbb{Z}^d \cap [-N,N]^d$ satisfies $\omega(z)=0$ . Then the same argument as in the proof of [Reference Zerner18, Lemma 12] shows that

(3.5) \begin{align}0&\leq -\log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)}\notag \\&= a_N(0,y,\omega_z)-a_N(0,y,\omega)\notag \\&\leq \min\bigl\{ -\log\widetilde{P}_{N,\omega}^{0,y}(H(y) \leq H(z)),1+\log\psi_d(z,\omega) \bigr\}.\end{align}

Note that the fact that $-\log t \leq 2(1-t)$ holds for $1/2 \leq t \leq 1$ implies that on the event $\bigl\{ \widetilde{P}_{N,\omega}^{0,y}(H(y) \leq H(z)) \geq 1/2 \bigr\}$ ,

\begin{align*}-\log\widetilde{P}_{N,\omega}^{0,y}(H(y) \leq H(z))\leq 2\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)).\end{align*}

This, together with $\psi_d(z,\omega) \geq 1$ , shows that the right-hand side of (3.5) is smaller than or equal to

\begin{align*}&2\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y))\mathbf{1}_{\{ \widetilde{P}_{N,\omega}^{0,y}(H(y) \leq H(z)) \geq 1/2 \}}\\&\quad +2(1+\log\psi_d(z,\omega))\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y))\mathbf{1}_{\{ \widetilde{P}_{N,\omega}^{0,y}(H(y) \leq H(z))<1/2 \}}\\&\qquad \leq 2(\psi_d(z,\omega)+\log\psi_d(z,\omega))\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)).\end{align*}

Therefore (3.4) immediately follows from the fact that $\log t \leq t$ for $t \geq 1$ .

We are now in a position to prove Proposition 3.1 for $d \geq 3$ .

Proof of Proposition 3.1 for ${d \geq 3}$ . Since the simple random walk is transient for $d \geq 3$ , we have $P^0(H^+(0)=\infty)>0$ . Hence, for all $z \in \mathbb{Z}^d$ ,

\begin{align*} \psi_d(z,\omega) \leq P^0(H^+(0)=\infty)^{-1}<\infty.\end{align*}

This together with Lemma 3.1 yields that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}\sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbb{E}_r \biggl[ \biggl| \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)} \biggr| \biggr]&\leq \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d}\mathbb{E}_r \bigl[ 4\psi_d(z,\omega)\widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)) \bigr]\\&\leq 4P^0(H^+(0)=\infty)^{-1} \mathbb{E}_r \bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr].\end{align*}

Thus the proof is complete by taking $C_3\;:\!=\; 4P^0(H^+(0)=\infty)^{-1}$ .
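As a numerical aside (not part of the proof), the escape probability $P^0(H^+(0)=\infty)$ in $d=3$ is approximately $0.66$ (one minus Pólya's return probability $\approx 0.34$), so $C_3 \approx 6$. A truncated Monte Carlo sketch; truncating at a finite time slightly overestimates escape:

```python
import random

def escape_estimate(d=3, n_walks=2000, max_steps=1000, seed=1):
    """Monte Carlo estimate of P^0(H^+(0) = infinity) for the simple random
    walk on Z^d: the fraction of walks not returning to 0 within max_steps.
    Truncation slightly overestimates the true escape probability."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_walks):
        pos = [0] * d
        returned = False
        for _ in range(max_steps):
            axis = rng.randrange(d)
            pos[axis] += rng.choice((-1, 1))
            if all(c == 0 for c in pos):
                returned = True
                break
        if not returned:
            escaped += 1
    return escaped / n_walks

p_escape = escape_estimate()
C3 = 4 / p_escape   # the constant from the d >= 3 proof above
```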

3.2. Low-dimensional case

The aim of this section is to prove Proposition 3.1 for $d=1,2$ . In this case, the strategy taken in the previous subsection does not work because the simple random walk is recurrent. Hence we have to estimate $\psi_d(z,\omega)$ in another way.

Let $p_c=p_c(d) \in (0,1)$ be the critical probability for independent Bernoulli site percolation on $\mathbb{Z}^d$ , and fix $R \in 2\mathbb{N}$ such that

(3.6) \begin{align}q_0^{R^d}+q_0^{R^d-1}(1-q_0)R^d<p_c .\end{align}

(This is possible since $0<q_0<1$ .) We now consider the boxes $\Lambda(v)\;:\!=\; Rv+[-R/2,R/2)^d$ , $v \in \mathbb{Z}^d$ . These boxes form a partition of $\mathbb{R}^d$ , and we let $\overline{z}$ denote the (unique) index v such that $z \in \Lambda(v)$ . Furthermore, a site v is said to be open if $\Lambda(v)$ has at most one site x such that $\omega(x)=1$ , and closed otherwise. Note that under $\mathbb{P}_r$ , the family $(\mathbf{1}_{\{ v \text{ is open} \}})_{v \in \mathbb{Z}^d}$ is the independent Bernoulli site percolation on $\mathbb{Z}^d$ with parameter $r^{R^d}+r^{R^d-1}(1-r)R^d$ . This site percolation induces open clusters, which are connected components of open sites. In particular, for each $v \in \mathbb{Z}^d$ , we let $\mathcal{C}_v$ denote the open cluster containing v.

Although $\psi_d(z,\omega)$ is bounded for $d \geq 3$ , the following lemma says that for $d=1,2$ , $\psi_d(z,\omega)$ is dominated by the size of the open cluster containing $\overline{z}$ .

Lemma 3.2. Let $d=1,2$ . Then there exists a constant $C_4$ (which depends only on d and $q_0$ ) such that for all $z \in \mathbb{Z}^d$ ,

\begin{align*}\psi_d(z,\omega) \leq C_4 (1+\#\mathcal{C}_{\overline{z}}).\end{align*}

Proof. Since the simple random walk is recurrent for $d=1,2$ , we have for any $z \in \mathbb{Z}^d$ ,

\begin{align*}\psi_d(z,\omega)= \Biggl( 1-E^z\Biggl[ \exp\!\Biggl\{ -\sum_{k=1}^{H^+(z)-1}\omega(S_k) \Biggr\} \Biggr] \Biggr)^{-1}.\end{align*}

Let us first treat the case where $\overline{z}$ is closed. Then the box $\Lambda(\overline{z})$ contains at least two sites x with $\omega(x)=1$ . Hence we can find a site $x_0 \in \Lambda(\overline{z})$ such that $x_0 \not= z$ and $\omega(x_0)=1$ . It follows that

\begin{align*}1-E^z\Biggl[ \exp\!\Biggl\{ -\sum_{k=1}^{H^+(z)-1}\omega(S_k) \Biggr\} \Biggr]&\geq 1-P^z(H^+(z) \leq H(x_0))-{\text{e}}^{-1}P^z(H^+(z)>H(x_0))\\& = (1-{\text{e}}^{-1})P^z(H^+(z)>H(x_0))\\&\geq (1-{\text{e}}^{-1})\biggl( \dfrac{1}{2d} \biggr)^{{dR}}.\end{align*}

Thus, since $\#\mathcal{C}_{\overline{z}}=0$ in the case where $\overline{z}$ is closed, we have

\begin{align*}\psi_d(z,\omega) \leq \dfrac{(2d)^{dR}}{1-{\text{e}}^{-1}}= \dfrac{(2d)^{dR}}{1-{\text{e}}^{-1}}(1+\#\mathcal{C}_{\overline{z}}).\end{align*}

We next treat the case where $\overline{z}$ is open. Then the same argument as above does not work since the box $\Lambda(\overline{z})$ may not contain any site x with $x \not= z$ and $\omega(x)=1$ (this situation actually occurs when $\omega(z)=1$ and $\omega(x)=0$ for $x \in \Lambda(\overline{z}) \setminus \{z\}$ ). To overcome this problem, let us introduce the region $\mathcal{O}_z\;:\!=\; \bigcup_{v \in \mathcal{C}_{\overline{z}}} \Lambda(v)$ and the stopping time

\begin{align*}\sigma\;:\!=\; \inf\{ k \geq T(\mathcal{O}_z)\;:\; \omega(S_k)=1 \},\end{align*}

where $T(\mathcal{O}_z)$ is the exit time of the simple random walk from $\mathcal{O}_z$ (see (2.1)). The same computation as in the first case gives

(3.7) \begin{align}1-E^z\Biggl[ \exp\!\Biggl\{ -\sum_{k=1}^{H^+(z)-1}\omega(S_k) \Biggr\} \Biggr]\geq (1-{\text{e}}^{-1})P^z(\sigma<H^+(z)).\end{align}

Note that the site $v\;:\!=\; \overline{S_{T(\mathcal{O}_z)}}$ is closed and $z \not\in \Lambda(v)$. Hence the event $\{ \sigma<H^+(z) \}$ occurs if, after exiting $\mathcal{O}_z$, the simple random walk follows a shortest path from $S_{T(\mathcal{O}_z)}$ to a site $x \in \Lambda(v)$ with $\omega(x)=1$. The strong Markov property therefore shows that

(3.8) \begin{align}P^z(\sigma<H^+(z))\geq \biggl( \dfrac{1}{2d} \biggr)^{dR}P^z(T(\mathcal{O}_z)<H^+(z)).\end{align}

For the last probability, we use the following estimate on the probability that the simple random walk exits an $\ell^1$ -ball before revisiting its starting point (see e.g. [Reference Lawler11, (1.20), (1.38), and Theorem 1.6.6]): There exists a constant $c \geq 1$ (which depends only on d) such that

(3.9) \begin{align}P^0(\tau<H^+(0))^{-1} \leq c \times\begin{cases}\#\mathcal{O}_z & \text{if $ d=1$,}\\\log(\#\mathcal{O}_z) & \text{if $ d=2$,}\end{cases}\end{align}

where $\tau$ is the exit time for the simple random walk from the $\ell^1$ -ball of radius $\#\mathcal{O}_z$ and center 0, that is,

\begin{align*}\tau\;:\!=\; \inf\{ k \geq 0\;:\; \|S_k\|_1>\#\mathcal{O}_z \}.\end{align*}

Noting that $P^0(\tau<H^+(0)) \leq P^z(T(\mathcal{O}_z)<H^+(z))$ and $\#\mathcal{O}_z=R^d(\#\mathcal{C}_{\overline{z}})$ , we have from (3.7), (3.8), and (3.9),

\begin{align*}\psi_d(z,\omega)\leq \dfrac{c(2d)^{dR}}{1-{\text{e}}^{-1}}(\#\mathcal{O}_z)\leq \dfrac{c(2d)^{dR}R^d}{1-{\text{e}}^{-1}} (1+\#\mathcal{C}_{\overline{z}}).\end{align*}

Consequently, the lemma follows by taking $C_{4}\;:\!=\; c(2d)^{dR}R^d/(1-{\text{e}}^{-1})$ .
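For $d=1$, the escape estimate (3.9) used above is sharp: by the gambler's-ruin formula, the walk started at 0 exits $[-n,n]$ before returning to 0 with probability exactly $1/(n+1)$. The following sketch (the function name and iteration count are ours) recovers this by solving the discrete Dirichlet problem numerically.

```python
def escape_before_return_prob(n: int, sweeps: int = 10000) -> float:
    """P^0(exit [-n, n] before returning to 0) for the simple random
    walk on Z.  By symmetry of the first step this equals h(1), where
    h solves the discrete Dirichlet problem h(0) = 0, h(n+1) = 1 and
    h(x) = (h(x-1) + h(x+1)) / 2 for 1 <= x <= n."""
    h = [0.0] * (n + 2)
    h[n + 1] = 1.0
    for _ in range(sweeps):  # Gauss-Seidel sweeps; converge geometrically
        for x in range(1, n + 1):
            h[x] = 0.5 * (h[x - 1] + h[x + 1])
    return h[1]
```

Numerically this returns $1/(n+1)$, so the reciprocal probability grows linearly in the radius, in line with the $d=1$ case of (3.9) with $\#\mathcal{O}_z$ playing the role of n.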

We now turn to the proof of Proposition 3.1 for $d=1,2$ .

Proof of Proposition 3.1 for ${d=1,2}$ . Lemmata 3.1 and 3.2 imply that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

\begin{align*}&\sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbb{E}_r \biggl[ \biggl| \log\dfrac{e_N(0,y,\omega_z)}{e_N(0,y,\omega)} \biggr| \biggr]\\&\quad \leq 4C_{4} \sum_{z \in \mathbb{Z}^d \cap [-N,N]^d} \mathbb{E}_r\bigl[ (1+\#\mathcal{C}_{\overline{z}}) \widetilde{P}_{N,\omega}^{0,y}(H(z)<H(y)) \bigr]\\&\quad = 4C_{4} \Biggl( \mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr]+\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \sum_{z \in \mathcal{A}(0,y)} \#\mathcal{C}_{\overline{z}} \Biggr] \Biggr] \Biggr).\end{align*}

Hence it suffices to prove that there exists a constant $C_5$ (which depends only on d and $q_0$ ) such that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

(3.10) \begin{align}\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \sum_{z \in \mathcal{A}(0,y)} \#\mathcal{C}_{\overline{z}} \Biggr] \Biggr]\leq C_{5}\mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr].\end{align}

To show (3.10), we state some results for lattice animals on $\mathbb{Z}^d$ (which are finite connected subsets of $\mathbb{Z}^d$). For $n \geq 1$, let $\mathbb{A}_n$ denote the set of all lattice animals of size n containing 0. Moreover, let $(\widetilde{\Gamma}_v)_{v \in \mathbb{Z}^d}$ be i.i.d. random lattice animals with the common law $\mathbb{P}_{q_0}(\mathcal{C}_0 \in \cdot)$ (we write P for the probability measure governing $(\widetilde{\Gamma}_v)_{v \in \mathbb{Z}^d}$). Then, due to (3.6), the following lemma is an immediate consequence of [Reference Fontes and Newman6, (2.6) and (2.7)] and [Reference Scholler14, page 183].

Lemma 3.3. The following results hold.

  (1) For all $n \geq 1$ and $t \geq 0$, we have

    \begin{align*}\mathbb{P}_{q_0}\Biggl( \sup_{\Gamma \in \mathbb{A}_n} \dfrac{1}{\#\Gamma}\sum_{v \in \Gamma} \#\mathcal{C}_v \geq t \Biggr)\leq P\Biggl( \sup_{\Gamma \in \bigcup_{m=n}^\infty \mathbb{A}_m} \dfrac{1}{\#\Gamma} \sum_{v \in \Gamma} (\#\widetilde{\Gamma}_v)^2 \geq \dfrac{t}{2} \Biggr).\end{align*}
  (2) There exist constants $C_6$ and $C_7$ (which depend only on d and $q_0$) such that for all $n \geq 1$,

    \begin{align*}P\Biggl( \sup_{\Gamma \in \mathbb{A}_n} \dfrac{1}{\#\Gamma} \sum_{v \in \Gamma}(\#\widetilde{\Gamma}_v)^2 \geq C_{6} \Biggr)\leq C_{6}\exp\!\{-C_{7}n^{1/5}\}.\end{align*}
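The sets $\mathbb{A}_n$ can be enumerated by brute force for small n; the following sketch for $d=2$ (the window size, function name, and brute-force approach are ours) counts connected subsets of $\mathbb{Z}^2$ of size n containing 0.

```python
from itertools import combinations

def count_animals_containing_origin(n: int) -> int:
    """Count lattice animals of Z^2 of size n that contain the origin,
    by brute force over a window large enough to hold any of them."""
    window = [(x, y) for x in range(-n + 1, n) for y in range(-n + 1, n)]
    others = [c for c in window if c != (0, 0)]
    count = 0
    for extra in combinations(others, n - 1):
        cells = set(extra) | {(0, 0)}
        seen, stack = {(0, 0)}, [(0, 0)]  # graph search for connectivity
        while stack:
            x, y = stack.pop()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        count += (seen == cells)
    return count
```

The values 1, 4, 18, 76 for $n=1,\dots,4$ equal n times the number of fixed polyominoes of size n. Since $\#\mathbb{A}_n$ grows at most exponentially in n, the stretched-exponential bound of Lemma 3.3(2) is summable over n.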

For any lattice animal $\Gamma$ on $\mathbb{Z}^d$ , define

\begin{align*}\mathcal{E}(\Gamma)\;:\!=\; \Biggl\{ \dfrac{1}{\#\Gamma} \sum_{v \in \Gamma} \#\mathcal{C}_v>2C_{6} \Biggr\}.\end{align*}

Moreover, for each $y \in \mathbb{Z}^d \setminus \{0\}$ , set

\begin{align*}\overline{\mathcal{A}}(0,y)\;:\!=\; \{ \overline{z}\;:\; z \in \mathcal{A}(0,y) \},\end{align*}

which is a lattice animal on $\mathbb{Z}^d$ containing 0. Then, for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ , the left-hand side of (3.10) is smaller than or equal to

\begin{align*}&R^d\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \sum_{v \in \overline{\mathcal{A}}(0,y)} \#\mathcal{C}_v \Biggr] \Biggr]\\&\quad \leq R^d \Biggl\{ 2C_{6}\mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\overline{\mathcal{A}}(0,y)] \bigr]+\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \Biggl( \sum_{v \in \overline{\mathcal{A}}(0,y)} \#\mathcal{C}_v \Biggr) \mathbf{1}_{\mathcal{E}(\overline{\mathcal{A}}(0,y))} \Biggr] \Biggr] \Biggr\}.\end{align*}

Since $\#\overline{\mathcal{A}}(0,y) \leq \#\mathcal{A}(0,y)$ , our task is now to prove that there exists a constant c (which depends only on d and $q_0$ ) such that for all $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ ,

(3.11) \begin{align}\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \Biggl( \sum_{v \in \overline{\mathcal{A}}(0,y)} \#\mathcal{C}_v \Biggr) \mathbf{1}_{\mathcal{E}(\overline{\mathcal{A}}(0,y))} \Biggr] \Biggr]\leq c.\end{align}

Indeed, once (3.11) is proved, the left-hand side of (3.10) is bounded from above by

\begin{align*}(2C_{6}+c)R^d\mathbb{E}_r\bigl[ \widetilde{E}_{N,\omega}^{0,y}[\#\mathcal{A}(0,y)] \bigr],\end{align*}

and (3.10) follows by taking $C_{5}\;:\!=\; (2C_{6}+c)R^d$ .

For any $0<r \leq q_0$ , $y \in \mathbb{Z}^d \setminus \{0\}$ , and $N \in \mathbb{N}$ with $N \geq \|y\|_\infty$ , the left-hand side of (3.11) is equal to

\begin{align*}&\sum_{n=1}^\infty \mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \Biggl( \sum_{v \in \overline{\mathcal{A}}(0,y)} \#\mathcal{C}_v \Biggr) \mathbf{1}_{\mathcal{E}(\overline{\mathcal{A}}(0,y))} \mathbf{1}_{\{ \#\overline{\mathcal{A}}(0,y)=n \}} \Biggr] \Biggr] \leq \sum_{n=1}^\infty \sum_{v \in [-n,n]^d} \mathbb{E}_r\bigl[ \#\mathcal{C}_v \mathbf{1}_{\bigcup_{\Gamma \in \mathbb{A}_n} \mathcal{E}(\Gamma)} \bigr].\end{align*}

Note that $\mathbb{E}_r\bigl[ \#\mathcal{C}_v \mathbf{1}_{\bigcup_{\Gamma \in \mathbb{A}_n}\mathcal{E}(\Gamma)} \bigr]$ is increasing in r. This combined with the Cauchy–Schwarz inequality implies that for all $n \geq 1$ and $v \in [-n,n]^d$ ,

\begin{align*}\mathbb{E}_r\bigl[ \#\mathcal{C}_v \mathbf{1}_{\bigcup_{\Gamma \in \mathbb{A}_n} \mathcal{E}(\Gamma)} \bigr]&\leq \mathbb{E}_{q_0}\bigl[ \#\mathcal{C}_v \mathbf{1}_{\bigcup_{\Gamma \in \mathbb{A}_n} \mathcal{E}(\Gamma)} \bigr]\\&\leq \mathbb{E}_{q_0}\bigl[ \#\mathcal{C}_0^2 \bigr]^{1/2} \mathbb{P}_{q_0}\Biggl( \sup_{\Gamma \in \mathbb{A}_n} \dfrac{1}{\#\Gamma} \sum_{v' \in \Gamma}\#\mathcal{C}_{v'}>2C_{6} \Biggr)^{1/2}.\end{align*}

Lemma 3.3 says that there exists a constant $c'$ (which depends only on d and $q_0$) such that for all $n \geq 1$,

\begin{align*}\mathbb{P}_{q_0}\Biggl( \sup_{\Gamma \in \mathbb{A}_n} \dfrac{1}{\#\Gamma} \sum_{v' \in \Gamma}\#\mathcal{C}_{v'}>2C_{6} \Biggr)& \leq P\Biggl( \sup_{\Gamma \in \bigcup_{m=n}^\infty \mathbb{A}_m} \dfrac{1}{\#\Gamma} \sum_{v' \in \Gamma} (\#\widetilde{\Gamma}_{v'})^2>C_{6} \Biggr)\\& \leq \sum_{m=n}^\infty P\Biggl( \sup_{\Gamma \in \mathbb{A}_m} \dfrac{1}{\#\Gamma} \sum_{v \in \Gamma} (\#\widetilde{\Gamma}_v)^2 \geq C_{6} \Biggr)\\& \leq c'\exp\!\biggl\{ -\dfrac{C_{7}}{2}n^{1/5} \biggr\}.\end{align*}

With these observations, we can derive (3.11) as follows:

\begin{align*}\mathbb{E}_r\Biggl[ \widetilde{E}_{N,\omega}^{0,y} \Biggl[ \Biggl( \sum_{v \in \overline{\mathcal{A}}(0,y)} \#\mathcal{C}_v \Biggr) \mathbf{1}_{\mathcal{E}(\overline{\mathcal{A}}(0,y))} \Biggr] \Biggr]& \leq \bigl(c'\mathbb{E}_{q_0}\bigl[ \#\mathcal{C}_0^2 \bigr]\bigr)^{1/2} \sum_{n=1}^\infty (2n+1)^d \exp\!\biggl\{ -\dfrac{C_{7}}{4}n^{1/5} \biggr\} <\infty.\end{align*}

Therefore the proof of Proposition 3.1 is complete for $d=1,2$ .

4. Bounds for the annealed Lyapunov exponent

This section is devoted to the proof of Theorem 1.2. We fix $q_0$ with $0<q_0<1$ . For each $0 \leq r \leq 1$ , the path measure $\widetilde{\mathbb{P}}_r^{0,y}$ is defined by

(4.1) \begin{align}\dfrac{{\text{d}} \widetilde{\mathbb{P}}_r^{0,y}}{{\text{d}} P^0}= \mathbb{E}_r[e(0,y,\omega)]^{-1} \mathbb{E}_r\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\}\Biggr] \mathbf{1}_{\{ H(y)<\infty \}},\end{align}

and $\widetilde{\mathbb{E}}_r^{0,y}$ is the expectation with respect to $\widetilde{\mathbb{P}}_r^{0,y}$ . Then the following two lemmata are the key to proving Theorem 1.2.

Lemma 4.1. For all $0<r<1$ and $y \in \mathbb{Z}^d \setminus \{0\}$ , we have

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)= \widetilde{\mathbb{E}}_r^{0,y} \!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! .\end{align*}

Lemma 4.2. If $d \geq 3$ , then there exists a constant $C_8$ (which depends only on d and $q_0$ ) such that for all $0<r \leq q_0$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}\widetilde{\mathbb{E}}_r^{0,y} \!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)}\!\left.\rule{0pt}{24pt}\right]\!\leq C_{8}\|y\|_1.\end{align*}

Lemma 4.1 gives the derivative of $b_r(0,y)$ at r by using the path measure $\widetilde{\mathbb{P}}_r^{0,y}$ . Moreover, Lemma 4.2 guarantees that in the case $d \geq 3$ , the derivative of $b_r(0,y)$ at r can be bounded from below uniformly in $r \in (0,q_0]$ . Let us show Theorem 1.2 before proving the lemmata above.

Proof of Theorem 1.2. We first prove part (1) of Theorem 1.2, which gives lower bounds for differences between annealed Lyapunov exponents. Note that for each $0<r<1$ , the function $f(t)\;:\!=\; (1-t)/\{ r+t(1-r) \}$ is decreasing in $t \in [0,1]$ . Hence Lemma 4.1 and the fact that $\#\mathcal{A}(0,y) \geq \|y\|_1$ holds $\widetilde{\mathbb{P}}_r^{0,y}$ -a.s. yield that for all $0<r<1$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)&= \widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! \\&\geq \dfrac{1-{\text{e}}^{-1}}{r+{\text{e}}^{-1}(1-r)}\,\widetilde{\mathbb{E}}_r^{0,y}[\#\mathcal{A}(0,y)]\\&\geq (1-{\text{e}}^{-1})\|y\|_1.\end{align*}

Thus, similarly to the proof of part (1) of Theorem 1.1, we have for all $0<p<q<1$

\begin{align*}\inf_{x \in \mathbb{R}^d\setminus\{0\}}\dfrac{\beta_p(x)-\beta_q(x)}{\|x\|_1}\geq (1-{\text{e}}^{-1})(q-p),\end{align*}

which is the desired lower bound for the difference between annealed Lyapunov exponents.

Let us next show part (2) of Theorem 1.2, which gives upper bounds for differences between annealed Lyapunov exponents. Lemma 4.1 and the monotonicity of the function $f(t)=(1-t)/\{ r+t(1-r) \}$ , $0 \leq t \leq 1$ , tell us that for all $0<r \leq q_0$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

(4.2) \begin{align}-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)&= \widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\!\leq r^{-1}\widetilde{\mathbb{E}}_r^{0,y}[\#\mathcal{A}(0,y)].\end{align}

Moreover, by the same arguments as in [Reference Zerner18, Lemma 3], it holds that for all $0<r<1$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

(4.3) \begin{align}\widetilde{\mathbb{E}}_r^{0,y}[\#\mathcal{A}(0,y)]\leq \dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})r \}}\|y\|_1.\end{align}

Hence (4.2) and (4.3) imply that for all $0<r \leq q_0$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)\leq \dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})q_0 \}}\,r^{-1}\|y\|_1.\end{align*}

Therefore, by taking

\begin{align*}C_2\;:\!=\; \dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})q_0 \}},\end{align*}

and integrating the above bound on $-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)$ over $r \in [p,q]$, one sees that for all $0<p<q \leq q_0$,

\begin{align*}\sup_{x \in \mathbb{R}^d \setminus \{0\}}\dfrac{\beta_p(x)-\beta_q(x)}{\|x\|_1} \leq C_2(\log q-\log p).\end{align*}

In particular, if $d \geq 3$ , then Lemma 4.2 allows us to obtain the following bound for the derivative of $b_r(0,y)$ at r instead of (4.2): for all $0<r \leq q_0$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}-\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)= \widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}}\dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\!\leq C_{8}\|y\|_1.\end{align*}

It follows immediately that

\begin{align*}\sup_{x \in \mathbb{R}^d \setminus \{0\}}\dfrac{\beta_p(x)-\beta_q(x)}{\|x\|_1} \leq C_{8}(q-p),\end{align*}

and the proof of part (2) of Theorem 1.2 is complete.

We close this section with the proofs of Lemmata 4.1 and 4.2.

Proof of Lemma 4.1. First of all, we often add $\mathbb{S}$ to the notation for H(y) and $\ell_z(N)$ to make explicit the dependence on the trajectory $\mathbb{S}$ of the simple random walk: $H(y)=H(y,\mathbb{S})$ and $\ell_z(N)=\ell_z(N,\mathbb{S})$. Fix $y \in \mathbb{Z}^d \setminus \{0\}$, and define, for $0<r<1$ and a trajectory $\mathbb{S}$ of the simple random walk,

\begin{align*}\phi(r,\mathbb{S})&\;:\!=\; \mathbb{E}_r\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y,\mathbb{S})-1}\omega(S_k) \Biggr\}\Biggr] \mathbf{1}_{\{ H(y,\mathbb{S})<\infty \}} \\& = \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}} \mathbb{E}_r\bigl[ {\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})\omega(0)} \bigr] \mathbf{1}_{\{ H(y,\mathbb{S})<\infty \}}.\end{align*}

It follows from Fubini’s theorem that $b_r(0,y)=-\log E^0[\phi(r,\mathbb{S})]$ . Hence our task is to show that

(4.4) \begin{align}\dfrac{{\text{d}}}{{\text{d}} r} E^0[\phi(r,\mathbb{S})]= E^0\!\left[\rule{0pt}{24pt}\right.\! \phi(r,\mathbb{S}) \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}}{r+{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! .\end{align}

Indeed, once (4.4) is proved, we have

\begin{align*}\dfrac{{\text{d}}}{{\text{d}} r}b_r(0,y)&= -E^0[\phi(r,\mathbb{S})]^{-1} E^0\!\left[\rule{0pt}{24pt}\right.\! \phi(r,\mathbb{S}) \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}}{r+{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! \\&= -\widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! ,\end{align*}

which is the desired conclusion.

To prove (4.4), note that for the trajectory $\mathbb{S}$ of the simple random walk,

\begin{align*}\mathbb{E}_r\bigl[ {\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})\omega(0)} \bigr]= r+{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}(1-r).\end{align*}

This tells us that $\phi(r,\mathbb{S})$ is differentiable at $r \in (0,1)$ and

(4.5) \begin{align}\dfrac{{\text{d}}}{{\text{d}} r}\phi(r,\mathbb{S})= \phi(r,\mathbb{S}) \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}}\dfrac{1-{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}}{r+{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}(1-r)}.\end{align}

Therefore, differentiating under the integral sign (the possibility of this operation will be checked in Appendix A), we have for $0<r<1$

\begin{align*}\dfrac{{\text{d}}}{{\text{d}} r} E^0[\phi(r,\mathbb{S})]&=E^0\biggl[ \dfrac{{\text{d}}}{{\text{d}} r}\phi(r,\mathbb{S}) \biggr]= E^0\!\left[\rule{0pt}{24pt}\right.\! \phi(r,\mathbb{S}) \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}}{r+{\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! ,\end{align*}

and (4.4) follows.
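The identity (4.5) is elementary to verify numerically for a fixed trajectory: given a hypothetical local-time profile $(\ell_z)$ (the values below are made up for illustration), the analytic derivative of the product $\phi$ agrees with a central finite difference.

```python
import math

def phi(r, local_times):
    """phi(r, S) for a fixed trajectory: the product over visited sites
    of E_r[exp(-ell_z * omega(0))] = r + exp(-ell_z) * (1 - r)."""
    out = 1.0
    for ell in local_times:
        out *= r + math.exp(-ell) * (1.0 - r)
    return out

def phi_prime(r, local_times):
    """Analytic derivative from (4.5): phi times the logarithmic sum."""
    s = sum((1.0 - math.exp(-ell)) / (r + math.exp(-ell) * (1.0 - r))
            for ell in local_times)
    return phi(r, local_times) * s

# Hypothetical local times and a central finite difference to compare.
ells, r, h = [1, 3, 2, 5], 0.4, 1e-6
fd = (phi(r + h, ells) - phi(r - h, ells)) / (2 * h)
```

The finite difference matches `phi_prime` to high accuracy; note also that $\phi$ is increasing in r, consistent with each factor $r+{\text{e}}^{-\ell}(1-r)$ being increasing.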

Proof of Lemma 4.2. Let $d \geq 3$ and fix $0<r<1$ and $y \in \mathbb{Z}^d \setminus \{0\}$ . Furthermore, for each $z \in \mathbb{Z}^d$ , set $H_1(z)\;:\!=\; H(z)$ and define inductively

\begin{align*}H_{\ell+1}(z)\;:\!=\; \inf\{ k>H_\ell(z)\;:\; S_k=z \},\quad \ell \geq 1.\end{align*}

Then we have

(4.6) \begin{align}&\widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! \notag \\ &\quad = \sum_{z \in \mathbb{Z}^d \setminus \{y\}} \mathbb{E}_r[e(0,y,\omega)]^{-1} \sum_{\ell=1}^\infty \dfrac{1-{\text{e}}^{-\ell}}{r+{\text{e}}^{-\ell}(1-r)} \,\mathbb{E}_r[\rho_\ell(z,\omega)],\end{align}

where

\begin{align*}\rho_\ell(z,\omega)\;:\!=\; E^0\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\} \mathbf{1}_{\{ H_\ell(z)<H(y)<H_{\ell+1}(z) \}} \Biggr].\end{align*}

The strong Markov property gives that

\begin{align*}\rho_\ell(z,\omega)&\leq E^z\left[ \exp\!\left\{ -\sum_{k=0}^{H^+(z)-1}\omega(S_k) \right\} \mathbf{1}_{\{ H^+(z)<\infty \}} \right]^{\ell-1}\\&\quad \times E^0 \left[ \exp\!\left\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \right\} \mathbf{1}_{\{ H(z)<H(y)<H_2(z) \}} \right]\end{align*}

(see the statement of Lemma 3.1 for the notation $H^+(z)$ ). Hence, in the case where $\omega(z)=0$ , $\rho_\ell(z,\omega)$ is bounded from above by

(4.7) \begin{align}P^0(H^+(0)<\infty)^{\ell-1} \, E^0 \!\left[\rule{0pt}{24pt}\right.\! \exp \!\left\{\rule{0pt}{21pt}\right.\! -\sum_{\substack{0 \leq k<H(y)\\ S_k \not= z}}\omega(S_k)\!\left.\rule{0pt}{21pt}\right\}\! \mathbf{1}_{\{ H(z)<H(y)<H_2(z) \}} \!\left.\rule{0pt}{24pt}\right]\! .\end{align}

On the other hand, in the case where $\omega(z)=1$ , $\rho_\ell(z,\omega)$ is smaller than or equal to

(4.8) \begin{align}{\text{e}}^{-\ell} P^0(H^+(0)<\infty)^{\ell-1} \, E^0\!\left[\rule{0pt}{24pt}\right.\! \exp\!\left\{\rule{0pt}{21pt}\right.\! -\sum_{\substack{0 \leq k<H(y)\\ S_k \not= z}}\omega(S_k) \!\left.\rule{0pt}{21pt}\right\}\! \mathbf{1}_{\{ H(z)<H(y)<H_2(z) \}} \!\left.\rule{0pt}{24pt}\right]\!.\end{align}

Note that both (4.7) and (4.8) are independent of $\omega(z)$, and that on the event $\{ H(z)<H(y)<H_2(z) \}$ the simple random walk visits z exactly once before hitting y. It follows that

\begin{align*}\mathbb{E}_r[\rho_\ell(z,\omega)]&\leq e\{ r+{\text{e}}^{-\ell}(1-r) \}P^0(H^+(0)<\infty)^{\ell-1}\\&\quad \times \mathbb{E}_r \Biggl[ E^0\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{H(y)-1}\omega(S_k) \Biggr\} \mathbf{1}_{\{ H(z)<H(y)<H_2(z) \}} \Biggr] \Biggr].\end{align*}

Therefore (4.3), (4.6) and the fact that $P^0(H^+(0)<\infty)<1$ holds for $d \geq 3$ yield that for all $0<r \leq q_0$ and $y \in \mathbb{Z}^d \setminus \{0\}$ ,

\begin{align*}\widetilde{\mathbb{E}}_r^{0,y}\!\left[\rule{0pt}{24pt}\right.\! \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y)) \geq 1}} \dfrac{1-{\text{e}}^{-\ell_z(H(y))}}{r+{\text{e}}^{-\ell_z(H(y))}(1-r)} \!\left.\rule{0pt}{24pt}\right]\! & \leq e\sum_{\ell=1}^\infty P^0(H^+(0)<\infty)^{\ell-1}\,\widetilde{\mathbb{E}}_r^{0,y}[\#\mathcal{A}(0,y)]\\& \leq eP^0(H^+(0)=\infty)^{-1}\dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})q_0 \}}\|y\|_1,\end{align*}

which is the desired conclusion.

5. Comment on the large deviation principle

In this section we discuss the large deviation principle for the simple random walk in the Bernoulli potential. Let $0 \leq r \leq 1$ and let $\omega$ be the Bernoulli potential with parameter r. Consider the path measures $Q_{n,\omega}^\textrm{qu}$ and $Q_{n,r}^\textrm{an}$ defined as follows:

\begin{align*}\dfrac{{\text{d}} Q_{n,\omega}^{\text{qu}}}{{\text{d}} P^0}= \dfrac{1}{Z_{n,\omega}^{\text{qu}}} \exp\!\left\{ -\sum_{k=0}^{n-1}\omega(S_k) \right\}\end{align*}

and

\begin{align*}\dfrac{{\text{d}} Q_{n,r}^{\text{an}}}{{\text{d}} P^0}= \dfrac{1}{Z_{n,r}^{\text{an}}} \mathbb{E}_r\Biggl[ \exp\!\Biggl\{ -\sum_{k=0}^{n-1}\omega(S_k) \Biggr\} \Biggr],\end{align*}

where $Z_{n,\omega}^{\text{qu}}$ and $Z_{n,r}^{\text{an}}$ are the corresponding normalizing constants. Moreover, for $\lambda \geq 0$ , write $\alpha_r(\lambda,\cdot)$ and $\beta_r(\lambda,\cdot)$ , respectively, for the quenched and annealed Lyapunov exponents for the potential $\omega+\lambda=(\omega(x)+\lambda)_{x \in \mathbb{Z}^d}$ . Note that $\alpha_r(\lambda,x)$ and $\beta_r(\lambda,x)$ are continuous in $(\lambda,x) \in [0,\infty) \times \mathbb{R}^d$ and concave increasing in $\lambda$ (see [Reference Flury4, Theorem A], [Reference Mourrat13, Theorem 1.1], and [Reference Zerner18, Proposition 4]). Then, for $x \in \mathbb{R}^d$ , set

\begin{align*}I_r(x)\;:\!=\; \sup_{\lambda \geq 0}\{ \alpha_r(\lambda,x)-\lambda \}\end{align*}

and

\begin{align*}J_r(x)\;:\!=\; \sup_{\lambda \geq 0}\{ \beta_r(\lambda,x)-\lambda \}.\end{align*}

The following proposition states the large deviation principles for the simple random walk in the Bernoulli potential; it is a direct application of [Reference Flury4, Theorem B], [Reference Mourrat13, Theorem 1.10], and [Reference Zerner18, Theorem 19].

Proposition 5.1. Let $0<r \leq 1$ . Then the law of $S_n/n$ obeys the following quenched and annealed large deviation principles with the rate functions $I_r$ and $J_r$ , respectively.

  • (Quenched case.) $\mathbb{P}_r$-a.s., for any Borel set $\Gamma$ in $\mathbb{R}^d$,

    \begin{align*}-\inf_{x \in \Gamma^o}I_r(x)& \leq \liminf_{n \to \infty} \dfrac{1}{n}\log Q_{n,\omega}^{\text{qu}}(S_n \in n\Gamma)\\& \leq \limsup_{n \to \infty} \dfrac{1}{n}\log Q_{n,\omega}^{\text{qu}}(S_n \in n\Gamma)\\&\leq -\inf_{x \in \overline{\Gamma}}I_r(x).\end{align*}
  • (Annealed case.) For any Borel set $\Gamma$ in $\mathbb{R}^d$ ,

    \begin{align*}-\inf_{x \in \Gamma^o}J_r(x)& \leq \liminf_{n \to \infty} \dfrac{1}{n}\log Q_{n,r}^{\text{an}}(S_n \in n\Gamma)\\& \leq \limsup_{n \to \infty} \dfrac{1}{n}\log Q_{n,r}^{\text{an}}(S_n \in n\Gamma)\\&\leq -\inf_{x \in \overline{\Gamma}}J_r(x).\end{align*}

Here $\Gamma^o$ and $\overline{\Gamma}$ , respectively, are the interior and closure of $\Gamma$ . Furthermore, the rate functions $I_r$ and $J_r$ are continuous and convex on their effective domains, which are equal to the closed $\ell^1$ -unit ball.

Since exactly the same arguments as in the previous sections work for $\alpha_r(\lambda,\cdot)$ and $\beta_r(\lambda,\cdot)$, we can replace $\alpha_r(\!\cdot\!)$ and $\beta_r(\!\cdot\!)$ with $\alpha_r(\lambda,\cdot)$ and $\beta_r(\lambda,\cdot)$ in Theorems 1.1 and 1.2, respectively. In particular, the constants $C_{1}$ and $C_2$ can be chosen independently of $\lambda \geq 0$ because for any $a,b \in \mathbb{R}$ with $a<b$, the function $f(\lambda)\;:\!=\; (\lambda+b)/(\lambda+a)$ is decreasing in $\lambda \geq 0$, and the factor ${\text{e}}^{-\lambda\ell_z(H(y,\mathbb{S}),\mathbb{S})}$ appears in both the numerator and the denominator of the expressions above and cancels out. This yields the following bounds on the differences between quenched and annealed rate functions.

Corollary 5.1. Let $d \geq 1$ and $0<q_0<1$ . Then there exist constants $C_9$ and $C_{10}$ (which depend only on d and $q_0$ ) such that the following results hold.

  • (Quenched case.) For all $0<p<q<1$ ,

    (5.1) \begin{align}\inf_{0<\|x\|_1 \leq 1}\dfrac{I_p(x)-I_q(x)}{\|x\|_1} \geq (1-{\text{e}}^{-1})(q-p),\end{align}
    and for all $0<p<q \leq q_0$ ,
    \begin{align*}\sup_{0<\|x\|_1 \leq 1}\dfrac{I_p(x)-I_q(x)}{\|x\|_1}\leq C_9 (q-p).\end{align*}
  • (Annealed case.) For all $0<p<q<1$, we have (5.1) with $I_p(\!\cdot\!)$ and $I_q(\!\cdot\!)$ replaced by $J_p(\!\cdot\!)$ and $J_q(\!\cdot\!)$, respectively. Moreover, for all $0<p<q \leq q_0$,

    \begin{align*}\sup_{0<\|x\|_1 \leq 1} \dfrac{J_p(x)-J_q(x)}{\|x\|_1}\leq C_{10} (\log q-\log p).\end{align*}
    In particular, if $d \geq 3$ or $p_0 \leq p<q \leq q_0$ , then the right-hand side above can be replaced by $C_{10}(q-p)$ (here, $C_{10}$ also depends on $p_0$ in the latter case).

Proof. We treat only the lower bound (5.1) in the quenched case, since the same argument works for the other cases. For any $0<r<1$ and $x \in \mathbb{R}^d$, set

\begin{align*}\lambda_r^{\text{qu}}(x)\;:\!=\; \inf\{ \lambda >0\;:\; \partial_- \alpha_r(\lambda,x) \leq 1 \},\end{align*}

where $\partial_-\alpha_r(\lambda,x)$ is the left derivative of $\alpha_r(\lambda,x)$ with respect to $\lambda$ (note that $\lambda_r^{\text{qu}}(x)$ exists in the case where $\|x\|_1 \leq 1$ although it may be equal to $\infty$ ). Clearly, $\lambda_r^{\text{qu}}(x)$ attains the supremum in the definition of $I_r(x)$ . Hence Theorem 1.1 with $\alpha_r(\!\cdot\!)$ replaced by $\alpha_r(\lambda,\cdot)$ implies that for any $0<p<q<1$ , $t \geq 0$ , and $x \in \mathbb{R}^d$ with $0<\|x\|_1 \leq 1$ ,

\begin{align*}\dfrac{I_p(x)-\{ \alpha_q(\lambda_q^{\text{qu}}(x) \wedge t,x)-(\lambda_q^{\text{qu}}(x) \wedge t) \}}{\|x\|_1} &\geq \dfrac{\alpha_p(\lambda_q^{\text{qu}}(x) \wedge t,x)-\alpha_q(\lambda_q^{\text{qu}}(x) \wedge t,x)}{\|x\|_1}\\&\geq (1-{\text{e}}^{-1})(q-p).\end{align*}

Since $\alpha_q(\lambda,x)$ is continuous in $\lambda$ , letting $t \to \infty$ proves (5.1).

Appendix A. Differentiation under the integral sign

The aim of this section is to discuss differentiation under the integral sign in the proof of Lemma 4.1. We first recall a standard criterion for differentiation under the integral sign from measure theory (see e.g. [Reference Klenke8, Theorem 6.28]).

Lemma A.1. Let I be a non-empty open interval of $\mathbb{R}$ and let $\Sigma$ be a measure space equipped with a measure $\mu$ . Suppose that $f\;:\; I \times \Sigma \to \mathbb{R}$ is a function satisfying the following conditions.

  (1) For any $r \in I$, the function $\sigma \mapsto f(r,\sigma)$ is $\mu$-integrable.

  (2) For $\mu$-a.e. $\sigma \in \Sigma$, the function $r \mapsto f(r,\sigma)$ is differentiable at each $r \in I$, with derivative $f_r(r,\sigma)$.

  (3) There exists a $\mu$-integrable function $g\;:\; \Sigma \to \mathbb{R}$ such that $|f_r(r,\sigma)| \leq g(\sigma)$ holds for all $r \in I$ and for $\mu$-a.e. $\sigma \in \Sigma$.

Then $f_r(r,\cdot)$ is $\mu$ -integrable for each $r \in I$ , and the function $F\;:\; r \mapsto \int_\Sigma f(r,\sigma)\,\mu({\text{d}} \sigma)$ is differentiable at $r \in I$ with derivative

\begin{align*}\dfrac{{\text{d}}}{{\text{d}} r} F(r)=\int_\Sigma f_r(r,\sigma)\,\mu({\text{d}} \sigma).\end{align*}
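As a purely illustrative toy instance of Lemma A.1 (the choice of f and I is ours), take $\mu$ to be counting measure on $\mathbb{N}$, $I=(1,2)$, and $f(r,n)={\text{e}}^{-rn}/n^2$; then $|f_r(r,n)|={\text{e}}^{-rn}/n \leq {\text{e}}^{-n}/n=:g(n)$ is summable, so the series may be differentiated term by term.

```python
import math

# Toy instance of Lemma A.1 (choice of f and I is ours): mu = counting
# measure on {1, 2, ...}, I = (1, 2), f(r, n) = exp(-r n) / n^2.  Then
# |d/dr f(r, n)| = exp(-r n) / n <= exp(-n) / n, which is summable,
# so the series may be differentiated term by term.
N = 200  # truncation level; the tail beyond N is negligible at r = 1.5

def F(r):
    return sum(math.exp(-r * n) / n**2 for n in range(1, N + 1))

def F_prime_term_by_term(r):
    return sum(-math.exp(-r * n) / n for n in range(1, N + 1))

r, h = 1.5, 1e-6
fd = (F(r + h) - F(r - h)) / (2 * h)  # numerical d/dr of F
```

The finite difference and the term-by-term derivative agree to high accuracy, as Lemma A.1 predicts.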

The following proposition enables us to differentiate under the integral sign in the proof of Lemma 4.1.

Proposition A.1. Fix $y \in \mathbb{Z}^d \setminus \{0\}$ and define for $0<r<1$ and the trajectory $\mathbb{S}$ of the simple random walk,

\begin{align*}\phi(r,\mathbb{S})\;:\!=\; \prod_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\mathbb{S}),\mathbb{S}) \geq 1}} \mathbb{E}_r\bigl[ {\text{e}}^{-\ell_z(H(y,\mathbb{S}),\mathbb{S})\omega(0)} \bigr] \mathbf{1}_{\{ H(y,\mathbb{S})<\infty \}}.\end{align*}

Then, for all $0<r<1$ , we have

(A.1) \begin{align}\dfrac{{\text{d}}}{{\text{d}} r} E^0[\phi(r,\mathbb{S})]=E^0\biggl[ \dfrac{{\text{d}}}{{\text{d}} r} \phi(r,\mathbb{S}) \biggr].\end{align}

Proof. Fix $y \in \mathbb{Z}^d \setminus \{0\}$ and let $0<r_0<1$ . It suffices to prove (A.1) for all $0<r<r_0$ . To this end, set $I\;:\!=\; (0,r_0)$ , $\Sigma\;:\!=\; (\mathbb{Z}^d)^{\mathbb{N}_0}$ (equipped with the probability measure $\mu\;:\!=\; P^0(\mathbb{S} \in \cdot)$ ) and

\begin{align*}f(r,\sigma)\;:\!=\; \phi(r,\sigma),\quad r \in I,\,\sigma \in \Sigma.\end{align*}

Then we can rewrite $E^0[\phi(r,\mathbb{S})]$ as

\begin{align*}E^0[\phi(r,\mathbb{S})]=\int_\Sigma f(r,\sigma)\,\mu({\text{d}} \sigma).\end{align*}

Hence, for (A.1), let us check conditions (1)–(3) in Lemma A.1.

Condition (1) is clearly satisfied since $0 \leq f(r,\sigma) \leq 1$ holds for each $r \in I$ and $\sigma \in \Sigma$ . Furthermore, by (4.5), for any $\sigma \in \Sigma$ ,

\begin{align*}f_r(r,\sigma)= \phi(r,\sigma) \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\sigma),\sigma) \geq 1}}\dfrac{1-{\text{e}}^{-\ell_z(H(y,\sigma),\sigma)}}{r+{\text{e}}^{-\ell_z(H(y,\sigma),\sigma)}(1-r)},\end{align*}

and condition (2) is valid. It remains to check condition (3). To this end, we claim that for fixed $\sigma \in \Sigma$ and $z \in \mathbb{Z}^d$ with $\ell_z(H(y,\sigma),\sigma) \geq 1$, the function

\begin{align*}h(r)\;:\!=\; \dfrac{\phi(r,\sigma)}{r+{\text{e}}^{-\ell_z(H(y,\sigma),\sigma)}(1-r)}\end{align*}

is increasing in $r \in (0,1)$. Indeed, a standard calculation shows that for each such z,

\begin{align*}\dfrac{{\text{d}}}{{\text{d}} r}h(r)&= \dfrac{\phi(r,\sigma)}{r+{\text{e}}^{-\ell_z(H(y,\sigma),\sigma)}(1-r)} \sum_{\substack{w \in \mathbb{Z}^d \setminus \{z\}\\ \ell_w(H(y,\sigma),\sigma) \geq 1}}\dfrac{1-{\text{e}}^{-\ell_w(H(y,\sigma),\sigma)}}{r+{\text{e}}^{-\ell_w(H(y,\sigma),\sigma)}(1-r)} \geq 0,\end{align*}

which implies that h(r) is increasing in $r \in (0,1)$ . Hence, for all $r \in I$ and $\sigma \in \Sigma$ , we have

\begin{align*}|f_r(r,\sigma)|&\leq \sum_{\substack{z \in \mathbb{Z}^d\\ \ell_z(H(y,\sigma),\sigma) \geq 1}}\dfrac{\phi(r_0,\sigma)}{r_0+{\text{e}}^{-\ell_z(H(y,\sigma),\sigma)}(1-r_0)} \\&\leq r_0^{-1} \phi(r_0,\sigma) \times \#\{ z \in \mathbb{Z}^d\;:\; \ell_z(H(y,\sigma),\sigma) \geq 1 \}\\&=: g(\sigma).\end{align*}

Note that

\begin{align*}\int_\Sigma g(\sigma)\,{\text{d}} \mu\leq r_0^{-1} \widetilde{\mathbb{E}}_{r_0}^{0,y}[\#\mathcal{A}(0,y)].\end{align*}

By (4.3), we have

\begin{align*}\widetilde{\mathbb{E}}_{r_0}^{0,y}[\#\mathcal{A}(0,y)] \leq \dfrac{1+\log(2d)}{-\log\{ {\text{e}}^{-1}+(1-{\text{e}}^{-1})r_0 \}}<\infty,\end{align*}

and the integrability of $g$ follows. Therefore condition (3) is also satisfied.

Acknowledgements

The author thanks Masato Takei for useful discussions and would like to express his profound gratitude to the reviewer for a very careful reading of the manuscript.

Funding information

The author was supported by JSPS KAKENHI grant JP20K14332.

Competing interests

The author declares that no competing interests arose during the preparation or publication of this article.

References

Cerf, R. and Dembin, B. (2022). The time constant for Bernoulli percolation is Lipschitz continuous strictly above $p_c$. Ann. Prob. 50, 1781–1812.
Dembin, B. (2020). Anchored isoperimetric profile of the infinite cluster in supercritical bond percolation is Lipschitz continuous. Electron. Commun. Prob. 25, 1–13.
Dembin, B. (2021). Regularity of the time constant for a supercritical Bernoulli percolation. ESAIM Prob. Statist. 25, 109–132.
Flury, M. (2007). Large deviations and phase transition for random walks in random nonnegative potentials. Stoch. Process. Appl. 117, 596–612.
Flury, M. (2008). Coincidence of Lyapunov exponents for random walks in weak random potentials. Ann. Prob. 36, 1528–1583.
Fontes, L. and Newman, C. M. (1993). First passage percolation for random colorings of $\mathbb{Z}^d$. Ann. Appl. Prob. 3, 746–762.
Grimmett, G. (1999). Percolation (Grundlehren der mathematischen Wissenschaften 321). Springer.
Klenke, A. (2013). Probability Theory: A Comprehensive Course. Springer.
Kosygina, E., Mountford, T. S. and Zerner, M. P. (2011). Lyapunov exponents of Green’s functions for random potentials tending to zero. Prob. Theory Related Fields 150, 43–59.
Kubota, N. (2020). Strict comparison for the Lyapunov exponents of the simple random walk in random potentials. Available at arXiv:2010.08798.
Lawler, G. F. (2013). Intersections of Random Walks. Springer.
Le, T. T. H. (2017). On the continuity of Lyapunov exponents of random walk in random potential. Bernoulli 23, 522–538.
Mourrat, J.-C. (2012). Lyapunov exponents, shape theorems and large deviations for the random walk in random potential. ALEA Lat. Am. J. Prob. Math. Statist. 9, 165–209.
Scholler, J. (2014). On the time constant in a dependent first passage percolation model. ESAIM Prob. Statist. 18, 171–184.
Wang, W.-M. (2001). Mean field bounds on Lyapunov exponents in $\mathbb{Z}^d$ at the critical energy. Prob. Theory Related Fields 119, 453–474.
Wang, W.-M. (2002). Mean field upper and lower bounds on Lyapunov exponents. Amer. J. Math. 124, 851–878.
Wu, X.-Y. and Feng, P. (2009). On a lower bound for the time constant of first-passage percolation. Acta Math. Sinica Chinese Series 52, 81–86.
Zerner, M. P. (1998). Directional decay of the Green’s function for a random nonnegative potential on $\mathbb{Z}^d$. Ann. Appl. Prob. 8, 246–280.
Zygouras, N. (2009). Lyapounov norms for random walks in low disorder and dimension greater than three. Prob. Theory Related Fields 143, 615–642.