
Boolean percolation on digraphs and random exchange processes

Published online by Cambridge University Press:  25 October 2023

Georg Braun*
Affiliation:
Eberhard Karls Universität Tübingen

Abstract

We study in a general graph-theoretic formulation a long-range percolation model introduced by Lamperti [27]. For various underlying digraphs, we discuss connections between this model and random exchange processes. We clarify, for all $n \in \mathbb{N}$, under which conditions the lattices $\mathbb{N}_0^n$ and $\mathbb{Z}^n$ are essentially covered in this model. Moreover, for all $n \geq 2$, we establish that it is impossible to cover the directed n-ary tree in our model.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Percolation theory is a fascinating area of modern probability, which tries to understand under which conditions infinite components arise in random structures. In the present article, we study the properties of a Boolean percolation model on directed graphs and relate this model to a classical Markov chain known as the random exchange process.

Let $\mathcal{G} = (V,E)$ be a directed graph with an infinite, countable vertex set V. For all vertices x, $y \in V$ , we let $d(x,y) \in \mathbb{N}_0 \cup \{ \infty \}$ denote the distance from x to y in $\mathcal{G}$ . Note that $d\,:\, V \times V \rightarrow \mathbb{N}_0 \cup \{ \infty \}$ is an extended quasimetric on V, which is symmetric if and only if the graph $\mathcal{G}$ is undirected, i.e. $(x,y) \in E$ implies $(y,x) \in E$ for all x, $y \in V$ . Moreover, for all $x \in V$ and $n \in \mathbb{N}_0$ , we let $B_n (x)$ denote the open ball of radius n starting from x, which is the set of all vertices $y \in V$ with $d(x,y) < n$ . So, $B_0 (x) = \emptyset$ and $B_1 ( x) = \{ x \}$ for all $x \in V$ .

Let $\mu = (\mu_n)_{n \in \mathbb{N}_0}$ be a probability vector and let $(Y_x)_{x \in V}$ be a family of independent and identically distributed (i.i.d.) random variables satisfying $\mathbb{P}[ Y_x = n] = \mu_n$ for all $n \geq 0$ . In our percolation model, for all $x \in V$ , the random variable $Y_x$ represents the coverage radius of the vertex x. Hence the sets of covered, respectively uncovered, vertices are

\[V_\mu \,:\!=\, V_\mu ( \mathcal{G}) \,:\!=\, \bigcup_{x \in V} B _{Y_x} (x) \subseteq V, \quad V_\mu^c \,:\!=\, V_\mu^c ( \mathcal{G}) \,:\!=\, V \setminus V_\mu ( \mathcal{G}).\]

Note that if $\mu_0 + \mu_1 = 1$ , this Boolean percolation model reduces to Bernoulli percolation on the sites of the graph $\mathcal{G}$ , as every vertex may only cover itself.

As we are interested in the properties of the random sets $V_\mu$ and $V_\mu^c$ , we will always assume $\mu_0 \in (0,1)$ , since $V_\mu = V$ almost surely in the case of $\mu_0=0$ , and $V_\mu = \emptyset$ almost surely for $\mu_0=1$ . Moreover, in our study we will assume that $\mathcal{G}$ is weakly connected.

Let x, $y \in V$ and $V^{\prime} = V_\mu$ or $V^{\prime}= V_\mu^c$ . Then, if both x and y are contained in $V^{\prime}$ and connected by a path in $\mathcal{G}$ which uses only vertices from $V^{\prime}$ , we will say that x and y are in the same cluster.

To state our results, we introduce the following notation. Let $n \in \mathbb{N}$ , $V = \mathbb{N}_0^n$ , or $ V = \mathbb{Z}^n$ (see Figure 1), and E be the set of all pairs $(x,x+e_j)$ , where $x \in V$ , $j=1,\ldots,n$ , and $e_j = (\delta_{ij})_{i=1,\ldots,n}$ . Then we denote the resulting graph $\mathcal{G} = (V,E)$ by $\mathbb{N}_0^n$ , respectively $\mathbb{Z}^n$ . Furthermore, for all $n \geq 2$ , we define the infinite directed n-ary tree $\mathcal{D}_n \,:\!=\, (V_n, E_n)$ by

\begin{align*}V_n &\,:\!=\, \bigcup_{m \geq 0} \{ 1,\ldots,n \}^m, \quad \text{where $ \{ 1,\ldots,n \}^0 \,:\!=\, \{ \emptyset \}$ contains only the empty word $\emptyset$,}\\E_n &\,:\!=\, \{ (\emptyset , 1 ),\ldots, ( \emptyset, n ) \} \cup \{ ( x, (x,j) ) \mid x \in V_n \setminus \{ \emptyset \}, \ j=1,\ldots,n \}.\end{align*}

Figure 1. Illustration of the lattices $\mathbb{N}_0^2$ (a) and $\mathbb{Z}^2$ (b).

In this article we will clarify under which conditions the graphs $\mathbb{N}_0^n$ and $\mathbb{Z}^n$ are (essentially) covered by a distribution $\mu$ ; see Theorems 1 and 2 below. On the other hand, in Theorem 3, we will see that for any distribution $\mu$ and $n \geq 2$ , $\# V_\mu^c ( \mathcal{D}_n) = \infty$ almost surely.

To the best of our knowledge, the present percolation model was first studied by Lamperti [27] for $\mathcal{G} = \mathbb{N}_0$ . This research was motivated by statistical physics and included the following description. At each location $n \in \mathbb{N}_0$ there is a fountain, which sprays water to the right and wets the segment from $n+1$ to $n+ Y_n$ . As $\mu_0> 0$ , with some positive probability, a fountain fails to operate at all.
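
For illustration only, the following short Python sketch (not part of the original analysis; the law `mu` and the window length `N` are arbitrary choices made here) simulates the covered set $V_\mu(\mathbb{N}_0)$ on a finite initial segment.

```python
import numpy as np

rng = np.random.default_rng(0)

def covered_sites(mu, N):
    """Simulate the covered set V_mu on {0, ..., N-1}.

    mu is a probability vector with mu[k] = P[Y_x = k]; each site x covers
    the ball B_{Y_x}(x) = {x, x+1, ..., x + Y_x - 1} (empty if Y_x = 0).
    """
    Y = rng.choice(len(mu), size=N, p=mu)
    covered = np.zeros(N, dtype=bool)
    for x in range(N):
        covered[x:min(N, x + Y[x])] = True
    return covered

mu = [0.5, 0.25, 0.15, 0.1]          # an example law with mu_0 in (0, 1)
cov = covered_sites(mu, 50)
print("uncovered sites:", np.flatnonzero(~cov))
```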

Our percolation model and variants of it have been studied by various authors; see [6], [15], [23], [24], [25], and [29]. For a recent survey, see [23]. At this point, however, we want to postpone the discussion of how our new insights and results are related to these articles.

We can interpret our percolation model as the spread of a rumour through a network, a firework process, or a discrete version of Boolean percolation. The latter model was introduced by Gilbert [17]. First, points are chosen randomly in $\mathbb{R}^n$ according to a Poisson point process. Then, in the simplest case, around these points, the unit sphere is covered. For monographs concerned with Boolean percolation, see [19], [30], [33], and [34].

We also want to mention that the discrete percolation model of the present paper was used by Bezborodov and Krueger [8] to study the growth of continuous-time frog processes.

Apart from our Boolean percolation itself, we also investigate its connection to a rather classical Markov chain, which is sometimes called random exchange process. As far as we know, it was first observed by Zerner [36, Section 1] that these two stochastic models are related to each other.

Let $(Y_n)_{n \geq 0}$ denote a sequence of i.i.d. random variables, which, as before, are distributed according to $\mu$ . Then we set $X_0 \,:\!=\, Y_0$ and recursively define

\[X_{n+1} \,:\!=\, \max \{ X_n - 1, Y_{n+1} \}, \quad n \in \mathbb{N}_0.\]

To the best of our knowledge, this process $(X_n)_{n \geq 0}$ first occurred in a statistical research article on deepwater exchange of a fjord; see [14]. It was later studied in a more general form in [20] and [21]. In the following, we call the Markov chain $(X_n)_{n \geq 0}$ a (constant decrement) random exchange process. By construction, it has time-homogeneous transition probabilities and is irreducible on its state space $\mathcal{X}$ , which is equal to $\mathbb{N}_0$ if $\mu$ is unbounded, and otherwise takes the form $\{ 0, 1, \ldots, n_0 \}$ , where $n_0 \,:\!=\, \sup \{n \in \mathbb{N}\mid \mu_n \neq 0 \}$ . The transition matrix P associated with $(X_n)_{n \geq 0}$ is

\[P \,:\!=\, P_\mu \,:\!=\, ( P_{\mu; x,y} )_{x,y \in \mathcal{X}}, \quad \text{where}\ P_{\mu; x,y} \,:\!=\, \begin{cases}\mu_y,&y \geq x,\\\sum_{z=0}^{x-1} \mu_z, & y = x-1,\\0,& y \leq x - 2. \\\end{cases}\]
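
As a sanity check, the following Python sketch (an illustration, not part of the paper; the law `mu` is an arbitrary choice with finite support) builds the matrix $P_\mu$ on the finite state space $\{0,\ldots,n_0\}$ and verifies that it is row-stochastic.

```python
import numpy as np

def transition_matrix(mu):
    """Transition matrix P_mu of the random exchange process for a
    finitely supported law mu, on the state space {0, ..., n0}."""
    mu = np.asarray(mu, dtype=float)
    n0 = max(k for k in range(len(mu)) if mu[k] > 0)
    P = np.zeros((n0 + 1, n0 + 1))
    for x in range(n0 + 1):
        for y in range(n0 + 1):
            if y >= x:
                P[x, y] = mu[y]            # a fresh radius Y = y >= x is adopted
            elif y == x - 1:
                P[x, y] = mu[:x].sum()     # Y < x, so the chain steps down by one
    return P

P = transition_matrix([0.4, 0.3, 0.2, 0.1])
print(np.allclose(P.sum(axis=1), 1.0))     # every row sums to one
```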

As $(X_n)_{n \geq 0}$ , respectively P, is irreducible, for all $z>0$ , the Green’s function

\[G(x,y \mid z) \,:\!=\, \sum_{n=0}^\infty P^{(n)}_{\mu; x,y}\, z^n\]

either converges or diverges simultaneously for all x, $y \in \mathcal{X}$ , where $P^{(n)}_{\mu; x,y}$ denotes the n-step transition probabilities of P; see [35, Chapter 1.1]. Therefore, independent of the choice of x, $y \in \mathcal{X}$ , we can define the spectral radius of $(X_n)_{n \geq 0}$ , respectively P, by

\[\rho (P ) \,:\!=\, \limsup_{n \rightarrow \infty} \bigl( P^{(n)}_{\mu; x,y} \bigr)^{1/n} \in (0,1].\]

More generally, if A is an arbitrary irreducible matrix with non-negative entries, we can define $\rho(A) \in [0,\infty]$ in exactly the same way.
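
For a finite irreducible non-negative matrix, this quantity is simply the Perron–Frobenius eigenvalue. A short Python sketch (illustrative only; the matrix `A` is an arbitrary example chosen here) compares the $\limsup$ definition with the largest eigenvalue computed numerically.

```python
import numpy as np

def spectral_radius_limsup(A, x=0, y=0, n_max=400):
    """Approximate rho(A) via ((A^n)_{x,y})^{1/n} for one large n."""
    An = np.linalg.matrix_power(A, n_max)
    return An[x, y] ** (1.0 / n_max)

A = np.array([[0.0, 0.5],
              [0.3, 0.2]])                  # irreducible, non-negative
print(spectral_radius_limsup(A))            # approximately 0.5
print(max(abs(np.linalg.eigvals(A))))       # Perron-Frobenius eigenvalue 0.5
```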

Let us now state connections between the set $V_\mu$ of covered vertices in our percolation model and the Markov chain $(X_n)_{n \geq 0}$ . We start by reformulating previous results as follows.

Theorem 1. For any law $\mu$ , the following statements are equivalent.

  1. (i) Almost surely, $\# V_\mu^c ( \mathbb{Z}) < \infty$ .

  2. (ii) Almost surely, $V_\mu ( \mathbb{Z}) = \mathbb{Z}$ .

  3. (iii) The Markov chain $(X_n)_{n \geq 0}$ is not positive recurrent.

  4. (iv) The expectation of $\mu$ is infinite, i.e. $\sum_{n \geq 0} n \mu_n = \infty$ .

It is not difficult to verify, more generally, that (i) and (ii) are equivalent if we replace $\mathbb{Z}$ with an arbitrary vertex-transitive graph.

By applying the Borel–Cantelli lemma, we can directly verify that (ii) and (iv) are equivalent statements. This equivalence was also observed, in a more general form, in [24, Section 2.2] and [6, Section 4]. For any graph $\mathcal{G}=(V,E)$ , we have $V_\mu = V$ almost surely if and only if

\[\sum_{x \in V} \sum_{k \geq d(x,y)} \mu_k = \infty \quad \text{for all $ y \in V$.}\]

For example, for all $n \geq 1$ , $V_\mu ( \mathbb{Z}^n )= \mathbb{Z}^n$ almost surely if and only if the nth moment of $\mu$ diverges. This kind of phenomenon is well known in the context of Boolean percolation models, and thus it seems convenient to include some previous literature results at this point.
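
(The $n$th moment appears here because, in the directed lattice $\mathbb{Z}^n$ , the number of vertices x with $d(x,y)=m$ equals $\binom{m+n-1}{n-1} \asymp (m+1)^{n-1}$ , so that

\[\sum_{x \in \mathbb{Z}^n} \sum_{k \geq d(x,y)} \mu_k \asymp \sum_{m \geq 0} (m+1)^{n-1} \sum_{k \geq m} \mu_k = \sum_{k \geq 0} \mu_k \sum_{m = 0}^{k} (m+1)^{n-1} \asymp \sum_{k \geq 0} (k+1)^{n} \mu_k ,\]

which diverges precisely when the nth moment of $\mu$ is infinite.)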

Hall [19] studied Boolean percolation on $\mathbb{R}^n$ with spheres of random i.i.d. radii and proved [19, Theorem 3.1] that the entire space is almost surely covered if and only if the nth moment of the radius distribution diverges. Gouéré [18] established that if the nth moment of the radius distribution is finite, there exists a critical value for the intensity of the underlying Poisson process. Recently, more results on phase transitions were deduced in [2] and [13]. However, there are also results on other aspects of Boolean percolation. For example, Ahlberg et al. [1] showed that this model is noise-sensitive, and Last et al. [28] studied the capacity functional.

Athreya et al. [3] and Bezborodov [7] studied Boolean percolation on $[0,\infty)^n$ when, instead of the sphere around a point x, the set $x + [0,R_x)^n$ is occupied, where $R_x$ is the radius associated with x. The results in [3] characterize under which conditions the entire space is essentially covered, and interestingly depend on whether $n=1$ or $n \geq 2$ . Bezborodov [7] observed, for $n=1$ and some radius distributions, that the covered volume fraction is one, but all clusters are bounded almost surely.

Coletti and Grynberg [11] studied a model on $\mathbb{Z}^n$ in which first Bernoulli percolation with parameter $p \in (0,1)$ is performed, and then, independently around the present points, random i.i.d. balls are covered. Again, the occupied region is almost surely $\mathbb{Z}^n$ if and only if the nth moment of the radius distribution diverges. For a study of this percolation model on doubling graphs, see also [12].

Let us return to Theorem 1. The equivalence of (iii) and (iv) was first observed by Helland [20, Section 3] and also mentioned by Kellerer [26, comments after Theorem 2.6]. We can deduce it as follows. Due to the form of the transition probabilities of the Markov chain $(X_n)_{n \geq 0}$ , any invariant measure $\tau = (\tau_x)_{x \in \mathcal{X}}$ has to satisfy

\[\tau_{x} = \sum_{z \in \mathcal{X}} \tau_z P_{\mu;z,x} = \sum_{z=0}^x \tau_z \mu_x + \tau_{x+1} \sum_{z= 0}^{x} \mu_z, \quad \text{provided that $x, x+1 \in \mathcal{X}$,}\]

or respectively

\begin{align*}\tau_{x+1} = \Biggl( (1 - \mu_x) \tau_x - \sum_{z=0}^{x-1} \mu_x \tau_{z} \Biggr) \Bigg/ \Biggl( \sum_{z=0}^{x} \mu_z \Biggr).\end{align*}

We obtain $\tau_1 = \tau_0 (1 - \mu_0 ) \mu_0^{-1}$ ,

\begin{align*}\tau_2 = \tau_0 ( 1 - \mu_0 - \mu_1 ) \mu_0^{-1} (\mu_0 + \mu_1)^{-1},\end{align*}

and more generally the representation

(1) \begin{equation}\tau_x = \tau_0 \Biggl( \sum_{z \geq x} \mu_z \Biggr) \Biggl( \prod_{y=0}^{x-1} \sum_{z=0}^y \mu_z \Biggr)^{-1}, \quad x \in \mathcal{X}.\end{equation}

Indeed, from a careful look at this formula, it follows that (iii) and (iv) are equivalent. Moreover, if the distribution $\mu$ has a finite expectation, we can determine the stationary solution of $(X_n)_{n \geq 0}$ from (1) via normalization. In Section 3 we will present some concrete examples, and we also want to mention that results on positive recurrence of more general exchange processes with random decrements are given in [21, Section 2].
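
As a numerical illustration (not part of the original text; the law `mu` is an arbitrary finitely supported choice), the following Python sketch evaluates formula (1), normalizes it, and checks stationarity against the transition matrix $P_\mu$ .

```python
import numpy as np

mu = np.array([0.4, 0.3, 0.2, 0.1])          # arbitrary finitely supported law
n0 = len(mu) - 1

# Invariant measure from formula (1) with tau_0 = 1, then normalized.
tau = np.array([mu[x:].sum() / np.prod([mu[:y + 1].sum() for y in range(x)])
                for x in range(n0 + 1)])
tau /= tau.sum()

# Transition matrix of the random exchange process on {0, ..., n0}.
P = np.zeros((n0 + 1, n0 + 1))
for x in range(n0 + 1):
    P[x, x:] = mu[x:]
    if x >= 1:
        P[x, x - 1] = mu[:x].sum()

print(np.allclose(tau @ P, tau))             # tau is indeed stationary
```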

Our first main result is an analogue to Theorem 1, which characterizes transience of the random exchange process $(X_n)_{n \geq 0}$ .

Theorem 2. For any law $\mu$ , the following statements are equivalent.

  1. (a) There exists $n \in \mathbb{N}$ with $\# V_\mu^c \big( \mathbb{N}_0^n\big) < \infty$ almost surely.

  2. (b) For all $n \in \mathbb{N}$ , $\# V_\mu^c \big( \mathbb{N}_0^n\big) < \infty$ almost surely.

  3. (c) The Markov chain $(X_n)_{n \geq 0}$ is transient.

  4. (d) $\sum_{m \geq 0} \prod_{k=1}^m \sum_{l=0}^{k-1} \mu_l < \infty$ .

Moreover, if one of these conditions is satisfied, then $\mathbb{E} \big[\# V_\mu^c \big( \mathbb{N}_0^n\big)\big] < \infty$ for all $n \in \mathbb{N}$ and there exists $\alpha \in (0,\infty)$ with $\mathbb{E} \big[\!\exp \big( \alpha \# V_\mu^c \big( \mathbb{N}_0\big)\big)\big] < \infty$ .

This theorem improves on previous works by revealing that, rather surprisingly, the value of $n \in \mathbb{N}$ does not influence whether all but finitely many points of the graph $\mathbb{N}_0^n$ are covered by a distribution $\mu$ . In the appendix of Lamperti’s paper [27], Kesten gives a proof of the fact that $\# V_\mu^c ( \mathbb{N}_0) < \infty$ almost surely if and only if condition (d) in Theorem 2 is satisfied. This result was later rediscovered by various authors, partly in a different and more general form; see [26, comments to Proposition 6.6], [24, Theorem 2.1], [15, Theorem 1], [6, Section 3], and [8, Section 3].

As observed by Zerner [36, Proposition 1.1] and suggested by our notation, we can couple the set of covered points $V_\mu ( \mathbb{N}_0)$ and the random exchange process $(X_n)_{n \geq 0}$ by using the same sequence of random variables $(Y_n)_{n \geq 0}$ in both definitions. Then, by construction,

\begin{align*}V_\mu ( \mathbb{N}_0)&= \{ n \in \mathbb{N}_0\mid \exists k \in \{0,\ldots,n\} \,:\, Y_k > n - k \}\\&= \Bigl\{ n \in \mathbb{N}_0 \bigm| \max_{0 \leq k \leq n} ( Y_k - (n-k) ) > 0 \Bigr\}\\&= \{ n \in \mathbb{N}_0\mid X_{n} > 0 \}.\end{align*}

Consequently, we know that the Markov chain $(X_n)_{n \geq 0}$ is transient if and only if $\# V_\mu^c ( \mathbb{N}_0 ) < \infty$ almost surely.
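
This coupling can be checked directly by simulation; the sketch below (illustrative only, with an arbitrary law `mu` and window length) draws one sequence $(Y_k)_{k \geq 0}$ and compares the covered set with $\{ n \mid X_n > 0 \}$ on a finite window.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = [0.5, 0.2, 0.2, 0.1]                    # arbitrary law with mu_0 in (0, 1)
N = 200
Y = rng.choice(len(mu), size=N, p=mu)

# Covered set V_mu restricted to {0, ..., N-1}: n is covered iff some
# k <= n satisfies Y_k > n - k.
covered = np.array([any(Y[k] > n - k for k in range(n + 1)) for n in range(N)])

# Random exchange process driven by the same radii Y_k.
X = np.empty(N, dtype=int)
X[0] = Y[0]
for n in range(N - 1):
    X[n + 1] = max(X[n] - 1, Y[n + 1])

print(np.array_equal(covered, X > 0))        # the two descriptions agree
```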

Let $n \geq 2$ and consider the infinite directed n-ary tree $\mathcal{D}_n = (V_n, E_n)$ . Then, interestingly, we can associate a multitype branching process $(Z_m)_{m \geq 0}$ to our percolation model on $\mathcal{D}_n$ in the following way.

Let $k \in \mathbb{N}_0$ and $y \in V_n$ with $d(\emptyset,y)=k$ . Then we identify the vertex y with an individual of the kth generation of $(Z_m)_{m \geq 0}$ if and only if $y \in V_\mu$ and y is contained in the same cluster as the root $\emptyset$ . In other words, we demand that all vertices forming the path from $\emptyset$ to y in $\mathcal{D}_n$ are contained in $V_\mu ( \mathcal{D}_n)$ . If this condition is satisfied, we define the type of y by

\[z_y\,:\!=\, \max \{ Y_x - d(x,y) \mid x \in V_n,\ d(x,y) < \infty \}.\]

By construction, if $Y_\emptyset \geq 1$ , the branching process $(Z_m)_{m \geq 0}$ starts with one individual of type $Y_\emptyset$ . However, on the event $Y_\emptyset = 0$ , there are no individuals at all. For a vertex $y \in V_n$ to be identified with an individual in $(Z_m)_{m \geq 0}$ , necessarily $y \in V_\mu$ , i.e. there exists $x \in V_n$ with $Y_x > d(x,y)$ . Hence, as $\mu_0 \in (0,1)$ , the type space of the branching process $(Z_m)_{m \geq 0}$ is $\mathcal{Z} = \mathcal{X} \setminus \{ 0 \}$ .

We can describe the reproduction in this branching process as follows. Every individual has up to n children, and given the type of the parent, the types of the children are independent. Moreover, every individual of type $x \geq 2$ has exactly n children. For each of them, the probability of type $y \in \mathcal{Z}$ is $M_{x,y}\,:\!=\, P_{\mu; x,y}$ . On the other hand, an individual of type 1 has n potential children, which are again independent of each other. For all $z \in \mathcal{Z}$ , the probability that a given potential child is born and of type z is $M_{1,z}\,:\!=\, P_{\mu; 1,z}$ . However, with probability $\mu_0$ , a potential child is not born.
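
A minimal sketch of one reproduction step (illustrative only; it uses the observation, implicit in the coupling above, that a child of a type-x parent has type $\max(x-1,Y)$ for a fresh radius $Y \sim \mu$ , with the value 0 meaning that the potential child is not born):

```python
import numpy as np

rng = np.random.default_rng(2)

def offspring_types(parent_type, mu, n):
    """Sample the children's types for one individual of the given type.

    Each of the n slots draws a fresh radius Y ~ mu and yields a child of
    type max(parent_type - 1, Y); the value 0 (possible only for a parent
    of type 1) means that this potential child is not born.
    """
    Y = rng.choice(len(mu), size=n, p=np.asarray(mu, dtype=float))
    types = np.maximum(parent_type - 1, Y)
    return types[types >= 1]                 # drop the "not born" slots

print(offspring_types(parent_type=1, mu=[0.5, 0.3, 0.2], n=2))
```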

As the type space $\mathcal{Z}$ is infinite in general, we distinguish between the local and global extinction of $(Z_m)_{m \geq 0}$ . This process dies out globally if, at some moment, the total number of individuals vanishes. It dies out locally if, for all $z \in \mathcal{Z}$ , only finitely many individuals of type z are born. While global extinction always implies local extinction, the reverse is not true in general for branching processes with infinitely many types.

Theorem 3. Let $n \geq 2$ . Then, for any distribution $\mu$ , $\# V_\mu^c ( \mathcal{D}_n) = \infty$ almost surely. Moreover, the following statements are equivalent.

  1. (A) Almost surely, $V_\mu ( \mathcal{D}_n)$ contains a path of infinite length.

  2. (B) With positive probability, $(Z_m)_{m \geq 0}$ will not die out globally.

  3. (C) With positive probability, $(Z_m)_{m \geq 0}$ will not die out locally.

  4. (D) $\rho (M) > n^{-1}$ , where $M\,:\!=\, (M_{x,y})_{x,y \in \mathcal{Z}}$ .

If one of these statements holds, then almost surely $V_\mu ( \mathcal{D}_n)$ contains infinitely many distinct infinite clusters. Otherwise, almost surely there are no infinite clusters in $V_\mu ( \mathcal{D}_n)$ .

Note that, up to multiplication by $n \geq 2$ , M is the mean matrix of the branching process $(Z_m)_{m \geq 0}$ . To some degree, this explains why condition (D) is related to (B) and (C). Also, observe that M arises from the transition matrix P of the random exchange process $(X_n)_{n \geq 0}$ simply by deleting both the first row and first column.
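
For a finitely supported law $\mu$ , condition (D) can therefore be checked numerically: build $P_\mu$ , delete its first row and column, and compare the Perron–Frobenius eigenvalue of the result with $n^{-1}$ . A short sketch (illustrative only; `mu` and `n` are arbitrary choices):

```python
import numpy as np

def survival_possible(mu, n):
    """Check condition (D) of Theorem 3 for a finitely supported law mu:
    rho(M) > 1/n, where M is P_mu with its first row and column removed."""
    mu = np.asarray(mu, dtype=float)
    n0 = len(mu) - 1
    P = np.zeros((n0 + 1, n0 + 1))
    for x in range(n0 + 1):
        P[x, x:] = mu[x:]
        if x >= 1:
            P[x, x - 1] = mu[:x].sum()
    M = P[1:, 1:]
    rho = max(abs(np.linalg.eigvals(M)))
    return rho > 1.0 / n

print(survival_possible([0.5, 0.3, 0.2], n=2))   # True: rho(M) is roughly 0.65 > 1/2
```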

For an introduction to infinite-type branching processes, we recommend Braunsteins’ exposition [9, Chapter 2] and the references mentioned therein. This presentation also explains the rather well-understood results on the extinction of finite-type branching processes.

We also want to note that the branching processes $(Z_m)_{m \geq 0}$ , which we consider in the present article, have a mean matrix of upper Hessenberg form. Recently, Braunsteins and Hautphenne [10] studied the extinction of branching processes with a lower Hessenberg mean matrix.

2. Proof of Theorems 2 and 3

Proof of Theorem 2. (b) $\Longrightarrow$ (c) $\Longrightarrow$ (a) From our coupling between $V_\mu ( \mathbb{N}_0)$ and $(X_n)_{n \geq 0}$ , we know that $(X_n)_{n \geq 0}$ is transient if and only if $\# V_\mu^c ( \mathbb{N}_0) < \infty$ almost surely. In particular, the implications (b) $\Longrightarrow$ (c) and (c) $\Longrightarrow$ (a) follow.

(d) $\Longrightarrow$ (a) By Kolmogorov’s 0–1 law, for any probability distribution $\mu$ , either $\# V_\mu^c ( \mathbb{N}_0) < \infty$ almost surely or $\#V_\mu^c ( \mathbb{N}_0) = \infty$ almost surely. By observing

(2) \begin{equation}\mathbb{E} \big[\# V_\mu^c \big( \mathbb{N}_0\big) \big] = \sum_{m \geq 0} q_m = \sum_{m \geq 0} \prod_{k=1}^m \sum_{l=0}^{k-1} \mu_l,\end{equation}

where $q_m \,:\!=\, P \big[ m \in V_\mu^c \big( \mathbb{N}_0\big)\big]$ for all $m \geq 0$ , the implication follows.

Finally, let us assume that condition (a) holds for a distribution $\mu$ . We will show (a) $\Longrightarrow$ (d) and (a) $\Longrightarrow$ (b).

In the first step, we verify that we can restrict ourselves to the case $n=1$ . For this, suppose $\# V_\mu^c \big( \mathbb{N}_0^n\big) < \infty$ almost surely for some $n \geq 2$ . Then, consider the subgraph $\mathcal{G}^{\prime}=(V^{\prime},E^{\prime})$ of $\mathbb{N}_0^n$ , which is induced by the vertex set $V^{\prime}$ of all $(x_1,\ldots,x_n) \in \mathbb{N}_0^n$ with $x_j = 0$ for all $j=2,\ldots,n$ . By construction, $\mathcal{G}^{\prime}$ is isomorphic to $\mathbb{N}_0$ , and we know that $V_\mu^c ( \mathcal{G}^{\prime}) = V_\mu^c \big( \mathbb{N}_0^n\big) \cap V^{\prime}$ is finite almost surely. Consequently, $\# V_\mu^c ( \mathbb{N}_0 ) < \infty$ almost surely.

In the second step, we prove all remaining claims. As $0 < \mu_0 < 1$ ,

\[p\,:\!=\, \mathbb{P} [ V_\mu(\mathbb{N}_0)=\mathbb{N} ] > 0.\]

Therefore, by the strong Markov property, we know that $\# V_\mu^c ( \mathbb{N}_0)$ is geometrically distributed with parameter p. In particular, this random variable has a finite exponential moment, and moreover, we can deduce (d) via (2).

Let $n \in \mathbb{N}$ and $x=(x_1,\ldots,x_n) \in \mathbb{N}_0^n$ . For all $j=1,\ldots,n$ , let $\pi_j$ denote the unique path from $(x_1,\ldots,x_{j-1},0,x_{j+1},\ldots,x_n)$ to x. Then, for all $j=1,\ldots,n$ , the path $\pi_j$ consists of $x_j$ edges in direction $e_j$ , and, for all $i \neq j$ , the paths $\pi_i$ and $\pi_j$ only share one vertex, which is their endpoint $x =(x_1,\ldots,x_n)$ . So, for the event $\big\{ x \in V_\mu^c \big(\mathbb{N}_0^n\big) \big\}$ to occur, it is necessary that for each $j=1,\ldots,n$ there exists no vertex y contained in the path $\pi_j$ such that $Y_y > d(y,x)$ . This defines n independent events, whose probabilities can be described with the sequence $(q_m)_{m \geq 0}$ defined above. All in all,

\begin{align*} \mathbb{E} \bigl[ \# V_\mu^c \big( \mathbb{N}_0^n\big) \bigr]&= \sum_{x \in \mathbb{N}_0^n} \mathbb{P} \bigl[ x \in V_\mu^c \big( \mathbb{N}_0^n\big) \bigr] \\ &\leq \sum_{(x_1,\ldots,x_n) \in \mathbb{N}_0^n}\ \prod_{j=1}^n\ q_{x_j}\\&= \sum_{x_1 \geq 0} q_{x_1} \sum_{x_2 \geq 0} q_{x_2} \cdots \sum_{x_{n-1} \geq 0} q_{x_{n-1}} \sum_{x_n \geq 0} q_{x_n}\\ &= \mathbb{E} \bigl[\# V_\mu^c \big( \mathbb{N}_0\big) \bigr]^n \\ &< \infty .\end{align*}

In particular, for all $n \geq 2$ , $\# V_\mu^c \big( \mathbb{N}_0^n\big) < \infty$ almost surely, and condition (b) holds. Since we have already verified (b) $\Longrightarrow$ (c), this finishes the proof.

Proof of Theorem 3. First we verify that for all $n \geq 2$ and any law $\mu$ , $\# V_\mu^c ( \mathcal{D}_n) = \infty$ almost surely. For this, for all $m \in \mathbb{N}$ , we set

\[r_m \,:\!=\, \mathbb{P} \bigl[ \exists y \in V_\mu^c ( \mathcal{D}_n)\,:\, d(\emptyset,y)=m \bigr].\]

As $0 < \mu_0 < 1$ , we know that $r_m \in (0,1)$ for all $m \in \mathbb{N}$ . Moreover, for all $j=1,\ldots,n$ , we let $\mathcal{G}_j$ denote the induced subgraph obtained from $\mathcal{D}_n$ by restricting to all vertices that can be reached from $j \in V_n$ .

Let $m \geq 2$ . Then we know that there exists a $y \in V_\mu^c ( \mathcal{D}_n)$ with $d(\emptyset,y)=m$ if and only if $Y_\emptyset \leq m$ and, for some $j=1,\ldots,n$ , there exists a vertex z in the graph $\mathcal{G}_j$ with $d(j,z)=m-1$ , which is not covered by any vertex of $\mathcal{G}_j$ . Note that the latter event is independent of $Y_\emptyset$ and that the graphs $\mathcal{G}_1,\ldots,\mathcal{G}_n$ are isomorphic to $\mathcal{D}_n$ . Consequently, for all $m \geq 1$ , we obtain the recurrence relation

(3) \begin{equation}r_{m+1} = ( 1 - (1 - r_m)^n )\ F(m+1), \quad \text{where} \ F(k) \,:\!=\, \sum_{l=0}^k \mu_l.\end{equation}

Let $N \in \mathbb{N}$ with $F(N) > 1/2$ . Then, by (3), for all $m \geq N$ ,

(4) \begin{equation}r_{m+1} \geq ( 1 -(1-r_m)^2 ) F(N) = r_m (2-r_m) F(N).\end{equation}

The map $f_N\,:\, [0,1] \rightarrow [0,1]$ , $x \mapsto x (2-x) F(N)$ , is monotone increasing. Hence, due to the estimate (4), iteration of the function $f_N$ yields

\[r_m \geq f_N^{m-N} (r_N) \quad \text{for all $ m \geq N+1$.}\]

The map $f_N$ has the two fixed points 0 and $x_N\,:\!=\, 2- F(N)^{-1} \in (0,1]$ . Hence, by monotonicity, if $r_N \geq x_N$ , then also $r_m \geq x_N$ for all $m \geq N$ . On the other hand, if $r_N < x_N$ , then, since $f_N$ is concave and $f^{\prime}_N (0) > 1$ , we have $f_N^k ( r_N) \rightarrow x_N$ for $k \rightarrow \infty$ . In both cases, we can deduce

\[\liminf_{m \rightarrow \infty} r_m \geq x_N = 2- F(N)^{-1}.\]

As $N \in \mathbb{N}$ can be chosen arbitrarily large in this argument, it follows that $r_m \rightarrow 1$ for $m \rightarrow \infty$ . In particular, $V_\mu^c ( \mathcal{D}_n)$ is almost surely non-empty.
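
The convergence $r_m \rightarrow 1$ can also be observed numerically by iterating the recurrence (3); the following sketch is illustrative only (the law `mu` and the value of n are arbitrary choices, and the starting value $r_1 = F(1)\,(1-(1-\mu_0)^n)$ is obtained by the same reasoning that gives (3)).

```python
import numpy as np

mu = np.array([0.5, 0.3, 0.2])                 # arbitrary law with mu_0 in (0, 1)
n = 2                                          # directed n-ary tree D_n

def F(k):                                      # F(k) = mu_0 + ... + mu_k
    return mu[:k + 1].sum() if k < len(mu) else 1.0

r = (1.0 - (1.0 - mu[0]) ** n) * F(1)          # r_1
for m in range(1, 40):
    r = (1.0 - (1.0 - r) ** n) * F(m + 1)      # recurrence (3) for r_{m+1}
    if m % 10 == 0:
        print(m + 1, r)                        # r_m approaches 1
```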

By Kolmogorov’s 0–1 law, we know that either $\# V_\mu^c ( \mathcal{D}_n)$ is finite almost surely, or this random variable is infinite almost surely. In the first case, due to $\mu_0 \in (0,1)$ , it would follow that $V_\mu^c ( \mathcal{D}_n)$ is empty with probability greater than zero. As this is not possible, we can conclude $\# V_\mu^c ( \mathcal{D}_n) = \infty$ almost surely.

In the second step of this proof, we now verify that indeed the statements (A), (B), (C), and (D) are equivalent to each other.

(C) $\Longrightarrow$ (B) This implication is clear.

(A) $\Longleftrightarrow$ (B) If (A) holds, then, with positive probability, $V_\mu( \mathcal{D}_n)$ contains an infinite path starting from the root $\emptyset$ . On this event, $(Z_m)_{m \geq 0}$ does not die out globally, i.e. condition (B) holds. Conversely, if (B) holds, then, with positive probability, $V_\mu ( \mathcal{D}_n)$ contains an infinite path. By Kolmogorov’s 0–1 law, (A) follows.

(C) $\Longleftrightarrow$ (D) Since the mean matrix M of the branching process $(Z_m)_{m \geq 0}$ is irreducible, this equivalence follows from the theory of multitype branching processes; see [9, Theorem 9], [16], and [5].

(B) $\Longrightarrow$ (C) Suppose, for some distribution $\mu$ , that $(Z_m)_{m \geq 0}$ dies out locally almost surely but survives forever with positive probability. Then, as $(Z_m)_{m \geq 0}$ starts with a single individual of random type $Y_\emptyset$ , with some positive probability, $(Z_m)_{m \geq 0}$ survives forever, and no individuals of type 1 are born. On this event, we would know that $\# V_\mu^c ( \mathcal{D}_n) < \infty$ , and this is a contradiction to our first claim. The implication follows.

Finally, let us verify the two claims regarding infinite clusters of $V_\mu ( \mathcal{D}_n)$ . Note that $V_\mu ( \mathcal{D}_n)$ contains an infinite cluster if and only if it contains an infinite path, and due to Kolmogorov’s 0–1 law the probability for this is either 0 or 1. So, we only need to prove that if $V_\mu ( \mathcal{D}_n)$ contains an infinite path almost surely, then there exist infinitely many such paths so that every path of this collection is contained in a separate cluster of $V_\mu ( \mathcal{D}_n)$ .

First, let us extend some of our notation. For a vertex y in $\mathcal{D}_n$ , let $\mathcal{G}_{y}$ again denote the subgraph of $\mathcal{D}_n$ that arises by restricting to the vertex y and all vertices z of $\mathcal{D}_n$ with $d(y,z) < \infty$ . Again, this graph is isomorphic to $\mathcal{D}_n$ .

Choose a sequence of vertices $(y_k)_{k \geq 1}$ in $\mathcal{D}_n$ so that the vertex sets of the graphs $\mathcal{G}_{y_k}$ , $k \geq 1$ , are pairwise disjoint.

Now, we first verify that for any choice of $\mu$ and for all $k \geq 1$ , almost surely infinitely many vertices of $\mathcal{G}_{y_k}$ are not covered. For this, note that as $\mu_0 > 0$ , with positive probability all vertices x in $\mathcal{D}_n$ with $d(x,y_k) < \infty$ and $x \neq y_k$ satisfy $x \in V_\mu^c ( \mathcal{D}_n)$ . On this event, as $\mathcal{D}_{n}$ and $\mathcal{G}_{y_k}$ are isomorphic, we know that almost surely infinitely many vertices of $\mathcal{G}_{y_k}$ are contained in $V_\mu^c ( \mathcal{D}_n)$ . So, by Kolmogorov’s 0–1 law, indeed for all $\mu$ and $k \geq 1$ , almost surely infinitely many vertices of $\mathcal{G}_{y_k}$ are not covered by $\mu$ .

As a consequence, for all $k \geq 1$ , we can choose a random vertex $w_k$ in $\mathcal{G}_{y_k}$ that satisfies $w_k \in V_\mu^c ( \mathcal{D}_n)$ almost surely (for example, choose the smallest element of $V_\mu^c ( \mathcal{D}_n)$ in $\mathcal{G}_{y_k}$ with respect to the lexicographical order). Consider, for all $k \geq 1$ , the random subgraph $\mathcal{G}_{w_k}$ of $\mathcal{G}_{y_k}$ . As these graphs are almost surely isomorphic to $\mathcal{D}_n$ , almost surely each of them contains an infinite path using only vertices from $V_\mu ( \mathcal{D}_n)$ . Moreover, as the graphs $\mathcal{G}_{y_k}$ , $k \geq 1$ , are disjoint, so are the random graphs $\mathcal{G}_{w_k}$ , $k \geq 1$ , and these infinite paths. Finally, as $w_k \in V_\mu^c ( \mathcal{D}_n)$ almost surely, we also know that in this collection of infinite paths in $V_\mu ( \mathcal{D}_n)$ , every path is contained in a separate cluster of $V_\mu ( \mathcal{D}_n)$ .

3. Examples

Example 1. Let $m \in \mathbb{N}$ , $m \geq 2$ , and $\mu$ be the uniform distribution on $\{ 0,1,\ldots,m-1\}$ . Then, by (1), the stationary solution $\tau$ of $(X_n)_{n \geq 0}$ is

\[\tau_n = \dfrac{m!}{m^m} (m-n) \dfrac{m^{n-1}}{n!}, \quad n \in \{ 0, \ldots, m-1 \}.\]

This law $\tau$ is a terminating member of the Kemp family of generalized hypergeometric probability distributions; see [22, Section 2.4.1]. However, it also naturally arises from Naor’s urn model [31, Appendix]; see also [22, Section 11.2.12]. Assume that there are m balls in an urn, of which one is red and the rest are white. In each step, pick one ball, and if it is white, replace it with a red ball. Continue until the first time T at which a red ball gets chosen. Then the distribution of T is

\[\mathbb{P} [T=n] = (m-1)!\ m^{-n}\ \dfrac{n}{(m-n)!}, \quad n \in \{ 1,\ldots,m \},\]

and $m-T$ , i.e. the number of tries not needed, has distribution $\tau$ .
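
The closed form can be confirmed numerically; a brief sketch (illustrative only, for an arbitrary choice of m):

```python
import numpy as np
from math import factorial

m = 5
mu = np.full(m, 1.0 / m)                       # uniform law on {0, ..., m-1}

# Stationary distribution via formula (1), normalized.
tau = np.array([mu[x:].sum() / np.prod([mu[:y + 1].sum() for y in range(x)])
                for x in range(m)])
tau /= tau.sum()

# Closed form from Example 1.
closed = np.array([factorial(m) / m**m * (m - k) * m**(k - 1) / factorial(k)
                   for k in range(m)])
print(np.allclose(tau, closed))
```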

Example 2. Let $p \in (0,1)$ and $\mu$ be the geometric distribution with parameter $1-p$ . Then, by (1), the stationary solution $\tau$ of $(X_n)_{n \geq 0}$ is

\[\tau_n = \tau_0\ p^n \Biggl( \prod_{k=1}^{n} ( 1 - p^{k} ) \Biggr)^{-1} = \tau_0 \ \dfrac{p^n}{(p;\,p)_n}, \quad n \in \mathbb{N}_0,\]

where $(a;\,q)_n$ is the q-Pochhammer symbol. By normalization,

\[\tau_0 = \Biggl( \sum_{n \geq 0} \dfrac{p^n}{(p;\,p)_n} \Biggr)^{-1} = (p;\,p)_{\infty} = \phi(p),\]

where we have applied the q-binomial theorem (see [32, Section 17.2(iii)]), and $\phi$ denotes Euler’s function. Benkherouf and Bather [4, Section 4] discussed this distribution $\tau$ in a more general form, and referred to it as an Euler distribution. For more information, see also [22, Section 10.8.2].
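
A quick numerical check of this normalization (illustrative only; the series and the infinite product are truncated at an arbitrary level):

```python
import numpy as np

p = 0.4
K = 80                                           # truncation level

poch = np.cumprod([1.0 - p**k for k in range(1, K + 1)])     # (p; p)_n for n = 1, ..., K
series = 1.0 + sum(p**n / poch[n - 1] for n in range(1, K))  # sum_n p^n / (p; p)_n
euler_phi = poch[-1]                                          # (p; p)_infty, truncated

print(np.isclose(series, 1.0 / euler_phi))       # q-binomial theorem: tau_0 = (p; p)_infty
```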

Example 3. Assume that there exists $c \in (0,\infty)$ and $n_0 \in \mathbb{N}$ with

\[\sum_{k>n} \mu_k = \dfrac{c}{n} \quad \text{for all $ n \geq n_0$.}\]

Then, as the expectation of $\mu$ is infinite, we know that statements (i)–(iv) from Theorem 1 hold. Moreover,

\[\Biggl( \prod_{k=1}^{n+1} \sum_{l=0}^{k-1} \mu_l \Biggr) \Bigg/ \Biggl( \prod_{k=1}^{n} \sum_{l=0}^{k-1} \mu_l \Biggr) = \sum_{l=0}^{n} \mu_l = 1 - \dfrac{c}{n} \quad \text{for all $ n \geq n_0$,} \]

and consequently, by the Gaussian ratio test, we know that condition (d) in Theorem 2 is satisfied if and only if $c>1$ . According to [21, Theorem 3.2], for any value of $c \in (0,\infty)$ ,

\[\lim_{n \rightarrow \infty} \mathbb{P} [ X_n\, n^{-1} \leq y] = y^c (y+1)^{-c} \quad \text{for all $ y \in (0,\infty)$.}\]

This limit is an inverse Beta distribution with $\alpha=c$ and $\beta=1$ .

Example 4. Assume that $\mu = (\mu_n)_{n \geq 0}$ has finite support. Then $(X_n)_{n \geq 0}$ has a stationary solution and $\rho(P)=1$ . Moreover, $\rho(M)$ is the spectral radius and Perron–Frobenius eigenvalue of M. As in the proof of Theorem 3, we define, for the case $n=2$ ,

\[r_m\,:\!=\, \mathbb{P} \bigl[ \exists y \in V_\mu^c ( \mathcal{D}_2)\,:\, d(\emptyset,y)=m \bigr], \quad m \in \mathbb{N}.\]

Then, observe that for all $m \geq n_0 \,:\!=\, \sup \{ n \in \mathbb{N}\mid \mu_n \neq 0 \}$ , the recurrence relation (3) simplifies to

\[r_{m+1} = ( 1- (1-r_m)^2 ) = r_m ( 2 -r_m).\]

This recursion is a modified version of the logistic equation. It follows that

\[r_m = 1 - \exp ({-} c\, 2^m ) \quad \text{for all $ m \geq n_0$,}\]

where $c \,:\!=\, - 2^{-n_0} \log (1 - r_{n_0}) \in (0,\infty)$ is a constant determined by $r_{n_0}$ .

Example 5. Let $n \in \mathbb{N}$ , $p \in (0,1)$ , $\mu_n \,:\!=\, p$ , and $\mu_0\,:\!=\, 1-p$ . Then the matrix $M=M_{n,p}$ has dimension n and is of the form

\[M_{n,p} = \begin{pmatrix}0& \quad 0& \quad \cdots& \quad \cdots& \quad 0& \quad p\\[3pt]1-p& \quad 0& \quad \cdots& \quad \cdots& \quad 0& \quad p\\[3pt]0& \quad 1-p& \quad 0& \quad \cdots& \quad 0& \quad p\\[3pt]\vdots& \quad & \quad \ddots& \quad & \quad & \quad \vdots\\[3pt]\vdots& \quad & \quad & \quad \ddots& \quad & \quad \vdots\\[3pt]0& \quad \cdots& \quad \cdots& \quad 0& \quad 1-p& \quad p\end{pmatrix}\!.\]

The characteristic polynomial $\chi_{n,p} =\chi_{n,p} (z)$ of $M_{n,p}$ satisfies

\[\chi_{n,p} (z) = \det\! ( z \mathbf{1}_{n} - M_{n,p}) = z \chi_{n-1,p} (z) -p (1-p)^{n-1},\]

and this recurrence relation can be deduced from a Laplace expansion along the first row of $z \mathbf{1}_{n} - M_{n,p}$ . It follows from $\chi_{1,p}(z)=z-p$ that

\[\chi_{n,p} (z) = \dfrac{p (1-p)^n + (z-1) z^n}{p+z-1}.\]

We know that $\rho ( M_{n,p})$ is the largest zero of this polynomial in (0, 1).
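
This can be checked numerically by comparing the largest zero of $\chi_{n,p}$ in (0, 1) with the Perron–Frobenius eigenvalue of $M_{n,p}$ ; the sketch below (illustrative only; n and p are arbitrary choices) works with the numerator $p (1-p)^n + (z-1) z^n$ , which has the additional root $z = 1-p$ coming from the cancelled denominator, and discards it.

```python
import numpy as np

def M_matrix(n, p):
    """The n x n matrix M_{n,p} of Example 5."""
    M = np.zeros((n, n))
    M[:, -1] = p                                 # last column: probability p
    for x in range(1, n):
        M[x, x - 1] = 1.0 - p                    # subdiagonal: probability 1 - p
    return M

n, p = 4, 0.3
rho = max(abs(np.linalg.eigvals(M_matrix(n, p))))

# Numerator of chi_{n,p}: z^{n+1} - z^n + p (1-p)^n.
coeffs = np.zeros(n + 2)
coeffs[0], coeffs[1], coeffs[-1] = 1.0, -1.0, p * (1.0 - p)**n
roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
candidates = [z for z in real_roots if 0 < z < 1 and not np.isclose(z, 1.0 - p)]

print(rho, max(candidates))                      # the two values agree
```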

Acknowledgement

The author thanks Martin Zerner and Elmar Teufl for many helpful comments which improved the quality of this article. He is also grateful to the anonymous referee for several careful remarks.

Funding information

The author was financially supported via funds from the ERC Starting Grant 208417-NCIRW and by a PhD scholarship from the Landesgraduiertenförderung Baden-Württemberg.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

[1] Ahlberg, D., Broman, E. I., Griffiths, S. and Morris, R. (2014). Noise sensitivity in continuum percolation. Israel J. Math. 201, 847–899.
[2] Ahlberg, D., Tassion, V. and Teixeira, A. Q. (2018). Sharpness of the phase transition for continuum percolation in $\mathbb{R}^2$. Prob. Theory Relat. Fields 172, 525–581.
[3] Athreya, S. R., Roy, R. and Sarkar, A. (2004). On the coverage of space by random sets. Adv. Appl. Prob. 36, 1–18.
[4] Benkherouf, L. and Bather, J. A. (1988). Oil exploration: sequential decisions in the face of uncertainty. J. Appl. Prob. 25, 529–543.
[5] Bertacchi, D. and Zucca, F. (2009). Characterization of critical values of branching random walks on weighted graphs through infinite-type branching processes. J. Statist. Phys. 134, 53–65.
[6] Bertacchi, D. and Zucca, F. (2013). Rumor processes in random environment on $\mathbb{N}$ and on Galton–Watson trees. J. Statist. Phys. 153, 486–511.
[7] Bezborodov, V. (2021). Non-triviality in a totally asymmetric one-dimensional Boolean percolation model on a half-line. Statist. Prob. Lett. 176, 109155.
[8] Bezborodov, V. and Krueger, T. (2023). Linear and superlinear spread for continuous-time frog model. Available at arXiv:2008.10585.
[9] Braunsteins, P. T. (2018). Extinction in branching processes with countably many types. Doctoral thesis, University of Melbourne. Available at https://minerva-access.unimelb.edu.au/handle/11343/210538.
[10] Braunsteins, P. T. and Hautphenne, S. (2019). Extinction in lower Hessenberg branching processes with countably many types. Ann. Appl. Prob. 29, 2782–2818.
[11] Coletti, C. F. and Grynberg, S. P. (2014). Absence of percolation in the Bernoulli Boolean model. Available at arXiv:1402.3118.
[12] Coletti, C. F., Miranda, D. and Grynberg, S. P. (2020). Boolean percolation on doubling graphs. J. Statist. Phys. 178, 814–831.
[13] Duminil-Copin, H., Raoufi, A. and Tassion, V. (2020). Subcritical phase of d-dimensional Poisson–Boolean percolation and its vacant set. Ann. H. Lebesgue 3, 677–700.
[14] Gade, H. G. (1973). Deep water exchanges in a sill fjord: a stochastic process. J. Phys. Oceanogr. 3, 213–219.
[15] Gallo, S., Garcia, N. L., Junior, V. V. and Rodríguez, P. M. (2014). Rumor processes on $\mathbb{N}$ and discrete renewal processes. J. Statist. Phys. 155, 591–602.
[16] Gantert, N. and Müller, S. (2006). The critical Markov chain is transient. Markov Process. Relat. Fields 12, 805–814.
[17] Gilbert, E. N. (1961). Random plane networks. J. Soc. Indust. Appl. Math. 9, 533–543.
[18] Gouéré, J.-B. (2008). Subcritical regimes in the Poisson Boolean model of continuum percolation. Ann. Prob. 36, 1209–1220.
[19] Hall, P. (1988). Introduction to the Theory of Coverage Processes. John Wiley, New York.
[20] Helland, I. S. (1974). A random exchange model with constant decrements. Report, University of Bergen. Available at https://bora.uib.no/bora-xmlui/handle/1956/19470.
[21] Helland, I. S. and Nilsen, T. S. (1976). On a general random exchange model. J. Appl. Prob. 13, 781–790.
[22] Johnson, N. L., Kemp, A. W. and Kotz, S. (2005). Univariate Discrete Distributions. John Wiley, Hoboken, NJ.
[23] Junior, V. V., Machado, F. P. and Ravishankar, K. (2019). The rumor percolation model and its variations. In Sojourns in Probability Theory and Statistical Physics II, ed. V. Sidoravicius, pp. 208–227. Springer, Singapore.
[24] Junior, V. V., Machado, F. P. and Zuluaga, M. (2011). Rumor processes on $\mathbb{N}$. J. Appl. Prob. 48, 624–636.
[25] Junior, V. V., Machado, F. P. and Zuluaga, M. (2014). The cone percolation on $\mathbb{T}_d$. Brazilian J. Prob. Statist. 28, 367–375.
[26] Kellerer, H. G. (2006). Random dynamical systems on ordered topological spaces. Stoch. Dyn. 6, 255–300.
[27] Lamperti, J. W. (1970). Maximal branching processes and ‘long-range percolation’. J. Appl. Prob. 7, 89–98.
[28] Last, G., Penrose, M. D. and Zuyev, S. (2017). On the capacity functional of the infinite cluster of a Boolean model. Ann. Appl. Prob. 27, 1678–1701.
[29] Lebensztayn, É. and Rodríguez, P. M. (2008). The disk-percolation model on graphs. Statist. Prob. Lett. 78, 2130–2136.
[30] Meester, R. and Roy, R. (1996). Continuum Percolation. Cambridge University Press, Cambridge.
[31] Naor, P. (1957). Normal approximation to machine interference with many repairmen. J. R. Statist. Soc. B 19, 334–341.
[32] Olver, F. W. J., Lozier, D. W., Boisvert, R. F. and Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge. Available in the form of a digital library at https://dlmf.nist.gov/.
[33] Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry. Springer, Berlin.
[34] Stoyan, D., Kendall, W. S. and Mecke, J. (1987). Stochastic Geometry and its Applications. John Wiley, Chichester.
[35] Woess, W. (2000). Random Walks on Infinite Graphs and Groups. Cambridge University Press, Cambridge.
[36] Zerner, M. P. W. (2018). Recurrence and transience of contractive autoregressive processes and related Markov chains. Electron. J. Prob. 23, 1–24.