1. Introduction
Many notions of ‘randomness’ have been proposed for individual infinite sequences
${x\in \{-1,1\}^{\mathbb {N}}}$
. The simplest one is normality, introduced by Borel [Reference Borel5] more than a hundred years ago, which in this context means that every finite pattern
$\omega \in \{-1,1\}^{k}$
appears in x with asymptotic frequency
$2^{-k}$
, as would occur if x were a typical point for the ‘uniform’ product measure
$\mu ^{\mathbb {N}}=\prod _{n=1}^{\infty }[(1/2)\delta _{1}+(1/2)\delta _{-1}]$
.
Here, we shall be concerned with the notion of simply Poisson genericity, which was introduced by Z. Rudnick and is defined as follows. Given
$x\in \{-1,1\}^{\mathbb {N}}$
, let
$W_{k}$
be a uniformly sampled random word in
$\{-1,1\}^{k}$
and let
$M_{k}^{x}$
denote the (random) number of appearances of
$W_{k}$
in x up to time
$2^{k}$
:
$$ \begin{align*} M_{k}^{x}=\#\{1\leq i\leq2^{k}\mid x_{i}\ldots x_{i+k-1}=W_{k}\}. \end{align*} $$
Then, x is simply Poisson generic if
$M_k^x$
converges in distribution to a Poisson random variable with mean one (briefly,
$M_k^x\xrightarrow {d}\operatorname *{\mathrm {Po}}(1)$
), that is,
$$ \begin{align*} \lim_{k\to\infty}\mathbb{P}(M_{k}^{x}=n)=\frac{e^{-1}}{n!}\quad\text{for all }n\in\mathbb{Z}_{\ge0}. \end{align*} $$
Throughout this paper, we sometimes omit the term ‘simply’ and call this property Poisson normality for short. Note that the unqualified term, Poisson generic, has a stronger meaning in [Reference Weiss8].
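To make the definition concrete, one may estimate the law of $M_{k}^{x}$ by direct simulation. The following Python sketch (an illustration with the arbitrary choices $k=12$ and $10^{5}$ sampled words) does this for a single x drawn from the uniform product measure, which, as recalled below, is almost surely Poisson normal, and compares the result with the Poisson probabilities $e^{-1}/n!$.

```python
# Monte Carlo sketch: empirical law of M_k^x for a mu-typical x versus Po(1).
# The parameters k and `trials` are arbitrary illustrative choices.
import math
import random
from collections import Counter

k = 12
trials = 100_000

# one sample x, long enough to hold all windows starting at positions 1..2^k
x = [random.choice((-1, 1)) for _ in range(2**k + k - 1)]

# count every length-k window of x starting at positions 1..2^k once
window_counts = Counter(tuple(x[i:i + k]) for i in range(2**k))

# sample uniform words W_k and record how often each occurrence count arises
hist = Counter()
for _ in range(trials):
    w = tuple(random.choice((-1, 1)) for _ in range(k))
    hist[window_counts.get(w, 0)] += 1

for n in range(4):
    print(f"P(M_k = {n}): empirical {hist[n] / trials:.4f}, "
          f"Po(1) {math.exp(-1) / math.factorial(n):.4f}")
```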
In unpublished work (see [Reference Weiss8]), Yuval Peres and Benjamin Weiss proved the following.
• If x is Poisson generic, then it is normal.
• Almost every x for the uniform product measure on $\{-1,1\}^{\mathbb {N}}$ is Poisson normal.
• Normality does not imply Poisson normality.
For a long time, it was an open problem to exhibit explicit examples of simply Poisson generic sequences; recently, an example over larger alphabets was given in [Reference Becher and Sac Himelfarb4]. We also mention [Reference Álvarez, Becher, Cesaratto, Mereb, Peres and Weiss1], which extends almost sure Poisson genericity to settings with infinite alphabets and exponentially mixing probability measures.
Since simply Poisson generic points are normal, the ergodic theorem tells us that
$\mu ^{\mathbb {N}}$
is the only ergodic shift-invariant measure on
$\{-1,1\}^{\mathbb {N}}$
that can be supported on, or even give positive mass to, simply Poisson generic points. However, one may ask about non-shift-invariant measures. The most natural class to consider is that of product measures,
$$ \begin{align*} \nu=\prod_{n=1}^{\infty}\nu_{n}, \end{align*} $$
where
$\nu _{n}$
are non-trivial measures on
$\{-1,1\}$
. We parameterize the
$\nu _{n}$
using the sequence $(\gamma _{n})_{n\in \mathbb {N}}$ taking values in $(-1/2,1/2)$, so
$\nu _{n}=((1/2)-\gamma _{n})\delta _{-1}+((1/2)+\gamma _{n})\delta _{1}$
. Observe the following.
-
(i) If
$\nu _{n}\rightarrow$ the uniform measure on $\{-1,+1\}$
(equivalently,
$\gamma _{n}\rightarrow 0$
), then
$\nu $
-almost every (a.e.) point is normal. In fact,
$\nu $
-almost sure normality is characterized by Cesàro convergence,
$N^{-1}\sum _{n=1}^{N}\gamma _{n}\rightarrow 0$
as
$N\to \infty $
. Since Poisson normality implies normality, the latter is a necessary condition for
$\nu $
to be supported on simply Poisson generic points.
(ii) By a theorem of Kakutani [Reference Kakutani6],
$\nu $
and
$\mu ^{\mathbb {N}}$
are equivalent as measures if and only if
$\sum _{n=0}^{\infty }\gamma _{n}^{2}<\infty $
. In this case,
$\nu $
-a.e.
$x\in \Omega ^{\mathbb {N}}$
is simply Poisson generic, because this is true for
$\mu ^{\mathbb {N}}$
.
Our main result is to identify a threshold, stated in terms of the decay of
$(\gamma _{n})$
, which separates product measures that are supported on simply Poisson generic points from those that are not. It turns out that this decay rate is far slower than the rate in Kakutani’s theorem, so we obtain product measures
$\nu $
that are singular with respect to
$\mu ^{\mathbb {N}}$
, but are nonetheless supported on simply Poisson generic points. This threshold is tight.
Theorem 1.1. Suppose that
$\gamma _n\in (-1/2,1/2)$
and
$\nu $
is the corresponding product measure. If
$\gamma _n= O (\log ^{-(1/2+\delta )}n)$
for some
$\delta>0$
, then
$\nu $
-a.e.
$x\in \Omega ^{\mathbb {N}}$
is simply Poisson generic. However, if
$\gamma _n=\log ^{-(1/2-\delta )}n$
for all large n, then
$\nu $
-a.e.
$x\in \Omega ^{\mathbb {N}}$
is not simply Poisson generic.
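Both regimes of the theorem can be probed numerically. The Python sketch below is only illustrative: the constant $0.4$ and the offset $n+2$ are ad hoc choices that keep $\gamma_n$ inside $(0,1/2)$, and at such a small k the comparison is merely indicative. It estimates $\mathbb{P}(M_{k}^{x}=0)$ for x sampled from $\nu$ with $\gamma_n\propto\log^{-p}(n)$, once with $p>1/2$ and once with $p<1/2$; in the second case, the estimate visibly exceeds $1/e$.

```python
# Sketch of the two regimes in Theorem 1.1: estimate P(M_k = 0) under nu.
# The scaling 0.4 * log(n + 2)^(-p) is an ad hoc choice keeping gamma_n < 1/2.
import math
import random
from collections import Counter

def estimate_p0(p, k=12, trials=50_000):
    n_total = 2**k + k - 1
    gamma = [0.4 * math.log(n + 2) ** (-p) for n in range(1, n_total + 1)]
    # sample x ~ nu: P(x_n = 1) = 1/2 + gamma_n
    x = [1 if random.random() < 0.5 + g else -1 for g in gamma]
    window_counts = Counter(tuple(x[i:i + k]) for i in range(2**k))
    misses = 0
    for _ in range(trials):
        w = tuple(random.choice((-1, 1)) for _ in range(k))
        if window_counts.get(w, 0) == 0:
            misses += 1
    return misses / trials

print(f"1/e = {math.exp(-1):.3f}")
print(f"fast decay (p = 1.0): {estimate_p0(1.0):.3f}")  # approaches 1/e as k grows
print(f"slow decay (p = 0.1): {estimate_p0(0.1):.3f}")  # stays well above 1/e
```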
Remark 1.2. We have stated the theorem for Poisson normality for simplicity, but it holds also for the stronger notion of Poisson genericity found in [Reference Weiss8]. Furthermore, the convergence result in Theorem 1.1 remains valid for sequences over finite alphabets
$\{0,1,\ldots ,b-1\}$
. In this broader context, the definition of Poisson normality counts the occurrences of a uniformly sampled word
$W_k\in \{0,1,\ldots ,b-1\}^k$
within the first
$b^k$
digits of a sequence
$x\in \{0,1,\ldots ,b-1\}^{\mathbb {N}}$
. For
$\ell =0,\ldots ,b-1$
, the associated measures are defined as
$\nu _n(\{\ell \})=1/b+\gamma _n^{(\ell )}$
, where
$\{\gamma _{n}^{(\ell )}\}_{n\ge 1}$
satisfies
$\sum _{\ell =0}^{b-1}\gamma _n^{(\ell )}=0$
and
$\gamma _n^{(\ell )}\in (-1/b,(b-1)/b)$
The following proofs can be adapted to this setup to show that if
$\max _{0\le \ell \le b-1}\gamma _n^{(\ell )}= O (\log ^{-(1/2+\delta )}n)$
, then
$\nu $
-a.e. x is simply Poisson generic.
The remainder of the paper is organized as follows: in the next section, we summarize our notation, in §3, we prove the convergence result in Theorem 1.1, and in §4, we establish tightness.
2. Setup and notation
We let
$\mathbb {N}=\{1,2,3,\ldots \}$
and for
$n\in \mathbb {N}$
, set
$[n]=\{1,\ldots ,n\}$
. Given a sequence
$(\gamma _{n})_{n\in \mathbb {N}}$
taking values in
$(-1/2,1/2)$
and
$\Omega =\{-1,1\}$
, we define the product measure
$\nu $
on
$\Omega ^{\mathbb {N}}$
by
$$ \begin{align*} \nu=\prod_{n=1}^{\infty}\nu_{n}\quad\text{where }\nu_{n}(\{1\})=\frac{1}{2}+\gamma_{n}\ \text{ and }\ \nu_{n}(\{-1\})=\frac{1}{2}-\gamma_{n}. \end{align*} $$
Let
$\mu ^k$
denote the uniform product measure on
$\Omega ^k$
and consider
$\mathbb {P}_k =\nu \times \mu ^k$
defined on
$\Omega ^{\mathbb {N}}\times \Omega ^k$
. Denote by
${\mathbb E}_k $
the corresponding expectation.
For
$1\leq j\leq 2^{k}$
, define the indicator random variables
$I_{j}:\Omega ^{\mathbb {N}}\times \Omega ^k\rightarrow \{0,1\}$
by
$$ \begin{align} I_{j}(x,\omega)= \begin{cases} 1 & \text{if }x_{j}\ldots x_{j+k-1}=\omega,\\ 0 & \text{otherwise}, \end{cases} \end{align} $$
and
$M_{k}:\Omega ^{\mathbb {N}}\times \Omega ^k\rightarrow \mathbb {Z}_{\ge 0}$
by
$$ \begin{align} M_{k}(x,\omega)=\#\{1\leq i\leq2^{k}\mid x_{i}\ldots x_{i+k-1}=\omega\}=\sum_{j\in[2^k]}I_{j}(x,\omega). \end{align} $$
For
$\omega \in \Omega ^k$
and
$j,k\ge 1$
, we introduce the quantity
$$ \begin{align} P_{j,k}(\omega)=\prod_{i=1}^{k}(1+2\omega_{i}\gamma_{i+j-1}). \end{align} $$
Sometimes, we think of
$P_{j,k}$
as a random variable on
$\Omega ^{\mathbb {N}}\times \Omega ^k$
. By the independence of the random variables
$\{\omega _i\gamma _{i+j-1}\}_{1\le i\le k}$
, we point out that
$$ \begin{align} {\mathbb E}_k [P_{j,k}]=\prod_{i=1}^k{\mathbb E}_k [1+2\omega_i\gamma_{i+j-1}]=1. \end{align} $$
We also note that for any fixed
$\omega \in \Omega ^k$
,
$$ \begin{align} \mathbb{P}_k (I_j=1|\{\omega\})= \prod_{i=1}^{k}\Big(\frac{1}{2}+\omega_{i}\gamma_{i+j-1}\Big) =2^{-k}P_{j,k}(\omega). \end{align} $$
Observe that, for any fixed
$x\in \Omega ^{\mathbb {N}}$
, there is a unique
$\omega \in \Omega ^k$
such that
$I_j(x,\omega )=1$
, and the probability of this
$\omega $
, like all others, is
$2^{-k}$
; thus,
${\mathbb E}_k [I_{j}]=2^{-k}$
. When
$|i-j|\ge k$
, the variables
$I_j$
and
$I_i$
are independent conditionally on
$\omega \in \Omega ^k$
. However, the independence fails if we do not condition on
$\omega $
, since
$$ \begin{align*} {\mathbb E}_k [I_{j}I_{i}]=2^{-2k}\,{\mathbb E}_k [P_{j,k}P_{i,k}] \end{align*} $$
is in general different from ${\mathbb E}_k [I_j]{\mathbb E}_k [I_i]=2^{-2k}$
.
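The identities (2.4) and (2.5) are elementary but easy to check mechanically. The following Python sketch (with arbitrary test biases $\gamma_n$; an illustration rather than part of the proofs) verifies both by exact enumeration over $\Omega^k$ for a small k.

```python
# Exact check of (2.4) and (2.5) for small k; gamma values are test data.
from itertools import product

k, j = 6, 3
gamma = [0.3 / (n + 1) for n in range(1, 50)]  # gamma_n stored at index n - 1

def P_jk(word, j):
    """P_{j,k}(omega) = prod_i (1 + 2 omega_i gamma_{i+j-1}), cf. (2.3)."""
    out = 1.0
    for i, wi in enumerate(word, start=1):
        out *= 1 + 2 * wi * gamma[i + j - 2]
    return out

# (2.4): the mean of P_{j,k} over the uniform measure mu^k equals 1
mean = sum(P_jk(w, j) for w in product((-1, 1), repeat=k)) / 2**k
print(abs(mean - 1.0) < 1e-12)  # True

# (2.5): nu(x_j ... x_{j+k-1} = w) = 2^{-k} P_{j,k}(w) for a sample word
w = (1, -1, 1, 1, -1, 1)
prob = 1.0
for i, wi in enumerate(w, start=1):
    prob *= 0.5 + wi * gamma[i + j - 2]
print(abs(prob - P_jk(w, j) / 2**k) < 1e-15)  # True
```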
3. Convergence to Poisson
Let
$\gamma _n\in (-1/2,1/2)$
be such that
$\gamma _n= O (\log ^{-(1/2+\delta )}n)$
for some
$\delta>0$
. Without loss of generality (decreasing
$\delta $
if necessary), we assume that there is
$n_0\ge 1$
such that
$$ \begin{align*} |\gamma_{n}|\le\log^{-(1/2+\delta)}n\quad\text{for all }n\ge n_{0}. \end{align*} $$
We consider
$M_k$
defined in (2.2) on the probability space
$(\Omega ^{\mathbb {N}}\times \Omega ^k,\mathbb {P}_k )$
; the main result of this section is that
$M_{k}$
converges in distribution to a Poisson random variable with mean one.
Proposition 3.1. We have that
$M_k\xrightarrow {d}\operatorname *{\mathrm {Po}}(1)$
as
$k\to \infty $
.
This is commonly referred to as the annealed case, because it involves a coupled probability space. By contrast, the quenched scenario refers to an almost sure result on the probability space
$\Omega ^{\mathbb {N}}$
, corresponding precisely to the convergence statement of Theorem 1.1. The following proposition establishes the connection between annealed and quenched results.
Proposition 3.2. If
$M_k\xrightarrow {d}\operatorname *{\mathrm {Po}}(1)$
, then
$\nu $
-a.e.
$x\in \Omega ^{\mathbb {N}}$
is simply Poisson generic.
Proof. Using that
$\nu $
is a product measure, this proof follows the same argument as that of Peres and Weiss, found in [Reference Álvarez, Becher and Mereb2, Proof of Theorem 1]. The main tools are McDiarmid’s inequality [Reference McDiarmid7] and the Borel–Cantelli lemma.
By Proposition 3.2, the convergence result in Theorem 1.1 follows from Proposition 3.1; hence, the remainder of this section is dedicated to proving Proposition 3.1.
3.1. A general convergence theorem applied to our setting
To prove Proposition 3.1, we rely on a general result on Poisson approximation, [Reference Barbour, Holst and Janson3, Theorem 1.A], derived by the Chen–Stein method, which provides a bound on the total variation distance
$d_{\text {TV}}$
(see the reference above for a definition). We note that convergence in total variation implies convergence in distribution. Given a family of random variables
$\{X_j\}_{j\in J}$
, we denote by
$\sigma (X_j:j\in J)$
the
$\sigma $
-algebra generated by such a family, that is, the smallest
$\sigma $
-algebra for which each of the
$X_j$
is measurable.
Theorem 3.3. Let
$I_{1},\ldots ,I_{n}$
be indicator random variables and
$S=\sum _{j\in [n]}I_{j}$
. For every
$j\in [n]$
, let there be given a partition
$\Gamma _{j}^{s},\Gamma _{j}^{w}\subseteq [n]$
of
$[n]\setminus \{j\}$
, let
$$ \begin{align*} \lambda=\sum_{j\in[n]}{\mathbb E} [I_{j}], \end{align*} $$
and let
$$ \begin{align*} \eta_{j}={\mathbb E}\,\big|{\mathbb E} \big[I_{j}-{\mathbb E} [I_{j}]\mid\sigma(I_{i}:i\in\Gamma_{j}^{w})\big]\big|. \end{align*} $$
Then,
$$ \begin{align*} \begin{aligned} d_\mathrm{TV}(S,\operatorname*{\mathrm{Po}}(\lambda))\le & \min\{1,\lambda^{-1}\}\sum_{j\in[n]}\bigg({\mathbb E} [I_{j}]^{2}+\sum_{i\in\Gamma_{j}^{s}}({\mathbb E} [I_{j}]{\mathbb E} [I_{i}]+{\mathbb E} [I_{j}I_{i}])\bigg)\\ & +\min\{1,\lambda^{-1/2}\}\sum_{j\in[n]}\eta_{j}. \end{aligned} \end{align*} $$
The sets
$\Gamma _j^s,\Gamma _j^w$
partition the variables into those that are strongly and weakly correlated with
$I_j$
, respectively. This is the meaning of the superscripts: ‘s’ for strong and ‘w’ for weak.
For a fixed k, we apply this with
$n=2^{k}$
, the indicators
$I_{1},\ldots ,I_{2^{k}}$
from (2.1), and
$S=M_{k}=\sum _{j\in [2^k]}I_{j}$
. Recall from §2 that
${\mathbb E}_k [I_{j}]=2^{-k}$
and so
$\lambda =\sum _{j\in [2^{k}]}{\mathbb E}_k [I_{j}]=1.$
For
$j\in [2^{k}]$
, we let
$$ \begin{align} \Gamma_{j}^{s}=\{i\in[2^{k}]\setminus\{j\}:|i-j|<k\}\quad\text{and}\quad\Gamma_{j}^{w}=\{i\in[2^{k}]\setminus\{j\}:|i-j|\geq k\}. \end{align} $$
Theorem 3.3 yields that
$$ \begin{align*} d_\mathrm{TV}(M_{k},\operatorname*{\mathrm{Po}}(1))\le\underset{A_{k}}{\underbrace{2^{-2k}\sum_{j\in[2^{k}]}(1+|\Gamma_{j}^{s}|)}}+\underset{B_{k}}{\underbrace{\sum_{j\in[2^{k}]}\sum_{i\in\Gamma_{j}^{s}}{\mathbb E}_k [I_{j}I_{i}]}}+\underset{C_{k}}{\underbrace{\sum_{j\in[2^{k}]}\eta_{j}}}. \end{align*} $$
To conclude that
$M_{k}\xrightarrow {d}\operatorname *{\mathrm {Po}}(1)$
, we will show that each of the positive terms
$A_{k},B_{k},C_{k}$
tend to zero as
$k\to \infty $
.
3.2.
$A_{k}\rightarrow 0$
This is simple: since
$|\Gamma _{j}^{s}|\le 2k$
, we have
$A_{k}\leq 2^{-k}(1+2k)\to 0$
.
3.3.
$B_{k}\rightarrow 0$
Lemma 3.4. There exists
$j_{0}\in \mathbb {N}$
such that
${\mathbb E}_k [I_{i}I_{j}]<2^{-3k/2}$
for all
$j_{0}\leq i<j\leq 2^{k}$
satisfying
$0<j-i< k$
.
Proof. Since
$(\gamma _n)$
is a null sequence, we let
$j_{0}$
be such that
$1+2|\gamma _{n}|<2^{1/4}$
for all
$n\ge j_0$
, and let i, j be as in the statement. Arguing as in the proof of [Reference Álvarez, Becher and Mereb2, Lemma 1], let
$$ \begin{align*} \Omega^k_{i,j}=\{\omega\in\Omega^k:\omega_{h}=\omega_{h+(j-i)}\ \text{for all }1\le h\le k-(j-i)\} \end{align*} $$
and note that a word
$\omega \in \Omega ^k$
can satisfy
$I_{i}(x,\omega )I_{j}(x,\omega )=1$
for some
$x\in \Omega ^{\mathbb {N}}$
, only if
${\omega \in \Omega ^k_{i,j}}$
. The elements of
$\Omega ^k_{i,j}$
are in bijection with their prefix of length
$j-i$
, so
$\mu ^k(\Omega ^k_{i,j})=2^{-k+(j-i)}$
.
For a fixed
$\omega \in \Omega ^k_{i,j}$
, we define
$\widetilde {\omega }\in \Omega ^{k+(j-i)}$
as the juxtaposition of two copies of
$\omega $
, namely
$\widetilde {\omega }_h=\omega _h$
if
$h\in \{1,\ldots ,k\}$
and
$\widetilde {\omega }_h=\omega _{h-(j-i)}$
if
$h\in \{k+1,\ldots ,k+(j-i)\}$
. By
$i\ge j_0$
,
$$ \begin{align*} \begin{aligned} \nu(x:I_{i}(x,\omega)I_{j}(x,\omega)=1)&= \prod_{h=i}^{j+k-1}(1/2+\widetilde{\omega}_{h-i+1}\gamma_{h})\\ &\le 2^{-(k+j-i)}\prod_{h=i}^{j+k-1}(1+2|\gamma_{h}|)\\ &\le 2^{-(k+j-i)}2^{(k+j-i)/4}. \end{aligned} \end{align*} $$
Using that
$k+j-i<2k$
, it follows that
$$ \begin{align*} \nu(x:I_{i}(x,\omega)I_{j}(x,\omega)=1)\le 2^{-(3/4)(k+j-i)}\le 2^{-(k/2+j-i)}. \end{align*} $$
So,
$$ \begin{align*} {\mathbb E}_k [I_{i} I_{j}] &= \int_{\Omega^k_{i,j}}\nu(x:I_{i}(x,\omega)I_{j}(x,\omega)=1)\mathop{}\!\mathrm{d}\mu^k(\omega)\\& \leq\mu^k(\Omega^k_{i,j})2^{-(k/2+j-i)}=2^{-3k/2}, \end{align*} $$
which completes the proof.
To conclude the proof that
$B_{k}\rightarrow 0$
, we use that
${\mathbb E}_k [I_{j}]=2^{-k}$
to get
$$ \begin{align*} {\mathbb E}_k [I_{i}I_{j}]\le{\mathbb E}_k [I_{j}]=2^{-k}. \end{align*} $$
Therefore, with
$j_{0}$
as in Lemma 3.4,
$$ \begin{align*} B_{k} & =\sum_{j\in[2^{k}]}\sum_{i\in\Gamma_{j}^{s}}{\mathbb E}_k [I_{i}I_{j}]\\ & \leq\sum_{\substack{j\in[2^{k}],\,i\in\Gamma_{j}^{s}\\ \min(i,j)<j_{0}}}2^{-k}+\sum_{\substack{j\in[2^{k}],\,i\in\Gamma_{j}^{s}\\ \min(i,j)\geq j_{0}}}2^{-3k/2}\\ & \leq 4j_{0}k2^{-k}+2k2^{-k/2}, \end{align*} $$
and
$B_{k}\rightarrow 0$
follows.
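The overlap bound of Lemma 3.4 can likewise be checked exactly for small parameters. In the Python sketch below (the biases are arbitrary test values, chosen small enough that $1+2|\gamma_n|<2^{1/4}$), we enumerate the $(j-i)$-periodic words of $\Omega^k_{i,j}$ and evaluate ${\mathbb E}_k [I_{i}I_{j}]$ directly.

```python
# Exact evaluation of E_k[I_i I_j] for overlapping windows, cf. Lemma 3.4.
# The biases are arbitrary test values with 1 + 2|gamma_n| < 2^{1/4}.
from itertools import product

k, i, j = 8, 3, 6          # 0 < j - i < k, so the two windows overlap
d = j - i
gamma = [0.01 / n for n in range(1, 40)]   # gamma_n stored at index n - 1

E = 0.0
for prefix in product((-1, 1), repeat=d):
    # omega in Omega^k_{i,j} is d-periodic; tilde-omega juxtaposes two copies
    w = [prefix[h % d] for h in range(k + d)]
    p = 1.0
    for h, wh in enumerate(w):             # paper positions i, ..., j + k - 1
        p *= 0.5 + wh * gamma[i + h - 1]   # factor 1/2 + tilde-omega * gamma
    E += p / 2**k                          # each omega has mu^k-weight 2^{-k}
print(E, 2 ** (-1.5 * k))                  # the bound 2^{-3k/2} holds comfortably
```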
Remark 3.5. The arguments used so far do not rely on the specific rate at which
$(\gamma _n)$
decays to zero. This property becomes crucial in the next subsection.
3.4.
$C_{k}\to 0$
Let
$P_{j,k}$
be as in (2.3). The main step to prove
$C_k\to 0$
is the following proposition.
Proposition 3.6. Let
$\varepsilon>0$
. Then,
${\mathbb E}_k[|P_{j,k}-1|]\to 0$
uniformly in
$j\ge 2^{\varepsilon k}$
as
$k\to \infty $
.
Proof. As
$k\to \infty $
, the decay rate for
$(\gamma _n)$
yields uniformly in
$j\ge 2^{\varepsilon k}$
that
$$ \begin{align*} \max_{1\le i\le k}|\gamma_{i+j-1}|= O ((\varepsilon k)^{-(1/2+\delta)})= O (k^{-(1/2+\delta)}). \end{align*} $$
So,
$$ \begin{align} \sum_{i=1}^k\gamma_{i+j-1}^2= O (k^{-2\delta}). \end{align} $$
By a first-order expansion around
$x=0$
of
$f(x)=\log (1+x)$
and (3.2), for
$j>2^{\varepsilon k}$
, we have
$$ \begin{align} & P_{j,k}(\omega) = \exp\bigg\{\sum_{i=1}^k\log(1+2\omega_i\gamma_{i+j-1})\bigg\} =\exp\bigg\{2\sum_{i=1}^k\omega_i\gamma_{i+j-1} + O (k^{-2\delta})\bigg\}. \end{align} $$
For a fixed
$\theta \in (0,1/2)$
, we define
$$ \begin{align*} A_{k,j}^\theta=\bigg\{\omega\in \Omega^k: \bigg|\sum_{i=1}^k\omega_i\gamma_{i+j-1}\bigg|\le \bigg(\sum_{i=1}^k\gamma_{i+j-1}^2\bigg)^{1/2-\theta} \bigg\}. \end{align*} $$
By (3.2), we have uniformly in
$\omega \in A_{k,j}^\theta $
and
$j\ge 2^{\varepsilon k}$
that
$\sum _{i=1}^k\omega _i\gamma _{i+j-1}=O(k^{-\delta (1-2\theta )})$
. So, identity (3.3) yields that
$P_{j,k}(\omega )=1+o(1)$
uniformly on
$A^\theta _{k,j}$
and j. It follows that
$$ \begin{align} \int_{A^{\theta}_{k,j}}|P_{j,k}-1|\mathop{}\!\mathrm{d}\mu^{k}=o(1)\quad\text{uniformly in }j\ge 2^{\varepsilon k}. \end{align} $$
In particular, this implies
$$ \begin{align*} \int_{A^{\theta}_{k,j}}P_{j,k}\mathop{}\!\mathrm{d}\mu^{k}=\mu^{k}(A_{k,j}^{\theta})+o(1). \end{align*} $$
Since by (2.4),
$$ \begin{align*} \int_{\Omega^{k}}P_{j,k}\mathop{}\!\mathrm{d}\mu^{k}={\mathbb E}_k [P_{j,k}]=1, \end{align*} $$
it follows that
$$ \begin{align} \int_{\Omega^{k}\setminus A^{\theta}_{k,j}}P_{j,k}\mathop{}\!\mathrm{d}\mu^{k}=\mu^{k}(\Omega^{k}\setminus A_{k,j}^{\theta})+o(1). \end{align} $$
Under the measure
$\mu ^k$
, the random variables
$(\omega _i\gamma _{i+j-1})_{1\le i\le k}$
are independent with mean zero and variance
$\gamma _{i+j-1}^2$
. Hence, by Chebyshev’s inequality and (3.2), we get
$$ \begin{align*} \mu^k(\Omega^k\setminus A_{k,j}^\theta)\le \bigg(\sum_{i=1}^k\gamma_{i+j-1}^2\bigg)^{2\theta}= O (k^{-4\delta\theta})\longrightarrow 0, \end{align*} $$
uniformly in
$j\ge 2^{\varepsilon k}$
, as
$k\to \infty $
. Applying (3.5),
$$ \begin{align*} \int_{\Omega^{k}\setminus A^{\theta}_{k,j}}|P_{j,k}-1|\mathop{}\!\mathrm{d}\mu^{k}\le\int_{\Omega^{k}\setminus A^{\theta}_{k,j}}P_{j,k}\mathop{}\!\mathrm{d}\mu^{k}+\mu^{k}(\Omega^{k}\setminus A_{k,j}^{\theta})=o(1). \end{align*} $$
Combining the latter with (3.4),
$$ \begin{align*} {\mathbb E}_k [|P_{j,k}-1|]=\int_{A^{\theta}_{k,j}}|P_{j,k}-1|\mathop{}\!\mathrm{d}\mu^{k}+\int_{\Omega^{k}\setminus A^{\theta}_{k,j}}|P_{j,k}-1|\mathop{}\!\mathrm{d}\mu^{k}=o(1), \end{align*} $$
uniformly in $j\ge 2^{\varepsilon k}$, which finishes the proof.
Remark 3.7. The exponent
$1/2+\delta $
in the decay of
$(\gamma _n)$
is heuristically explained by an application of the central limit theorem to (3.3). The sum of independent random variables
$\sum _{i=1}^k\omega _i\gamma _{i+j-1}$
typically grows proportionally to
$(\sum _{i=1}^k{\gamma _{i+j-1}^2})^{1/2}$
. Thus, the elements of
$A^\theta _{k,j}$
characterize the asymptotics of
$P_{j,k}$
.
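This heuristic can be tested directly. The Python sketch below uses the first-order proxy $\exp(2\sum_{i}\omega_{i}\gamma_{i+j-1})$ for $P_{j,k}$ suggested by (3.3); the scaling $0.4\log^{-p}$ and the position $j=2^{k/2}$ are ad hoc illustrative choices. It shows ${\mathbb E}|P_{j,k}-1|$ collapsing when $p>1/2$ and blowing up when $p<1/2$.

```python
# Monte Carlo sketch of Remark 3.7: E|P_{j,k} - 1| against (sum gamma^2)^{1/2}.
# Uses the first-order proxy exp(2 sum omega_i gamma_{i+j-1}) for P_{j,k},
# cf. (3.3); the scaling 0.4 * log^{-p} and j = 2^{k/2} are ad hoc choices.
import math
import random

def mean_abs_dev(p, k=200, trials=20_000):
    j = 2.0 ** (k / 2)                        # a position j >= 2^{eps k}
    g = [0.4 * math.log(j + i) ** (-p) for i in range(k)]
    sigma = math.sqrt(sum(t * t for t in g))  # (sum_i gamma_{i+j-1}^2)^{1/2}
    acc = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) * t for t in g)
        acc += abs(math.exp(2 * s) - 1)
    return sigma, acc / trials

for p in (1.0, 0.1):
    sigma, dev = mean_abs_dev(p)
    print(f"p = {p}: sigma = {sigma:.3f}, E|P - 1| approx {dev:.3f}")
```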
We can now complete the proof that
$C_{k}\rightarrow 0$
. For fixed
$k\ge 1$
and
$j\in [2^{k}]$
, we let
$$ \begin{align*} \eta_{j}={\mathbb E}_k \big|{\mathbb E}_k \big[I_{j}-2^{-k}\mid\sigma(I_{i}:i\in\Gamma_{j}^{w})\big]\big|, \end{align*} $$
where
$\Gamma _j^w\subset [2^k]$
is from (3.1). Consider now the random variable
$W(x,\omega )=\omega $
and let
$\xi _{j}={\mathbb E}_k [I_{j}-2^{-k}|\mathcal {F}_{j}]$
, where
$\mathcal {F}_{j}=\sigma (\{I_{i}:i\in \Gamma _{j}^{w}\},W)$
. Applying the tower property twice,
$$ \begin{align*} \eta_{j}={\mathbb E}_k \big|{\mathbb E}_k \big[{\mathbb E}_k [I_{j}-2^{-k}\mid\mathcal{F}_{j}]\mid\sigma(I_{i}:i\in\Gamma_{j}^{w})\big]\big|\le{\mathbb E}_k |\xi_{j}|. \end{align*} $$
Since
$|j-i|\ge k$
, the variable
$I_j$
is independent of
$(I_{i}:i\in \Gamma _{j}^{w})$
conditionally on
${\{W=\omega \}}$
. Hence, by (2.5),
$$ \begin{align*} {\mathbb E}_k [I_{j}\mid\mathcal{F}_{j}]={\mathbb E}_k [I_{j}\mid W]=2^{-k}P_{j,k}. \end{align*} $$
Therefore,
$\xi _j=2^{-k}(P_{j,k}-1)$
and
$$ \begin{align*} C_{k} \le \sum_{j\in[2^{k}]}{\mathbb E}_k |\xi_{j}| = 2^{-k}\sum_{j\in[2^{k}]}{\mathbb E}_k |P_{j,k}-1|. \end{align*} $$
By (2.4), we know that
${\mathbb E}_k [P_{j,k}]=1$
, so as
$k\to \infty $
,
$$ \begin{align*} 2^{-k}\sum_{j\le 2^{\varepsilon k}} {\mathbb E}_k |P_{j,k}-1|\le 2\cdot 2^{-k(1-\varepsilon)}=o(1). \end{align*} $$
Hence, Proposition 3.6 yields that
$$ \begin{align*} C_k\le o(1)+2^{-k}\sum_{2^{\varepsilon k}\le j\le 2^k}{\mathbb E}_k |P_{j,k}-1|\to 0. \end{align*} $$
This concludes the estimate for
$C_{k}$
and thus our proof of Proposition 3.1.
4. Non-convergence
Without loss of generality, we fix
$\delta \in (0,1/2), n_0\ge 1$
, and assume that
$$ \begin{align*} \gamma_{n}=c\log^{-(1/2-\delta)}n\quad\text{for all }n\geq n_{0}, \end{align*} $$
for some constant $c>0$; enlarging $n_{0}$ if necessary, we may further assume that $(\gamma_{n})_{n\geq n_{0}}$ is decreasing and takes values in $(0,1/2)$. We consider
$M_k$
defined in (2.2) on the probability space
$(\Omega ^{\mathbb {N}}\times \Omega ^k,\mathbb {P}_k )$
; we shall show that
$M_{k}$
does not converge in distribution to a Poisson random variable with mean one. In the current section, we prove this result in the annealed setting, whereas the second part of Theorem 1.1 concerns the quenched statement. However, since quenched convergence would imply annealed convergence, disproving the annealed convergence is sufficient.
Before proving the annealed case, we need to establish a few preliminary results. Let
$k\in \mathbb {N}$
and let
$D_{+},D_{-}\subseteq \{1,\ldots ,k\}$
be sets of equal size. For
$j\ge 1$
, write
$$ \begin{align*} \Xi_{j}=\prod_{i\in D_{+}}(1+2\gamma_{i+j-1})\prod_{i\in D_{-}}(1-2\gamma_{i+j-1}). \end{align*} $$
Proposition 4.1. For any
$\varepsilon \in (0,1)$
, there is
$k_0\ge 1$
such that
$\Xi _j\leq 1$
uniformly in
${k\ge k_0}$
,
$2^{\varepsilon k}\le j\le 2^k$
, and
$D_{+}$
,
$D_{-}$
.
Proof. Let
$\ell =|D_{+}|=|D_{-}|\leq k$
. Because $(\gamma _{n})$ is decreasing, the product defining $\Xi _j$ can only increase if we replace each $1+2\gamma _{i+j-1}$, $i\in D_{+}$, by $1+2\gamma _{j}$, and each $1-2\gamma _{i+j-1}$, $i\in D_{-}$, by $1-2\gamma _{j+k}$. Thus,
$$ \begin{align*} \Xi_{j}\leq((1+2\gamma_{j})(1-2\gamma_{j+k}))^{\ell}=(1+2(\gamma_{j}-\gamma_{j+k})-4\gamma_{j}\gamma_{j+k})^{\ell}. \end{align*} $$
Let $f(x)=c\log ^{-(1/2-\delta )}x$, $x>1$, so that $f(n)=\gamma _n$ for $n\ge n_0$. Since f is decreasing and convex, for
$x<y$
, we have
$|f(x)-f(y)|\leq |x-y||f'(x)|$
. Applying this with
$x=j$
,
$y=j+k$
, and using
$j\geq 2^{\varepsilon k}$
,
$f'(x)=-c(1/2-\delta )(x\log ^{3/2-\delta }x)^{-1}$
, we get
$$ \begin{align*} \gamma_{j}-\gamma_{j+k}\leq k|f'(j)|\leq\frac{c(1/2-\delta)k}{2^{\varepsilon k}(\varepsilon k\log 2)^{3/2-\delta}}, \end{align*} $$
which is exponentially small in k. However, using
$j,j+k\leq 2^{k}+k<2^{k+1}$
for all k sufficiently large, we have
$\gamma _{j}\gamma _{j+k}\geq c^2(\log 2/(k+1))^{1-2\delta }$
. It follows that, for all k sufficiently large, $2(\gamma _{j}-\gamma _{j+k})<4\gamma _{j}\gamma _{j+k}$, and hence $(1+2\gamma _{j})(1-2\gamma _{j+k})<1$,
and the same holds after raising to the
$\ell $
th power, giving us
$\Xi _j\leq 1$
. This proves the statement.
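A quick numerical spot-check of Proposition 4.1 is given below; the choices $c=0.4$, $\delta=0.1$, $k=400$, $j=2^{20}$ and $|D_{+}|=|D_{-}|=20$ are illustrative.

```python
# Spot-check of Proposition 4.1: Xi_j <= 1 for random D_+, D_- of equal size.
# The constants c, delta, k, j and |D_+| = 20 are illustrative choices.
import math
import random

c, delta, k, j = 0.4, 0.1, 400, 2**20

def gamma(n):
    return c * math.log(n) ** (-(0.5 - delta))

idx = random.sample(range(1, k + 1), 40)
D_plus, D_minus = idx[:20], idx[20:]

Xi = 1.0
for i in D_plus:
    Xi *= 1 + 2 * gamma(i + j - 1)
for i in D_minus:
    Xi *= 1 - 2 * gamma(i + j - 1)
print(Xi)   # a value below 1
```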
For
$\eta>0$
and
$k\ge 1$
, define
$$ \begin{align} \Omega_k^\eta =\bigg\{\omega\in\Omega^k:\sum_{i=1}^{k}\omega_{i}<-\eta\sqrt{k}\bigg\}. \end{align} $$
When convenient, we identify
$\Omega _k^\eta \subseteq \Omega ^k$
with its lift
$\{(x,\omega )\mid \omega \in \Omega _k^\eta \}$
to
$\Omega ^{\mathbb {N}}\times \Omega ^k$
.
Lemma 4.2.
$\mathbb {P}_k (\Omega _k^\eta \cap \{M_{k}\geq 1\})\rightarrow 0$
as
$k\to \infty $
.
Proof. By Fubini, it suffices to bound
$\nu (x:M_{k}(x,\omega )\ge 1)=\mathbb {P}_k (M_{k}\geq 1|\{\omega \})$
uniformly in
$\omega \in \Omega _k^\eta $
. Since
$M_{k}=\sum _{j\in [2^{k}]}I_{j}$
, we get by (2.5) that for all
$\omega \in \Omega ^k$
,
$$ \begin{align*} \nu(x:M_{k}(x,\omega)\ge1) \leq \sum_{j\in[2^{k}]}\nu(x:I_{j}(x,\omega)=1) = 2^{-k}\sum_{j\in[2^{k}]}P_{j,k}(\omega). \end{align*} $$
Let
$\varepsilon \in (0,1)$
. We first claim that the sum on the right changes by
$o(1)$
if we sum over
$2^{\varepsilon k}\leq j\leq 2^{k}$
instead of
$1\leq j\leq 2^{k}$
. Indeed, using
$\gamma _{n}\rightarrow 0$
, there is
$j_{0}$
such that
${1+2\gamma _{j}<2^{(1-\varepsilon )/2}}$
for any
$j\ge j_0$
. By the fact that
$\gamma _{n}\rightarrow 0$
, for every fixed
$j\in \mathbb {N}$
, we have
$\sup _{\omega \in \Omega ^k}2^{-k}P_{j,k}(\omega )=o(1)$
as
$k\rightarrow \infty $
, so
$$ \begin{align*} \sum_{1\leq j\leq j_{0}}2^{-k}P_{j,k}(\omega)=j_{0}\cdot o(1)=o(1). \end{align*} $$
Also, for all
$j\ge j_0$
,
$$ \begin{align*} P_{j,k}(\omega) = \prod_{i=1}^{k}(1+2\omega_{i}\gamma_{ i+j-1}) \leq 2^{(1-\varepsilon)k/2}, \end{align*} $$
so
$$ \begin{align*} 2^{-k}\sum_{j_{0}\leq j\leq2^{\varepsilon k}}P_{j,k}(\omega)<2^{-k}\cdot2^{\varepsilon k}\cdot2^{(1-\varepsilon)k/2}=o(1), \end{align*} $$
uniformly in
$\omega \in \Omega ^k$
. Thus, we have shown that
$$ \begin{align*} \nu(x:M_{k}(x,\omega)\ge1)=o(1)+2^{-k}\sum_{2^{\varepsilon k}\le j\le 2^k}P_{j,k}(\omega). \end{align*} $$
Let
$ N_+(\omega ) =\#\{1\leq i\leq k\mid \omega _{i}=1\}$
and let
$D_{+},D_{-}\subseteq [k]$
denote the sets of positions of the first
$N_+(\omega )$
occurrences of
$+1,-1$
in
$\omega $
, respectively. Since
$\sum _{i\in D_+\cup D_-}\omega _i=0$
, the set
$E(\omega )=[k]\setminus (D_{+}\cup D_{-})$
has cardinality
$|E|=|\sum _{i=1}^k\omega _i|$
. Let now
$\omega \in \Omega _k^\eta $
. It follows that
$\omega _{i}=-1$
for
$i\in E$
and
$|E|>\eta \sqrt {k}$
. Since
$(\gamma _n)$
is decreasing, by Proposition 4.1,
$$ \begin{align*} P_{j,k}(\omega)=\Xi_{j}\prod_{i\in E}(1-2\gamma_{i+j-1})\leq\prod_{i\in E}(1-2\gamma_{i+j-1})\leq(1-2\gamma_{k+2^{k}})^{|E|} \end{align*} $$
for all
$k\ge 1$
sufficiently large, uniformly in
$2^{\varepsilon k}\leq j\leq 2^{k}$
and
$\omega \in \Omega _k^\eta $
. By
$|E|>\eta \sqrt {k}$
,
$$ \begin{align*} 2^{-k}\sum_{2^{\varepsilon k}\le j\le 2^k}P_{j,k}(\omega) & <(1-2\gamma_{k+2^{k}})^{\eta k^{1/2}}\leq\bigg(1-\frac{c'}{k^{1/2-\delta}}\bigg)^{\eta k^{1/2}} \end{align*} $$
for some
$c'>0$
. Since the exponent $\eta k^{1/2}$ grows faster than $k^{1/2-\delta}$, the last expression is $\exp(-c'\eta k^{\delta}(1+o(1)))$ and tends to zero as
$k\rightarrow \infty $
, as desired.
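The mechanism in this proof is easy to see numerically: under $\nu$ with positive biases, words with an excess of $-1$ symbols are exponentially disfavoured. The Python sketch below (with the ad hoc slow-decay choice $c=0.4$, $p=0.1$) evaluates the union bound $2^{-k}\sum_{j}P_{j,k}(\omega)=\sum_{j}\nu(I_{j}=1)$ exactly for words of varying symbol sum; for words with a negative excess, the bound is already tiny.

```python
# The mechanism behind Lemma 4.2: the union bound sum_j nu(I_j = 1) for words
# with different symbol sums. Slow-decay biases c * log(n + 2)^(-p) are ad hoc.
import math

k, c, p = 16, 0.4, 0.1
gamma = [c * math.log(n + 2) ** (-p) for n in range(1, 2**k + k + 1)]

def union_bound(word):
    total = 0.0
    for j in range(1, 2**k + 1):
        prob = 1.0
        for i, wi in enumerate(word, start=1):
            prob *= 0.5 + wi * gamma[i + j - 2]   # nu(x_{j..j+k-1} = word), (2.5)
        total += prob
    return total

for plus in (12, 8, 4):                 # number of +1 symbols in the word
    w = (1,) * plus + (-1,) * (k - plus)
    print(f"sum(omega) = {2 * plus - k:+d}: bound = {union_bound(w):.2e}")
```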
If Y is Poisson with parameter
$1$
, then
$\mathbb {P} (Y=0)=1/e$
. Thus, the next proposition shows that
$M_{k}$
does not converge in distribution to
$\operatorname *{\mathrm {Po}}(1)$
.
Proposition 4.3. We have
$\limsup _{k\rightarrow \infty }\mathbb {P}_k (M_{k}=0)>1/e$
.
Proof. Since
$M_{k}\geq 0$
is integer-valued, the complement of the event
$\{M_{k}=0\}$
is
${\{M_{k}\geq 1\}}$
; we shall bound the probability of the latter event from above. For a parameter
$\eta>0$
that we shall choose later, let
$\Omega _k^\eta $
be as in (4.1) and let
$\mathcal {N}$
be a standard Gaussian. Since on the space
$(\Omega ^k,\mu ^{k})$
the random variables
$\{\omega _i\}_{1\le i\le k}$
are independent and identically distributed with zero mean and unit variance, as
$k\to \infty $
, the central limit theorem yields that
$$ \begin{align*} \mathbb{P}_k (\Omega_k^{\eta})=\mu^{k}\bigg(\omega\in\Omega^{k}:\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\omega_{i}<-\eta\bigg)\longrightarrow\mathbb{P} (\mathcal{N}<-\eta). \end{align*} $$
Therefore, by Lemma 4.2,
$$ \begin{align*} \limsup_{k\rightarrow\infty}\mathbb{P}_k (M_{k}\geq1)\leq\limsup_{k\rightarrow\infty}\big(\mathbb{P}_k (\Omega_k^{\eta}\cap\{M_{k}\geq1\})+1-\mathbb{P}_k (\Omega_k^{\eta})\big)=\mathbb{P} (\mathcal{N}\geq-\eta). \end{align*} $$
Since
$\lim _{\eta \rightarrow 0}\mathbb {P} (\mathcal {N}\geq -\eta )=\mathbb {P} (\mathcal {N}\geq 0)=1/2$
, by choosing
$\eta $
small enough, we can ensure that
$\mathbb {P} (\mathcal {N}\geq -\eta )<1-1/e$
. It then follows that
$$ \begin{align*} \limsup_{k\rightarrow\infty}\mathbb{P}_k (M_{k}=0)=1-\liminf_{k\rightarrow\infty}\mathbb{P}_k (M_{k}\geq1)\geq1-\mathbb{P} (\mathcal{N}\geq-\eta)>1/e, \end{align*} $$
as desired.
Acknowledgements
The authors would like to thank Zemer Kosloff for insightful discussions throughout the development of this work, in particular suggesting a simplification to the proof of Proposition 3.6. This research was supported by the Israel Science Foundation Grant no. 3056/21.






