1. Introduction
The Hausdorff metric,
$d_\textrm{H}$
, is a useful measurement to determine how similar one set or shape is to another. More formally, given two compact sets A and B of
$\mathbb{R}^d$
,
$$d_\textrm{H}(A,B)=\max\Big\{\sup_{a\in A}\inf_{b\in B}\|a-b\|,\ \sup_{b\in B}\inf_{a\in A}\|a-b\|\Big\},$$
where
$\|\cdot\|$
denotes the Euclidean norm. In practical applications, a Hausdorff distance very close to 0 indicates that the sets or shapes under consideration are very similar. Due to its utility in comparing shapes and sets, the Hausdorff metric finds applications in a variety of fields such as computer vision and image processing [Reference Rucklidge25], medical imaging [Reference Xiaoming, Ning, Haibin, Xiaoyang, Xue and Shuang28], pattern recognition [Reference Vivek and Sudha27], robotics [Reference Donoso-Aguirre, Bustos-Salas, Torres-Torriti and Guesalaga9], machine learning [Reference Piramuthu21], topological data analysis [Reference Chazal, Glisse, Labruère and Michel5, Reference Fasy, Lecci, Rinaldo, Wasserman, Balakrishnan and Singh11, Reference Kallel and Louhichi14, Reference Niyogi, Smale and Weinberger20], and statistics, in particular directional statistics [Reference Cuevas7, Reference Cuevas and Rodrguez-Casal8, Reference Kallel and Louhichi14, Reference Kato15].
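As a purely computational illustration (our own sketch, not part of the original analysis), the following minimal R code evaluates the Hausdorff distance between two finite point clouds directly from this definition; the helper hausdorff and the two random clouds are hypothetical examples.

```r
## Minimal R sketch: Hausdorff distance between two finite point clouds in R^d,
## computed directly from the definition with Euclidean distances.
hausdorff <- function(A, B) {
  D <- as.matrix(dist(rbind(A, B)))[seq_len(nrow(A)), nrow(A) + seq_len(nrow(B))]
  max(max(apply(D, 1, min)),   # sup over A of the distance to the nearest point of B
      max(apply(D, 2, min)))   # sup over B of the distance to the nearest point of A
}
set.seed(1)
A <- matrix(runif(200), ncol = 2)   # 100 random points in the unit square
B <- matrix(runif(200), ncol = 2)
hausdorff(A, B)
```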
For this latter field, circular data are a key concept. Circular data
$x_1,\ldots,x_n$
are those for which the natural support is the unit circle or its toroidal extensions [Reference Kato15, Reference Taniguchi, Kato, Ogata and Pewsey26]. They may serve as models for wind directions, strata orientations, and movements of animals, among others. Suppose that the data
$x_1,\ldots,x_n$
, living in the same compact set, is a realization of stationary random variables
$X_1,\ldots, X_n$
compactly supported and not necessarily independent. Studying the Hausdorff metric from the observable random cloud
$\{X_1,\ldots, X_n\}$
to their common support, and how close it is to 0 as n grows, then helps in deducing information about this common support, which is generally unknown.
From now on, let
$(X_i)_{i\in \mathbb{N}}$
be a stationary sequence of
$\mathbb{R}^d$
-valued random variables. Let
$\mu$
be the distribution of
$X_1$
, and thus of all the
$X_i$
. Suppose that
$\mu$
is supported on a compact set
$\mathbb{M}$
of
$\mathbb{R}^d$
, that is,
$\mathbb{M}$
is the smallest closed set having probability 1. More formally,
$\mathbb{M}=\bigcap_{C\subset\mathbb{R}^d,\,P(\overline{C})=1}\overline{C}$
, where
$\overline{C}$
means the closure of the set C in Euclidean space. Denote by
$\mathbb{X}_n$
the set
$\{X_1,\ldots,X_n\}$
that is viewed as a subset of
$\mathbb{R}^d$
. We are interested in evaluating the Hausdorff metric,
$d_\textrm{H}$
, of
$\mathbb{X}_n$
to the support
$\mathbb{M}$
; more precisely, in giving a sharp upper bound for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
, the expectation of
$d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}})$
, by suitably controlling
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}) > \varepsilon)$
for a positive
$\varepsilon$
.
In topological data analysis, the upper bounds for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
are useful since they lead, thanks to the stability theorem, to upper bounds for the bottleneck distance between suitable persistence diagrams. We refer the reader to the seminal paper [Reference Chazal, Glisse, Labruère and Michel5], where, in particular, optimal upper bounds for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
are obtained for independent and identically distributed (i.i.d.) random sequences
$(X_i)_{i\in\mathbb{N}}$
.
One of our objectives is to extend the estimates of the i.i.d. case to the dependent one. Although this generalization is useful for modeling real phenomena, only a few works have addressed these questions. To our knowledge the only papers that deal with the dependent framework are [Reference Cholaquidis, Fraiman, Lugosi and Pateiro-López6] (for the trajectories of a reflected Brownian motion) and, more recently, [Reference Aaron, Doukhan and Reboul1], where the authors give estimation results and optimal rates on the R-convex hull of stationary dependent random variables using a kernel density estimation approach, and [Reference Kallel and Louhichi14] for topological reconstruction of compact supports of various classes of stationary dependent random variables. Even in the area of topological data analysis, only a few works have explored dependent data. We refer, for instance, to the recent paper [Reference Krebs18], which gave a concentration inequality for persistent Betti numbers, or to the more recent paper [Reference Reise, Michel and Chazal22], for estimation of topological signatures (both in the dependent context).
Now, we come back to the main purpose of this paper which is to give sharp upper bounds for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
by suitably controlling
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}) > \varepsilon)$
for positive
$\varepsilon$
and for a stationary sequence
$(X_i)_{i\in\mathbb{N}}$
. In [Reference Kallel and Louhichi14] we gave upper bounds for
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}})>\varepsilon)$
for different types of weak dependence of the stationary sequence
$(X_n)_{n\in\mathbb{N}}$
. Those upper bounds met the purpose of [Reference Kallel and Louhichi14], which was to establish the asymptotic
$(\varepsilon,\alpha)$
-density in
$\mathbb{M}$
of the stationary sequence (see [Reference Kallel and Louhichi14, Definition 1.1]). The proofs there used a clustering technique that consists of grouping the random variables into clusters and treating them as random variables in a larger space. While this approach accommodates multiple dependency types, it may not yield optimal speeds of convergence for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
because of the curse of dimensionality due to the Euclidean distance.
In this paper, our starting point for controlling
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
is Proposition 1.1, which is true for any stationary
$\mathbb{R}^d$
-valued sequence of random variables compactly supported (for its proof, we refer the reader to [Reference Kallel and Louhichi14, Proposition 3.1], with
$k=n$
and
$r=1$
there). The statement of this proposition and its proof are along the lines of [Reference Chazal, Glisse, Labruère and Michel5, Reference Cuevas7, Reference Cuevas and Rodrguez-Casal8, Reference Fasy, Lecci, Rinaldo, Wasserman, Balakrishnan and Singh11]. Its proof uses a nice geometrical result, proved in [Reference Niyogi, Smale and Weinberger20], relating the
$\varepsilon$
-covering number of a compact set by closed balls of radius
$\varepsilon$
to its
$\varepsilon$
-packing number, i.e. the maximal number of points whose pairwise distances are bounded below by
$\varepsilon$
(see [Reference Niyogi, Smale and Weinberger20, Lemma 5.2]).
Proposition 1.1. Let
$(X_n)_{n\geq 0}$
be a stationary
$\mathbb{R}^d$
-valued sequence of random variables compactly supported (recall that
$\mathbb{X}_n=\{X_1,\ldots,X_n\}$
). Let
$\mathbb{M}$
be this common support. Then, for any
$\varepsilon>0$
,

In view of Proposition 1.1, to bound
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) > \varepsilon)$
we have, mainly, to control the two quantities
$\mathbb{P}(\|X_1-x\| \leq \varepsilon)$
and
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon)$
. To control
$\mathbb{P}(\|X_1-x\| \leq \varepsilon)$
, we assume the (a, b)-standard assumption for
$\mu$
(the distribution of
$X_1$
) as in the i.i.d. case. The (a, b)-standard assumption was used in the i.i.d. context for set estimation problems under the Hausdorff metric [Reference Cuevas7, Reference Cuevas and Rodrguez-Casal8] and also for statistical analysis of persistence diagrams [Reference Chazal, Glisse, Labruère and Michel5, Reference Fasy, Lecci, Rinaldo, Wasserman, Balakrishnan and Singh11]. This (a, b)-standard assumption gives a lower bound for
$\mathbb{P}(\|X_1-x\| \leq \varepsilon)$
, uniformly in
$x\in \mathbb{M}$
. This lower bound is a power of
$\varepsilon$
. We summarize this notion in Definition 1.1.
Definition 1.1. Let X be a compactly supported
$\mathbb{R}^d$
-valued random variable. Let
$\mathbb{M}$
be its support. This random variable X satisfies the (a, b)-standard assumption if there exist
$a>0$
,
$b>0$
, and
$\varepsilon_0>0$
such that, for any
$0<\varepsilon\leq \varepsilon_0$
,
$\inf_{x\in\mathbb{M}}\mathbb{P}(\|X-x\| \leq \varepsilon) \geq a\varepsilon^b$
.
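For instance (a simple illustration of ours, not an example taken from the cited references), if X is uniformly distributed on the unit circle in $\mathbb{R}^2$ then, for any x on the circle, the ball $B(x,\varepsilon)$ cuts out an arc of angular length $4\arcsin(\varepsilon/2)$, so that $\mathbb{P}(\|X-x\| \leq \varepsilon) = ({2}/{\pi})\arcsin(\varepsilon/2) \geq \varepsilon/\pi$; the (a, b)-standard assumption then holds with $a=1/\pi$ and $b=1$.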
In [Reference Kallel and Louhichi14] we also needed, in order to establish the convergence in probability of
$d_\textrm{H}(\mathbb{X}_n,\mathbb{M})$
, a lower bound for
$\inf_{x\in\mathbb{M}}\mathbb{P}(\|X_1-x\| \leq \varepsilon)$
that is not necessarily a power of
$\varepsilon$
. This specific form of the lower bound as a power of
$\varepsilon$
allows us to obtain precise rates of convergence with the Hausdorff metric, as we shall see later.
Now, to control the second term,
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon)$
appearing in Proposition 1.1, we use two different approaches. The first is based on the following remark. For i.i.d. random variables satisfying the (a, b)-standard assumption, a uniform (on
$x\in \mathbb{M}$
) upper bound for
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\|>\varepsilon)$
is easily obtained since, for i.i.d. random variables,
$$\mathbb{P}\Big(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon\Big) = \big(1-\mathbb{P}(\|X_1-x\| \leq \varepsilon)\big)^n \leq (1-a\varepsilon^b)^n. \qquad (1.1)$$
The situation becomes more complicated when the variables are no longer i.i.d. For this reason, we introduce the notion of the minimal index
$\theta^*$
of a stationary sequence (see Definition 2.1): instead of having a power n in (1.1), we have a power
$n\theta^*$
for some
$\theta^* \in \mathopen{]}0,1]$
, and also an inequality instead of an equality. Finally, in a more general way, the upper bound that we propose for
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon)$
is, up to some positive constant c,
$(1-\kappa_{\varepsilon})^{n\theta^*}$
, where
$\kappa_{\varepsilon}$
is in
$\mathopen{]}0,1\mathclose{[}$
. In Definition 2.1, we call
$\theta^*$
the minimal index and
$\kappa_{\varepsilon}$
a marginal lower bound of the sequence. Theorem 2.1 proves that the rate (of convergence to 0 of
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
) for i.i.d. sequences is reached for stationary sequences satisfying the (a, b)-standard assumption and having
$\theta^*\in \mathopen{]}0,1]$
. In Section 2.1 we give some examples of calculations of
$\theta^*$
.
The reader accustomed to the theory of extreme values will no doubt think of the extremal index. The extremal index introduced in [Reference Leadbetter, Lindgren and Rootzén19], which is related to the asymptotic distribution of the maximum, has a nice meaning. It is a measure of the extent of clustering in the extremes of a stationary process since it represents the reciprocal of the mean cluster size. Clearly, in view of their definitions, these two indexes are not the same. However, they may have an analogous meaning. In fact, Definition 2.1 (that is, an inequality in (1.1) with
$n\theta^*$
instead of n on its right-hand side) suggests that the minimum over n random variables is controlled by
$n\theta^*$
independent ‘clusters’ with the same size. So each cluster has size
$n/{(n\theta^*)}$
, that is,
$1/\theta^*$
. Of course, this explanation remains intuitive at this stage.
The evaluation of the minimal index for an m-dependent sequence agrees with this interpretation. We prove in Proposition 2.1 that the minimal index for a stationary m-dependent sequence is
$\theta^*=1/{(m+1)}$
. In Proposition 2.2, we give sufficient conditions for a stationary sequence to have
$\theta^*=1$
and satisfy the requirements of Theorem 2.1 (see Corollary 2.1). We apply Proposition 2.2 to stationary Markov chains. The results are stated in Proposition 2.3, Corollary 2.2, and Corollary 2.3. Here, the minimal index of this Markov chain is
$\theta^*=1$
. The explicit values of a and b for the (a, b)-standard assumptions are also given. The proofs for Markov chains are based on some calculations in [Reference Kallel and Louhichi14]. In Proposition 2.4, we give sufficient conditions for a stationary sequence to have
$0<\theta^*<1$
and to satisfy the requirements of Theorem 2.1 (see Corollary 2.4). We apply these results to stationary Markov chains (see Proposition 2.5 and Corollary 2.5).
The second approach to bounding the second term,
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon/2)$
, appearing in Proposition 1.1, is based on a mixing assumption and a local-type dependence condition. In this paper we focus on the
$\beta$
-mixing assumption. We prove in Proposition 3.1 that, for
$\beta$
-mixing sequences, the control of
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon/2)$
needs suitable control of
$\mathbb{P}^k(\min_{1\leq i\leq p}\|X_i-x\| > \varepsilon/4)$
for
$kp\leq n$
. The control of this latter probability needs a local dependence condition in the spirit of the well-known Leadbetter anti-clustering condition
$D'(u_n)$
[Reference Jakubowski and Rosiński13, Reference Leadbetter, Lindgren and Rootzén19]. This local dependence condition allows us to give a lower bound for
$\mathbb{P}\big(\min_{1\leq i\leq p}\|X_i-x\| \leq \varepsilon\big)$
by using a Bonferroni-type inequality (see Lemma 5.1). The result is summarized in Proposition 3.2. Once we have bounded
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}) > \varepsilon)$
, we deduce a bound for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,{\mathbb{M}}))$
. This is the purpose of Theorem 3.1: for a stationary
$\beta$
-mixing sequence under polynomial decay of the
$\beta$
-coefficients together with the (a, b)-standard assumption and a local-type dependence condition, the optimal rate of the i.i.d. setting, proved in [Reference Chazal, Glisse, Labruère and Michel5], can be reached for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
(see Theorem 3.1 and Corollaries 3.2 and 3.3 for precise statements).
In conclusion, this paper extends the known optimal bounds for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
to stationary random sequences having a minimal index
$\theta^*\in\mathopen{]}0,1\mathclose{]}$
or to
$\beta$
-mixing random variables
$(X_i)_{i\in \mathbb{N}}$
, thus extending the framework of independence. This broadens the scope of applications. As such, we apply Theorems 2.1 and 3.1 to some stationary Markov chains. In Section 4.1 we discuss an example of a Markov chain on a closed ball for which Theorem 2.1 applies. Section 4.2 gives a class of Markov chains for which our two approaches work: Proposition 4.1 for the first approach and Proposition 4.2 for the second. In fact, Proposition 4.2 proves that the considered Markov chain is geometrically ergodic and hence
$\beta$
-mixing (recall that geometrically ergodic Markov chains are
$\beta$
-mixing [Reference Bradley4, Theorem 3.7]). Two explicit examples are studied: a Markov chain on the circle and a Markov chain on a torus. The first example was introduced in [Reference Kato15] to model wind direction. Both Theorems 2.1 and 3.1 apply to this Möbius Markov chain on the circle. We illustrate the result with some simulations in Section 4.2. The second example studies, with simulations, the case of a stationary Markov chain on a torus. This model satisfies the requirements of Theorem 2.1 (see Section 4.3, Proposition 4.3).
The paper is organized as follows. In Section 2, the first approach is described. The second approach is described in Section 3. Explicit examples are discussed in Section 4. All the proofs are given in Section 5.
From now on, the notation
$a_n=O(b_n)$
(respectively
$a_n=o(b_n)$
) means, as usual, that there exists a positive constant C such that, for n large enough,
$a_n \leq C b_n$
(respectively
$\lim_{n\rightarrow \infty}({a_n}/{b_n})=0$
). The notation
$[\cdot]$
means the integer part. The notation
$a\wedge b$
means
$\min(a,b)$
, and finally
$\mathit{cst}$
denotes a positive constant that may be different from line to line.
2. An approach based on the minimal index
$\theta^*$
We introduce the following definition, which allows us to give, uniformly on
$x\in \mathbb{M}$
, an upper bound for the main term of Proposition 1.1,
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\|>\varepsilon)$
.
Definition 2.1. Let
$(X_i)_{i\in\mathbb{N}}$
be a stationary sequence of
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
has compact support
$\mathbb{M}$
. We say that this sequence
$(X_i)_{i\geq 0}$
has a minimal index
$\theta^*$
with a marginal lower bound
$\kappa_{\varepsilon}$
if there exist a positive constant c,
$\theta^*\in \mathopen{]}0,1]$
,
$\varepsilon_0>0$
, and
$n_0 \in \mathbb{N}$
such that, for any
$n\geq n_0$
and any
$\varepsilon\in \mathopen{]}0,\varepsilon_0]$
,
$$\sup_{x\in \mathbb{M}}\mathbb{P}\Big(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon\Big) \leq c\,(1-\kappa_{\varepsilon})^{n\theta^*}$$
for a constant
$\kappa_{\varepsilon}\in \mathopen{]}0,1[$
.
Clearly, compactly supported i.i.d. random variables have a minimal index
$\theta^*=1$
with a marginal lower bound
$\kappa_{\varepsilon} \in \mathopen{]}0,1\mathclose{[}$
if, for
$\varepsilon$
small enough,
$\inf_{x\in\mathbb{M}}\mathbb{P}(\|X_1-x\| \leq \varepsilon) \geq \kappa_{\varepsilon}$
.
We now have what we need to state our first result on the rates of
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
using this notion of
$\theta^*$
.
Theorem 2.1. Let
$(X_n)_{n\geq0}$
be a stationary sequence of
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
is supported on a compact set
$\mathbb{M}$
and satisfies the (a, b)-standard assumption. Suppose, moreover, that
$(X_n)_{n\geq0}$
has a minimal index
$\theta^*\in \mathopen{]}0,1]$
with a marginal lower bound
$\kappa_{\varepsilon}=a\varepsilon^b$
. Then,
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O(({\ln n}/{n})^{1/b})$
.
For i.i.d. random variables whose common distribution satisfies the (a, b)-standard assumption, the rate in Theorem 2.1 is optimal (see [Reference Chazal, Glisse, Labruère and Michel5]).
2.1. Examples of calculation of
$\theta^*$
The purpose of this section is to apply Theorem 2.1 to stationary m-dependent random sequences and to a class of some stationary random sequences including some stationary Markov chains, all compactly supported. We specify, for each example, the value of the minimal index
$\theta^*$
as introduced in Definition 2.1.
2.1.1. Stationary m-dependent random sequences.
Recall that the random sequence
$(X_i)_{i\in\mathbb{N}}$
is m-dependent for some
$m\geq 0$
if the two
$\sigma$
-fields
$\sigma(X_i,\,i\leq k)$
and
$\sigma(X_{i},\,i\geq k+m+1)$
are independent for every k. In particular, 0-dependent is the same as independent.
Proposition 2.1. Let
$(X_i)_{i\in\mathbb{N}}$
be a sequence of stationary m-dependent random variables compactly supported. Suppose that
$X_1$
satisfies the (a, b)-standard assumption. Then
$(X_i)_{i\in\mathbb{N}}$
has a minimal index
$\theta^*= {1}/({m+1})$
, with a marginal lower bound
$\kappa_{\varepsilon}=a\varepsilon^b$
. The conclusion of Theorem 2.1 holds for this m-dependent random sequence
$(X_i)_{i\in\mathbb{N}}$
.
2.1.2. Stationary random sequences and Markov chains with
$\theta^*=1$
.
In the following, we denote by
$\mathbb{P}_{X_0,\ldots,X_{n-1}}$
or by
$\mathbb{P}(\cdot\mid{X_0,\ldots,X_{n-1}})$
the conditional distribution given
$X_0,\ldots,X_{n-1}$
.
Proposition 2.2. Let
$(X_n)_{n\geq 0}$
be a stationary
$\mathbb{R}^d$
-valued random sequence compactly supported. Suppose that, for a positive
$\varepsilon\leq\varepsilon_0$
, there exists a positive constant
$\kappa_{\varepsilon}\in\mathopen{]}0,1\mathclose{[}$
such that, for any
$n\geq 1$
,
$$\inf_{x\in\mathbb{M}}\mathbb{P}_{X_0,\ldots,X_{n-1}}(\|X_n-x\| \leq \varepsilon) \geq \kappa_{\varepsilon} \qquad (2.1)$$
almost surely (a.s.). Then, for any
$n\geq 1$
,
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon) \leq (1-\kappa_{\varepsilon})^n$
. That is,
$(X_i)_{i\in\mathbb{N}}$
has a minimal index
$\theta^*=1$
with a marginal lower bound
$\kappa_{\varepsilon}$
, as soon as the bound (2.1) is satisfied.
An immediate consequence of Proposition 2.2 is the following corollary.
Corollary 2.1. Let
$(X_n)_{n\geq 0}$
be a stationary
$\mathbb{R}^d$
-valued random sequence compactly supported. Suppose that (2.1) is satisfied with
$\kappa_{\varepsilon}= a \varepsilon^b$
. Then the conclusion of Theorem 2.1 holds.
Proof. We deduce from
$\mathbb{P}(\|X_1-x\| \leq \varepsilon) = \mathbb{P}(\|X_n-x\| \leq \varepsilon) = \mathbb{E}(\mathbb{P}_{X_0,\ldots,X_{n-1}}(\|X_n-x\| \leq \varepsilon))$
that if (2.1) is satisfied with
$\kappa_{\varepsilon}= a\varepsilon^b$
then the distribution of
$X_1$
satisfies the (a, b)-standard assumption. This fact, together with the conclusion of Proposition 2.2, is enough to guarantee all the requirements of Theorem 2.1. The conclusion of Theorem 2.1 therefore holds.
Application to stationary Markov chains.
We suppose throughout this paragraph that
$(X_n)_{n\geq 0}$
is a Markov chain satisfying Assumption 2.1.
Assumption 2.1. The Markov chain
$(X_n)_{n\geq 0}$
has an invariant measure
$\mu$
with compact support
$\mathbb{M}$
(and hence the chain is stationary).
Under Assumption 2.1, Proposition 2.2 is reduced to the following proposition. We denote by
$\mathbb{P}_{x_0}$
(respectively,
$\mathbb{P}_{\mu}$
) the conditional distribution given
$X_0=x_0$
(respectively, given that
$X_0$
is distributed as
${\mu}$
).
Proposition 2.3. Let
$(X_n)_{n\geq 0}$
be a Markov chain satisfying Assumption 2.1. Suppose that, for a positive
$\varepsilon\leq\varepsilon_0$
, there exists a positive constant
$\kappa_{\varepsilon}\in \mathopen{]}0,1\mathclose{[}$
such that, for any
$x_0 \in \mathbb{M}$
,
$$\inf_{x\in\mathbb{M}}\mathbb{P}_{x_0}(\|X_1-x\| \leq \varepsilon) \geq \kappa_{\varepsilon}. \qquad (2.2)$$
Then, for any
$n\geq 1$
,
$\mathbb{P}_{\mu}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon) \leq (1-\kappa_{\varepsilon})^n$
. That is,
$(X_i)_{i\in \mathbb{N}}$
has a minimal index
$\theta^*=1$
with a marginal lower bound
$\kappa_{\varepsilon}$
, as soon as the bound (2.2) is satisfied.
We deduce from Proposition 2.3 the following corollary.
Corollary 2.2.
Let
$(X_n)_{n\geq 0}$
be a Markov chain satisfying Assumption 2.1. Suppose that (2.2) is satisfied with
$\kappa_{\varepsilon}= a\varepsilon^b$
. Then the conclusion of Theorem 2.1 holds.
The proof of Corollary 2.2 is exactly the same as that of Corollary 2.1 and is omitted; in fact, it is based on
$\mathbb{P}_{\mu}(\|X_1-x\| \leq \varepsilon) = \int\mathbb{P}_{x_0}(\|X_1-x\| \leq \varepsilon)\,\mu({\textrm{d}} x_0)$
. Our purpose now is to give sufficient conditions under which the lower bound (2.2) is satisfied. For this, we consider the following assumption introduced in [Reference Kallel and Louhichi14].
Assumption 2.2. The transition probability kernel K defined for
$x\in\mathbb{M}$
by
$$K(x,A) = \mathbb{P}(X_1\in A \mid X_0=x), \quad A \text{ a Borel subset of } \mathbb{M},$$
is absolutely continuous with respect to some measure
$\nu$
on
$\mathbb{M}$
, i.e. there exists a positive measure
$\nu$
and a positive function k such that, for any
$x\in \mathbb{M}$
,
$K(x,{\textrm{d}} y) = k(x,y)\,\nu({\textrm{d}} y)$
. Suppose that, for some
$b>0$
and
$\varepsilon_0>0$
,
$$\inf_{x\in\mathbb{M}}\nu(B(x,\varepsilon)\cap\mathbb{M}) \geq V_d\,\varepsilon^b \quad \text{for any } 0<\varepsilon\leq\varepsilon_0, \text{ where } V_d \text{ is a positive constant},$$
and that there exists a positive constant
$\kappa$
such that
$\inf_{x\in\mathbb{M},y\in\mathbb{M}}k(x,y)\geq\kappa>0$
.
2.1.3. Stationary random sequences and Markov chains with
$\theta^*<1$
.
The purpose of this paragraph is to give sufficient conditions for a stationary random sequence to have
$\theta^*<1$
.
Proposition 2.4. Let
$(X_n)_{n\geq 0}$
be a compactly supported,
$\mathbb{R}^d$
-valued stationary random sequence. Suppose that there exist a positive integer
$m$
and
$\alpha \in ]0,1[$
such that, for any
$0 < \varepsilon \leq \varepsilon_0$
, there exists a positive constant
$\kappa_{\varepsilon} \in ]0, \alpha[$
such that, for any positive integer
$n$
,

a.s. Then, for any
$n\geq m+1$
,

That is,
$(X_i)_{i\in\mathbb{N}}$
has a minimal index
$\theta^*={1}/({m+1})$
with a marginal lower bound
$\kappa_{\varepsilon}$
, as soon as the lower bound (2.3) is satisfied.
We deduce the following corollary (its proof is omitted, since it is the same as that of Corollary 2.1).
2.1.4. Application to stationary Markov chains.
Proposition 2.4 applied to stationary Markov chains gives the following.
Proposition 2.5. Let
$(X_n)_{n\geq 0}$
be an
$\mathbb{R}^d$
-valued Markov chain satisfying Assumption 2.1. Suppose that there exist a positive integer
$m\geq 1$
and
$\alpha\in\mathopen{]}0,1\mathclose{[}$
such that, for any positive
$\varepsilon\leq\varepsilon_0$
, there exists a positive constant
$\kappa_{\varepsilon}\in\mathopen{]}0,\alpha\mathclose{[}$
such that, for any
$x_0 \in \mathbb{M}$
,

Then, for any
$n\geq m+1$
,

That is,
$(X_i)_{i\in\mathbb{N}}$
has a minimal index
$\theta^*={1}/({m+1})$
with a marginal lower bound
$\kappa_{\varepsilon}$
, as soon as the lower bound (2.4) is satisfied.
We deduce the following corollary (its proof is omitted since it is the same as those of Corollaries 2.1 and 2.2).
3. An approach for
$\beta$
-mixing random sequences
The main purpose of this section is to present a second approach to bound the quantity
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon/2)$
(which then gives, thanks to Proposition 1.1 and the (a, b)-standard assumption, an upper bound for
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) > \varepsilon)$
). We focus on stationary
$\beta$
-mixing random sequences (introduced in [Reference Rozanov and Volkonskii24]). Recall that the stationary random sequence
$(X_n)_{n\in\mathbb{N}}$
is
$\beta$
-mixing if its coefficient
$\beta_n$
tends to 0 when n tends to infinity. These coefficients
$\beta_n$
can be defined by
$\beta_n = \sup_{l\geq 1}\mathbb{E}\{\sup|\mathbb{P}(B\mid\sigma(X_1,\ldots,X_l))-\mathbb{P}(B)|,\,B\in\sigma(X_i,\,i\geq l+n)\}$
; we refer the reader to [Reference Bradley3, Reference Yu29] for this expression for
$\beta_n$
.
Geometrically ergodic Markov chains are an example of
$\beta$
-mixing random sequences with geometrically decaying mixing coefficients
$(\beta_n)_{n\geq 1}$
(cf., for instance, [Reference Bradley4, Theorem 3.7] and the references therein). Recall that a stationary Markov chain, with a stationary measure
$\mu$
, is geometrically ergodic if there exists a positive constant c and a Borel positive function a such that the following bound holds for
$\mu$
-almost every
$x\in \mathbb{R}^d$
: for any
$n\in \mathbb{N}^*$
and any Borel set B of
$\mathbb{R}^d$
,
$$|\mathbb{P}(X_n\in B\mid X_0=x)-\mu(B)| \leq a(x)\,{\textrm{e}}^{-cn}. \qquad (3.1)$$
Proposition 3.1 gives a bound for
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon/2)$
and for stationary
$\beta$
-mixing random sequences. For its proof, we use a specific tool based on Berbee’s coupling argument [Reference Berbee2] (see also [Reference Rio23]), available for
$\beta$
-mixing random sequences.
Proposition 3.1. Let
$(X_n)_{n\geq 0}$
be a stationary sequence of
$\beta$
-mixing and
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
is supported on a compact set
$\mathbb{M}$
. Let p be a positive integer less than
$n/2$
and
$k=[n/(2p)]$
. Then, for any positive
$\varepsilon$
,

We see from Proposition 3.1 that control of
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon/2)$
needs control of
$\mathbb{P}^k(\min_{1\leq i\leq p}\|X_i-x\| > \varepsilon/4)$
. We control this latter term by introducing a local-type dependence condition analogous to the well-known Leadbetter condition
$D'(u_n)$
, described in our case by the local dependence coefficient
$(\Lambda(n,p,\varepsilon_n))_{n,p}$
– see (3.2). Propositions 3.1 and 1.1 together with a local dependence study give the following bound for
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) > \varepsilon)$
under a
$\beta$
-mixing condition.
Proposition 3.2. Let
$(X_n)_{n\geq 0}$
be a stationary sequence of
$\beta$
-mixing and
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
is supported on a compact set
$\mathbb{M}$
and that its distribution satisfies the (a, b)-standard assumption. Let p be a positive integer less than
$n/2$
and
$k=[n/(2p)]$
. Define, for a non-random sequence
$(\varepsilon_n)_{n\geq 0}$
tending to 0 as n tends to infinity,

Then, for any positive
$\varepsilon$
small enough,

Recall that
$a'= {a}/{4^b}$
and that
$c\wedge d$
means
$\min(c,d)$
.
An immediate consequence of Proposition 3.2 is the following corollary.
Corollary 3.1.
Suppose that all the requirements of Proposition 3.2 are satisfied. Let
$p_n\rightarrow\infty$
and
$\varepsilon_n\rightarrow 0$
as
$n \rightarrow \infty$
be such that
$p_n \leq {n}/{4}$
,
${\exp(-n({a'}/{4})\varepsilon_n^b)}/{\varepsilon_n^b} = O(({\ln n}/{n})^{1/b})$
, and

Then, for n large enough,

Condition (3.3) is a local-type dependence condition or an anti-clustering dependence condition. Its meaning is analogous to Leadbetter’s condition
$D'(u_n)$
[Reference Jakubowski and Rosiński13, Reference Leadbetter, Lindgren and Rootzén19]. Condition (3.3) means that an observation
$X_1$
in the small ball
$B(x,\varepsilon_n)$
cannot be followed by an observation
$X_r$
in this ball, within an interval of length
$p_n$
. The size
$p_n$
affects the rates for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
, as shown in the following theorem.
Theorem 3.1. Let
$(X_n)_{n\geq 0}$
be a stationary sequence of
$\beta$
-mixing and
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
is supported on a compact set
$\mathbb{M}$
and that its distribution satisfies the (a, b)-standard assumption. Let
$\varepsilon_n\rightarrow0$
as
$n \rightarrow \infty$
such that

and that, for some
$\alpha \in \mathopen{]}0,1]$
, this sequence
$(\varepsilon_n)$
also satisfies
$\limsup_{n\rightarrow\infty}\Lambda(n,[n^{\alpha}/4],\varepsilon_n)<\infty$
, where
$(\Lambda(n,p,\varepsilon_n))_{n,p}$
is as defined in (3.2). Suppose that
$\beta_n=O(n^{-\gamma})$
for some
$\gamma>0$
. Let
$s=\min(1,{1}/{b})$
. The following rates hold:
-
• If
$\gamma\geq\max(1,1/b)$ ,
$b\neq 1$ , and
$$\frac{s+1/b}{s(1+ \gamma)}\leq \alpha\leq 1,$$
then $\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O(({\ln n}/{n})^{1/b})$.
-
• If
$0<\gamma<\max(1,1/b)$ ,
$b\neq 1$ , and
${1}/({1+\gamma})< \alpha \leq 1$ , then
$$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O(n^{-\alpha s\gamma+(1-\alpha)s}).$$
-
• If
$\gamma>0$ ,
$b=1$ , and
${1}/({1+\gamma}) < \alpha \leq 1$ , then
$$ \mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O\bigg(\frac{\ln n}{n^{\min(-1+\alpha+\alpha\gamma,1)}}\bigg). $$
The following corollary proves that the optimal rates of the i.i.d. case can be reached, under suitable decays of the mixing coefficient
$\beta_n$
together with suitable control of the local-type dependence condition.
Corollary 3.2.
Suppose that all the requirements of Theorem 3.1 are satisfied. Suppose also that
$\gamma\geq\max(1,1/b)$
and
$$\frac{s+1/b}{s(1+\gamma)} \leq \alpha \leq 1,$$
with
$s=\min(1,1/b)$
. Then
$$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O\bigg(\bigg(\frac{\ln n}{n}\bigg)^{1/b}\bigg).$$
Finally, the following corollary gives a sufficient condition for the local-type dependence condition of Theorem 3.1 to hold.
Corollary 3.3.
Let
$(X_n)_{n\geq 0}$
be a stationary sequence of
$\beta$
-mixing and
$\mathbb{R}^d$
-valued random variables. Suppose that
$X_1$
is supported on a compact set
$\mathbb{M}$
and that its distribution satisfies the (a, b)-standard assumption. Suppose that

Then the requirements of Theorem 3.1 are satisfied with
$\varepsilon_n=({1}/{\sqrt{n^{1+\alpha}}})^{1/b}$
for some arbitrary
$\alpha\in\mathopen{]}0,1\mathclose{[}$
. The conclusion of Theorem 3.1 holds (with this
$\alpha<1$
).
4. Explicit examples
The purpose of this section is to give some explicit examples satisfying the requirements of Theorem 2.1 and/or Theorem 3.1.
4.1. Stationary Markov chains on a ball of
$\mathbb{R}^d$
We recall the following example already studied in [Reference Kallel and Louhichi14]. Let
$(X_n)_{n\geq 0}$
be a Markov chain defined, for
$n\geq 0$
, by
$$X_{n+1}=A_{n+1}X_n+B_{n+1}, \qquad (4.1)$$
where
$A_{n+1}$
is a
$(d\times d)$
matrix,
$X_n\in\mathbb{R}^d$
,
$B_n\in\mathbb{R}^d$
,
$(A_n,B_n)_{n\geq 1}$
is an i.i.d. random sequence independent of
$X_0$
. Recall that, for a matrix M,
$\|M\|$
is the operator norm defined by
$\|M\|=\sup_{x\in\mathbb{R}^d,\|x\|=1}\|Mx\|$
. It is well known that, for any
$n\geq 1$
,
$X_n$
is distributed as
$\sum_{k=1}^n A_1\cdots A_{k-1}B_k + A_1\cdots A_{n}X_0$
; see, for instance, [Reference Kesten16]. It is also well known that the conditions [Reference Goldie and Maller12, Reference Kesten17]

ensure the existence of a stationary solution to (4.1), and that
$\|A_1\cdots A_n\|$
approaches 0 exponentially fast. If, in addition,
$\mathbb{E}\|B_1\|^{\beta}<\infty$
for some
$\beta>0$
, then the series
$R\ {:}{=}\ \sum_{i=1}^{\infty}A_1\cdots A_{i-1}B_i$
converges a.s. and the distribution of
$X_n$
converges to that of R, independently of
$X_0$
. The distribution of R is, then, that of the stationary measure of the chain.
[Reference Kallel and Louhichi14, Corollary 5.2] gives conditions under which Assumptions 2.1 and 2.2 are satisfied and thus
$\theta^*=1$
for this Markov chain. We summarize these conditions in the following corollary (that we announce without proof).
Corollary 4.1.
Suppose that in the model (4.1), conditions (4.2) are satisfied, and moreover
$\|B_1\|\leq c<\infty$
. If the density of
$A_1x+B_1$
, denoted by
$f_{A_1x+B_1}$
, satisfies
$\inf_{x,y\in\mathbb{M}}f_{A_1x+B_1}(y) \geq \kappa > 0$
for some positive
$\kappa$
, then Assumptions 2.1 and 2.2 are satisfied with
$b=d$
,
$\nu$
being the Lebesgue measure on
$\mathbb{R}^d$
and thus
$\theta^*=1$
for this Markov chain. The conclusion of Theorem 2.1 holds.
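A minimal R sketch of the recursion (4.1) in dimension d = 2 (our own illustrative instance, not the example treated in [Reference Kallel and Louhichi14]): $A_{n+1}$ is a random rotation scaled by a factor below 1/2, so that $\|A_1\cdots A_n\|$ decays exponentially fast, and $B_{n+1}$ is bounded, so that the chain remains in a closed ball.

```r
## Affine Markov chain X_{n+1} = A_{n+1} X_n + B_{n+1} with contracting random rotations.
set.seed(3)
n <- 5000
X <- matrix(0, n, 2)                       # start at the origin (any point of the ball would do)
for (i in 2:n) {
  ang <- runif(1, 0, 2 * pi)
  rho <- runif(1, 0, 0.5)                  # ||A|| = rho <= 1/2
  A   <- rho * matrix(c(cos(ang), sin(ang), -sin(ang), cos(ang)), 2, 2)
  B   <- runif(2, -0.5, 0.5)               # bounded noise, so the chain stays in a closed ball
  X[i, ] <- A %*% X[i - 1, ] + B
}
plot(X, pch = ".", asp = 1, main = "Affine Markov chain (4.1), d = 2")
```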
4.2. The Möbius Markov chain on the circle
Our purpose is to study an explicit example of a Markov chain on the unit circle, known as a Möbius Markov chain, for which both approaches are applicable. We first check that this model satisfies all the requirements of Theorem 2.1. Next, we prove that this Markov chain is geometrically ergodic. The Möbius Markov chain on the circle is introduced in [Reference Kato15] and is defined as follows:
-
• Let
$X_0$ be a random variable that takes values on the unit circle.
-
• Define, for
$n\geq 1$ ,
$$X_n=\frac{X_{n-1} + \beta}{\beta X_{n-1}+1}\varepsilon_n,$$
where $\beta\in\mathopen{]}-1,1\mathclose{[}$ and
$(\varepsilon_n)_{n\geq 1}$ is a sequence of i.i.d. random variables that are independent of
$X_0$ and distributed as the wrapped Cauchy distribution with a common density,
$f_{\varphi}$ , with respect to the arc length measure
$\nu$ on the unit circle
$\partial B(0,1)$ , i.e. for all
$z\in \partial B(0,1)$ ,
$$f_{\varphi}(z)= \frac{1}{2\pi}\frac{1-\varphi^2}{|z-\varphi|^2},$$
with $\varphi\in[0,1\mathclose{[}$ being fixed.
The following proposition proves that the introduced Möbius Markov chain satisfies the requirements of the first approach.
Proposition 4.1. Let
$(X_n)_{n\geq 0}$
be the Möbius Markov chain on the unit circle as previously defined. Then all the requirements of Theorem 2.1 are satisfied. More precisely, this Markov chain admits a unique invariant distribution, denoted by
$\mu$
. If
$X_0$
is distributed as
$\mu$
then the (a, b)-standard assumption is satisfied by
$\mu$
with

$\nu$
is the arc length measure on the unit circle. This Markov chain has a minimal index
$\theta^*=1$
with a marginal lower bound
$\kappa_{\varepsilon}=a\varepsilon$
. The conclusion of Theorem 2.1 holds, that is,
$$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\partial B(0,1))) = O\big({\ln n}/{n}\big).$$
Proposition 4.2 proves that the Möbius Markov chain, as introduced in this section, also satisfies the requirements of the second approach.
Proposition 4.2. Let
$(X_n)_{n\geq 0}$
be the Möbius Markov chain on the unit circle as previously defined. This Markov chain is
$\beta$
-mixing with
$\beta_n=O({\textrm{e}}^{-cn})$
(for some
$c>0$
). It satisfies all the requirements of Theorem 3.1, and
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\partial B(0,1))) = O({\ln n}/{n})$
.
The purpose now is to simulate a Möbius Markov chain on the unit circle and to illustrate the rate of
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\partial B(0,1)))$
. More precisely, we simulate:
-
• A random variable
$X_0$ uniformly distributed on the unit circle
$\partial B(0,1)$ , i.e.
$X_0$ has the density
$f(z) = {1}/{2\pi}$ , for all
$z\in \partial B(0,1)$ .
-
• For
$n\geq 1$ ,
$X_n=X_{n-1} \varepsilon_n$ , where
$(\varepsilon_n)_{n\geq 1}$ is a sequence of i.i.d. random variables that are independent of
$X_0$ and distributed as the wrapped Cauchy distribution with a common density with respect to the arc length measure
$\nu$ on the unit circle
$\partial B(0,1)$ ,
$$ f_{\varphi}(z) = \frac{1}{2\pi}\frac{1-\varphi^2}{|z-\varphi|^2}, \quad \varphi \in [0,1\mathclose{[},\ z\in \partial B(0,1). $$
It is proved in [Reference Kato15] that this Markov chain is stationary. Its stationary measure is the uniform law on the unit circle.
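A minimal R sketch of this simulation (our own code, not the one used for Table 1; it assumes the standard fact that a wrapped Cauchy angle with concentration $\varphi$ can be obtained by wrapping a Cauchy variable with scale $-\ln \varphi$):

```r
## Moebius Markov chain on the circle with beta = 0: X_n = X_{n-1} * eps_n, i.e. angles add.
set.seed(1)
phi <- 0.5                                  # concentration of the wrapped Cauchy innovations
n   <- 5000
rwrapcauchy <- function(m, rho) rcauchy(m, location = 0, scale = -log(rho)) %% (2 * pi)
theta    <- numeric(n)
theta[1] <- runif(1, 0, 2 * pi)             # X_0 uniform on the circle (the stationary law)
for (i in 2:n) theta[i] <- (theta[i - 1] + rwrapcauchy(1, phi)) %% (2 * pi)
## d_H(X_n, circle): all points lie on the circle, so it equals 2*sin(G/4),
## where G is the largest angular gap between consecutive sample points.
s    <- sort(theta)
gaps <- c(diff(s), 2 * pi - (s[n] - s[1]))
dH   <- 2 * sin(max(gaps) / 4)
c(dH = dH, rate = log(n) / n)               # to be compared with the (ln n)/n rate
```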
The simulations give the numerical values shown in Table 1, which are plotted in Figure 1; these do not contradict the theoretical results of either Proposition 4.1 or Proposition 4.2.
Table 1. Behavior of
$d_\textrm{H}(\mathbb{X}_n,\partial B(0,1))$
with n. We used the distFact function from the R library TDA as described in [Reference Fasy, Kim, Lecci and Maria10].


Figure 1.
$d_\textrm{H}(\mathbb{X}_n,\partial B(0,1))$
and the rate
$(\ln n)/n$
.
4.3. A Markov chain on a square wrapped on a torus
Recall that for
$x\in \mathbb{R}$
, [x] denotes the integer part of x and
$x-[x]$
denotes its fractional part. Clearly,
$0\leq x-[x] <1$
. Define the Markov chain
$(\Phi_n)_{n\geq 0}=(\theta_n, \phi_n)_{n\geq 0}$
on the square
$[0,1\mathclose{[}\times [0,1\mathclose{[}$
with opposite edges identified by
$$\theta_{n+1}=\theta_n+\varepsilon_{n+1}-[\theta_n+\varepsilon_{n+1}], \qquad \phi_{n+1}=\phi_n+\eta_{n+1}-[\phi_n+\eta_{n+1}], \qquad n\geq 0,$$
where
$(\varepsilon_i)_{i\geq 0}$
and
$(\eta_i)_{i\geq 0}$
are two independent sequences of i.i.d. random variables that are all uniformly distributed on [0,1] (see Figure 2). Suppose also that, for each n,
$\theta_n$
(respectively,
$\phi_n$
) is independent of
$\varepsilon_{n+1}$
(respectively,
$\eta_{n+1}$
). The following proposition proves that the requirements of Theorem 2.1 are satisfied.

Figure 2. A Markov chain on
$[0,1\mathclose{[}\times [0,1\mathclose{[}$
with opposite edges identified. Different realizations of the set
$\{\Phi_1,\ldots,\Phi_n\}$
with different values of n.
Proposition 4.3.
-
(i)
$(\Phi_n)_{n\geq 0}$ is a stationary Markov chain with a stationary distribution uniform over
$[0,1]\times [0,1]$ .
-
(ii) For any positive
$\varepsilon$ small enough and any pairs
$(x,y) \in [0,1\mathclose{[}\times [0,1\mathclose{[}$ and
$(u,v)\in [0,1\mathclose{[}\times [0,1\mathclose{[}$ ,
$\mathbb{P}(\sqrt{|\theta_1-x|^2+|\phi_1-y|^2} \leq \varepsilon\mid\theta_0=u,\phi_0=v) \geq {\varepsilon^2}/{2}$ .
-
(iii)
$\Phi_1$ satisfies the (a, b)-standard assumption with
$a=\frac{1}{2}$ and
$b=2$ .
-
(iv) The Markov chain
$(\Phi_n)_{n\geq 0}$ has a minimal index
$\theta^*=1$ with a marginal lower bound
$\kappa_{\varepsilon}={\varepsilon^2}/{2}.$
The random torus on
$[0,1\mathclose{[}\times [0,1\mathclose{[}$
can be represented parametrically in three dimensions using the following equations (see Figure 3):

Table 2. Behavior (with n) of the Hausdorff distance between a set of realizations of
$\mathbb{X}_n =\{(X_i,Y_i,Z_i)_{1\leq i\leq n}\}$
and the torus with
$R=0.9$
and
$r=0.3$
.


Figure 3. Different realizations of the set
$\mathbb{X}_n =\{(X_i,Y_i,Z_i)_{1\leq i\leq n}\}$
for different values of n. From left to right,
$n=100$
, 1000, 50000. The points represent the realizations of
$(X_i,Y_i,Z_i)_{1\leq i\leq n}$
. The lines connecting the points represent the paths of the Markov chain. (Here,
$R=0.9$
and
$r=0.3$
.)
The simulation results are shown in Table 2 and plotted in Figure 4. Proposition 4.3 together with Theorem 2.1 ensures that the rate of convergence is
$(\ln n/n)^{1/2}$
, which agrees with this figure.

Figure 4. The rate
$({(\ln n)}/{n})^{1/2}$
and the behavior of the Hausdorff distance between the set of realizations of
$\mathbb{X}_n=\{(X_i,Y_i,Z_i)_{1\leq i\leq n}\}$
and the torus with n (here,
$R=0.9$
and
$r=0.3$
).
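A minimal R sketch of this torus simulation (our own code, not the one used for Table 2; the three-dimensional embedding below is a standard torus parametrization with $R=0.9$ and $r=0.3$, which may differ in convention from the one used for Figure 3, and the Hausdorff distance is approximated over a finite grid of the torus):

```r
## Markov chain (theta_n, phi_n) on the unit square with opposite edges identified,
## mapped to a torus in R^3; d_H to the torus is approximated on a 100 x 100 grid.
set.seed(4)
n <- 2000; R <- 0.9; r <- 0.3
theta <- phi <- numeric(n)
theta[1] <- runif(1); phi[1] <- runif(1)          # stationary start: uniform on the square
for (i in 2:n) {
  theta[i] <- (theta[i - 1] + runif(1)) %% 1      # fractional part of theta_{n-1} + eps_n
  phi[i]   <- (phi[i - 1] + runif(1)) %% 1
}
embed_torus <- function(th, ph)
  cbind((R + r * cos(2 * pi * th)) * cos(2 * pi * ph),
        (R + r * cos(2 * pi * th)) * sin(2 * pi * ph),
        r * sin(2 * pi * th))
X <- embed_torus(theta, phi)
grid <- as.matrix(expand.grid(seq(0, 1, length.out = 100), seq(0, 1, length.out = 100)))
G <- embed_torus(grid[, 1], grid[, 2])
dH <- max(apply(G, 1, function(z) sqrt(min(colSums((t(X) - z)^2)))))
c(dH = dH, rate = sqrt(log(n) / n))               # to be compared with (ln n / n)^{1/2}
```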
5. Proofs
5.1. Theorem 2.1
Proof. From the assumptions of Theorem 2.1 and the definition of
$\theta^*$
(letting
$a'= a/4^b$
) we have, for any
$0<\varepsilon\leq 4\varepsilon_0\ {=}{:}\ \varepsilon_0'$
,
$\inf_{x\in\mathbb{M}}\mathbb{P}(\|X_{1}-x\| \leq \varepsilon/4) \geq a'\varepsilon^b$
and

(the latter bound is obtained since, for any
$x\in [0,1]$
,
$1-x\leq {\textrm{e}}^{-x}$
). The conclusion of Proposition 1.1, together with the two last bounds, give, for any
$0<\varepsilon\leq \varepsilon_0'$
,

We have, a.s., since
$\mathbb{X}_n$
is a subset of
$\mathbb{M}$
,
$d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) \leq {\textrm{diam}}(\mathbb{M})$
. Hence,
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) \geq \varepsilon)=0$
for any
$\varepsilon \geq C$
, where C is a positive constant satisfying
$C > \max({\textrm{diam}}(\mathbb{M}),\varepsilon_0')$
. Writing
$a''= a'/c$
and with
$\mathit{cst}$
a positive constant that does not depend on n, we have

the last bound obtained thanks to (5.1). We also have, using the same calculations as for the i.i.d. case (see, for instance, [Reference Chazal, Glisse, Labruère and Michel5, Section B.2]),

Clearly,
$\exp({-a'\theta^*n\varepsilon_0'^b}) = o(({\ln n}/{n})^{1/b})$
. Combining this with the bounds in (5.2) and (5.3), the proof of Theorem 2.1 is complete.
5.2. Proposition 2.1
Proof. The random sequence
$(X_i)_{i\in\mathbb{N}}$
is stationary and m-dependent, so the random variables
$X_1, X_{m+2}, X_{2m+3},\ldots, X_{1+k(m+1)}$
,
$k\in \mathbb{N}$
, are i.i.d. Hence, for any
$\varepsilon \in \mathopen{]}0,\varepsilon_0]$
,
$x\in \mathbb{M}$
, and
$n\geq m+1$
,

The (a, b)-standard assumption satisfied by the distribution of
$X_1$
gives, for any
$0 < \varepsilon \leq \varepsilon_0$
,
$\mathbb{P}(\|X_{1}-x\| > \varepsilon) = 1 - \mathbb{P}(\|X_{1}-x\| \leq \varepsilon) \leq 1 - a\varepsilon^b$
. Consequently, for any
$0<\varepsilon\leq \varepsilon_0$
,

The requirement of Definition 2.1 is then satisfied, with
$\theta^*= {1}/({m+1})$
,
$\kappa_{\varepsilon}= a\varepsilon^b$
, and
$c= {1}/({1 - a\varepsilon_0^b})$
. The first part of Proposition 2.1 is then proved. All the requirements of Theorem 2.1 are satisfied, so the conclusion of Theorem 2.1 holds.
5.3. Propositions 2.2 and 2.3
Proof. The proof of Proposition 2.2 is analogous to that of [Reference Kallel and Louhichi14, Lemma 7.2]. We have, letting k be a positive integer,

where
${\mathcal{F}}_{k-1} = \sigma(X_0,\ldots,X_{k-1})$
and
$B(x,\varepsilon)=\{y,\|x-y\| \leq \varepsilon\}$
. We deduce from (2.1) that

The proof of Proposition 2.2 is completed by induction on k. Recall that, for
$k=1$
,

by (2.1). Now, in the case of stationary Markov chains we have

and, by (2.2),
$\mathbb{E}_{X_{0}}(\mathbf{1}_{\{X_1\not\in B(x,\varepsilon)\}}) \leq 1 - \kappa_{\varepsilon}$
, almost surely. Therefore, Proposition 2.2 applies. More precisely, we obtain, for any
$k\geq 1$
,
$\mathbb{P}_{\mu}(\min_{1\leq i\leq k}\|X_i-x\| > \varepsilon) \leq (1 - \kappa_{\varepsilon})^k$
. The proof of Proposition 2.3 is complete.
5.4. Corollary 2.3
Proof. We have, for any
$x_0\in \mathbb{M}$
, using Assumption 2.2,

So, again using Assumption 2.2,

The bound in (2.2) is then satisfied with
$\kappa_{\varepsilon}=\kappa\varepsilon^b V_d$
. The rest of the proof of Corollary 2.3 follows from Corollary 2.2.
5.5. Propositions 2.4 and 2.5
Proof. Let m and k be two positive integers for which
$(m+1)k\leq n$
. Then

the last bound obtained thanks to (2.3). We deduce, using induction on k (the case
$k=1$
follows from (2.3), as in (5.4)),
$\mathbb{P}(\min_{1\leq i\leq k}\|X_{i(m+1)}-x\| > \varepsilon) \leq (1-\kappa_{\varepsilon})^k$
and, for
$k= [n/(m+1)]$
(recall that
$\kappa_{\varepsilon}\in \mathopen{]}0,\alpha\mathclose{[}$
) we obtain

The proof of Proposition 2.4 is complete. Let us now prove Proposition 2.5. Thanks to the stationary assumption of the Markov chain,

5.6. Proposition 3.1
Proof. Define, for a positive integer
$p<n/2$
,
$k=[{n}/{2p}]$
(recall that
$[\cdot]$
denotes the integer part). Define also, for
$1\leq i\leq k$
, the sets of indices
$I_{i,2p}=\{2p(i-1)+1,\ldots,ip+(i-1)p\}$
. For the proof of this proposition we need Berbee’s coupling (we refer, for instance, to [Reference Rio23, p. 116] for a clear formulation): there exists a random sequence of i.i.d. blocks
$\{\xi_j,\,j\in I_{i,2p}\}$
(onto a richer probability space) such that, for any
$1\leq i\leq k$
, the following three properties hold:
-
($\mathcal{P}_1$)
$\{\xi_j,\,j\in I_{i,2p}\}$ and
$\{X_j,\,j\in I_{i,2p}\}$ are identically distributed.
-
($\mathcal{P}_2$)
$\mathbb{P}(\{\xi_j,\,j\in I_{i,2p}\}\neq\{X_j,\,j\in I_{i,2p}\}) \leq \beta_p$ .
-
(
)
$\{\xi_j,\,j\in I_{i,2p}\}$ is independent of
$(\{X_j,\,j\in I_{l,2p}\})_{1\leq l\leq i-1}$ for
$i\geq 2$ .
Our purpose is to control
$\mathbb{P}(\min_{1\leq i\leq n}\|X_i-x\| > \varepsilon)$
. For this, we use Berbee’s coupling on the blocks of variables having the set of indices
$(I_{l,2p})_l$
, as defined above. More precisely, define, for
$x\in \mathbb{M}$
,
$X_l(x)$
and
$\xi_l(x)$
by

so that
$\|X_l(x)-x\| = \min_{i\in I_{l,2p}}\|X_i-x\|$
and
$\|\xi_l(x)-x\| = \min_{i\in I_{l,2p}}\|\xi_i-x\|$
. Let
$x\in \mathbb{M}$
be fixed. Clearly,

Hence,

Consequently, by (5.6),

Recalling that
$x \in \mathbb{M}$
,

Using this and (5.7), we get

Let us control the two terms I
$(\varepsilon/4)$
and II
$(\varepsilon/4)$
on the right-hand side of the last bound.
For I
$(\varepsilon/4)$
, we deduce from (5.5) that the random variable
$X_l(x)$
(respectively
$\xi_l(x)$
) belongs to the sigma-fields generated by
$\{X_j,\,j\in I_{l,2p}\}$
(respectively
$\{\xi_j,\,j\in I_{l,2p}\}$
). Hence, by (5.5),

so that, by the construction of the random sequence
$(\xi_j)_j$
, and more precisely by Property (
$\mathcal{P}_2$
),

and

For II
$(\varepsilon/4)$
, by construction the random variables
$(\xi_l(x))_{1\leq l\leq k}$
are i.i.d., and each of them is distributed as
$X_1(x)$
, since
$\xi_1(x)$
and
$X_1(x)$
are identically distributed. Hence,

where the last equality is obtained by using the definition of
$X_1(x)$
in (5.5). Hence,

We deduce, collecting (5.9) and (5.10) together with (5.8), that

The last bound completes the proof of Proposition 3.1.
5.7. Proposition 3.2
Proof. We combine Proposition 3.1 with Proposition 1.1. We obtain, noting that
$(a+b) \wedge 1 \leq (a \wedge 1) + (b \wedge 1)$
for
$a,b>0$
and that
$1-\sup_{x\in\mathbb{M}}\mathbb{P}(\|X_{1}-x\|>\varepsilon/4) \geq \inf_{x\in\mathbb{M}}\mathbb{P}(\|X_{1}-x\|\leq\varepsilon/4)$
,

Recall that, for reals a and b,
$a \wedge b = \min(a,b)$
. The last bound together with the (a, b)-standard assumption give

In order to control
$\mathbb{P}^k(\min_{1\leq i\leq p}\|X_i-x\|>\varepsilon/4)$
, we need the following lemma.
Lemma 5.1.
Let
$(\varepsilon_n)_{n\geq 0}$
be a non-random positive fixed sequence. Let
$(\Lambda(n,p,\varepsilon_n))_{n,p}$
be as defined in (3.2). Then

Proof. Using the trivial bound
$\ln(1-x)\leq -x$
for
$x\in \mathopen{]}0,1\mathclose{[}$
, we have

Recall the following Bonferroni-type inequality, for any events
$(A_i)_{1\leq i\leq p}$
:

Let
$A_i$
be the event
$(\|X_i-x\| \leq \varepsilon)$
, for a positive
$\varepsilon$
. Then
$\bigcup_{i=1}^pA_i \subset \{\min_{1\leq i\leq p}\|X_i-x\| \leq \varepsilon\}$
and, for any positive
$\varepsilon$
,

If
$0<\varepsilon \leq \varepsilon_n$
then
$\mathbb{P}(\|X_1-x\| \leq \varepsilon,\,\|X_r-x\| \leq \varepsilon) \leq \mathbb{P}(\|X_1-x\| \leq \varepsilon_n,\,\|X_r-x\| \leq \varepsilon_n)$
and, by (5.12),

Consequently, since
$kp\leq n$
,

If
$\varepsilon > \varepsilon_n$
then

We have, by (5.12),

Finally, combining (5.13) and (5.14) we obtain

The proof of Lemma 5.1 is complete.
Lemma 5.1 and the (a, b)-standard assumption give

This last bound, together with (5.11), prove that, for any
$\varepsilon \leq \varepsilon_0'=4\varepsilon_0$
(since, for positive reals b, c,
$1 \wedge (b+c) \leq (1 \wedge b) + (1 \wedge c)$
),

The proof of Proposition 3.2 is thus complete.
5.8. Corollary 3.1
Proof. We have, a.s., since
$\mathbb{X}_n$
is a subset of
$\mathbb{M}$
,
$d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) \leq {\textrm{diam}}(\mathbb{M})$
. Hence,
$\mathbb{P}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}) \geq \varepsilon)=0$
for any
$\varepsilon \geq C$
, where C is a positive constant satisfying
$C > \max({\textrm{diam}}(\mathbb{M}),\varepsilon_0')$
. We have (writing
$a''= a'/c$
and
$\mathit{cst}$
for a positive constant that does not depend on n), using (5.15),

We have, from (5.15), for n large enough such that
$\varepsilon_n<\varepsilon_0'$
,

Hence, by (5.16), (5.17), and (5.3) we get, for n large enough,

Suppose now that
$p\leq {n}/{4}$
; then
${n}/{2} \geq pk \geq p(({n}/{2p})-1) \geq {n}/{4}$
. So, for n large enough,

Let
$p_n\rightarrow\infty$
and
$\varepsilon_n\rightarrow 0$
as
$n \rightarrow \infty$
be such that
$p_n\leq n/4$
,

and
$\limsup_{n\rightarrow\infty}\Lambda(n,p_n,\varepsilon_n) < \infty$
. Then, by (5.18), we obtain, for n large enough,

The proof of Corollary 3.1 is then complete.
5.9. Theorem 3.1
Proof. The task now is to calculate the integral
$\int_0^{\varepsilon_0'}(1 \wedge ({k\beta_p }/{a'\varepsilon^b}))\,{\textrm{d}}\varepsilon$
and to deduce a bound for
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M}))$
. We do this by considering the values of b. We suppose that
$k\beta_p<1$
.
If
$0<b<1$
then
$\int_0^{\varepsilon_0'}{1}/{\varepsilon^b}\,{\textrm{d}}\varepsilon < \infty$
and

If
$b>1$
and
$({k\beta_p}/{a'})^{1/b} < \varepsilon_0'$
then

If
$b=1$
,
$k\beta_p \leq C_{k,p}<1$
, and
$({C_{k,p}}/{a'}) < \varepsilon_0'$
, then

So we obtain, by Corollary 3.1, since
$k\beta_p \leq (k\beta_p)^{\min(1,1/b)}$
(recall that
$k=k_n$
,
$p=p_n$
, and
$k_n\beta_{p_n} < C_{k_n,p_n}<1$
),

The task now is to choose, in the last bound, suitable values of
$p=p_n$
. Let
$s=\min(1/b,1)$
and
$p= [n^{\alpha}/4]$
for some
$\alpha\in \mathopen{]}0,1]$
and
$k=[{n}/{2p}]$
. Recall that
$\beta_p=O(p^{-\gamma})$
. We have

So, when
$b\neq 1$
we obtain, thanks to (5.19),

Suppose that
$b\neq 1$
and
$\beta_n =O(n^{-\gamma})$
for some
$\gamma \geq \max(1, 1/b)$
. Choose
$\alpha$
such that

(such a value of
$\alpha$
exists since
$s\leq 1$
,
$s+1/b \leq s(1+\gamma)$
, and
$\gamma\geq\max(1,1/b)$
). Hence, by (5.22),

Consequently, since
$(1-\alpha)s - \alpha\gamma s + 1/b \leq 0$
,
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O(({\ln n}/{n})^{1/b})$
.
Suppose instead that
$b\neq 1$
and
$\beta_n =O(n^{-\gamma})$
for some
$0<\gamma < \max(1, 1/b)$
. Each
$\alpha \in \mathopen{]}{1}/({1+\gamma}),1]$
satisfies
$(1-\alpha)s - \alpha\gamma s < 0$
and
$1/b + (1-\alpha)s - \alpha\gamma s > 0$
, since
$s\gamma < 1/b$
. So, for any
$\alpha \in \mathopen{]}(1+\gamma)^{-1},1]$
,

and
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) = O(n^{(1-\alpha)s-\alpha\gamma s})$
.
Suppose now that
$b=1$
. In this case,
$s=1$
and, by (5.21),
$C_{k,p}= n^{1-\alpha-\alpha\gamma}$
(recall that
$C_{k,p}$
is an upper bound for
$k\beta_p$
less than one). So, for any
$\alpha \in \mathopen{]}(1+\gamma)^{-1},1]$
,
$1-\alpha-\alpha\gamma<0$
and inequality (5.20) gives
$\mathbb{E}(d_\textrm{H}(\mathbb{X}_n,\mathbb{M})) \leq \mathit{cst}(n^{1-\alpha-\alpha\gamma}\ln(n) + {\ln n}/{n})$
, so that

The proof of Theorem 3.1 is complete.
5.10. Corollary 3.2
Proof. According to Theorem 3.1, the rate
$O(({\ln n}/{n})^{1/b})$
is reached as soon as
$\gamma \geq \max(1,1/b)$
,
$b \neq 1$
, and

or
$\gamma>0$
,
$b=1$
,
${1}/({1+\gamma}) < \alpha \leq 1$
, and
$-1+\alpha+\alpha\gamma \geq 1$
. The second case implies that
${2}/({1+\gamma}) \leq \alpha \leq 1$
so that
$\gamma\geq 1$
necessarily. This situation is included in the first case since, here,
$s=1$
and
$b=1$
. The proof of Corollary 3.2 is complete.
5.11. Corollary 3.3
Proof. Let
$\varepsilon_n^b = {1}/{\sqrt{n^{1+\alpha}}}$
for fixed
$\alpha \in \mathopen{]}0,1\mathclose{[}$
. Then

Using (3.4), we have
$\Lambda(n,[n^{\alpha}/4],\varepsilon_n) \leq \mathit{cst}\,n^{1+\alpha}\varepsilon_n^{2b} \leq \mathit{cst}$
for this choice of
$\varepsilon_n^b$
. All the requirements of Theorem 3.1 are satisfied for this sequence
$(\varepsilon_n)_n$
and for
$\alpha\in \mathopen{]}0,1\mathclose{[}$
. The proof of Corollary 3.3 is complete.
5.12. Proposition 4.1.
Proof. Our main reference for this proof is [Reference Kato15]. We summarize Kato’s results in the following proposition (but only for
$\beta \in \mathopen{]}-1,1\mathclose{[}$
, our case of interest). We note that
$A = \sqrt{(1-\varphi)^2 + 4\beta^2\varphi}$
,
$\lambda_1 = \frac{1}{2}(1+\varphi-A)$
,
$\lambda_2 = \frac{1}{2}(1+\varphi+A)$
.
Proposition 5.1. Consider the Möbius Markov chain (as introduced in Proposition 4.1). The following properties hold:
-
(i) For any x in the unit circle
$\partial B(0,1)$ , the conditional distribution of
$X_n$ given
$X_0=x$ is, for each
$n\geq 1$ , the wrapped Cauchy on the unit circle
$C^*(\phi_n(x))$ , i.e. having a density
$\pi_n$ (with respect to the arc length on the circle)
$$ \pi_n(z) = \frac{1}{2\pi}\frac{1-|\phi_n(x)|^2}{|z-\phi_n(x)|^2}, $$
where, if $\beta\neq 0$ and
$\varphi>0$ ,
$$ \phi_n(x) = \frac{\lambda_1^n(1-\varphi+A)x+\lambda_2^n(\varphi-1+A)x+2(\lambda_2^n-\lambda_1^n)\beta\varphi} {2(\lambda_2^n-\lambda_1^n)\beta x + \lambda_2^n(1-\varphi+A) + \lambda_1^n(\varphi-1+A)}; $$
if $\beta=0$ then
$\phi_n(x)=\varphi^n x$ ; and finally if
$\varphi=0$ then
$\phi_n(x)=0$ for each
$n\geq 1$ .
-
(ii) The conditional distribution of
$X_n$ given
$X_0=x$ converges in law, as n tends to infinity, to the wrapped Cauchy on the unit circle
$C^*(\phi_{\infty})$ with
$\phi_{\infty} = ({\varphi-1+A})/{2\beta}$ if
$\beta\neq 0$ and
$\phi_{\infty}=0$ otherwise.
-
(iii) The wrapped Cauchy on the unit circle
$C^*(\phi_{\infty})$ is the unique invariant measure of this Möbius Markov chain (denoted by
$\mu$ ). Recall that
$C^*(\phi_{\infty})$ has a density on the unit circle defined, for
$z\in \partial B(0,1)$ , by
$$ \pi(z)=\frac{1}{2\pi}\frac{1-|\phi_{\infty}|^2}{|z-\phi_{\infty}|^2}. $$
This Markov chain therefore has a unique invariant measure
$\mu$
on the unit circle, so Assumption 2.1 is satisfied with
$\mathbb{M}=\partial B(0,1)$
. The task now is to check Assumption 2.2. We have, for
$x\in \partial B(0,1)$
,

where
$\nu$
is the arc length measure on the unit circle and, for
$x,z\in \partial B(0,1)$
,

with
$\phi_1(x) = ({\varphi x + \beta\varphi})/({\beta x + 1})$
,
$|\phi_1(x)|= \varphi$
. It is proved in [Reference Kallel and Louhichi14, Proposition 5.3] that

and, for positive
$\varepsilon_0$
sufficiently small,

Assumption 2.2 is satisfied with
$b=1$
, thanks to (5.23), (5.25), and (5.26). The requirements of Theorem 2.1 are satisfied thanks to Corollary 2.3. In particular,
$\kappa_{\varepsilon}=\kappa V_d\,\varepsilon$
. From this, we deduce that
$a=\kappa V_d$
. The value of
$\theta^*=1$
follows from Proposition 2.3. The conclusion of Theorem 2.1 holds, so the proof of Proposition 4.1 is complete.
5.13. Proposition 4.2
Proof. We now have to prove the geometric ergodicity property. This allows us to deduce that this Markov chain is
$\beta$
-mixing with geometrically decaying mixing coefficients
$(\beta_n)_{n\geq 1}$
(cf., for instance, [Reference Bradley4, Theorem 3.7] and the references therein). We will prove this property only in the case when
$\beta\neq 0$
and
$\varphi>0$
using Proposition 5.1 (which is already proved in [Reference Kato15]). The other cases are much easier. Clearly, for a measurable subset A of the unit circle
$\partial B(0,1)$
,

so

Now we have (using the fact that
$|a_1a_2-b_1b_2| \leq 2|a_1-b_1| + 2|a_2-b_2|$
for
$|a_1|$
,
$|a_2|$
,
$|b_1|$
,
$|b_2|$
all less than 2), for any z belonging to the unit circle,

Now, for any
$x\in {\partial B(0,1)}$
,

since, thanks to the expression for A, $2\varphi\beta/({\varphi-1+A}) = ({1-\varphi+A})/({2\beta})$
. So, for
${\tilde{b}}={\lambda_1}/{\lambda_2}\in\mathopen{]}0,1\mathclose{[}$
,

Recalling that x is in the unit circle and can be seen as a complex number with
$|x|=1$
, we obtain

We deduce that there exists a positive constant c (independent of x) such that, for any
$n\in\mathbb{N}\setminus\{0\}$
,
$|\phi_n(x)-\phi_{\infty}| \leq c{\tilde{b}}^n$
. This together with (5.27) and (5.28) give

Recall that, for any
$z\in \partial B(0,1)$
,
$|z-\phi_n(x)| \geq |1-|\phi_n(x)||$
. Hence, recalling that
$\phi_{\infty}\neq 1$
and that
$\phi_n(x)\neq 1$
,

Since
${\tilde{b}}\in\mathopen{]}0,1\mathclose{[}$
and
$\sup_{n\in\mathbb{N}}|\phi_n(x)|<\infty$
, the bound in (3.1) is satisfied.
We now need to check (3.4) with
$b=1$
. We have

Recall first (arguing as in the proof of [Reference Kallel and Louhichi14, (5.5)]) that
$\nu(\partial B(0,1) \cap B(x,u)) \leq \mathit{cst}\,u$
for any
$x\in\partial B(0,1)$
. We also have (recall that the conditional density of
$X_1$
given
$X_0=x_0$
is provided in (5.24)),

and
$\mathbb{P}_{\mu}(\|X_1-x\| \leq u) = \int_{\partial B(0,1)}\mathbb{P}_{x_0}(\|X_1-x\| \leq u)\,\mu({\textrm{d}} x_0) \leq \mathit{cst}\,u$
. Condition (3.4) with
$b=1$
follows from this, (5.29), and (5.30). We complete the proof of Proposition 4.2 by applying Corollary 3.3 together with Corollary 3.2 (recall that here
$b=1$
and
$\gamma$
is any real number greater than one since
$\beta_n=O({\textrm{e}}^{-cn})$
for some positive c).
5.14. Proposition 4.3
Proof. Recall that if
$\varepsilon$
is a random variable uniformly distributed over [0,1] then, for any
$u\in [0,1]$
, the random variable
$u+\varepsilon-[u+\varepsilon]$
is also uniformly distributed over [0,1]. From this, we deduce that the stationary Markov chain
$(\Phi_n)_{n\geq 0}$
has a uniform stationary distribution over the unit square. Let us now prove the second statement of Proposition 4.3. We have, for any
$u,v,x,y\in [0,1\mathclose{[}$
(recall that the two random variables
$\varepsilon_1$
,
$\eta_1$
are independent and that the pairs
$(\varepsilon_1,\eta_1)$
and
$(\theta_0,\phi_0)$
are also independent),

Letting
$\varepsilon'=\varepsilon/\sqrt{2}$
, let us control
$\mathbb{P}(|u+\varepsilon_1-[u+\varepsilon_1]-x|\leq\varepsilon')$
. Clearly,

Our purpose is to give a lower bound for I. For this, we discuss the following cases.
-
(a) If
$-\varepsilon'+x \leq 0 \leq 1 \leq \varepsilon'+x$ then
$I=1$ , so
$I \geq \varepsilon'$ .
-
(b) If
$0 \leq -\varepsilon'+x \leq \varepsilon'+x \leq 1$ then, recalling that
$u+\varepsilon_1-[u+\varepsilon_1]$ is uniformly distributed over [0,1],
$I = 2\varepsilon'$ .
-
(c) If
$-\varepsilon'+x \leq 0 \leq \varepsilon'+x \leq 1$ then
\begin{align*} I & = \mathbb{P}(-\varepsilon'+x \leq u+\varepsilon_1 \leq \varepsilon'+x,\,u+\varepsilon_1 < 1) \\ & \quad + \mathbb{P}(-\varepsilon'+x \leq u+\varepsilon_1-1 \leq \varepsilon'+x,\,u+\varepsilon_1\geq 1) \\ & = \mathbb{P}(0 \leq \varepsilon_1 \leq \varepsilon'+x-u) + \mathbb{P}(u-\varepsilon'-x \leq 1-\varepsilon_1 \leq u+\varepsilon'-x,\,u \geq 1-\varepsilon_1), \end{align*}
-
• If
$0<\varepsilon'+x-u$ (we already have
$\varepsilon'+x-u<1$ ,
$x\in[0,1\mathclose{[}$ ) then
\begin{equation*} I = \mathbb{P}(0 \leq \varepsilon_1 \leq \varepsilon'+x-u) + \mathbb{P}(0 \leq 1-\varepsilon_1 \leq u) = \varepsilon'+x-u+u \geq \varepsilon'. \end{equation*}
-
• If
$\varepsilon'+x-u<0$ then
\begin{align*} I & = \mathbb{P}(u-\varepsilon'-x \leq 1-\varepsilon_1 \leq u+\varepsilon'-x,\,u \geq 1-\varepsilon_1) \\ & = \mathbb{P}(u-\varepsilon'-x \leq 1-\varepsilon_1 \leq u) = \varepsilon'+x \geq \varepsilon'. \end{align*}
-
-
(d) If
$0 \leq -\varepsilon'+x \leq 1 \leq \varepsilon'+x$ then
\begin{align*} I & = \mathbb{P}(-\varepsilon'+x \leq u+\varepsilon_1 \leq \varepsilon'+x,\,u+\varepsilon_1 < 1) \\ & \quad + \mathbb{P}(-\varepsilon'+x \leq u+\varepsilon_1-1 \leq \varepsilon'+x,\,u+\varepsilon_1 \geq 1) \\ & = \mathbb{P}(-\varepsilon'+x-u \leq \varepsilon_1 \leq 1-u) \\ & \quad + \mathbb{P}(u-\varepsilon'-x \leq 1-\varepsilon_1 \leq u+\varepsilon'-x,\,u+\varepsilon_1 \geq 1). \end{align*}
We discuss the following two subcases:
-
• If
$0 \leq -\varepsilon'+x-u$ then, recalling that
$1-x \geq 0$ ,
\begin{equation*} I = \mathbb{P}(-\varepsilon'+x-u \leq \varepsilon_1 \leq 1-u) = 1-u-(-\varepsilon'+x-u) = 1-x+\varepsilon' \geq \varepsilon'. \end{equation*}
-
• If
$-\varepsilon'+x-u \leq 0$ then, recalling that
$u-\varepsilon'-x \leq 1-\varepsilon'-x \leq 0$ ,
\begin{equation*} I = \mathbb{P}(0 \leq \varepsilon_1 \leq 1-u) + \mathbb{P}(0 \leq 1-\varepsilon_1 \leq u) = 1-u+u = 1 \geq \varepsilon'. \end{equation*}
-
So we always have
$I \geq \varepsilon'$
and

This last bound together with (5.31) proves the second item of Proposition 4.3. This immediately implies that the distribution of
$(\theta_1,\phi_1)$
satisfies the (a, b)-standard assumption with
$a=\frac12$
and
$b=2$
. The bound (2.2) of Proposition 2.3 is satisfied and proves the last item of Proposition 4.3. The proof of Proposition 4.3 is now complete.
Acknowledgements
We are deeply grateful to the two referees whose comments have significantly improved this article. This article is a continuation of [Reference Kallel and Louhichi14], written jointly with Sadok Kallel, to whom we are very grateful for the interesting discussions.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.