1 Introduction
Thouvenot and Weiss showed in [Reference Thouvenot and Weiss12] that for every aperiodic, probability-preserving system $(X,{\mathcal B},m,T)$ and for a random variable Y, there exist a function $f:X\to {\mathbb R}$ and a sequence $a_n\to \infty $ such that
$$ \begin{align*} \frac{1}{a_n}\sum_{k=0}^{n-1}f\circ T^k \ \text{converges in distribution to } Y. \end{align*} $$
This result means that any distribution can be approximated by observations of an aperiodic, probability-preserving system. See also [Reference Aaronson and Weiss1] for a refinement of this distributional convergence result for positive random variables and the subsequent [Reference Gouëzel6], which is concerned with the possible growth rate of the normalizing constants $a_n$. The results mentioned above were preceded by research into central limit theorems (CLTs) in dynamical systems with convergence towards a normal law; see, for example, [Reference Burton and Denker4, Reference Volný13].
Given a stochastic process $Y=(Y(t)) _{t\in {\mathbb R}}$ whose sample paths are in a Polish space $\mathcal {D}$, a natural question that arises is whether we can simulate it using our prescribed dynamical system. That is, do there exist a measurable function $f:X\to {\mathbb R}$ and normalizing constants $a_n$ and $b_n$ such that the processes $Y_n:X\to \mathcal {D}$ defined by $Y_n(t)(x)=({1}/{a_n})(\sum _{k=0}^{[nt]}f\circ T^k(x)-b_{[nt]})$ converge in distribution to Y?
As noted by Gouëzel in [Reference Gouëzel6], by a famous result of Lamperti (see [Reference Bingham, Goldie and Teugels3, Theorem 8.5.3]), any process Y which can be simulated in this manner must be self-similar, and the normalizing constants need to be of the form $a_n=n^{\alpha }L(n)$, where $L(n)$ is a slowly varying function and $\alpha $ is the self-similarity index of the process. Perhaps due to this, results about the simulation of processes are rather scarce; to the best of our knowledge the only such result is [Reference Volný13], where the second author answered a question of Burton and Denker [Reference Burton and Denker4] and showed that every aperiodic, probability-preserving system can simulate a Brownian motion with the classical normalizing constants $a_n=\sqrt {n}$.
An important subclass of self-similar processes is the class of $\alpha $-stable Lévy motions, which we describe in the next subsection. These include Brownian motion $(\alpha =2)$ and Cauchy–Lévy motion ($\alpha =1$), a process with independent, Cauchy-distributed increments which is often used to model heavy-tailed phenomena.
In this work we show that given an aperiodic, ergodic, probability-preserving transformation $(X,{\mathcal B},m,T)$:
- every $\alpha $-stable Lévy motion with $\alpha \in (0,1)$ can be simulated by this transformation;
- every symmetric $\alpha $-stable Lévy motion can be simulated using this transformation.
One may ask about general $\alpha $-stable Lévy motions when $\alpha \in [1,2)$. In this regard we extend the results of [Reference Kosloff and Volný9] and show a classical CLT result for any $\alpha $-stable distribution when $\alpha \neq 1$.
From a bird’s-eye view, the methods are similar to those in [Reference Kosloff and Volný9, Reference Volný13] in the sense that the process is constructed as a sum of coboundaries, and that in any ergodic and aperiodic dynamical system, for every natural number n there is a function f such that the sequence $f, f\circ T, \ldots , f\circ T^n$ has the distribution of a given discrete-valued independent and identically distributed (i.i.d.) sequence $X_0, \ldots , X_n$ (Proposition 2 in [Reference Kosloff and Volný8]). We remark that our work shows that any ergodic dynamical system can simulate these $\alpha $-stable processes, but in order to have algorithms which converge fast one may want to choose a special dynamical system; such works in the context of $\alpha $-stable processes were carried out, for example, in [Reference Gottwald and Melbourne5, Reference Wu, Michailidis and Zhang14].
The coboundaries used in the preceding papers naturally lead to convergence towards symmetric laws. A natural challenge, which is treated in full generality in this work, is to obtain CLT convergence with i.i.d. scaling towards skewed stable limits. We note that the case $1\leq \alpha <2$ (Theorem 2.10) is especially challenging.
The invariance principle was previously studied only in [Reference Volný13], where the structure of Hilbert spaces could be used and the convergence is with respect to the metric of uniform convergence in the space of continuous functions. The methods of this paper are different; even in the case of a symmetric stable process limit, the function constructed here is different and makes use of linear combinations of skewed stable functions.
1.1 Definitions and statement of the theorems
A random variable Y is stable if there exist a sequence $Z_1,Z_2,\ldots $ of i.i.d. random variables and sequences $a_n,b_n$ such that
$$ \begin{align*} \frac{\sum_{k=1}^n Z_k-a_n}{b_n}\ \text{converges in distribution to } Y\quad \text{as } n\to\infty. \end{align*} $$
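A standard illustration (our addition, using only classical facts): if the $Z_k$ are i.i.d. standard Cauchy random variables, then for every n,
$$ \begin{align*} \frac{1}{n}\sum_{k=1}^n Z_k \overset{d}{=} Z_1, \end{align*} $$
so the standard Cauchy distribution is stable with $b_n=n$ and $a_n=0$; in the notation introduced below it is $S_1(1,0,0)$.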
In other words, Y arises as a distributional limit of a CLT; see [Reference Ibragimov and Linnik7]. Furthermore, in this case $b_n$ is regularly varying of index ${1}/{\alpha }$, which implies that $b_n=n^{1/\alpha }L(n)$, where $L(n)$ is a slowly varying function. A stable distribution is uniquely determined by its characteristic function (Fourier transform). Namely, a random variable Y is $\alpha $-stable, $0<\alpha \leq 2$, if there exist $\sigma>0$, $\beta \in [-1,1]$ and $\mu \in {\mathbb R}$ such that for all $\theta \in {\mathbb R}$,
$$ \begin{align*} \mathbb{E}(\exp(i\theta Y))=\begin{cases} \exp\bigg\{-\sigma^\alpha|\theta|^\alpha \bigg(1-i\beta\,\text{sign}(\theta)\tan\bigg(\dfrac{\pi\alpha}{2}\bigg)\bigg)+i\mu\theta\bigg\}, &\alpha\neq 1,\\[6pt] \exp\bigg\{-\sigma|\theta| \bigg(1+i\beta\,\dfrac{2}{\pi}\,\text{sign}(\theta)\ln |\theta|\bigg)+i\mu\theta\bigg\}, & \alpha=1. \end{cases} \end{align*} $$
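A direct consequence of this formula, used repeatedly below (for instance in the remark following Theorem 1.2): if Y is $S_\alpha (\sigma ,\beta ,\mu )$ with $\alpha \neq 1$ and $c>0$, then replacing $\theta $ by $c\theta $ gives
$$ \begin{align*} \mathbb{E}(\exp(i\theta (cY)))=\exp\bigg\{-(c\sigma)^\alpha|\theta|^\alpha \bigg(1-i\beta\,\text{sign}(\theta)\tan\bigg(\frac{\pi\alpha}{2}\bigg)\bigg)+i(c\mu)\theta\bigg\}, \end{align*} $$
that is, $cY$ is $S_\alpha (c\sigma ,\beta ,c\mu )$.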
The constant $\sigma>0$ is the dispersion parameter and $\beta $ is the skewness parameter. In this case we will say that Y is an $\alpha $-stable random variable with dispersion parameter $\sigma $, skewness parameter $\beta $ and shift parameter $\mu $; in short, Y is an $S_\alpha (\sigma ,\beta ,\mu )$ random variable. If $\mu =\beta =0$ and $\sigma>0$, then the random variable is symmetric $\alpha $-stable and we will say that Y is $S\alpha S(\sigma )$.
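For orientation, in the boundary case $\alpha =2$ we have $\tan (\pi \alpha /2)=0$, so the skewness parameter plays no role and the characteristic function reduces to
$$ \begin{align*} \mathbb{E}(\exp(i\theta Y))=\exp(-\sigma^2\theta^2+i\mu\theta), \end{align*} $$
that is, an $S_2(\sigma ,\beta ,\mu )$ random variable is Gaussian with mean $\mu $ and variance $2\sigma ^2$.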
A probability-preserving dynamical system is a quadruplet $(\mathcal {X},{\mathcal B},m,T)$ where $(\mathcal {X},{\mathcal B},m)$ is a standard probability space, T is a measurable self-map of $\mathcal {X}$ and $m\circ T^{-1}=m$. The system is aperiodic if the collection of all periodic points is a null set. It is ergodic if every T-invariant set is either a null or a conull set. Given a function $f:X\to {\mathbb R}$, we write $S_n(f):=\sum _{k=0}^{n-1}f\circ T^k$ for the corresponding random walk.
Recall that if $Y_n$ and Y are random variables taking values in a Polish space $\mathbb {X}$, then $Y_n$ converges to Y in distribution if for every bounded continuous function $G:\mathbb {X}\to {\mathbb R}$,
$$ \begin{align*} \lim_{n\to\infty}\mathbb{E}(G(Y_n))=\mathbb{E}(G(Y)). \end{align*} $$
Here $\mathbb {E}$ denotes the expectation with respect to the relevant probability measure on the space on which the random variable is defined.
Theorem 1.1. (See Theorem 2.10)
For every ergodic and aperiodic probability-preserving system $(\mathcal {X},{\mathcal B},m,T)$, $\alpha>1$ and $\beta \in [-1,1]$, there exist a function $f:X\to {\mathbb R}$ and $B_n\to \infty $ such that
$$ \begin{align*} \frac{S_n(f)+B_n}{n^{1/\alpha}}\ \text{converges in distribution to} \ S_\alpha(\kern-2pt\sqrt[\alpha]{\ln(2)},\beta,0). \end{align*} $$
A process ${\mathbb {W}}=({\mathbb {W}}_s)_{s\in [0,1]}$ is an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion if it has independent increments and for all $0\leq s<t\leq 1$, ${\mathbb {W}}_t-{\mathbb {W}}_s$ is $S_\alpha (\sigma \kern -2pt\sqrt [\alpha ]{t-s},\beta ,0)$ distributed. The existence of an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion can be demonstrated via a functional CLT (also called a weak invariance principle); the details given below appear in [Reference Resnick10].
Consider the vector space $D([0,1])$ of functions $f:[0,1]\to {\mathbb R}$ which are right-continuous with left limits, also known as càdlàg functions. Equipped with the Skorohod $J_1$ topology, $D([0,1])$ is a Polish space. A natural construction of a distribution on $D([0,1])$ is to take an i.i.d. sequence $X_1,X_2,\ldots $ of random variables and constants $a_n>0$, and to define a $D([0,1])$-valued random variable ${\mathbb {W}}_n$ via
$$ \begin{align*} {\mathbb{W}}_n(t)=a_nS_{[nt]}(X), \end{align*} $$
where $S_n(X):=\sum _{k=1}^nX_k$ and $[\cdot ]$ is the floor function. By [Reference Resnick10, Corollary 7.1], if the $X_i$ are $S_\alpha (\sigma ,\beta ,0)$ and $a_n=n^{-1/\alpha }$, then ${\mathbb {W}}_n$ converges in distribution (as a sequence of random variables on the Polish space $D([0,1])$ with the $J_1$ topology), its limit being an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion. The main results of this work are functional CLTs of this type in the setting of dynamical systems.
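As a concrete illustration of this construction, the following minimal numerical sketch (our addition, not part of the formal development; it assumes SciPy’s levy_stable, whose default S1 parametrization agrees for $\alpha \neq 1$ with the characteristic function displayed above) samples the path ${\mathbb {W}}_n$ on the grid $t=j/n$:

```python
# Minimal sketch (illustration only): the partial sum process
# W_n(t) = n^{-1/alpha} * S_[nt](X) for i.i.d. alpha-stable X_i.
import numpy as np
from scipy.stats import levy_stable  # assumed: SciPy's S1 parametrization

def partial_sum_path(alpha, beta, n, seed=0):
    """Return W_n evaluated on the grid t = j/n, j = 0, ..., n."""
    rng = np.random.default_rng(seed)
    X = levy_stable.rvs(alpha, beta, size=n, random_state=rng)
    S = np.concatenate(([0.0], np.cumsum(X)))  # S_0, S_1, ..., S_n
    return n ** (-1.0 / alpha) * S

path = partial_sum_path(alpha=1.5, beta=1.0, n=10_000)
print(path[-1])  # W_n(1) is S_alpha(1, 1, 0) distributed for every n
```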
Theorem 1.2. Let $(\mathcal {X},{\mathcal B},m,T)$ be an ergodic and aperiodic probability-preserving system.
- (Theorem 2.5) For every $\alpha \in (0,1)$, $\sigma>0$ and $\beta \in [-1,1]$, there exists $f:X\to {\mathbb R}$ such that ${\mathbb {W}}_n(f)(t):=({1}/{n^{1/\alpha }})S_{[nt]}(f)$ converges in distribution to an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion.
- (Theorem 2.6) For every $\alpha \in [1,2)$ and $\sigma>0$, there exists $f:X\to {\mathbb R}$ such that ${\mathbb {W}}_n(f)(t):=({1}/{n^{1/\alpha }})S_{[nt]}(f)$ converges in distribution to an $S_\alpha S(\sigma )$ Lévy motion.
We remark that while the results in Theorem 2.5 provide a function f whose partial sum process ${\mathbb {W}}_n(f)$ converges to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion, the scaling property of $\alpha $-stable distributions gives that, writing $c:={\sigma }/{\kern -2pt\sqrt [\alpha ]{\ln (2)}}$, the process ${\mathbb {W}}_n(cf)$ converges to an $S_\alpha (\sigma ,\beta ,0)$ Lévy motion. A similar remark applies to Theorem 2.6.
1.2 Notation
Here and throughout, $\log (x)$ denotes the logarithm of x in base 2 and $\ln (x)$ denotes the natural logarithm of x.
Given two non-negative sequences $a_n$ and $b_n$, we write $a_n\lesssim b_n$ if there exists $C>0$ such that $a_n\leq Cb_n$ for all $n\in {\mathbb N}$. If, in addition, $b_n>0$ for all n, then we write $a_n\sim b_n$ if $\lim _{n\to \infty }({a_n}/{b_n})=1$.
For a function $f:X\to {\mathbb R}$ and $p>0$, we write $\|f\|_p:=(\int |f|^p\,dm)^{1/p}$.
2 Construction of the function
2.1 Target distributions
Let $(\Omega ,\mathcal {F},{\mathbb P})$ be a probability space. Let $\{X_k(m):\ k,m\in {\mathbb N}\}$ be independent random variables such that for every $k\in {\mathbb N}$, $X_k(1),X_k(2),X_k(3),\ldots $ are i.i.d. $S_\alpha (\sigma _k,1,0)$ random variables with $\sigma _k^\alpha ={1}/{k}$.
For every $k,m\in {\mathbb N}$, define $Y_k(m)=X_k(m)1_{[2^k\leq X_k(m)\leq 4^k]}$ and its discretization on a grid of scale $4^{-k}$,
$$ \begin{align*} Z_k(m)=\sum_{j=2^k4^k}^{4^{2k}} \bigg(\frac{j}{4^k}\bigg)1_{[({j}/{4^{k}})\leq Y_k(m)<({j+1})/{4^k}]}. \end{align*} $$
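For instance, for $k=1$ the sum runs over $j=8,\ldots ,16$, so $Z_1(m)$ takes the value $j/4\in \{2,2.25,2.5,\ldots ,4\}$ precisely when $Y_1(m)\in [j/4,(j+1)/4)$, and the value $0$ when $X_1(m)\notin [2,4]$.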
The following fact easily follows from the definitions.
Fact 2.1. For every $k\in {\mathbb N}$, $Z_k(1),Z_k(2),\ldots $ are i.i.d. random variables supported on the finite set $\{0\}\cup \{2^k,2^k+4^{-k},\ldots ,4^k\}$, and for all $m\in {\mathbb N}$,
$$ \begin{align*}0\leq Y_k(m)-Z_k(m)<4^{-k}.\end{align*} $$
The construction of the cocycle will hinge on realizing a triangular array of the Z random variables in a dynamical system.
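The triangular array is easy to sample numerically. The following sketch (our addition, with hypothetical helper names, again assuming SciPy’s levy_stable) implements the passage $X_k\mapsto Y_k\mapsto Z_k$ exactly as defined above:

```python
# Illustration only: sampling one row of the triangular array {Z_k(m)}.
import numpy as np
from scipy.stats import levy_stable

def sample_Z_row(alpha, k, size, seed=0):
    """Draw i.i.d. copies of Z_k: totally skewed stable, truncated to
    [2^k, 4^k] and discretized downwards on the grid of scale 4^{-k}."""
    rng = np.random.default_rng(seed)
    sigma_k = k ** (-1.0 / alpha)                        # sigma_k^alpha = 1/k
    X = levy_stable.rvs(alpha, 1.0, scale=sigma_k, size=size, random_state=rng)
    Y = np.where((X >= 2.0**k) & (X <= 4.0**k), X, 0.0)  # Y_k = X_k 1_{[2^k,4^k]}
    Z = np.floor(Y * 4.0**k) / 4.0**k                    # 0 <= Y_k - Z_k < 4^{-k}
    return Z

Z = sample_Z_row(alpha=0.7, k=3, size=100_000)
print(Z[Z > 0].min(), Z.max())  # nonzero values lie in {2^3, ..., 4^3}
```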
2.2 Construction of the function
Let $(\mathcal {X},{\mathcal B},m,T)$ be an ergodic, aperiodic, probability-preserving system. We first recall some definitions and the copying lemma of [Reference Kosloff and Volný8] and its application as in [Reference Kosloff and Volný9].
A finite partition of $\mathcal {X}$ is measurable if all of its pieces (atoms) are Borel-measurable. Recall that a finite sequence of random variables $X_1,\ldots ,X_n:\mathcal {X}\to {\mathbb R}$, each taking finitely many values, is independent of a finite partition $\mathcal {P}=(P)_{P\in \mathcal {P}}$ if for all $s\in {\mathbb R}^n$ and $P\in \mathcal {P}$,
$$ \begin{align*} m((X_j)_{j=1}^n=s|P)=m((X_j)_{j=1}^n=s). \end{align*} $$
We will embed the triangular array using the following key proposition.
Proposition 2.2. [Reference Kosloff and Volný8, Proposition 2]
Let $(\mathcal {X},{\mathcal B},m,T)$ be an aperiodic, ergodic, probability-preserving transformation and $\mathcal {P}$ a finite measurable partition of $\mathcal {X}$. For every finite set A and every i.i.d. sequence $U_1,U_2,\ldots ,U_n$ of A-valued random variables, there exists $f:\mathcal {X}\to A$ such that $(f\circ T^j)_{j=0}^{n-1}$ is distributed as $(U_j)_{j=1}^n$ and $(f\circ T^j)_{j=0}^{n-1}$ is independent of $\mathcal {P}$.
Using this, we deduce the following corollary.
Corollary 2.3. Let $(\mathcal {X},{\mathcal B},m,T)$ be an aperiodic, ergodic, probability-preserving transformation and $(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$ the triangular array from §2.1. There exist functions $f_k,g_k:\mathcal {X}\to \mathbb {R}$ such that $(f_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$ and $(g_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2} \}}$ are independent and each is distributed as $(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 4^{k^2}\}}$.
Proof. The sequence $(Z_k(m))_{k\in {\mathbb N},1\leq m\leq 2\cdot 4^{k^2}}$ is a sequence of independent random variables and, for each k, $(Z_k(m))_{1\leq m\leq 2\cdot 4^{k^2}}$ are i.i.d. random variables which take finitely many values.
Proceeding verbatim as in the proof of [Reference Kosloff and Volný9, Corollary 4], one obtains a sequence of functions $f_k:\mathcal {X}\to {\mathbb R}$ such that $(f_k\circ T^{j-1})_{\{k\in {\mathbb N},1\leq j\leq 2\cdot 4^{k^2} \}}$ is distributed as $(Z_k(j))_{\{k\in {\mathbb N},1\leq j\leq 2\cdot 4^{k^2}\}}$. Setting $g_k=f_k\circ T^{4^{k^2}}$ concludes the proof.
From now on let $(\mathcal {X},{\mathcal B},m,T)$ be an aperiodic, ergodic dynamical system and $(f_k)_{k=1}^\infty $ and $(g_k)_{k=1}^\infty $ the functions from Corollary 2.3.
Lemma 2.4. We have that $\#\{k\in {\mathbb N}: f_k\neq 0\ \text {or}\ g_k\neq 0\}<\infty $, m-almost everywhere.
Proof. Since $f_k$ and $g_k$ are $Z_k(1)$ distributed and $X_k(1)$ is $S_\alpha (\sigma _k,1,0)$ distributed, it follows from Proposition A.1 that
$$ \begin{align*} m(f_k\neq 0\ \text{or} \ g_k\neq 0)&\leq m(f_k\neq 0)+m(g_k\neq 0)\\ &=2{\mathbb P}(Z_k(1)\neq 0)\\ &\leq 2{\mathbb P}(X_k(1)>2^k)\leq C\frac{2^{-\alpha k}}{k}, \end{align*} $$
where C is a global constant which does not depend on k. Since the right-hand side is summable over k, the claim follows from the Borel–Cantelli lemma.
In what follows, we assume that $\alpha \in (0,2)$ is fixed and that $f_k$ and $g_k$ are the functions from Corollary 2.3. In addition, for $h:\mathcal {X}\to {\mathbb R}$ and $n\in \mathbb {N}$ we write
$$ \begin{align*} S_n(h):=\sum_{k=0}^{n-1}h\circ T^k. \end{align*} $$
Define
$$ \begin{align*} f=\sum_{k=1}^\infty f_k\quad \text{and}\quad g=\sum_{k=1}^\infty g_k. \end{align*} $$
Note that by Lemma 2.4, f and g are well defined, as the sum in their definition is almost surely a sum of finitely many functions. Recall that the (rescaled) partial sum process of a function $h:\mathcal {X}\to {\mathbb R}$ is
$$ \begin{align*} \mathbb{W}_n(h)(t)=\frac{1}{n^{1/\alpha}}S_{[nt]}(h),\quad 0\leq t\leq 1. \end{align*} $$
Theorem 2.5. Assume $0<\alpha <1$. Fix $\beta \in [-1,1]$ and define
$$ \begin{align*} h_k&:=\bigg(\frac{1+\beta}{2}\bigg)^{1/\alpha}f_k-\bigg(\frac{1-\beta}{2}\bigg)^{1/\alpha}g_k,\\ h&:=\bigg(\frac{1+\beta}{2}\bigg)^{1/\alpha}f-\bigg(\frac{1-\beta}{2}\bigg)^{1/\alpha}g=\sum_{k=1}^\infty h_k. \end{align*} $$
Then ${\mathbb {W}}_n(h)\Rightarrow ^d {\mathbb {W}}$, where ${\mathbb {W}}$ is an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion.
We also have a functional CLT version for general $\alpha \in (0,2)$ when the limit is $S\alpha S$. Recall that the functions $f_k$ and $g_k$ are related by $g_k=f_k\circ T^{4^{k^2}}$.
Theorem 2.6. Assume $\alpha \in [1,2)$. Define
$$ \begin{align*} h_k&:=f_k-g_k,\\ h&:=f-g=\sum_{k=1}^\infty h_k. \end{align*} $$
Then ${\mathbb {W}}_n(h)\Rightarrow ^d{\mathbb {W}}$, where ${\mathbb {W}}$ is an $S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$ Lévy motion.
2.3 General CLT for $\alpha>1$
Recall that a coboundary for a measure-preserving transformation is a function H for which there exists a function G, called a transfer function, with $H=G-G\circ T$. The resulting cocycle (sum process) of the coboundaries $f_k - g_k$ from the proof of Theorem 2.6 converges to a symmetric $\alpha $-stable distribution. To get a skewed $\alpha $-stable limit we thus use a different kind of coboundary, described below. Set $D_k:=4^{\alpha k}$,
$$ \begin{align*} \varphi_k:=\frac{1}{D_k}\sum_{j=0}^{D_k-1}f_k\circ T^j \end{align*} $$
and $h_k:=f_k-\varphi _k$. We note that the $h_k$ and h in this subsection denote different functions than in the previous subsection.
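To see that each $h_k$ is indeed a coboundary, one can average the telescoping identity $f_k-f_k\circ T^j=G_j-G_j\circ T$, where $G_j:=\sum _{i=0}^{j-1}f_k\circ T^i$:
$$ \begin{align*} h_k=\frac{1}{D_k}\sum_{j=0}^{D_k-1}(f_k-f_k\circ T^j)=G-G\circ T\quad\text{with } G:=\frac{1}{D_k}\sum_{j=0}^{D_k-1}G_j. \end{align*} $$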
Lemma 2.7. If $\alpha \in (1,2)$, then $\sum _{k=1}^N h_k$ converges in $L^1(m)$ and almost surely as $N\to \infty $.
Proof. By Fubini’s theorem it suffices to show that $\sum _{k=1}^\infty \int |h_k|\,dm<\infty $.
To that end, for a fixed k we have
 $$ \begin{align*} \int |h_k|\,dm&\leq \int|f_k|\,dm+\frac{1}{D_k}\sum_{j=0}^{D_k-1}\int|f_k\circ T^j|\,dm =2\int|f_k|\,dm, \end{align*} $$
where the last equality holds as T preserves m. Next, $f_k$ and $Z_k(1)$ are equally distributed and
$$ \begin{align*} Z_k(1)\leq Y_k(1)\leq X_k(1)1_{[X_k(1)\geq 2^k]}. \end{align*} $$
As $\alpha>1$, it follows from this and Corollary A.3 that there exists $C>0$ such that for all $k\in {\mathbb N}$,
$$ \begin{align*} \int|f_k|\,dm&=\mathbb{E}(Z_k(1))\\ &\leq \mathbb{E}(X_k(1)1_{[X_k(1)\geq 2^k]})\leq C\frac{2^{k(1-\alpha)}}{k}. \end{align*} $$
We conclude that
$$ \begin{align*} \sum_{k=1}^\infty\int |h_k|\,dm\leq C\sum_{k=1}^\infty \frac{2^{k(1-\alpha)}}{k}<\infty. \end{align*} $$
Following this, we write $h=\sum _{k=1}^\infty h_k$; throughout this subsection and §5, h always denotes this function. Note that for every $k\in {\mathbb N}$, $\mathbb {E}(X_k(1)1_{[X_k(1)\leq 2^k]})$ exists, and write
$$ \begin{align*} B_n:=n\sum_{k=({1}/{2\alpha})\log(n)}^{({1}/{\alpha})\log(n)}\mathbb{E}(X_k(1)1_{[X_k(1)\leq 2^k]}). \end{align*} $$
Theorem 2.8. Assume $\alpha \in (1,2)$. Then $({S_n(h)+B_n})/{n^{1/\alpha }}$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
The following claim gives the asymptotics of $B_n$.
Claim 2.9. For every $\alpha \in (1,2)$, there exists $c_\alpha>0$ such that $B_n=c_\alpha n(\log (n))^{1-{1}/{\alpha }} (1+o(1))$ as $n\to \infty $.
Proof. Recall that $\sigma _k=k^{-1/\alpha }$. Since ${2^k}/{\sigma _k}\to \infty $ as $k\to \infty $, it follows from the monotone convergence theorem that if Z is an $S_\alpha (1,1,0)$ random variable, then
$$ \begin{align*} \lim_{k\to\infty}\mathbb{E}(Z1_{[\sigma_kZ\leq 2^k]})=\mathbb{E}(Z)=:\eta_\alpha>0. \end{align*} $$
Now for every k, $X_k(j)$ and $\sigma _kZ$ are equally distributed. Consequently,
$$ \begin{align*} \mathbb{E}(X_k(1)1_{[X_k(1)\leq 2^k]})=\sigma_k\mathbb{E}(Z1_{[\sigma_kZ\leq 2^k]})=\sigma_k\eta_\alpha(1+o_{k\to\infty}(1)). \end{align*} $$
The claimed asymptotics now follow from this and
$$ \begin{align*} \sum_{k=({1}/{2\alpha})\log(n)}^{({1}/{\alpha})\log(n)}\sigma_k\sim \frac{\alpha}{\alpha-1}\bigg(\frac{1}{\alpha}\log(n)\bigg)^{1-{1}/{\alpha}}(1-2^{{1}/{\alpha}-1})\quad\text{as } n\to\infty. \end{align*} $$
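For completeness, the last display is a routine integral comparison: since $x\mapsto x^{-1/\alpha }$ is decreasing,
$$ \begin{align*} \sum_{k=({1}/{2\alpha})\log(n)}^{({1}/{\alpha})\log(n)}k^{-1/\alpha}\sim\int_{({1}/{2\alpha})\log(n)}^{({1}/{\alpha})\log(n)}x^{-1/\alpha}\,dx=\frac{\alpha}{\alpha-1}\bigg(\frac{1}{\alpha}\log(n)\bigg)^{1-{1}/{\alpha}}(1-2^{({1}/{\alpha})-1}). \end{align*} $$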
Now write
 $$ \begin{align*} \hat{h}_k:=g_k-\frac{1}{D_k}\sum_{j=0}^{D_k-1}g_k\circ T^j \end{align*} $$
and $\hat {h}:=\sum _{k=1}^\infty \hat {h}_k$. Note that $\hat {h}$ is well defined since, for all k, $\hat {h}_k=h_k\circ T^{4^{k^2}}$, so $\hat {h}$ is a limit in $L^1$ by Lemma 2.7.
Theorem 2.10. Assume $\alpha>1$. Fix $\beta \in [-1,1]$ and define
$$ \begin{align*} H:=\bigg(\frac{1+\beta}{2}\bigg)^{1/\alpha}h-\bigg(\frac{1-\beta}{2}\bigg)^{1/\alpha}\hat{h}. \end{align*} $$
Then $({1}/{n^{1/\alpha }})(S_n(H)+B_n((({1+\beta })/{2})^{1/\alpha }-(({1-\beta })/{2})^{1/\alpha }))$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ random variable.
2.4 Strategy of the proof of Theorems 2.5 and 2.6
The proof starts by writing, for $\psi \in \{h,f,g\}$,
$$ \begin{align} {\mathbb{W}}_n(\psi)= {\mathbb{W}}_n^{(\mathbf{S})}(\psi)+{\mathbb{W}}_n^{(\mathbf{M})}(\psi)+{\mathbb{W}}_n^{(\mathbf{L})}(\psi) \end{align} $$
where
$$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{S})}(\psi)&:= \sum_{k=1}^{({1}/{2\alpha})\log(n)} {\mathbb{W}}_n(\psi_k),\\ {\mathbb{W}}_n^{(\mathbf{M})}(\psi)&:= \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}{\mathbb{W}}_n(\psi_k),\\ {\mathbb{W}}_n^{(\mathbf{L})}(\psi)&:=\sum_{k=({1}/{\alpha})\log n+1}^\infty {\mathbb{W}}_n(\psi_k). \end{align*} $$
Writing $\|\cdot \|_\infty $ for the supremum norm, we first show that $\| {\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty $ and $\| {\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty $ converge to $0$ in probability; hence the two processes converge to the zero function in the uniform (and consequently the $J_1$) topology.
Next we show that ${\mathbb {W}}_n^{(\mathbf {M})}(h)$ converges in distribution (in the $J_1$ topology) to the correct limiting process.
Finally, we use Slutsky’s theorem, also known as the convergence together lemma, in the (Polish) Skorohod $J_1$ topology, to deduce the weak convergence result for ${\mathbb {W}}_n(h)$.
Lemma 2.11. Let $A_n,B_n$ and W be $D[0,1]$-valued processes such that $A_n\Rightarrow ^d0$ in the uniform topology and $B_n\Rightarrow W$ in the $J_1$ topology. Then $A_n+B_n\Rightarrow ^dW$ in the $J_1$ topology.
We remark that Lemma 2.11 follows from [Reference Billingsley2, Theorem 3.1] and the fact that the uniform topology is stronger than the $J_1$ topology on $D[0,1]$.
3 Proof of Theorem 2.5
We carry out the proof strategy as stated in §2.4. In what follows, $(X,{\mathcal B},m,T)$ is an ergodic, aperiodic probability-preserving system, $\beta \in [-1,1]$ and $\alpha \in (0,1)$ are fixed, and the functions $f_k$ are as in Theorem 2.5.
This section has two subsections. In the first we prove results on ${\mathbb {W}}_n^{(\mathbf {S})}(f)$, ${\mathbb {W}}_n^{(\mathbf {M})}(f)$ and ${\mathbb {W}}_n^{(\mathbf {L})}(f)$. These results combined prove Theorem 2.5 in the totally skewed to the right ($\beta =1$) case. In the second subsection we show how to deduce Theorem 2.5 from these results.
3.1 Case $\beta =1$
Lemma 3.1. We have $\lim _{n\to \infty }m(\|{\mathbb {W}}_n^{(\mathbf {L})}(f)\|_\infty \neq 0)=0$.
Proof. The statement follows from the inclusion
 $$ \begin{align*} [\|{\mathbb{W}}_n^{(\mathbf{L})}(f)\|_\infty \neq 0]\subset \bigcup_{k=({1}/{\alpha})\log(n)+1}^\infty \bigcup_{j=0}^{n-1}[f_k\circ T^j\neq 0]. \end{align*} $$
Therefore,
 $$ \begin{align*} m(\|{\mathbb{W}}_n^{(\mathbf{L})}(f)\|_\infty\neq 0) &\leq \sum_{k=({1}/{\alpha})\log(n)+1}^\infty \sum_{j=0}^{n-1}m(f_k\circ T^j\neq 0)\\ &=\sum_{k=({1}/{\alpha})\log(n)+1}^\infty n \cdot m(f_k\neq 0)\\ &\leq C n\sum_{k=({1}/{\alpha})\log(n)+1}^\infty \frac{2^{-\alpha k}}{k}, \end{align*} $$
where the last inequality is from the proof of Lemma 2.4. The result now follows since
$$ \begin{align*} n\sum_{k=({1}/{\alpha})\log(n)+1}^\infty \frac{2^{-\alpha k}}{k}\lesssim \frac{1}{\log(n)}\xrightarrow[n\to\infty]{}0. \end{align*} $$
Lemma 3.2. The random variable $\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty $ converges to $0$ in measure.
Proof. Recall that for all $k\in {\mathbb N}$, $f_k$ is distributed as $Z_k(1)$, whence $f_k\geq 0$ and
$$ \begin{align*} \|{\mathbb{W}}_n^{(\mathbf{S})}(f)\|_\infty&=\max_{0\leq t\leq 1}\bigg| \frac{1}{n^{1/\alpha}} \sum_{k=1}^{({1}/{2\alpha})\log(n)}\bigg(\sum_{j=0}^{[nt]-1}f_k\circ T^j \bigg)\bigg|\\ &= \frac{1}{n^{1/\alpha}} \sum_{k=1}^{({1}/{2\alpha})\log(n)}\bigg(\sum_{j=0}^{n-1}f_k\circ T^j \bigg). \end{align*} $$
For every $k,j\in {\mathbb N}$, $f_k\circ T^j$ is distributed as $Z_k(1)$ and
$$ \begin{align*}0\leq Z_k(1)\leq X_k(1)\mathbf{1}_{[0\leq X_k(1)\leq 4^k]}.\end{align*} $$
By Corollary A.2, there exists $C>0$ such that for all $k,j\in {\mathbb N}$,
$$ \begin{align*} \|f_k\circ T^j\|_{1}&= \mathbb{E}(Z_k(1))\\ &\leq \mathbb{E}(X_k(1) \mathbf{1}_{[0\leq X_k(1)\leq 4^k]})\leq C\frac{4^{k(1-\alpha)}}{k}. \end{align*} $$
Consequently,
 $$ \begin{align*} \| \|{\mathbb{W}}_n^{(\mathbf{S})}(f)\|_\infty\|_1 &\leq n^{-{1}/{\alpha}}\sum_{k=1}^{({1}/{2\alpha})\log(n)}\sum_{j=0}^{n-1} \|f_k\circ T^j\|_1\\ &\leq n^{-{1}/{\alpha}}\sum_{k=1}^{({1}/{2\alpha})\log(n)}Cn\frac{4^{(1-\alpha)k}}{k}\lesssim \frac{1}{\log(n)}\xrightarrow[n\to\infty]{}0. \end{align*} $$
A standard application of the Markov inequality shows that $\|{\mathbb {W}}_n^{(\mathbf {S})}(f)\|_\infty $ converges to $0$ in measure, concluding the proof.
The rest of this subsection is concerned with the proof of the following result for ${\mathbb {W}}_n^{(\mathbf {M})}(f)$.
Proposition 3.3. The random variable ${\mathbb {W}}_n^{(\mathbf {M})}(f)$ converges in distribution to ${\mathbb {W}}$, an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion.
For $V\in \{X,Y,Z\}$ and $k,n\in {\mathbb N}$, define
$$ \begin{align*} S_n(V_k):=\sum_{j=1}^n V_k(j). \end{align*} $$
We introduce the following $D[0,1]$-valued processes on $(\Omega ,\mathcal {F},{\mathbb P})$:
$$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{M})}(Z)(t)&:=\frac{1}{n^{1/\alpha}}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(Z_k(\cdot)),\\ {\mathbb{W}}_n^{(\mathbf{M})}(Y)(t)&:=\frac{1}{n^{1/\alpha}}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(Y_k(\cdot)),\\ {\mathbb{W}}_n^{(\mathbf{M})}(X)(t)&:=\frac{1}{n^{1/\alpha}}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(X_k(\cdot)). \end{align*} $$
The reason for their definition is the following lemma.
Lemma 3.4. The random variables ${\mathbb {W}}_n^{(\mathbf {M})}(f)$ and ${\mathbb {W}}_n^{(\mathbf {M})}(Z)$ are equally distributed.
Proof. By the definition of $f_k$, the families $\{f_k\circ T^{j-1}:\ k\in {\mathbb N},\ 1\leq j\leq 4^k\}$ and $\{Z_k(j):\ k\in {\mathbb N},\ 1\leq j\leq 4^k\}$ are equally distributed.
The function $G_n: \prod _{k\in {\mathbb N}} {\mathbb R}^{2\cdot 4^k}\to D[0,1]$, defined for all $0\leq t\leq 1$ and $(x_k(j))_{k\in {\mathbb N},\ 1\leq j\leq 4^k}\in \prod _{k\in {\mathbb N}} {\mathbb R}^{2\cdot 4^k}$ by
$$ \begin{align*} G_n((x_k(j))_{\ k\in{\mathbb N},\ 1\leq j\leq 4^k})(t):=\frac{1}{n^{1/\alpha}}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^{[nt]}x_k(j), \end{align*} $$
is continuous.
As $G_n((f_k\circ T^{j-1})_{\ k\in {\mathbb N},\ 1\leq j\leq 4^k})={\mathbb {W}}_n^{(\mathbf {M})}(f)$ and similarly $G_n((Z_k(j))_{\ k\in {\mathbb N},\ 1\leq j\leq 4^k})={\mathbb {W}}_n^{(\mathbf {M})}(Z)$, the claim follows from the continuous mapping theorem.
Using this equality of distributions, it suffices to show that ${\mathbb {W}}_n^{(\mathbf {M})}(Z)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion. This follows from the convergence together lemma (Lemma 2.11) and the following result.
Lemma 3.5. The following two properties are satisfied.
- (a) The sequence of random variables $\|{\mathbb{W}}_n^{(\mathbf{M})}(X)-{\mathbb{W}}_n^{(\mathbf{M})}(Z)\|_{\infty}$ converges to $0$ in measure.
- (b) The sequence of $D[0,1]$-valued random variables ${\mathbb{W}}_n^{(\mathbf{M})}(X)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion.
Proof of Lemma 3.5(a)
For every $k,m\in {\mathbb N}$ (noting here that, as $\alpha <1$, an $\alpha $-stable random variable which is totally skewed to the right is non-negative),
$$ \begin{align*} 0\leq Z_k(m)\leq Y_k(m)\leq X_k(m). \end{align*} $$
We deduce from this and the triangle inequality that
 $$ \begin{align*} \| {\mathbb{W}}_n^{(\mathbf{M})}(X)-{\mathbb{W}}_n^{(\mathbf{M})}(Z)\|_\infty\leq n^{-1/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}(S_n(X_k)-S_n(Z_k)). \end{align*} $$
We will show that the right-hand side converges to $0$ in probability.
Firstly, $0<\alpha <1$, hence for all $k>({1}/{2\alpha })\log (n)$ we have $n<4^k$ (indeed, $4^k>n^{1/\alpha }\geq n$). Consequently, by Fact 2.1,
$$ \begin{align*} 0\leq S_n(Y_k)-S_n(Z_k)\leq n4^{-k}\leq 1. \end{align*} $$
We conclude that
 $$ \begin{align} \mathrm{I}_n:=n^{-1/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}(S_n(Y_k)-S_n(Z_k))\lesssim \frac{\log(n)}{n^{1/\alpha}}. \end{align} $$
Secondly,
 $$ \begin{align*} n^{-1/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha}) \log(n)}(S_n(X_k)-S_n(Y_k))=\mathrm{II}_n+\mathrm{III}_n \end{align*} $$
where
 $$ \begin{align*} \widetilde{V}_k(m):=X_k(m)1_{[X_k(m)>4^k]},\quad\widehat{V}_k(m):=X_k(m)1_{[X_k(m)\leq 2^k]}, \end{align*} $$
and
 $$ \begin{align*} \mathrm{II}_n&:=n^{-1/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_n(\widehat{V}_k),\\ \mathrm{III}_n&:=n^{-1/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_n(\widetilde{V}_k). \end{align*} $$
Similarly to the proof of Lemma 3.1,
 $$ \begin{align*} {\mathbb P}(\mathrm{III}_n\neq 0)&\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n{\mathbb P}(X_k(j)>4^k)\\ &\lesssim Cn\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{4^{-k\alpha}}{k}\lesssim \frac{1}{\log(n)}, \end{align*} $$
showing that $\mathrm {III}_n\xrightarrow [n\to \infty ]{}0$ in probability.
We now fix $\alpha <r<1$ and $\varepsilon>0$. Note that by Corollary A.2 there exists a global constant C so that for every k and m,
$$ \begin{align*} \mathbb{E}((\widehat{V}_k(m))^r)\leq C\frac{2^{k(r-\alpha)}}{k}. \end{align*} $$
By Markov’s inequality and the triangle inequality for the r’th moments,
 $$ \begin{align*} {\mathbb P}(\mathrm{II}_n>\varepsilon)&\leq \mathbb{E}((\mathrm{II}_n)^r)\varepsilon^{-r}\\ &\leq n^{-r/\alpha}\varepsilon^{-r}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n\mathbb{E}((\widehat{V}_k(j))^r)\\ &\lesssim \varepsilon^{-r}n^{1-r/\alpha}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{2^{k(r-\alpha)}}{k}\lesssim \varepsilon^{-r}\frac{1}{\log(n)}. \end{align*} $$
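Here the triangle inequality for the rth moments refers to the elementary subadditivity bound, valid for $0<r\leq 1$ and real $w_1,\ldots ,w_N$:
$$ \begin{align*} \bigg|\sum_{i=1}^N w_i\bigg|^r\leq \sum_{i=1}^N|w_i|^r. \end{align*} $$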
We conclude that $\mathrm {II}_n\xrightarrow [n\to \infty ]{}0$ in probability. Finally, the proof is complete since
$$ \begin{align*} \| {\mathbb{W}}_n^{(\mathbf{M})}(X)-{\mathbb{W}}_n^{(\mathbf{M})}(Z)\|_\infty\leq \mathrm{I}_n+\mathrm{II}_n+\mathrm{III}_n \end{align*} $$
and each of the terms on the right-hand side converges to $0$ in probability.
Proof of Lemma 3.5(b)
 For all $0\leq t\leq 1$,
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{M})}(X)(t)=n^{-1/\alpha}S_{[nt]}(V_n), \end{align*} $$
where for $j\in {\mathbb N}$,
 $$ \begin{align*} V_n(j):=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}X_k(j). \end{align*} $$
We claim that $V_n(1),V_n(2),\ldots ,V_n(n)$ are i.i.d. $S_\alpha (A_n,1,0)$ random variables with $\lim _{n\to \infty }(A_n)^\alpha =\ln (2)$.
 Indeed, since $\alpha <1$, we deduce that for all $k>({1}/{2\alpha })\log (n)$, we have $4^{k}>n$. The independence of $V_n(1),V_n(2),\ldots ,V_n(n)$ readily follows from the independence of $\{X_k(m):k\in {\mathbb N},\ m\leq 4^{k}\}$.
 Now for all $1\leq j\leq n$ and $k\in (({1}/{2\alpha })\log (n),({1}/{\alpha })\log (n)]$, $X_k(j)$ is an $S_\alpha (\sigma _k,1,0)$ random variable with $(\sigma _k)^\alpha =1/k$. As $V_n(j)$ is a sum of independent $S_\alpha (\sigma _k,1,0)$ random variables (and $\alpha \neq 1$), it follows from [Reference Samorodnitsky and Taqqu11, Properties 1.2.1 and 1.2.3] that $V_n(j)$ is $S_\alpha (A_n,1,0)$ distributed with
 $$ \begin{align*} (A_n)^\alpha=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{1}{k}\sim \ln(2)\quad\text{as}\ n\to\infty. \end{align*} $$
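The constant $\ln (2)$ arises as a harmonic-sum difference:
 $$ \begin{align*} \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{1}{k}=\ln\bigg(\frac{({1}/{\alpha})\log(n)}{({1}/{2\alpha})\log(n)}\bigg)+o(1)=\ln(2)+o(1)\quad\text{as } n\to\infty. \end{align*} $$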
We will now conclude the proof. Write $a_n:={(\ln (2))^{1/\alpha }}/{A_n}$ and define $W_n(t):=a_n{\mathbb {W}}_n^{(\mathbf {M})}(X)(t)$ so that $W_n$ is the partial sum process driven by the random variables $a_nV_n(1),\ldots , a_nV_n(n)$.
 As the latter are i.i.d. $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variables, this shows that $W_n$ is equally distributed as $ {\mathbb {W}}_n(V)$ where $(V(j))_{j=1}^\infty $ are i.i.d. $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variables.
 By [Reference Resnick10, Corollary 7.1], ${\mathbb {W}}_n(V)$ (and hence $W_n$) converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion.
 Since ${\mathbb {W}}_n^{(\mathbf {M})}(X)=(a_n)^{-1}W_n$ with $a_n\to 1$, we conclude that ${\mathbb {W}}_n^{(\mathbf {M})}(X)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motion.
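The stable-scale bookkeeping in this proof is easy to probe numerically. The following minimal Monte Carlo sketch (outside the formal argument) samples $V_n(j)=\sum _k X_k(j)$ and compares it with the predicted $S_\alpha (A_n,1,0)$ law; it assumes SciPy's `levy_stable` in its default S1 parameterization, which for $\alpha \neq 1$ matches the $S_\alpha (\sigma ,\beta ,\mu )$ notation used here, and takes $\log =\log _2$.

```python
# Monte Carlo sketch of Lemma 3.5(b): V_n(j), a sum of independent totally
# skewed alpha-stables with scales sigma_k = k**(-1/alpha) over
# k in ((1/(2*alpha))log n, (1/alpha)log n], should be close to
# S_alpha(A_n, 1, 0) with (A_n)**alpha ~ ln 2.
import numpy as np
from scipy.stats import levy_stable

alpha = 0.7                      # any alpha in (0, 1)
n = 2 ** 20
log2n = int(np.log2(n))
k_lo = int(log2n / (2 * alpha)) + 1
k_hi = int(log2n / alpha)

n_samples = 20000
V = np.zeros(n_samples)
for k in range(k_lo, k_hi + 1):
    V += levy_stable.rvs(alpha, 1.0, scale=(1.0 / k) ** (1 / alpha),
                         size=n_samples, random_state=k)

A_alpha = sum(1.0 / k for k in range(k_lo, k_hi + 1))
print("(A_n)^alpha =", A_alpha, "  ln(2) =", np.log(2))

# Empirical quantiles of V against the quantiles of S_alpha(A_n, 1, 0).
q = np.linspace(0.1, 0.9, 5)
pred = levy_stable.ppf(q, alpha, 1.0, scale=A_alpha ** (1 / alpha))
print(np.round(np.quantile(V, q), 3))
print(np.round(pred, 3))
```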
3.2 Concluding the proof of Theorem 2.5
 We now fix $\alpha \in (0,1)$ and $\beta \in [-1,1]$ and set $h_k$, h as the functions from Theorem 2.5 corresponding to $\beta $. We claim that $ {\mathbb {W}}_n(h) $ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion.
 We deduce this claim from the results on the skewed $\beta =1$ case via the following lemma.
Lemma 3.6.
- 
(a) The sequence of $D[0,1]\times D[0,1]$ valued random variables $({\mathbb {W}}_n^{(\mathbf {S})}(f),{\mathbb {W}}_n^{(\mathbf {S})}(g))$ converges in distribution (in the uniform topology) to $(0,0)$.
- 
(b) The sequence of $D[0,1]\times D[0,1]$ valued random variables $({\mathbb {W}}_n^{(\mathbf {M})}(f),{\mathbb {W}}_n^{(\mathbf {M})}(g))$ converges in distribution to $({\mathbb {W}},{\mathbb {W}}')$ where ${\mathbb {W}},{\mathbb {W}}'$ are independent $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motions.
- 
(c) The sequence of $D[0,1]\times D[0,1]$ valued random variables $({\mathbb {W}}_n^{(\mathbf {L})}(f),{\mathbb {W}}_n^{(\mathbf {L})}(g))$ converges in distribution (in the uniform topology) to $(0,0)$.
Proof. For all $k\in \mathbb {N}$, $f_k$ and $g_k$ are equally distributed. Following the proofs of Lemmas 3.1 and 3.2 we see that $ \|{\mathbb{W}}_n^{(\mathbf {S})}(g)\|_\infty $ and $\|{\mathbb{W}}_n^{(\mathbf {L})}(g)\|_\infty $ tend to $0$ in probability as $n\to \infty $. Parts (a) and (c) follow from this and Lemmas 3.1 and 3.2.
 Now for all $n\in {\mathbb N}$, ${\mathbb {W}}_n^{(\mathbf {M})}(f)$ and ${\mathbb {W}}_n^{(\mathbf {M})}(g)$ are independent and equally distributed. Part (b) follows from this and Proposition 3.3.
We have the following immediate corollary.
Corollary 3.7. The following three properties are satisfied:
- 
(a) $\|{\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty \to 0$ in measure;
- 
(b) ${\mathbb {W}}_n^{(\mathbf {M})}(h)$ converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion;
- 
(c) $\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \to 0$ in measure.
Proof. Set
 $$ \begin{align*} \varphi(x,y)=\bigg(\frac{1+\beta}{2}\bigg)^{1/\alpha}x-\bigg(\frac{1-\beta}{2}\bigg)^{1/\alpha}y \end{align*} $$
and write $c_\beta :=((({\beta +1})/{2})^{1/\alpha }-(({1-\beta })/{2})^{1/\alpha })$.
 For each $\mathbf {D}\in \{\mathbf {S},\mathbf {M},\mathbf {L}\}$ and all $n\in {\mathbb N}$,
 $$ \begin{align*} \varphi({\mathbb{W}}_n^{(\mathbf{D})}(f),{\mathbb{W}}_n^{(\mathbf{D})}(g))={\mathbb{W}}_n^{(\mathbf{D})}(h). \end{align*} $$
 Parts (a) and (c) follow from Lemma 3.6(a) and (c) since for all $x,y\in {\mathbb R}$, $|\varphi (x,y)|\leq |x|+|y|$.
 Let ${\mathbb {W}},{\mathbb {W}}'$ be two independent $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ Lévy motions. It follows that $\widetilde {{\mathbb {W}}}:=\varphi ({\mathbb {W}},{\mathbb {W}}')$ is a process with independent increments. By [Reference Samorodnitsky and Taqqu11, Property 1.2.13], for all $s<t$, $\widetilde {{\mathbb {W}}}(t)-\widetilde {{\mathbb {W}}}(s)$ is $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)(t-s)},\beta ,0)$ distributed, whence $\widetilde {{\mathbb {W}}}$ is an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},\beta ,0)$ Lévy motion.
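The skewness mixing performed by $\varphi $ can likewise be checked numerically. A minimal sketch (same SciPy assumptions as above): if $X,X'$ are i.i.d. $S_\alpha (\sigma ,1,0)$, then $\varphi (X,X')$ should be $S_\alpha (\sigma ,\beta ,0)$.

```python
# Sanity check: for X, X' i.i.d. S_alpha(sigma, 1, 0),
# phi(X, X') = ((1+beta)/2)**(1/alpha) * X - ((1-beta)/2)**(1/alpha) * X'
# should be S_alpha(sigma, beta, 0).
import numpy as np
from scipy.stats import levy_stable

alpha, beta = 0.8, 0.3
sigma = np.log(2) ** (1 / alpha)   # the scale appearing in Corollary 3.7
N = 200_000
X = levy_stable.rvs(alpha, 1.0, scale=sigma, size=N, random_state=1)
Xp = levy_stable.rvs(alpha, 1.0, scale=sigma, size=N, random_state=2)
Y = ((1 + beta) / 2) ** (1 / alpha) * X - ((1 - beta) / 2) ** (1 / alpha) * Xp

q = np.linspace(0.1, 0.9, 5)
print(np.round(np.quantile(Y, q), 3))                             # empirical
print(np.round(levy_stable.ppf(q, alpha, beta, scale=sigma), 3))  # predicted
```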
 Since $\varphi $ is continuous, Lemma 3.6 and the continuous mapping theorem imply that
 $$ \begin{align*} \varphi({\mathbb{W}}_n^{(\mathbf{M})}(f),{\mathbb{W}}_n^{(\mathbf{M})}(g))={\mathbb{W}}_n^{(\mathbf{M})}(h) \end{align*} $$
converges in distribution to $\widetilde {{\mathbb {W}}}$ and the proof is concluded.
4 Proof of Theorem 2.6
 Let $\alpha \geq 1$. The strategy of the proof goes along similar lines. However, there is a major difference in the treatment of ${\mathbb {W}}_n^{(\mathbf {S})}$ as the $L^1$ norm does not decay to $0$. For this reason we resort to a more sophisticated $L^2$ estimate and make use of the fact that for all k, $h_k$ is a $T^{4^{k^2}}$ coboundary.
 In what follows, $1\leq \alpha <2$ is fixed, $h_k$ and h are as in the statement of Theorem 2.6, and the decomposition of ${\mathbb {W}}_n(h)$ into a sum of ${\mathbb {W}}_n^{(\mathbf {S})}(h)$, ${\mathbb{W}}_n^{(\mathbf{M})}(h)$ and ${\mathbb{W}}_n^{(\mathbf{L})}(h)$ is as before. We write $d_k:=4^{k^2}$.
Lemma 4.1. We have $\lim _{n\to \infty }m(\|{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty \neq 0)=0$.
Proof. The statement follows from the inclusion
 $$ \begin{align*} [\|{\mathbb{W}}_n^{(\mathbf{L})}(h)\|_\infty \neq 0]\subset \bigcup_{k=({1}/{\alpha})\log(n)+1}^\infty \bigcup_{j=0}^{n-1}[f_k\circ T^j\neq 0\quad \text{or}\quad f_k\circ T^{d_k+j}\neq 0]. \end{align*} $$
In a similar way to the proof of Lemma 3.1, we have
 $$ \begin{align*} {\mathbb P}(\|{\mathbb{W}}_n^{(\mathbf{L})}(h)\|_\infty \neq 0)\leq \sum_{k=({1}/{\alpha})\log(n)+1}^\infty 2n \cdot m(f_k\neq 0)\lesssim \frac{1}{\log(n)}\xrightarrow[n\to\infty]{}0. \end{align*} $$
As before, we also have the following lemma.
Lemma 4.2. The random variable $\|{\mathbb {W}}_n^{(\mathbf {S})}(h)\|_\infty$ converges to $0$ in measure.
 The proof of this lemma when $1\leq \alpha <2$ is more difficult than that of the analogous Lemma 3.2. It is given in §4.2.
Proposition 4.3. The sequence of $D[0,1]$ valued random variables ${\mathbb {W}}_n^{(\mathbf {M})}(h)$ converges in distribution to ${\mathbb {W}}$, an $S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$ Lévy motion.
Assuming the previous claims, we can complete the proof of Theorem 2.6.
Proof of Theorem 2.6
 By Lemmas 4.1 and 4.2, $\|{\mathbb {W}}_n^{(\mathbf {S})}(h)+{\mathbb {W}}_n^{(\mathbf {L})}(h)\|_\infty $ converges in probability to $0$. The claim now follows from Proposition 4.3 and Lemma 2.11.
In the next two subsections we prove Proposition 4.3 and Lemma 4.2.
4.1 Proof of Proposition 4.3
 We introduce the following $D[0,1]$-valued processes on $(\Omega ,\mathcal {F},{\mathbb P})$:
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{M})}(Z)(t)&:=\frac{1}{n^{1/\alpha}} \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(Z_k(\cdot)-Z_k(\cdot+d_k)),\\ {\mathbb{W}}_n^{(\mathbf{M})}(Y)(t)&:=\frac{1}{n^{1/\alpha}} \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(Y_k(\cdot)-Y_k(\cdot+d_k)),\\ {\mathbb{W}}_n^{(\mathbf{M})}(X)(t)&:=\frac{1}{n^{1/\alpha}} \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(X_k(\cdot)-X_k(\cdot+d_k)). \end{align*} $$
The following is the analogue of Lemma 3.4 for the current case.
Lemma 4.4. The random variables ${\mathbb {W}}_n^{(\mathbf {M})}(h)$ and ${\mathbb {W}}_n^{(\mathbf {M})}(Z)$ are equally distributed.
The proof of Lemma 4.4 is similar to the proof of Lemma 3.4, with obvious modifications. We leave it to the reader. Proposition 4.3 follows from Lemma 4.4 and the following result.
Lemma 4.5. The following two properties are satisfied.
- 
(a) The sequence of random variables $ \| {\mathbb {W}}_n^{(\mathbf {M})}(X)-{\mathbb {W}}_n^{(\mathbf {M})}(Z)\|_\infty$ converges to $0$ in measure.
- 
(b) The sequence of $D[0,1]$ valued random variables ${\mathbb {W}}_n^{(\mathbf {M})}(X)$ converges in distribution to an $S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$ Lévy motion.
Consequently, ${\mathbb {W}}_n^{(\mathbf {M})}(Z)$ converges in distribution to an $S_\alpha S(\kern -2pt\sqrt [\alpha ]{2\ln (2)})$ Lévy motion.
Proof of Lemma 4.5(b)
 For all $0\leq t\leq 1$,
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{M})}(X)(t)=n^{-1/\alpha}S_{[nt]}(V_n), \end{align*} $$
where for $j\in {\mathbb N}$,
 $$ \begin{align*} V_n(j):=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}(X_k(j)-X_k(j+d_k)). \end{align*} $$
We claim that for all but finitely many n, $V_n(1),V_n(2),\ldots ,V_n(n)$ are i.i.d. $S_\alpha S(A_n)$ random variables with $\lim _{n\to \infty }(A_n)^\alpha =2\ln (2)$.
 For all $n\geq 2^{4\alpha }$, if $k\geq ({1}/{2\alpha })\log (n)$, we have $d_k\geq n$. For all such n, the independence of $V_n(1),\ldots ,V_n(n)$ follows from the independence of $\{X_k(j):\ k\in {\mathbb N},1\leq j\leq 2 d_k\}$. We will now calculate their distribution.
 For all $1\leq j\leq n$ and $k>({1}/{2\alpha })\log (n)$, $X_k(j)-X_k(j+d_k)$ is a difference of two independent $S_\alpha (k^{-1/\alpha },1,0)$ random variables. By [Reference Samorodnitsky and Taqqu11, Properties 1.2.1 and 1.2.3], it is $S_\alpha S(({2}/{k})^{1/\alpha })$ distributed. As $V_n(j)$ is a sum of independent $S_\alpha S$ random variables, we see that $V_n(j)$ is $S_\alpha S(A_n)$ distributed with
 $$ \begin{align*} (A_n)^\alpha:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{2}{k}=2\ln(2)(1+o(1))\quad\text{as}\ n\to\infty. \end{align*} $$
This concludes the claim on $V_n(1),\ldots ,V_n(n)$. From here, the conclusion of the statement is similar to the end of the proof of Lemma 3.5(b).
Proof of Lemma 4.5(a)
 We assume $n>2^{4\alpha }$ so that for all $k>({1}/{2\alpha })\log (n)$, $d_k>n$.
 Firstly, since for all $k\in {\mathbb N}$ and $j\leq 2d_k$,
 $$ \begin{align*} 0<Y_k(j)-Z_k(j)\leq 4^{-{k}}, \end{align*} $$
we have
 $$ \begin{align*} n^{1/\alpha}\| {\mathbb{W}}_n^{(\mathbf{M})}(Y)-({\mathbb{W}}_n^{(\mathbf{M})}(Z))\|_\infty&\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_n(Y_k(\cdot)-Z_k(\cdot))\\ &\ \ \ +\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_n(Y_k(\cdot+d_k)-Z_k(\cdot+d_k))\\ &\leq 2n\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}4^{-k}\lesssim n^{1-{1}/{\alpha}}. \end{align*} $$
Consequently,
 $$ \begin{align} \| {\mathbb{W}}_n^{(\mathbf{M})}(Y)-{\mathbb{W}}_n^{(\mathbf{M})}(Z)\|_\infty\lesssim n^{1-{2}/{\alpha}}\xrightarrow[n\to\infty]{}0. \end{align} $$
We now look at ${\mathbb {W}}_n^{(\mathbf {M})}(X)-{\mathbb {W}}_n^{(\mathbf {M})}(Y)$. For all $0\leq t \leq 1$,
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{M})}(X)(t)-{\mathbb{W}}_n^{(\mathbf{M})}(Y)(t)=\mathrm{II}_n(t)+\mathrm{III}_n(t), \end{align*} $$
where
 $$ \begin{align*} \widetilde{V}_k(m)&:=X_k(m)1_{[X_k(m)>4^k]}-X_k\big(m+d_k\big)1_{[X_k\big(m+d_k\big)> 4^k]},\\ \widehat{V}(m)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}(X_k(m)1_{[X_k(m)\leq 2^k]}-X_k\big(m+d_k\big)1_{[X_k\big(m+d_k\big)\leq 2^k]}), \end{align*} $$
and
 $$ \begin{align*} \mathrm{II}_n(t)&:=n^{-1/\alpha}S_{[nt]}(\widehat{V}),\\ \mathrm{III}_n(t)&:=n^{-1/\alpha}\sum_{k=({1}/{2\alpha}) \log(n)+1}^{({1}/{\alpha})\log(n)}S_{[nt]}(\widetilde{V}_k). \end{align*} $$
Similarly to the proof of Lemma 3.1,
 $$ \begin{align*} {\mathbb P}(\text{there exists } t: \mathrm{III}_n(t)\neq 0)&\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n{\mathbb P}(\widetilde{V}_k(j)\neq 0)\\ &\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n({\mathbb P}(X_k(j)>4^k)+{\mathbb P}(X_k(j+d_k)>4^k))\\ &\lesssim 2Cn\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{4^{-k\alpha}}{k}\lesssim \frac{1}{\log(n)}. \end{align*} $$
Now $ \widehat {V}(j)$, $1\leq j \leq n$, are zero-mean, independent random variables. By Proposition A.4, they also have a finite second moment and for all $1\leq j\leq n$,
 $$ \begin{align} \mathbb{E}(\widehat{V}(j)^2)&\lesssim 2\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\mathbb{E}(X_k(j)^21_{[X_k(j)\leq 2^k]}) \nonumber \\ &\lesssim \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{2^{(2-\alpha)k}}{k}\lesssim \frac{n^{{2}/{\alpha}-1}}{\log(n)}. \end{align} $$
It follows from Kolmogorov’s maximal inequality that for every $\epsilon>0$,
 $$ \begin{align*} {\mathbb P}\Big(\max_{0\leq t\leq 1}|\mathrm{II}_n(t)|>\epsilon\Big)&= {\mathbb P}\Big(\max_{1\leq m\leq n}|S_m(\widehat{V})|>\epsilon n^{1/\alpha}\Big)\\ &\leq \epsilon^{-2}n^{-2/\alpha}\mathbb{E}(|S_n(\widehat{V})|^2)\\ &= \epsilon^{-2}n^{-2/\alpha}(n\mathbb{E}(\widehat{V}(1)^2))\lesssim \frac{1}{\epsilon^{2}\log(n)}. \end{align*} $$
Here the first equality of the last line is true as $\widehat {V}(1),\ldots ,\widehat {V}(n)$ are independent, zero-mean random variables with finite variance. This concludes the proof that
 $$ \begin{align*} \|\mathrm{II}_n\|_\infty\xrightarrow[n\to\infty]{}0\quad\text{in probability}. \end{align*} $$
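For reference, the maximal inequality used here (and again in §4.2) is the classical one: if $U_1,\ldots ,U_n$ are independent, zero-mean and square integrable, then for every $\lambda>0$,
 $$ \begin{align*} {\mathbb P}\bigg(\max_{1\leq m\leq n}\bigg|\sum_{j=1}^m U_j\bigg|>\lambda\bigg)\leq \lambda^{-2}\sum_{j=1}^n\mathbb{E}(U_j^2), \end{align*} $$
applied above with $U_j=\widehat {V}(j)$ and $\lambda =\epsilon n^{1/\alpha }$.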
The claim now follows from (3) and the convergence in probability of $\mathrm {II}_n,\mathrm {III}_n$ to the zero function.
4.2 Proof of Lemma 4.2
We first write
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{S})}(h)={\mathbb{W}}_n^{(\mathbf{VS})}(h)+{\mathbb{W}}_n^{(\mathbf{LS})}(h) \end{align*} $$
where
 $$ \begin{align*} {\mathbb{W}}_n^{(\mathbf{VS})}(h)&=\sum_{k=1}^{\sqrt{\log(n)}} {\mathbb{W}}_n(h_k),\\ {\mathbb{W}}_n^{(\mathbf{LS})}(h)&=\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)} {\mathbb{W}}_n(h_k). \end{align*} $$
The reason for this further decomposition is that $d_k>n$ if and only if $k>\sqrt {\log (n)}$, so that only in the very small $(\mathbf {VS})$ terms do we no longer have full independence of the summands. The proof that ${\mathbb {W}}_n^{(\mathbf {LS})}(h)$ tends to the zero function is quite similar to the proof of the last part of Lemma 4.5(a), while the proof for the other term makes use of the fact that we are dealing with coboundaries.
Lemma 4.6. The random variable $\|{\mathbb {W}}_n^{(\mathbf {LS})}(h)\|_\infty$ converges in measure to $0$.
Proof. Write
 $$ \begin{align*} \psi_n:=\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}(f_k-f_k\circ T^{d_k}). \end{align*} $$
We have that:
- 
• $\psi _n\circ T^j$, $1\leq j\leq n$, are independent (since $d_k>n$ for all k in the range of summation), bounded and $\int \psi _n \,dm=0$;
- 
• for all t, ${\mathbb {W}}_n^{(\mathbf {LS})}(h)(t)=n^{-1/\alpha }S_{[nt]}(\psi _n)$.
By Kolmogorov’s maximal inequality, for all $\epsilon>0$,
 $$ \begin{align*} m(\|{\mathbb{W}}_n^{(\mathbf{LS})}(h)\|_\infty>\epsilon)&=m(\max_{1\leq k\leq n}|S_k(\psi_n)|>\epsilon n^{1/\alpha})\\ &\leq \frac{\|S_n(\psi_n)\|_2^2}{\epsilon^{2}n^{2/\alpha}}=\epsilon^{-2}n^{1-{2}/{\alpha}}\|\psi_n\|_2^2, \end{align*} $$
where the last equality follows from $S_n(\psi _n)$ being a sum of zero-mean, square integrable, independent random variables. We will now give an upper bound for $\|\psi _n\|_2^2$. Firstly, $\{f_k-f_k\circ T^{d_k}:\ k>\sqrt {\log (n)}\}$ is distributed as $\{Z_k(1)-Z_k(d_k+1):\ k>\sqrt {\log (n)}\}$. Using in addition that for all $k\in {\mathbb N}$,
 $$ \begin{align*} Z_k(1)\leq Y_k(1)\leq |X_k(1)|1_{[X_k(1)\leq 4^k]}, \end{align*} $$
we observe that
 $$ \begin{align*} \|\psi_n\|_2^2&= \mathbb{E}\bigg(\bigg(\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}(Z_k(1)-Z_k(1+d_k))\bigg)^2\bigg)\\ &=\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}\mathbb{E}((Z_k(1)-Z_k(1+d_k))^2)\\ &\leq 4\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}\mathbb{E}((Z_k(1))^2)\\ &\leq 4\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}\mathbb{E}(|X_k(1)|^21_{[X_k(1)\leq 4^k]})\quad\text{(by Proposition A.4)}\\ &\lesssim 4\sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}C\frac{4^{(2-\alpha)k}}{k}\lesssim \frac{n^{{2}/{\alpha}-1}}{\log(n)}. \end{align*} $$
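The last step is the usual geometric-sum estimate: since $2-\alpha>0$, the sum is dominated by its top term $k=({1}/{2\alpha })\log (n)$, so that
 $$ \begin{align*} \sum_{k=\sqrt{\log(n)}+1}^{({1}/{2\alpha})\log(n)}\frac{4^{(2-\alpha)k}}{k}\lesssim \frac{4^{(2-\alpha)({1}/{2\alpha})\log(n)}}{\log(n)}=\frac{n^{({2-\alpha})/{\alpha}}}{\log(n)}=\frac{n^{{2}/{\alpha}-1}}{\log(n)}. \end{align*} $$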
Plugging this into the previous upper bound, we see that for all $\epsilon>0$,
 $$ \begin{align*} m(\|{\mathbb{W}}_n^{(\mathbf{LS})}(h)\|_\infty>\epsilon)\lesssim \epsilon^{-2}n^{1-{2}/{\alpha}}\|\psi_n\|_2^2\lesssim \frac{\epsilon^{-2}}{\log(n)}\xrightarrow[n\to\infty]{}0, \end{align*} $$
proving the claim.
 We now treat ${\mathbb {W}}_n^{(\mathbf {VS})}(h)$. As before, we define
 $$ \begin{align*} \varphi_n:=\sum_{k=1}^{\sqrt{\log(n)}}(f_k-f_k\circ T^{d_k}), \end{align*} $$
so that for all $t\in [0,1]$, ${\mathbb {W}}_n^{(\mathbf {VS})}(h)(t)=n^{-1/\alpha }S_{[nt]}(\varphi _n)$. It is no longer guaranteed that $\varphi _n,\ldots ,\varphi _n\circ T^n$ are independent. For this reason we can no longer bound the maximum using the Lévy inequality, and we will make use of a more general maximal inequality. The first step involves bounding the second moments of the increments $S_j(\varphi _n)-S_l(\varphi _n)$, and we make repeated use of the following crude bound.
Claim 4.7. Let $U_1,U_2,\ldots ,U_N$ be square integrable random variables. Then
 $$ \begin{align*} \mathbb{E}\bigg(\bigg(\sum_{j=1}^NU_j\bigg)^2\bigg)\leq N\sum_{j=1}^N \mathbb{E}(U_j^2). \end{align*} $$
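This follows from the convexity of $x\mapsto x^2$ (equivalently, the Cauchy–Schwarz inequality): pointwise,
 $$ \begin{align*} \bigg(\sum_{j=1}^NU_j\bigg)^2=N^2\bigg(\frac{1}{N}\sum_{j=1}^NU_j\bigg)^2\leq N\sum_{j=1}^NU_j^2, \end{align*} $$
and one takes expectations.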
Lemma 4.8. There exists a global constant $C>0$ such that for all $1\leq l<j\leq n$ and $1\leq k\leq \sqrt {\log (n)}$,
 $$ \begin{align*} \|S_j(f_k-f_k\circ T^{d_k})-S_l(f_k-f_k\circ T^{d_k})\|_2^2\le C(j-l)\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
Proof. Let $\mu _k:=\int f_k\,dm$ and write $F_k:=f_k-\mu _k$. For every $j\leq n$,
 $$ \begin{align*} S_j(f_k-f_k\circ T^{d_k})&=S_j(F_k-F_k\circ T^{d_k})\\ &=S_{\min(j,d_k)}(F_k)-S_{\min(j,d_k)}(F_k)\circ T^{\max(j,d_k)}. \end{align*} $$
Consequently, for every $1\leq l<j\leq n$,
 $$ \begin{align*} S_j(f_k-f_k\circ T^{d_k})-S_l(f_k-f_k\circ T^{d_k})=A+B, \end{align*} $$
where
 $$ \begin{align*} A:=\begin{cases} \sum_{r=l}^{j-1}F_k\circ T^r,& l<j<d_k,\\ \sum_{r=l}^{d_k-1}F_k\circ T^r,&l<d_k\leq j,\\ 0, &\text{otherwise}. \end{cases} \end{align*} $$
and
 $$ \begin{align*} B:=\begin{cases} -\sum_{r=d_k+l}^{d_k+j-1}F_k\circ T^{r},& l<j\leq d_k,\\ \sum_{r=d_k}^{j-1}F_k\circ T^r-\sum_{r=d_k+l}^{d_k+j-1}F_k\circ T^r,& l< d_k<j\leq l+d_k, \\ \sum_{r=l}^{j-1}F_k\circ T^r-\sum_{r=d_k+l}^{d_k+j-1}F_k\circ T^r,&j-l\leq d_k\leq l<j,\\ \sum_{r=\max(j,d_k)}^{j+d_k-1}F_k\circ T^r-\sum_{r=l}^{l+d_k-1}F_k\circ T^r,& j-l>d_k. \end{cases} \end{align*} $$
We will next show that there exists a constant C such that
 $$ \begin{align} \|A\|_2^2,\|B\|_2^2\leq 4C(j-l)\frac{4^{(2-\alpha)k}}{k}. \end{align} $$
The statement follows from this and Claim 4.7.
 Recall that for all $0\leq L\leq d_k$ and $M\in {\mathbb N}$,
 $$ \begin{align*}S_L(F_k)\circ T^M=\sum_{r=M}^{M+L-1}F_k\circ T^r\end{align*} $$
is a sum of i.i.d. zero-mean, square integrable random variables. We deduce that so long as $L\leq d_k$,
 $$ \begin{align*} \|S_L(F_k)\circ T^M\|_2^2=L\|F_k\|_2^2. \end{align*} $$
A similar argument to that in the proof of Lemma 4.6 shows that there exists $c>0$ so that
 $$ \begin{align*} \|F_k\|_2^2=\mathrm{Var}(f_k)\leq \|f_k\|_2^2\leq c\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
We conclude that there exists $c_2>0$ such that for all $L\leq d_k$ and $M\in {\mathbb N}$,
 $$ \begin{align} \|S_L(F_k)\circ T^M\|_2^2\leq c_2 L\frac{4^{(2-\alpha)k}}{k}. \end{align} $$
Noting that in the definition of A all terms on the right are of the form $S_L(F_k)\circ T^M$ with $L\leq d_k$, we observe that
 $$ \begin{align*} \|A\|_2^2\leq c_2\frac{4^{(2-\alpha)k}}{k}\begin{cases} j-l,& l<j<d_k,\\ d_k-l,& l<d_k\leq j,\\ 0, &\text{otherwise}, \end{cases} \end{align*} $$
and thus
 $$ \begin{align*} \|A\|_2^2\leq c_2(j-l)\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
Now by Claim 4.7,
 $$ \begin{align*} \|B\|_2^2\leq \begin{cases} \|S_{j-l}(F_k)\circ T^{d_k}\|_2^2,& l<j\leq d_k,\\ 2(\|S_{j-d_k}(F_k)\circ T^{d_k}\|_2^2+\|S_{j-l}(F_k)\circ T^{l+d_k}\|_2^2),& l\leq d_k<j\leq l+d_k, \\ 2(\|S_{j-l}(F_k)\circ T^{l}\|_2^2+\|S_{j-l}(F_k)\circ T^{d_k+l}\|_2^2),&j-l\leq d_k\leq l<j,\\ 2(\|S_{\min(l,d_k)}(F_k)\circ T^{\max(l,d_k)}\|_2^2+\|S_{d_k}(F_k)\circ T^l\|_2^2),& j-l>d_k. \end{cases} \end{align*} $$
A similar argument to that for $\|A\|_2^2$ shows that
 $$ \begin{align*} \|B\|_2^2\leq c_2\frac{4^{(2-\alpha)k}}{k}\begin{cases} j-l,& l<j\leq d_k,\\ 2((j-d_k)+(j-l)),& l\leq d_k<j\leq l+d_k, \\ 4(j-l),& j-l\leq d_k\leq l<j,\\ 2(\min(l,d_k)+d_k),& j-l>d_k, \end{cases} \end{align*} $$
and
 $$ \begin{align*} \|B\|_2^2\leq 4c_2(j-l)\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
This concludes the proof.
Corollary 4.9. For every $\kappa>0$, there exists $C>0$ such that for all $1\leq l<j\leq n$,
 $$ \begin{align*} \|S_j(\varphi_n)-S_l(\varphi_n)\|_2^2\leq C(j-l)n^\kappa. \end{align*} $$
Proof. By Claim 4.7,
 $$ \begin{align*} \|S_j\big(\varphi_n\big)-S_l\big(\varphi_n\big)\|_2^2\leq \sqrt{\log(n)}\sum_{k=1}^{\sqrt{\log(n)}}\|S_j(f_k-f_k\circ T^{d_k})-S_l(f_k-f_k\circ T^{d_k})\|_2^2. \end{align*} $$
Plugging in the bound of Lemma 4.8 on the right-hand side, we see that there exists $C>0$ such that
 $$ \begin{align*} \|S_j(\varphi_n)-S_l(\varphi_n)\|_2^2\leq C(j-l) \sqrt{\log(n)}\sum_{k=1}^{\sqrt{\log(n)}}\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
Since
 $$ \begin{align*} \sum_{k=1}^{\sqrt{\log(n)}}\frac{4^{(2-\alpha)k}}{k}\lesssim \frac{4^{2\sqrt{\log(n)}}}{\sqrt{\log(n)}} \ll \frac{n^\kappa}{\sqrt{\log(n)}}, \end{align*} $$
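Indeed, the middle quantity is sub-polynomial in n: with $\log =\log _2$,
 $$ \begin{align*} 4^{2\sqrt{\log(n)}}=n^{{4}/{\sqrt{\log(n)}}}=n^{o(1)}\ll n^{\kappa}\quad\text{for every fixed } \kappa>0. \end{align*} $$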
the claim follows.
Lemma 4.10. The random variable $\|{\mathbb {W}}_n^{(\mathbf {VS})}(h)\|_\infty$ converges in measure to $0$.
Proof. Let $\epsilon>0$. We have
 $$ \begin{align*} m(\|{\mathbb{W}}_n^{(\mathbf{VS})}(h)\|_\infty>\epsilon)=m\Big(\max_{1\leq j\leq n}|S_j(\varphi_n)|>\epsilon n^{1/\alpha}\Big). \end{align*} $$
Fix $\kappa>0$ small enough so that $\kappa +1<{2}/{\alpha }$. By Corollary 4.9 and Markov’s inequality, for all $1\leq l<j\leq n$,
 $$ \begin{align*} m(|S_j(\varphi_n)-S_l(\varphi_n)|>\epsilon n^{1/\alpha})\leq C\epsilon^{-2}(j-l)n^{\kappa-{2}/{\alpha}}. \end{align*} $$
By [Reference Billingsley2, Theorem 10.2] with $\beta =\frac {1}{2}$ and $u_l:=\sqrt {C}n^{{\kappa }/{2}-{1}/{\alpha }}$,
 $$ \begin{align*} m\Big(\max_{1\leq j\leq n}|S_j(\varphi_n)|>\epsilon n^{1/\alpha}\Big)\leq C\epsilon^{-2}n^{1+\kappa-{2}/{\alpha}}\xrightarrow[n\to\infty]{}0. \end{align*} $$
5 Skewed CLT for $\alpha \in (1,2)$
 Assume $\alpha \in (1,2)$ and $(f_k)_{k=1}^\infty $ are the functions from Corollary 2.3, where $X_k(j)$ are $S_\alpha (\kern -2pt\sqrt [\alpha ]{1/k},1,0)$ random variables and $Z_k(j)$ is the corresponding discretization of the truncation $Y_k(j)$. Recall that $D_k:=4^{\alpha k}$,
 $$ \begin{align*} \varphi_k:=\frac{1}{D_k}\sum_{j=0}^{D_k-1}f_k\circ T^j, \end{align*} $$
 $h_k:=f_k-\varphi _k$ and $h=\sum _{k=1}^\infty h_k$. The function h is well defined by Lemma 2.7.
We aim to show that
 $$ \begin{align*} \frac{S_n(h)+B_n}{n^{1/\alpha}}\Rightarrow^d S_\alpha(\kern-2pt\sqrt[\alpha]{\ln(2)},1,0), \end{align*} $$
where
 $$ \begin{align*} B_n:=n\sum_{k=({1}/{2\alpha})\log(n)}^{({1}/{\alpha})\log(n)}\mathbb{E}(X_k(1)1_{[X_k(1)\leq 2^k]}). \end{align*} $$
5.1 Proof of Theorem 2.8
The strategy of the proof starts with the decomposition,
 $$ \begin{align} S_n(h)+B_n=S_n^{(\mathbf{S})}(h)+S_n^{(\mathbf{M})}(f)+S_n^{(\mathbf{L})}(f)-V_n(\varphi) \end{align} $$
where
 $$ \begin{align*} S_n^{(\mathbf{S})}(h)&:=\sum_{k=1}^{({1}/{2\alpha})\log(n)}S_{n}(h_k),\\ S_n^{(\mathbf{M})}(f)&:=B_n+\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{n}(f_k-\int f_k \,dm),\\ S_n^{(\mathbf{L})}(f)&:=\sum_{k=({1}/{\alpha})\log(n)+1}^{\infty}S_{n}(f_k-\int f_k\,dm),\\ V_n(\varphi)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{\infty}S_{n}(\varphi_k-\int\varphi_k\,dm). \end{align*} $$
Note that in deriving (7) we used that for all $k\in {\mathbb N}$, $\int f_k\,dm=\int \varphi _k\,dm$ and that both $\sum _{k=1}^N f_k$ and $\sum _{k=1}^N\varphi _k$ converge in $L^1(m)$ as $N\to \infty $.
 The proof of Theorem 2.8 proceeds by showing that, when normalized, three of the four terms converge to $0$ in probability and the remaining one converges in distribution to an $S_\alpha (\kern -2pt\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Lemma 5.1. We have
 $$ \begin{align*} \lim_{n\to\infty} m(S_n^{(\mathbf{L})}(f)\neq 0)=0. \end{align*} $$
The proof of Lemma 5.1 is similar to the proof of Lemma 3.1 and is thus omitted.
Lemma 5.2. The sequence of random variables $n^{-1/\alpha }V_n(\varphi )$ converges to $0$ in probability.
The proof of Lemma 5.2 begins with the following easy calculation.
Fact 5.3. If $n\leq D_k$ then
 $$ \begin{align} S_n(\varphi_k)= \sum_{j=0}^{n-2}\frac{j+1}{D_k}f_k\circ T^j+\frac{n}{D_k}\sum_{j=n-1}^{D_k-1}f_k\circ T^j+\sum_{j=1}^{n-1}\frac{n-j}{D_k}f_k\circ T^{D_k+j-1}. \end{align} $$
If $D_k\leq n$ then
 $$ \begin{align} S_n(\varphi_k)= \sum_{j=0}^{D_k-2}\frac{j+1}{D_k}f_k\circ T^j+\sum_{j=D_k-1}^{n-1}f_k\circ T^j+\sum_{j=1}^{D_k-1}\frac{D_k-j}{D_k}f_k\circ T^{n+j-1}. \end{align} $$
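Both identities are purely algebraic and hold pointwise along an orbit, so they can be checked on an arbitrary real sequence. The following minimal sketch does so, with x[j] standing in for $f_k\circ T^j$ and D for $D_k$.

```python
# Numerical check of Fact 5.3: phi(i) = (1/D) * sum(x[i:i+D]) stands in for
# phi_k composed with T^i, and S_n denotes a length-n Birkhoff sum.
import numpy as np

def S_phi(x, n, D):
    # Direct evaluation of S_n(phi).
    return sum(x[i:i + D].sum() / D for i in range(n))

def fact_5_3(x, n, D):
    # Right-hand sides of (8) (case n <= D) and (9) (case D <= n).
    if n <= D:
        return (sum((j + 1) / D * x[j] for j in range(n - 1))
                + n / D * x[n - 1:D].sum()
                + sum((n - j) / D * x[D + j - 1] for j in range(1, n)))
    return (sum((j + 1) / D * x[j] for j in range(D - 1))
            + x[D - 1:n].sum()
            + sum((D - j) / D * x[n + j - 1] for j in range(1, D)))

x = np.random.default_rng(0).standard_normal(200)
for n, D in [(7, 20), (20, 7), (10, 10)]:
    assert np.isclose(S_phi(x, n, D), fact_5_3(x, n, D))
print("Fact 5.3 verified on random data")
```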
 Since $f_k$ and $Z_k(1)$ are equally distributed and $Z_k(1)\leq Y_k(1)$, the next claim follows easily from Proposition A.4.
Claim 5.4. For every $k\in {\mathbb N}$,
 $$ \begin{align*} \mathrm{Var}(f_k)\leq \mathbb{E}(Y_k(1)^2)\leq C\frac{4^{(2-\alpha)k}}{k}. \end{align*} $$
Using (8) and this claim we obtain the following lemma.
Lemma 5.5. We have $\mathrm {Var}(V_n(\varphi ))\lesssim {n^{2/\alpha }}/{\log (n)}$.
Proof. For all $k\geq ({1}/{2\alpha })\log (n)$, $D_k\geq n$ and (up to finitely many n) $D_k\leq 4^{k^2}$. Since $\{f_k\circ T^j:0\leq j<2\cdot 4^{k^2}\}$ is equally distributed as $\{Z_k(j):1\leq j\leq 2\cdot 4^{k^2}\}$, we deduce from (8) and the fact that the $f_k$ are the functions from Corollary 2.3 that:
- 
(a) for all $k\geq ({1}/{2\alpha })\log (n)$, $S_n(\varphi _k)$ is a sum of independent random variables;
- 
(b) $S_n(\varphi _k)$, $k\geq ({1}/{2\alpha })\log (n)$, are independent.
By item (a),
 $$ \begin{align*} \mathrm{Var}(S_n(\varphi_k))&=\mathrm{Var}(f_k)\bigg(\sum_{j=0}^{n-2}\frac{(j+1)^2}{D_k^2}+\frac{n^2}{D_k^2}(D_k-n+1)+\sum_{j=1}^{n-1}\frac{(n-j)^2}{D_k^2}\bigg)\\ &\leq \frac{3n^2}{D_k}\mathrm{Var}(f_k),\quad \text{as } D_k\geq n\\ &\leq 3C n^2\cdot \frac{4^{(2-2\alpha)k}}{k}. \end{align*} $$
Here the last inequality follows from Claim 5.4 and ${4^{(2-\alpha )k}}/{D_k}=4^{(2-2\alpha )k}$.
Finally, by item (b),
 $$ \begin{align*} \mathrm{Var}(V_n(\varphi))&=\sum_{k=({1}/{2\alpha})\log(n)+1}^\infty \mathrm{Var}(S_n(\varphi_k))\\ &\leq 3Cn^2\sum_{k=({1}/{2\alpha})\log(n)+1}^\infty \frac{4^{(2-2\alpha)k}}{k}\lesssim \frac{n^{2/\alpha}}{\log(n)}. \end{align*} $$
Applying Markov’s inequality we obtain the following corollary.
Corollary 5.6. The sequence of random variables ${V_n(\varphi )}/{n^{1/\alpha }}$ converges to $0$ in probability.
 We now show that $ n^{-1/\alpha }S_n^{(\mathbf {S})}(h)$ tends to $0$ in probability. The first step is the following simple claim. Recall the notation $F_k=f_k-\int f_k\,dm$.
Claim 5.7. For every $k\leq ({1}/{2\alpha })\log (n)$,
 $$ \begin{align*} S_n(h_k)=\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)F_k\circ T^j-U^n\bigg(\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)F_k\circ T^j\bigg). \end{align*} $$
Proof. As $D_k\leq n$ and $h_k=f_k-\varphi _k$, it follows from (9) that
 $$ \begin{align*} S_n(h_k)&=\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)f_k\circ T^j-U^n\bigg(\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)f_k\circ T^j\bigg)\\ &=\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)F_k\circ T^j-U^n\bigg(\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)F_k\circ T^j\bigg). \end{align*} $$
Lemma 5.8. The sequence of random variables $ n^{-1/\alpha }S_n^{(\mathbf {S})}(h)$ converges to $0$ in probability.
Proof. By Claim 5.7,
 $$ \begin{align} S_n^{(\mathbf{S})}(h)=A_n-U^n(A_n), \end{align} $$
where
 $$ \begin{align*} A_n:=\sum_{k=0}^{({1}/{2\alpha})\log(n)}\sum_{j=0}^{D_k-2} \bigg(\frac{D_k-j-1}{D_k}\bigg)F_k\circ T^j. \end{align*} $$
As $A_n$ is a sum of independent random variables,
 $$ \begin{align*} \mathrm{Var}(A_n)&=\sum_{k=0}^{({1}/{2\alpha})\log(n)}\sum_{j=0}^{D_k-2}\bigg(\frac{D_k-j-1}{D_k}\bigg)^2\mathrm{Var}(F_k\circ T^j)\\ &\leq \sum_{k=0}^{({1}/{2\alpha})\log(n)}D_k\mathrm{Var}(F_k). \end{align*} $$
Noting that for all $k\in {\mathbb N}$, $\mathrm {Var}(F_k)=\mathrm {Var}(f_k)$, we deduce from the last inequality and Claim 5.4 that
 $$ \begin{align*} \mathrm{Var}(A_n)&\leq\sum_{k=0}^{({1}/{2\alpha})\log(n)}D_k\mathrm{Var}(f_k)\\ &\lesssim \sum_{k=0}^{({1}/{2\alpha})\log(n)} \frac{4^{2k}}{k}\lesssim \frac{n^{2/\alpha}}{\log(n)}. \end{align*} $$
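The final step is again the geometric-sum estimate (with $\log =\log _2$):
 $$ \begin{align*} \sum_{1\leq k\leq({1}/{2\alpha})\log(n)}\frac{4^{2k}}{k}\lesssim \frac{4^{2\cdot({1}/{2\alpha})\log(n)}}{\log(n)}=\frac{n^{2/\alpha}}{\log(n)}. \end{align*} $$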
Next, as $\int A_n\,dm=0$, it follows from Chebyshev’s inequality that for every $\epsilon>0$,
 $$ \begin{align*} m(|n^{-1/\alpha}A_n|>\epsilon)\leq \frac{\mathrm{Var}(A_n)}{n^{2/\alpha}\epsilon^2}\lesssim \frac{1}{\log(n)\epsilon^2}. \end{align*} $$
This shows that $n^{-1/\alpha }A_n$ tends to $0$ in probability.
 Since $A_n$ and $U^n(A_n)$ are equally distributed, $n^{-1/\alpha }U^n(A_n)$ also tends to $0$ in probability. The claim now follows from the converging together lemma.
Proposition 5.9. The sequence of random variables $n^{-1/\alpha }S_n^{(\mathbf {M})}(f)$ converges in distribution to an $S_\alpha (\sigma ,1,0)$ random variable with $\sigma ^\alpha =\ln 2$.
We postpone the proof of this proposition to §5.2; assuming it for now, we can prove Theorem 2.8.
Proof of Theorem 2.8
We deduce from Lemmas 5.8, 5.1 and 5.2 that
 $$ \begin{align*} n^{-1/\alpha}(S_n^{(\mathbf{S})}(h)+V_n(\varphi)+S_n^{(\mathbf{L})}(f))\xrightarrow[n\to\infty]{}0\quad\text{in probability.} \end{align*} $$
The result now follows from (7), Corollary 5.6, Proposition 5.9 and the converging together lemma.
5.2 Proof of Proposition 5.9
The proof of this proposition goes along similar lines to the proof of Proposition 3.3, with some (rather) obvious modifications. We first define
 $$ \begin{align*} S_n^{(\mathbf{M})}(Z)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{n}(Z_k(\cdot)),\\ S_n^{(\mathbf{M})}(Y)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{n}(Y_k(\cdot)),\\ S_n^{(\mathbf{M})}(X)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_{n}(X_k(\cdot)). \end{align*} $$
The following result is the analogue of Lemma 3.4 for the current case.
Lemma 5.10. The random variables $S_n^{(\mathbf{M})}(f)$ and $S_n^{(\mathbf{M})}(Z)+B_n$ are equally distributed.
The proof of Lemma 5.10 is similar to the proof of Lemma 3.4 with obvious modifications. We leave it to the reader. Proposition 5.9 follows from Lemma 5.10 and the following result.
Lemma 5.11.
- (a) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(X)-S_n^{(\mathbf {M})}(Z)-B_n)$ converge to $0$ in measure.
- (b) The random variables ${1}/{n^{1/\alpha }}S_n^{(\mathbf {M})}(X)$ converge in distribution to an $S_\alpha (\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Consequently, ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(Z)+B_n)$ converges in distribution to an $S_\alpha (\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
Proof of Lemma 5.11(b)
For all $n\in \mathbb {N}$, $S_n^{(\mathbf {M})}(X)$ is a sum of independent totally skewed $\alpha $-stable random variables. By [Reference Samorodnitsky and Taqqu11, Property 1.2.1], $n^{-1/\alpha }S_n^{(\mathbf {M})}(X)$ is $S_\alpha (\Sigma _n,1,0)$ distributed with
 $$ \begin{align*} (\Sigma_n)^\alpha&:=\frac{1}{n}\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{n}{k}\sim \ln(2)\quad\text{as } n\to\infty. \end{align*} $$
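The asymptotic $\ln (2)$ arises as a difference of harmonic numbers: with $m=({1}/{2\alpha})\log(n)$,
 $$ \begin{align*} \sum_{k=m+1}^{2m}\frac{1}{k}=H_{2m}-H_{m}=\ln(2m)-\ln(m)+O(1/m)\xrightarrow[n\to\infty]{}\ln 2. \end{align*} $$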
The result follows from the fact that if $W_n$ is $S_\alpha (\Sigma _n,1,0)$ distributed and $\Sigma _n\to \sqrt [\alpha ]{\ln (2)}$, then $W_n$ converges in distribution to an $S_\alpha (\sqrt [\alpha ]{\ln (2)},1,0)$ random variable.
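As a quick numerical sanity check (ours, not part of the proof), the harmonic-difference computation above can be verified directly; the sketch below only assumes the identity $(\Sigma_n)^\alpha=\sum_{k=m+1}^{2m}1/k$ with $m=({1}/{2\alpha})\log(n)$.

```python
# Numerical sanity check (illustration only): (Sigma_n)^alpha is the
# harmonic difference H_{2m} - H_m, which should converge to ln(2).
import math

def sigma_alpha(m: int) -> float:
    """(Sigma_n)^alpha = sum_{k=m+1}^{2m} 1/k, with m = (1/2alpha) log n."""
    return sum(1.0 / k for k in range(m + 1, 2 * m + 1))

for m in (10, 100, 1_000, 10_000):
    s = sigma_alpha(m)
    print(f"m={m:6d}  H_2m - H_m = {s:.6f}  error = {abs(s - math.log(2)):.2e}")
# The error shrinks like O(1/m), matching Sigma_n -> ln(2)^(1/alpha).
```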
Proof of Lemma 5.11(a)
We assume that n is large enough that $k>({1}/{2\alpha })\log (n)$ implies $n<4^{k^2}$.
Firstly, since for all $k\in {\mathbb N}$ and $j\leq d_k$,
 $$ \begin{align*} 0<Y_k(j)-Z_k(j)\leq 4^{-{k}}, \end{align*} $$
we have
 $$ \begin{align*} |S_n^{(\mathbf{M})}(Y)-S_n^{(\mathbf{M})}(Z)|&\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}S_n(Y_k(\cdot)-Z_k(\cdot))\\ &\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}4^{-k}n\lesssim n^{1-{1}/{\alpha}}. \end{align*} $$
Consequently,
 $$ \begin{align} \frac{1}{n^{1/\alpha}}| S_n^{(\mathbf{M})}(Y)-S_n^{(\mathbf{M})}(Z)|\lesssim n^{1-{2}/{\alpha}}\xrightarrow[n\to\infty]{}0.\tag{11} \end{align} $$
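The geometric-series step in the first display above is again controlled by its leading term: with $m=({1}/{2\alpha})\log(n)$ in base $2$,
 $$ \begin{align*} n\sum_{k=m+1}^{({1}/{\alpha})\log(n)}4^{-k}\lesssim n\cdot 4^{-m}=n\cdot 2^{-({1}/{\alpha})\log(n)}=n^{1-{1}/{\alpha}}. \end{align*} $$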
We now look at $S_n^{(\mathbf {M})}(X)-S_n^{(\mathbf {M})}(Y)-B_n$. For all n,
 $$ \begin{align*} n^{-1/\alpha}(S_n^{(\mathbf{M})}(X)-S_n^{(\mathbf{M})}(Y)-B_n)=\mathrm{II}_n+\mathrm{III}_n, \end{align*} $$
where
 $$ \begin{align*} \widetilde{V}_k(m)&:=X_k(m)1_{[X_k(m)>4^k]},\\ \widehat{V}_n(m)&:=\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}(X_k(m)1_{[X_k(m)\leq 2^k]}-\mathbb{E}(X_k(m)1_{[X_k(m)\leq 2^k]})), \end{align*} $$
and
 $$ \begin{align*} \mathrm{II}_n&:=n^{-1/\alpha}S_{n}(\widehat{V}_n),\\ \mathrm{III}_n&:=n^{-1/\alpha}\sum_{k=({1}/{2\alpha}) \log(n)+1}^{({1}/{\alpha})\log(n)}S_{n}(\widetilde{V}_k). \end{align*} $$
Similarly to the proof of Lemma 3.1,
 $$ \begin{align*} {\mathbb P}(\mathrm{III}_n\neq 0)&\leq \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n{\mathbb P}(\widetilde{V}_k(j)\neq 0)\\ &= \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\sum_{j=1}^n{\mathbb P}(X_k(j)>4^k)\\ &\leq 2Cn\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{4^{-k\alpha}}{k}\lesssim \frac{1}{\log(n)}. \end{align*} $$
Now $\widehat {V}_n(m)$, $1\leq m \leq n$, are zero-mean, independent random variables. By Proposition A.4, they also have a finite second moment, and for all $1\leq j\leq n$,
 $$ \begin{align} \mathbb{E}(\widehat{V}_n(j)^2)&\lesssim 2\sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\mathbb{E}(X_k(j)^21_{[X_k(j)\leq 2^k]}) \nonumber \\ &\lesssim \sum_{k=({1}/{2\alpha})\log(n)+1}^{({1}/{\alpha})\log(n)}\frac{2^{(2-\alpha)k}}{k}\lesssim \frac{n^{{2}/{\alpha}-1}}{\log(n)}.\tag{12} \end{align} $$
It follows from Markov’s inequality that for every $\epsilon>0$,
 $$ \begin{align*} {\mathbb P}(|\mathrm{II}_n|>\epsilon) &\leq \epsilon^{-2}n^{-2/\alpha}\mathbb{E}(|S_n(\widehat{V}_n)|^2)\\ &= \epsilon^{-2}n^{-2/\alpha}\,n\,\mathbb{E}(\widehat{V}_n(1)^2)\lesssim \frac{1}{\epsilon^{2}\log(n)}. \end{align*} $$
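The middle equality uses that the $\widehat{V}_n(m)$, $1\leq m\leq n$, are independent, centred and identically distributed, so the cross terms in the second moment vanish:
 $$ \begin{align*} \mathbb{E}(|S_n(\widehat{V}_n)|^2)=\sum_{m=1}^{n}\mathbb{E}(\widehat{V}_n(m)^2)=n\,\mathbb{E}(\widehat{V}_n(1)^2). \end{align*} $$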
This concludes the proof that $\mathrm {II}_n\xrightarrow [n\to \infty ]{}0$ in probability.
The claim now follows from (11) and the convergence in probability of $\mathrm {II}_n+\mathrm {III}_n$ to $0$.
5.3 Deducing Theorem 2.10 from Theorem 2.8
We follow the strategy and steps carried out in §3.2.
Recall the notation $\hat {\varphi }_k:=(1/D_k)\sum _{j=0}^{D_k-1}g_k\circ T^j$ and $\hat {h}_k:=g_k-\hat {\varphi }_k$. Since for all $k\in {\mathbb N}$, $\varphi _k$ and $\hat {\varphi }_k$ are equally distributed, by mimicking the proofs of Lemma 5.2 and Corollary 5.6 we obtain the following result.
Lemma 5.12. The random variables $n^{-1/\alpha }V_n(\hat {\varphi })$ converge to $0$ in probability.
Next we have the following analogue of Lemma 3.6.
Lemma 5.13.
- (a) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {S})}(h),S_n^{(\mathbf {S})}(\hat {h}))$ converge in probability to $(0,0)$.
- (b) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {M})}(f)+B_n,S_n^{(\mathbf {M})}(g)+B_n)$ converge in distribution to $(W,W')$, where $W,W'$ are independent $S_\alpha (\sqrt [\alpha ]{\ln (2)},1,0)$ random variables.
- (c) The random variables ${1}/{n^{1/\alpha }}(S_n^{(\mathbf {L})}(f),S_n^{(\mathbf {L})}(g))$ converge in probability to $(0,0)$.
Proof. As for all k, $h_k$ and $\hat {h}_k$ are equally distributed, by mimicking the proof of Lemma 5.8 one proves that
 $$ \begin{align*} \frac{1}{n^{1/\alpha}}S_n^{(\mathbf{S})}(\hat{h})\xrightarrow[n\to\infty]{}0\ \text{in probability}. \end{align*} $$
Part (a) follows from this and Lemma 5.8.
Part (c) is deduced from Lemma 5.1 and its proof in a similar manner.
Part (b) follows from Proposition 5.9, as $S_n^{(\mathbf {M})}(f)$ and $S_n^{(\mathbf {M})}(g)$ are independent and equally distributed.
Now fix $\beta \in [-1,1]$ and recall that $H=\Phi _{\beta }(h,\hat {h})$, where $\Phi _\beta $ is the linear function defined for all $x,y\in {\mathbb R}$ by
 $$ \begin{align*} \Phi_\beta(x,y):=\bigg(\frac{\beta+1}{2}\bigg)^{1/\alpha}x-\bigg(\frac{1-\beta}{2}\bigg)^{1/\alpha}y. \end{align*} $$
Proof of Theorem 2.10
Writing
 $$ \begin{align*} A_n:=\frac{1}{n^{1/\alpha}}(S_n^{(\mathbf{S})}(h)+V_n(\varphi)+S_n^{(\mathbf{L})}(f),\,S_n^{(\mathbf{S})}(\hat{h})+V_n(\hat{\varphi})+S_n^{(\mathbf{L})}(g)), \end{align*} $$
we have for all $n\in {\mathbb N}$,
 $$ \begin{align} n^{-1/\alpha}S_n(H)=\Phi_\beta(A_n)+\Phi_\beta(n^{-1/\alpha}S_n^{(\mathbf{M})}(f), n^{-1/\alpha}S_n^{(\mathbf{M})}(g)).\tag{13} \end{align} $$
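This identity uses only that $\Phi_\beta$ is linear, so the decomposition of the ergodic sums into small-, middle- and large-scale pieces passes through it coordinatewise:
 $$ \begin{align*} \Phi_\beta(x_1+x_2,\,y_1+y_2)=\Phi_\beta(x_1,y_1)+\Phi_\beta(x_2,y_2). \end{align*} $$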
By Lemma 5.12 and parts (a) and (c) of Lemma 5.13, $A_n\to (0,0)$ in probability. Since $\Phi _\beta $ is continuous with $\Phi _\beta (0,0)=0$, it follows that $\Phi _\beta (A_n)$ converges to $0$ in probability as $n\to \infty $.
By Lemma 5.13(b) and the continuous mapping theorem,
 $$ \begin{align*} \Phi_\beta(n^{-1/\alpha}S_n^{(\mathbf{M})}(f), n^{-1/\alpha}S_n^{(\mathbf{M})}(g))\Rightarrow^d \Phi_\beta(W,W'), \end{align*} $$
where $W,W'$ are independent $S_\alpha (\sqrt [\alpha ]{\ln (2)},1,0)$ distributed random variables. By [Reference Samorodnitsky and Taqqu11, Property 1.2.13], $\Phi _\beta (W,W')$ is $S_\alpha (\sqrt [\alpha ]{\ln (2)},\beta ,0)$ distributed.
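For the record, the scale and skewness arithmetic behind this last step is elementary: with $a=(({\beta+1})/{2})^{1/\alpha}$, $b=(({1-\beta})/{2})^{1/\alpha}$ and $\sigma=\sqrt[\alpha]{\ln(2)}$, the combination $aW-bW'$ has scale and skewness
 $$ \begin{align*} ((a^\alpha+b^\alpha)\sigma^\alpha)^{1/\alpha}=\sigma,\qquad \frac{a^\alpha\cdot 1-b^\alpha\cdot 1}{a^\alpha+b^\alpha}=\frac{\beta+1}{2}-\frac{1-\beta}{2}=\beta. \end{align*} $$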
The conclusion now follows from (13) and the converging together lemma.
Acknowledgements
The research of Z.K. was partially supported by ISF grant no. 1180/22.
A Appendix. Estimates on moments of truncated stable random variables
The following tail bound follows easily from [Reference Samorodnitsky and Taqqu11, Property 1.2.15].
Proposition A.1. There exists $C>0$ such that if Y is $S_\alpha (\sigma ,1,0)$ distributed with $0<\sigma \leq 1$ and $K>1$, then
 $$ \begin{align*} {\mathbb P}(Y>K)\leq C\sigma^\alpha K^{-\alpha}. \end{align*} $$
In a similar way to the appendix in [Reference Kosloff and Volný9], the tail bound implies the following two estimates on moments of truncated $S_\alpha (\sigma ,1,0)$ random variables.
Corollary A.2. For every $r>\alpha $, there exists $C>0$ such that if Y is $S_\alpha (\sigma ,1,0)$ distributed with $0<\sigma \leq 1$ and $K>1$, then
 $$ \begin{align*} \mathbb{E}(Y^r1_{[0\leq Y\leq K]})\leq C\sigma^\alpha K^{r-\alpha}. \end{align*} $$
Proof. The bound follows from
 $$ \begin{align*} \mathbb{E}(Y^r1_{[0\leq Y\leq K]})&=\int_{0}^\infty {\mathbb P}(Y^r1_{[0\leq Y\leq K]}>t)\,dt\\ &\leq \int_{0}^{K^r} {\mathbb P}(Y>t^{1/r})\,dt\\ &=r\int_{0}^{K} u^{r-1}{\mathbb P}(Y>u)\,du\\ &=r\int_{0}^{1} u^{r-1}{\mathbb P}(Y>u)\,du+r\int_{1}^{K} u^{r-1}{\mathbb P}(Y>u)\,du\\ &\leq r+C\sigma^\alpha r\int_{1}^Ku^{r-1-\alpha}\,du. \end{align*} $$
Here the last inequality follows from Proposition A.1.
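As an illustration (ours, not part of the paper), the tail bound and the truncated-moment estimate can be probed by Monte Carlo. The sketch below assumes that scipy's levy_stable in its default parameterization matches $S_\alpha(\sigma,\beta,\mu)$ for $\alpha\neq 1$.

```python
# Monte Carlo probe (illustration only) of Proposition A.1 and Corollary A.2.
# Assumes scipy.stats.levy_stable's default parameterization agrees with
# S_alpha(sigma, beta, mu) for alpha != 1.
import numpy as np
from scipy.stats import levy_stable

alpha, sigma, r = 1.5, 0.7, 2.0  # r > alpha, as in Corollary A.2
Y = levy_stable.rvs(alpha, 1.0, loc=0.0, scale=sigma, size=200_000, random_state=0)

for K in (2.0, 5.0, 10.0):
    tail = np.mean(Y > K)  # empirical P(Y > K)
    trunc = np.mean(np.where((Y >= 0) & (Y <= K), Y**r, 0.0))  # E(Y^r 1_{0<=Y<=K})
    print(f"K={K:5.1f}  tail ratio = {tail / (sigma**alpha * K**(-alpha)):.2f}  "
          f"moment ratio = {trunc / (sigma**alpha * K**(r - alpha)):.2f}")
# Both ratios should stay bounded as K grows, consistent with the two bounds.
```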
Corollary A.3. For every $r<\alpha $, there exists $C>0$ such that if Y is $S_\alpha (\sigma ,1,0)$ distributed with $0<\sigma \leq 1$, then as $K\to \infty $,
 $$ \begin{align*} \mathbb{E}(Y^r1_{[Y\geq K]})\lesssim C\sigma^\alpha K^{r-\alpha}. \end{align*} $$
The proof of Corollary A.3 is similar to the proof of Corollary A.2. The following proposition is important in the proofs of Theorems 2.6 and 2.10.
Proposition A.4. For every $K,\sigma>0$, if X is $S_\alpha (\sigma ,1,0)$ distributed then $X1_{[X<K]}$ is square integrable. Furthermore, there exists $C>0$ such that for every $S_\alpha (\sigma ,1,0)$ random variable X with $0<\sigma \leq 1$ and $K>1$,
 $$ \begin{align*} \mathbb{E}((X 1_{[X<K]})^2)\leq C\sigma^\alpha K^{2-\alpha}. \end{align*} $$
Proof. Let Y be an $S_\alpha (1,1,0)$ random variable and note that $\sigma Y$ and X are equally distributed. By [Reference Zolotarev15, Theorems 2.5.3 and 2.5.4] (see also equations (1.2.11) and (1.2.12) in [Reference Samorodnitsky and Taqqu11]), ${\mathbb P}(Y<-\lambda )$ decays faster than any polynomial as $\lambda \to \infty $. This implies that $Y1_{[Y<0]}$ has moments of all orders and
 $$ \begin{align*} \mathbb{E}((X1_{[X<0]})^2)=\sigma^2\mathbb{E}(Y^21_{[Y<0]})\leq D, \end{align*} $$
where $D=\mathbb {E}(Y^21_{[Y<0]})$. Now by this and Corollary A.2, we have
 $$ \begin{align*} \mathbb{E}((X 1_{[X<K]})^2)&\leq 4(\mathbb{E}((X 1_{[0<X<K]})^2)+\mathbb{E}((X 1_{[X<0]})^2))\\ &\leq 4(C\sigma^\alpha K^{2-\alpha}+D)\sim 4C\sigma^\alpha K^{2-\alpha}\quad\text{as}\ K\to\infty. \end{align*} $$
The claim follows from this upper bound.