
Hausdorff dimension of sets defined by almost convergent binary expansion sequences

Published online by Cambridge University Press:  13 March 2023

Qing-Yao Song*
Affiliation:
School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, P. R. China

Abstract

In this paper, we study the Hausdorff dimension of sets defined by almost convergent binary expansion sequences. More precisely, the Hausdorff dimension of the following set

\begin{align*} \bigg\{x\in[0,1)\;:\;\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow\alpha\textrm{ uniformly in }a\in\mathbb{N}\textrm{ as }n\rightarrow\infty\bigg\} \end{align*}
is determined for any $ \alpha\in[0,1] $. This completes the answer to a question considered by Usachev [Glasg. Math. J. 64 (2022), 691–697], where only the dimension for rational $ \alpha $ was given.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Glasgow Mathematical Journal Trust

1. Introduction

The concept of almost convergence was first introduced by Lorentz [Reference Lorentz6] in 1948 to study properties of divergent sequences, namely what happens when all the Banach limits of a sequence are equal. In [Reference Lorentz6], Lorentz defined almost convergence by the equality of all the Banach limits and discovered that this definition is equivalent to the one given below. Lorentz [Reference Lorentz6] also studied the relationship between almost convergence, in other words summation by the method F, and matrix methods, and found that most of the commonly used matrix methods contain the method F. We refer to [Reference Lorentz6] for details.

Definition 1.1. A bounded sequence $ \{x_{k}\}_{k=1}^{\infty} $ is called almost convergent to a number $ t\in\mathbb{R} $ if

\begin{align*}\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow t\end{align*}

uniformly in $ a\in \mathbb{N} $ as $ n\rightarrow\infty $ . We denote this by $ \{x_{k}\}_{k=1}^{\infty}\in \mathbf{AC}(t) $ .
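As a quick illustration (not part of the paper), Definition 1.1 can be probed numerically: the periodic sequence $0,1,0,1,\ldots$ is almost convergent to $\tfrac{1}{2}$, and the worst-case window average over all starting positions can be estimated in a few lines of Python. The helper name and parameter choices below are ours.

```python
# Illustrative sketch: estimate the worst-case deviation of the window
# average (1/n) * sum_{k=a}^{a+n-1} x_k from t, over all starts a.
def worst_case_deviation(x, n, t):
    """Max over a of |(1/n) * sum(x[a:a+n]) - t| for windows inside x."""
    return max(abs(sum(x[a:a + n]) / n - t) for a in range(len(x) - n + 1))

# The periodic sequence 0,1,0,1,... is almost convergent to 1/2:
x = [k % 2 for k in range(10_000)]
for n in (10, 100, 1000):
    dev = worst_case_deviation(x, n, 0.5)
    assert dev <= 1 / (2 * n) + 1e-12  # deviation shrinks like O(1/n), uniformly in a
```

The uniformity in $a$ is exactly what `max(... for a in ...)` captures: the bound holds for every window start, not just $a=1$.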

It is easy to see that if we take $ a=1 $ , the summation is exactly the Cesàro summation, so a sequence that is almost convergent to t must be Cesàro convergent to t. Borel [Reference Borel2] proved the classical result that the set of numbers in [0, 1) whose binary expansion sequences are Cesàro convergent to $\frac{1}{2}$ has full Lebesgue measure. That is,

\begin{align*} \mathscr{L}\bigg(\bigg\{x\in [0,1)\;:\;\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}x_{k}=\frac{1}{2}\bigg\}\bigg)=1\textrm{,}\end{align*}

where $ \mathscr{L} $ denotes the Lebesgue measure and $ x=(x_{1}x_{2}\ldots)_{2} $ denotes the binary expansion of $ x\in[0,1] $ . Besicovitch [Reference Besicovitch1] showed that for all $ 0\leq\alpha\leq 1 $ , the set

\begin{align*} F_{\alpha}=\bigg\{x\in [0,1)\;:\;\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}x_{k}=\alpha\bigg\}\end{align*}

has Hausdorff dimension $ -\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha) $ . Besicovitch’s result has also been extended to other matrix summation methods [Reference Usachev8]. For further research on almost convergence and its applications, see [Reference Esi and Necdet4, Reference Mohiuddine7, Reference Usachev8] and the references therein.
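The dimension value in Besicovitch's theorem is the binary entropy function of $\alpha$. A minimal Python sketch (illustrative only; the helper name is ours, and we use the usual convention $0\log_2 0=0$) confirms that the dimension is maximized, with value 1, at $\alpha=\tfrac{1}{2}$, consistent with Borel's full-measure case:

```python
from math import log2

def binary_entropy(alpha):
    """Hausdorff dimension of F_alpha: -a*log2(a) - (1-a)*log2(1-a), with 0*log2(0) = 0."""
    if alpha in (0.0, 1.0):
        return 0.0
    return -alpha * log2(alpha) - (1 - alpha) * log2(1 - alpha)

assert binary_entropy(0.5) == 1.0                 # full dimension at alpha = 1/2
assert binary_entropy(0.0) == 0.0 == binary_entropy(1.0)
assert all(binary_entropy(a) < 1.0 for a in (0.1, 0.25, 0.75, 0.9))
```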

Almost convergence implies Cesàro convergence, and a convergent sequence must be almost convergent. Besides, there are many other notions of convergence, and for each one we can study the Hausdorff dimension of sets similar to $ F_{\alpha} $ . In this paper, we consider numbers whose binary expansion sequences are almost convergent. Connor [Reference Connor3] proved that for almost all numbers in [0, 1), the binary expansion sequence is not almost convergent. In 2022, Usachev [Reference Usachev8] showed that for rational $ \alpha $ , the Hausdorff dimension of the set

\begin{align*} G_{\alpha} &\;:\!=\;\bigg\{x\in [0,1)\;:\;\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow\alpha\textrm{ uniformly in }a\in\mathbb{N}\textrm{ as }n\rightarrow\infty\bigg\}\\[5pt] {}&\;=\bigg\{x\in [0,1)\;:\;\{x_{k}\}_{k=1}^{\infty}\in \mathbf{AC}(\alpha)\bigg\}\end{align*}

is also $ -\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha) $ , but left open the case when $ \alpha $ is irrational. Usachev’s proof depends heavily on the rationality of $ \alpha $ , while our method applies to any $ \alpha $ .

Theorem 1.2. For all $ 0\leq\alpha\leq 1 $ , we have

\begin{align*}\dim_{\textrm{H}}\!G_{\alpha}=-\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha)\textrm{.}\end{align*}

We refer to Lorentz [Reference Lorentz6] for the definition of a strongly regular matrix method.

Definition 1.3. A matrix method of summation is a mapping on the space of all sequences $ \{x_{k}\}_{k=1}^{\infty} $ generated by a matrix $ A=\{a_{nk}\}_{n,k=1}^{\infty} $ , namely

\begin{align*}\{x_{k}\}_{k=1}^{\infty}\longmapsto\bigg\{\sum_{k=1}^{\infty}a_{nk}x_{k}\bigg\}_{n=1}^{\infty}\textrm{.}\end{align*}

We call it strongly regular if

\begin{align*}\sum_{k=1}^{\infty}|a_{nk}-a_{n,k+1}|\longrightarrow 0\textit{ as }n\longrightarrow\infty\textrm{.}\end{align*}

Corollary 3.5 in [Reference Usachev8] remains true for irrational $ \alpha $ . That is

Corollary 1.4. Let $ A=\{a_{nk}\}_{n,k=1}^{\infty} $ be a strongly regular matrix method, which is weaker than (or consistent with) the Cesàro method, and let $ 0\leq\alpha\leq1 $ . The Hausdorff dimension of the set

\begin{align*}\bigg\{x\in[0,1)\;:\;\lim_{n\rightarrow\infty}\sum_{k=1}^{\infty}a_{nk}x_{k}=\alpha\bigg\}\textrm{,}\end{align*}

is $ -\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha) $ .

The proof of this corollary is exactly the same as that in [Reference Usachev8].

2. Preliminaries

In this section, we recall the definition of Hausdorff dimension and mass distribution principle that will be used later.

Definition 2.1. [Reference Falconer5] Given a set $ E\subset\mathbb{R}^{n} $ , its s -dimensional Hausdorff measure is

\begin{align*}\mathscr{H}^{s}(E)=\lim_{\delta\rightarrow 0}\inf\bigg\{\sum_{i=1}^{\infty}|O_{i}|^{s}\;:\; E\subset\bigcup_{i=1}^{\infty} O_{i}, 0\leq|O_{i}|\leq\delta\bigg\}\textrm{,}\end{align*}

where $ \{O_{i}\}_{i=1}^{\infty} $ is an open cover, and $ |\cdot| $ denotes the diameter. Besides, the Hausdorff dimension of E is

\begin{align*}\dim_{\textrm{H}}\!E\;:\!=\;\inf\{s\;:\; \mathscr{H}^{s}(E)=0\}\textrm{.}\end{align*}

Theorem 2.2. (Mass distribution principle) [Reference Falconer5] Let E be a set and let $ \mu $ be a strictly positive Borel measure supported on E. If, for some $ s\geq 0 $ ,

\begin{align*}\liminf_{r\rightarrow 0}\frac{\log\mu(B(x,r))}{\log r}\geq s\end{align*}

holds for all $x\in E$ , then

\begin{align*}\dim_{\textrm{H}}\!E\geq s\textrm{.}\end{align*}

Finally, we fix some notation. For any $ x\in[0,1) $ , let

\begin{align*}x=\frac{x_{1}}{2}+\frac{x_{2}}{2^{2}}+\ldots\end{align*}

be the binary expansion of x. We write $ x=(x_{1}x_{2}\ldots)_{2} $ . For any $ n\geq 1 $ and a finite block $ (\varepsilon_{1}\varepsilon_{2}\ldots\varepsilon_{n}) $ with $ \varepsilon_{i}\in\{0,1\} $ for all $ 1\leq i\leq n $ , we write

\begin{align*}I_{n}(\varepsilon_{1}\varepsilon_{2}\ldots\varepsilon_{n})=\bigg\{x\in[0,1)\;:\;x_{i}=\varepsilon_{i},1\leq i\leq n\bigg\}\textrm{,}\end{align*}

which is the set of points whose binary expansions begin with the digits $ \varepsilon_{1},\varepsilon_{2},\ldots,\varepsilon_{n} $ . Recall that

\begin{align*}F_{\alpha}=\bigg\{x\in [0,1)\;:\; \lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n}x_{k}=\alpha\bigg\}\end{align*}

is the set which consists of numbers with binary expansion sequences that are Cesàro convergent to $ \alpha $ , and

\begin{align*}G_{\alpha}=\bigg\{x\in [0,1)\;:\;\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow\alpha\textrm{ uniformly in }a\in\mathbb{N}\textrm{ as }n\rightarrow\infty\bigg\}\textrm{.}\end{align*}

Since $ G_{\alpha} $ is a subset of $ F_{\alpha} $ , the upper bound for the Hausdorff dimension of $ G_{\alpha} $ follows from Besicovitch’s result [Reference Besicovitch1]:

\begin{align*}\dim_{\textrm{H}}\!G_{\alpha}\leq\dim_{\textrm{H}}\!F_{\alpha}=-\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha)\textrm{.}\end{align*}

Thus, we only need to focus on the lower bound of the Hausdorff dimension of $ G_{\alpha} $ .

3. Proof of Theorem 1.2

The lower bound of the Hausdorff dimension of $ G_{\alpha} $ is given by classical methods:

(1) Construct a Cantor subset of $ G_{\alpha} $ , denoted by $ E_{m,\alpha} $ , for each large $ m\in\mathbb{N} $ ;

(2) Define a probability measure supported on $ E_{m,\alpha} $ ;

(3) Use the mass distribution principle to find the lower bound of the Hausdorff dimension of $ E_{m,\alpha} $ .

We define the set $ E_{m,\alpha} $ as follows:

\begin{align*}\textrm{For every }m\in \mathbb{N}\textrm{, }E_{m,\alpha}\;:\!=\;\bigg\{x\in [0,1)\;:\;\sum_{k=(j-1)m+1}^{jm}x_{k}=[m\alpha]+\xi_{j} \textrm{ for all }j\in\mathbb{N}\bigg\}\textrm{,}\end{align*}

where $ [m\alpha] $ denotes the largest integer less than or equal to $ m\alpha $ , and the sequence $ \{\xi_{j}\}_{j\in\mathbb{N}} $ is defined as follows. First, take $ \xi_{1}=0 $ and let $ \xi_{2} $ be the integer such that $ 2[m\alpha]+\xi_{1}+\xi_{2}=[2m\alpha] $ , so $ \xi_{2} $ is 0 or 1. Then we define $ \xi_{j} $ inductively: $ \xi_{j} $ is the integer satisfying

\begin{align*} j[m\alpha]+\xi_{1}+\xi_{2}+\ldots+\xi_{j}=[jm\alpha]\textrm{.} \end{align*}

By a simple calculation, we have

\begin{align*}\xi_{j}=\big[jm\alpha\big]-\big[(j-1)m\alpha\big]-\big[m\alpha\big]=\big\{(j-1)m\alpha\big\}+\big\{m\alpha\big\}-\big\{jm\alpha\big\}\textrm{,}\end{align*}

where $ \{m\alpha\}=m\alpha-[m\alpha] $ . Therefore, $ \xi_{j} $ could be $ -1 $ , 0, 1, or 2.
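The formula for $\xi_{j}$ is easy to check by machine. The following sketch (illustrative; the choice of $\alpha$ and $m$ is ours, not from the paper) computes $\xi_{j}=[jm\alpha]-[(j-1)m\alpha]-[m\alpha]$ for an irrational $\alpha$ and verifies the defining identity $j[m\alpha]+\xi_{1}+\ldots+\xi_{j}=[jm\alpha]$:

```python
from math import floor, sqrt

alpha = sqrt(2) - 1          # an irrational alpha in (0, 1), chosen for illustration
m = 300                      # block length, assumed large enough

def xi(j):
    """xi_j = [j*m*alpha] - [(j-1)*m*alpha] - [m*alpha]."""
    return floor(j * m * alpha) - floor((j - 1) * m * alpha) - floor(m * alpha)

# Defining identity: j*[m*alpha] + xi_1 + ... + xi_j = [j*m*alpha] for all j.
for j in range(1, 500):
    assert j * floor(m * alpha) + sum(xi(i) for i in range(1, j + 1)) == floor(j * m * alpha)
    assert xi(j) in (-1, 0, 1, 2)   # the range noted in the text
```

The identity holds by telescoping, so the per-block correction $\xi_j$ keeps every prefix of $jm$ digits at exactly $[jm\alpha]$ ones.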

In other words, $ E_{m,\alpha} $ is the collection of numbers whose binary expansion sequences satisfy that, for all $ j\in\mathbb{N} $ , the first jm digits contain exactly $ [jm\alpha] $ ones; that is, if we cut the sequence into blocks of length m, the j-th block contains exactly $ [m\alpha]+\xi_{j} $ ones. Apart from this, there is no restriction on the positions of the ones.

We check that $ E_{m,\alpha} $ is indeed a subset of $ G_{\alpha} $ when m is sufficiently large.

Lemma 3.1. Let either $ 0<\alpha\leq 1 $ and $ m>\left[\frac{100}{\alpha}\right]+100 $ , or $ \alpha=0 $ and $ m\in\mathbb{N} $ . Then for any $ x\in E_{m,\alpha} $ ,

\begin{align*}\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow\alpha\textit{ uniformly in }a\in\mathbb{N}\textit{ as }n\rightarrow\infty\textrm{.}\end{align*}

Proof. First, for $ \alpha=0 $ , $ E_{m,0} $ consists of the single point 0, so it is a subset of $ G_{0} $ . Second, for $ 0<\alpha\leq1 $ , note that $ [m\alpha] $ cannot be 0 for m large. Fix an integer a. For any $ x\in E_{m,\alpha} $ , let $ j_{1} $ be the largest j such that $ jm<a $ , and $ j_{2} $ the smallest j such that $ jm\geq a+n-1 $ . Then, on the one hand,

(3.1) \begin{equation} \sum_{k=a}^{a+n-1}x_{k}\leq\sum_{k=j_{1}m+1}^{j_{2}m}x_{k}\leq \big[j_{2}m\alpha\big]-\big[j_{1}m\alpha\big] \leq(j_{2}-j_{1})m\alpha+1\textrm{.} \end{equation}

On the other hand,

(3.2) \begin{equation} \sum_{k=a}^{a+n-1}x_{k}\geq\sum_{k=(j_{1}+1)m+1}^{(j_{2}-1)m}x_{k}\geq\big[(j_{2}-1)m\alpha\big] -\big[(j_{1}+1)m\alpha\big] \geq(j_{2}-j_{1})m\alpha-2m\alpha-1\textrm{.} \end{equation}

Furthermore, we have

(3.3) \begin{equation} n-1\leq (j_{2}-j_{1})m\leq n+2m\textrm{.} \end{equation}

Combining (3.1), (3.2) and (3.3), it follows that

(3.4) \begin{equation} \frac{(n-1)\alpha-2m\alpha-1}{n}\leq\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k} \leq\frac{(n+2m)\alpha+1}{n}\textrm{.} \end{equation}

Since the leftmost and rightmost terms in (3.4) do not depend on a, the convergence is uniform as n tends to infinity. This shows that $ E_{m,\alpha} $ is a subset of $ G_{\alpha} $ .
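Lemma 3.1 can be illustrated numerically: build a finite prefix of a point of $E_{m,\alpha}$ by filling each block with $[m\alpha]+\xi_j$ ones (here placed at the start of each block; the lemma allows any placement), and check that every window average obeys the two-sided bound (3.4). The parameter choices below are ours, not from the paper.

```python
from math import floor, sqrt

alpha = sqrt(2) - 1          # illustrative irrational alpha
m = 50                       # block length
J = 200                      # number of blocks to generate

# Build a prefix of a point of E_{m,alpha}: block j carries [m*alpha] + xi_j ones.
digits = []
for j in range(1, J + 1):
    ones = floor(j * m * alpha) - floor((j - 1) * m * alpha)   # [m*alpha] + xi_j
    digits += [1] * ones + [0] * (m - ones)

# Check the two-sided bound (3.4) for several window lengths n and starts a.
for n in (500, 2000, 5000):
    lo = ((n - 1) * alpha - 2 * m * alpha - 1) / n
    hi = ((n + 2 * m) * alpha + 1) / n
    for a in range(0, len(digits) - n, 97):
        avg = sum(digits[a:a + n]) / n
        assert lo - 1e-9 <= avg <= hi + 1e-9
```

Both `lo` and `hi` tend to `alpha` as `n` grows and do not involve `a`, which is the uniformity asserted by the lemma.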

Next, we analyze the Cantor structure of $ E_{m,\alpha} $ in detail. Here, we assume that $ 0<\alpha\leq 1 $ and m is sufficiently large. Denote by $ U_{j} $ the set of blocks of length m which contain exactly $ [m\alpha]+\xi_{j} $ ones, $ j=1,2,\ldots $ , that is,

\begin{align*}U_{j}=\bigg\{u=(\varepsilon_{1}\varepsilon_{2}\ldots\varepsilon_{m})\in\{0,1\}^{m}\;:\;\sum_{k=1}^{m}\varepsilon_{k}=[m\alpha]+\xi_{j}\bigg\}\textrm{.}\end{align*}

Then, for each j, the collection $ U_{j} $ contains $ D_{j}=\textrm{C}_{m}^{[m\alpha]+\xi_{j}} $ elements, where $ \textrm{C}_{n}^{k} $ denotes the binomial coefficient. So the first level of the Cantor structure of $ E_{m,\alpha} $ is

\begin{align*}\mathscr{S}_{1}=\big\{I_{m}(u)\;:\;u\in U_{1}\big\}\textrm{.}\end{align*}

The second level is

\begin{align*}\mathscr{S}_{2}=\big\{I_{2m}(u_{1}u_{2})\;:\;u_{1}\in U_{1},u_{2}\in U_{2}\big\}\textrm{,}\end{align*}

and by induction, the level-j of the Cantor structure of $ E_{m,\alpha} $ is

\begin{align*}\mathscr{S}_{j}=\big\{I_{jm}(u_{1}u_{2}\ldots u_{j})\;:\;u_{i}\in U_{i}\textrm{ for all }1\leq i\leq j\big\}\textrm{.}\end{align*}

Then, we have that

\begin{align*}E_{m,\alpha}=\bigcap_{j=1}^{\infty}\bigcup_{I\in\mathscr{S}_{j}}I\textrm{.}\end{align*}
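For very small parameters, the levels can be enumerated directly. The sketch below (with an artificial choice of $m$ and $\alpha$, ours only) confirms that level 2 contains $D_{1}D_{2}$ cylinders and that every level-2 prefix has exactly $[2m\alpha]$ ones:

```python
from itertools import product
from math import comb, floor

alpha, m = 0.55, 5           # tiny illustrative parameters, not from the paper

def ones_in_block(j):
    """Number of ones in block j, i.e. [m*alpha] + xi_j."""
    return floor(j * m * alpha) - floor((j - 1) * m * alpha)

def U(j):
    """All 0/1 blocks of length m with exactly [m*alpha] + xi_j ones."""
    return [u for u in product((0, 1), repeat=m) if sum(u) == ones_in_block(j)]

level2 = [u1 + u2 for u1 in U(1) for u2 in U(2)]
assert len(level2) == comb(m, ones_in_block(1)) * comb(m, ones_in_block(2))  # D_1 * D_2
# Every level-2 prefix has exactly [2*m*alpha] ones in total:
assert all(sum(w) == floor(2 * m * alpha) for w in level2)
```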

Now we compute the lower bound of the Hausdorff dimension of $ E_{m,\alpha} $ . We will have to deal with some binomial coefficients later, so we first simplify them using Stirling’s formula.

Lemma 3.2. Let $ \{d_{m}\}_{m\geq 1} $ be a sequence of integers with $ d_{m}\leq m $ for all $ m\geq 1 $ and

\begin{align*}\limsup_{m\rightarrow\infty}\frac{d_{m}}{m}=\alpha\textrm{.}\end{align*}

Then

\begin{align*}\limsup_{m\rightarrow\infty}\frac{1}{m}\log_{2}\textrm{C}_{m}^{d_{m}}=-\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha)\textrm{.}\end{align*}

Proof. Write $ m^{\prime}=d_{m} $ . By Stirling’s formula,

\begin{align*} \textrm{C}_{m}^{m^{\prime}} {}&=\frac{m\textrm{!}}{m^{\prime}\textrm{!}(m-m^{\prime})\textrm{!}}\\[5pt] {}&=\frac{\sqrt{2\pi m}(\frac{m}{\textrm{e}})^{m}\textrm{e}^{O(\frac{1}{m})}}{\sqrt{2\pi m^{\prime}}(\frac{m^{\prime}}{\textrm{e}})^{m^{\prime}}\textrm{e}^{O(\frac{1}{m^{\prime}})}\sqrt{2\pi (m-m^{\prime})}(\frac{m-m^{\prime}}{\textrm{e}})^{m-m^{\prime}}\textrm{e}^{O(\frac{1}{m-m^{\prime}})}}\\[5pt] {}&=\left(\frac{m}{m-m^{\prime}}\right)^{m}\left(\frac{m-m^{\prime}}{m^{\prime}}\right)^{m^{\prime}}O\!\left(m^{-\frac{1}{2}}\right)\textrm{.} \end{align*}

So we have

\begin{align*} \limsup_{m\rightarrow\infty}\frac{1}{m}\log_{2}\textrm{C}_{m}^{m^{\prime}} {}&=\limsup_{m\rightarrow\infty}\bigg(\!\log_{2}\frac{1}{1-\frac{m^{\prime}}{m}}+\frac{m^{\prime}}{m}\log_{2}\frac{1-\frac{m^{\prime}}{m}}{\frac{m^{\prime}}{m}}+O\!\left(\frac{\log_{2}m}{m}\right)\!\!\bigg)\\[5pt] {}&=\log_{2}\frac{1}{1-\alpha}+\alpha\log_{2}\frac{1-\alpha}{\alpha}\\[5pt] {}&=-\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha)\textrm{.} \end{align*}

Lemma 3.3. For any $ 0<\alpha\leq 1 $ and $ m>[\frac{100}{\alpha}]+100 $ , we have

\begin{align*}\dim_{\textrm{H}}\!E_{m,\alpha}\geq \liminf_{j\rightarrow\infty}\frac{\log_{2}\prod_{k=1}^{j}D_{k}}{jm}\textrm{.}\end{align*}

Proof. Define a measure $ \mu $ supported on $ E_{m,\alpha} $ . Let $ \mu ([0,1))=1 $ . For all $ j>0 $ and every element $ I\in\mathscr{S}_{j} $ , we set

\begin{align*} \mu(I)=\bigg(\prod_{k=1}^{j}D_{k}\bigg)^{-1}\textrm{,} \end{align*}

and $ \mu(E)=0 $ for every set E that is disjoint from all elements of $ \mathscr{S}_{j} $ . Then, one can see that the set function $\mu$ satisfies Kolmogorov’s consistency condition, that is, for any $j\ge 1$ and $I\in \mathscr{S}_j$ ,

\begin{align*} \sum_{\tilde{I}\in \mathscr{S}_{j+1}, \tilde{I}\subset I}\mu(\tilde{I})=\mu(I). \end{align*}

Thus, it can be extended to a mass distribution supported on $E_{m,\alpha}$ [Reference Falconer5]. For any ball B(x, r) centered at $ x\in[0,1) $ with radius $ r<1 $ , there is an integer j such that $ 2^{-(j+1)m}\leq r<2^{-jm} $ . It follows that the ball intersects at most 2 elements in $ \mathscr{S}_{j} $ , and then we have

\begin{align*} \frac{\log_{2}\mu(B(x,r))}{\log_{2}r} {}&\geq\frac{\log_{2}\!(2\cdot(\prod_{k=1}^{j}D_{k})^{-1})}{\log_{2}r} \\[5pt] &= \frac{-\log_{2}\prod_{k=1}^{j}D_{k}}{\log_{2}r}+O\!\left(\frac{1}{\log_{2}r}\right)\\[5pt] {}&\geq \frac{-\log_{2}\prod_{k=1}^{j}D_{k}}{-(j+1)m}+O\!\left(\frac{1}{\log_{2}r}\right)\textrm{.} \end{align*}

Thus

\begin{align*}\liminf_{r\rightarrow 0}\frac{\log\mu(B(x,r))}{\log r}\geq\liminf_{j\rightarrow\infty}\frac{\log_{2}\prod_{k=1}^{j}D_{k}}{jm}\textrm{,}\end{align*}

and the lemma holds by the mass distribution principle.

Remark 1. For the case $ \alpha=0 $ and $ m\in\mathbb{N} $ , the result in Lemma 3.3 is trivial since $ E_{m,0} $ is a set of a single point, and $ D_{j}=1 $ for all $ j\in\mathbb{N} $ .

Proof of Theorem 1.2. As Lemmas 3.1 and 3.3 hold for all sufficiently large m, we can take the limit supremum of $ \dim_{\textrm{H}}\!E_{m,\alpha} $ as $ m\rightarrow\infty $ . Let $ \xi_{(m,\alpha)} $ be the number in $ \{-1,0,1,2\} $ such that $ \textrm{C}_{m}^{[m\alpha]+\xi_{(m,\alpha)}} $ is the smallest among

\begin{align*}\textrm{C}_{m}^{[m\alpha]-1},\textrm{C}_{m}^{[m\alpha]},\textrm{C}_{m}^{[m\alpha]+1},\textrm{C}_{m}^{[m\alpha]+2}\textrm{,}\end{align*}

and take $ m^{\prime}=[m\alpha]+\xi_{(m,\alpha)} $ . Then, we have

\begin{align*} \dim_{\textrm{H}}\!G_{\alpha} {}&\geq\limsup_{m\rightarrow\infty}\dim_{\textrm{H}}\!E_{m,\alpha}\\[5pt] {}&=\limsup_{m\rightarrow\infty}\liminf_{j\rightarrow\infty}\frac{\log_{2}\prod_{k=1}^{j}D_{k}}{jm}\\ &\geq\limsup_{m\rightarrow\infty}\liminf_{j\rightarrow\infty}\frac{\log_{2}\prod_{k=1}^{j}\textrm{C}_{m}^{[m\alpha]+\xi_{(m,\alpha)}}}{jm}\\{} {}&=\limsup_{m\rightarrow\infty}\frac{1}{m}\log_{2}\textrm{C}_{m}^{m^{\prime}}\\&=-\alpha\log_{2}\alpha-(1-\alpha)\log_{2}\!(1-\alpha) \end{align*}

where the last equality follows from Lemma 3.2 since $ \limsup_{m\rightarrow\infty}\frac{m^{\prime}}{m}=\alpha $ .

Usachev [Reference Usachev8] also proposed a potential method to attack the case when $ \alpha $ is irrational: define a sequence of rationals $ \{\frac{p_{i}}{q_{i}} \}_{i=1}^{\infty}$ which converges to $ \alpha $ , and construct a set consisting of numbers with the following binary expansions. It requires that among the positions in

\begin{align*} \bigg[\sum_{k=1}^{i-1}q_{k}m+1, \sum_{k=1}^{i-1}q_{k}m+q_{i}m\bigg]\textrm{,} \end{align*}

the binary expansion sequences have exactly $ p_{i}m $ many ones. Although we do have

\begin{align*}\frac{1}{n}\sum_{k=a}^{a+n-1}x_{k}\longrightarrow\alpha\textrm{ as }n\rightarrow\infty\end{align*}

for every positive integer a, the convergence may not be uniform: as mentioned in [Reference Usachev8], it can happen that in every block of length $ q_{i}m $ , all the ones appear first and the zeros follow.

To ensure that the convergence is uniform, it is necessary to impose suitable restrictions on the distribution of the ones, for example, requiring zeros and ones to appear regularly. Here, we cut the sequence into blocks of the same length, so there is only a small difference in the number of ones between different blocks. This avoids the problem caused by $ q_{i} $ going to infinity and the ones being separated from the zeros.
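The failure of uniformity in the clustered construction can be seen numerically. In the sketch below (the concrete values of $q_i$, $m$ and $\alpha$ are hypothetical choices, not from [Reference Usachev8]), each block of length $q_{i}m$ has all its ones first; for every window length $n$ there is then a starting position whose window lies entirely inside a run of zeros, so the worst-case deviation over all starts stays at $\alpha$ instead of shrinking:

```python
# Illustrative sketch: blocks of length q_i*m with all ones first, zeros after.
# Cesaro averages converge to alpha, but windowed averages do not converge
# uniformly in the starting position a.
m = 4
alpha = 0.5
digits = []
for q in (1, 2, 4, 8, 16, 32, 64):            # q_i -> infinity (hypothetical choice)
    length = q * m
    ones = int(alpha * length)                # all ones placed at the block start
    digits += [1] * ones + [0] * (length - ones)

def worst_window_average_gap(x, n, t):
    """Max over a of |(1/n) * sum(x[a:a+n]) - t|."""
    return max(abs(sum(x[a:a + n]) / n - t) for a in range(len(x) - n + 1))

# For each n there is a window lying entirely in a run of zeros (average 0),
# so the worst-case gap equals alpha rather than tending to 0:
for n in (8, 16, 32, 64):
    assert worst_window_average_gap(digits, n, alpha) == alpha
```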

References

Besicovitch, A., On the sum of digits of real numbers represented in the dyadic system, Math. Ann. 110 (1935), 321–330.
Borel, É., Les probabilités dénombrables et leurs applications arithmétiques, Rend. Circ. Mat. Palermo 26 (1909), 247–271.
Connor, J., Almost none of the sequences of 0’s and 1’s are almost convergent, Int. J. Math. Math. Sci. 13 (1990), 775–777.
Esi, A. and Necdet Çatalbaş, M., Almost convergence of triple sequences, Global J. Math. Anal. 2 (2014), 6–10.
Falconer, K., Fractal geometry: mathematical foundations and applications, 2nd edition (John Wiley & Sons, Hoboken, NJ, 2003).
Lorentz, G., A contribution to the theory of divergent sequences, Acta Math. 80 (1948), 167–190.
Mohiuddine, S., An application of almost convergence in approximation theorems, Appl. Math. Lett. 24 (2011), 1856–1860.
Usachev, A., Hausdorff dimension of the set of almost convergent sequences, Glasg. Math. J. 64 (2022), 691–697.