1 Introduction and background
1.1 Preliminaries
We assume that the reader is familiar with the basic concepts and results of algorithmic randomness. Our notation is standard. Unexplained notation can be found in Downey and Hirschfeldt [Reference Downey and Hirschfeldt3]. As is standard in the field, all rational and real numbers are meant to be in the unit interval
$[0,1)$
, unless explicitly stated otherwise.
We start by reviewing some central concepts and results that will be used subsequently. The main object of interest of this article is Solovay reducibility, which was introduced by Robert M. Solovay [Reference Solovay11] in 1975 as a measure of relative randomness. Its original definition by Solovay uses the notion of a translation function defined on the left cut of a real.
Definition 1.1. A computable approximation is a computable Cauchy sequence, i.e., a computable sequence of rational numbers that converges. A real is computably approximable, or c.a., if it is the limit of some computable approximation.
A left-c.e. approximation is a nondecreasing computable approximation. A real is left-c.e. if it is the limit of some left-c.e. approximation.
Definition 1.2. The left cut of a real
$\alpha $
, written
$LC(\alpha )$
, is the set of all rationals strictly smaller than
$\alpha $
.
Definition 1.3 (Solovay, 1975).
A translation function from a real
$\beta $
to a real
$\alpha $
is a partially computable function g from the set
${\mathbb {Q}} \cap [0,1)$
to itself such that, for all
$q<\beta $
, the value
$g(q)$
is defined and fulfills
$g(q)<\alpha $
, and
$$ \begin{align} \lim\limits_{q\nearrow\beta} g(q) = \alpha, \end{align} $$
where
$\lim \limits _{q\nearrow \beta }$
denotes the left limit.
A real
$\alpha $
is Solovay reducible to a real
$\beta $
, also written as
$\alpha {\le }_{\mathrm {S}} \beta $
, if there is a real constant c and a translation function g from
$\beta $
to
$\alpha $
such that, for all
$q< \beta $
, it holds that
$$ \begin{align} \alpha - g(q) < c\,(\beta - q). \end{align} $$
We will refer to (2) as the Solovay condition and to c as the Solovay constant, and we say that g witnesses the Solovay reducibility of
$\alpha $
to
$\beta $
.
Note that if a partially computable rational-valued function g is defined on all of the set
$LC(\beta )$
and maps it to
$LC(\alpha )$
, then the Solovay condition (2) implies (1).
Noting that the translation function g defined above provides useful information only about the left cuts of
$\alpha $
and
$\beta $
, many researchers focused on Solovay reducibility as a measure of relative randomness of left-c.e. reals, whereas, outside of the left-c.e. reals, the notion has been considered as “badly behaved” by several authors (see, e.g., Downey and Hirschfeldt [Reference Downey and Hirschfeldt3, Section 9.1]).
Calude, Hertling, Khoussainov, and Wang [Reference Calude, Hertling, Khoussainov and Wang2] gave an equivalent characterization of Solovay reducibility on the set of the left-c.e. reals in terms of left-c.e. approximations of the involved reals.
Proposition 1.4 (Calude et al., 1998).
A left-c.e. real
$\alpha $
is Solovay reducible to a left-c.e. real
$\beta $
with a Solovay constant c if and only if, for all left-c.e. approximations
$a_0,a_1,\dots \nearrow \alpha $
and
$b_0,b_1,\dots \nearrow \beta $
, there exists a computable index function
$f:\mathbb {N}\to \mathbb {N}$
such that, for every n, it holds that
$$ \begin{align} \alpha - a_{f(n)} < c\,(\beta - b_n). \end{align} $$
Informally speaking, the reduction
$\alpha {\le }_{\mathrm {S}} \beta $
provides, for every left-c.e. approximation of
$\beta $
, a left-c.e. approximation of
$\alpha $
that converges at least as fast
. It is well known and easy to prove that the universal quantification over left-c.e. approximations to
$\alpha $
in Proposition 1.4 can be replaced by an existential quantification as follows.
Proposition 1.5. A left-c.e. real
$\alpha $
is Solovay reducible to a left-c.e. real
$\beta $
with a Solovay constant c if and only if there exist left-c.e. approximations
$a_0,a_1,\dots \nearrow \alpha $
and
$b_0,b_1,\dots \nearrow \beta $
such that, for every n, it holds that
$$ \begin{align} \alpha - a_n < c\,(\beta - b_n). \end{align} $$
In what follows, we refer to the characterizations of Solovay reducibility given in Definition 1.3 and in Propositions 1.4 and 1.5 as rational and index approaches, respectively.
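For a concrete (and routine) illustration of the index approach, consider a left-c.e. real $\beta$ with a left-c.e. approximation $b_0,b_1,\dots\nearrow\beta$ such that $b_n<\beta$ for all n, and let $\alpha=\beta/2$. Then $a_n=b_n/2$ defines a left-c.e. approximation of $\alpha$ with
$$ \begin{align*} \alpha - a_n = \frac{\beta - b_n}{2} < 1\cdot(\beta - b_n) \quad\text{ for all } n, \end{align*} $$
so, by Proposition 1.5 (with the identity function as index function in the sense of Proposition 1.4), $\alpha$ is Solovay reducible to $\beta$ with the Solovay constant $1$.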
Remark. Zheng and Rettinger [Reference Zheng, Rettinger, Chwa, Ian and Munro14] used the index approach to introduce the following modification of Solovay reducibility on the computably enumerable reals: a c.a. real is S2a-reducible to a real
$\beta $
if there exist a constant c and computable approximations
$a_0,a_1,\dots $
and
$b_0,b_1,\dots $
of
$\alpha $
and
$\beta $
, respectively, that satisfy the inequality
$|\alpha - a_n| < c(|\beta - b_n| + 2^{-n})$
for all n.
When comparing Solovay reducibility and S2a-reducibility, both are equivalent on the left-c.e. reals [Reference Zheng, Rettinger, Chwa, Ian and Munro14, Theorem 3.2(2)] but S2a-reducibility is strictly weaker on the c.a. reals [Reference Titov13, Theorem 2.1]. Some authors [Reference Hoyrup, Saucedo, Stull, Chatzigiannakis, Kaklamanis, Marx and Sannella4, Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Reference Rettinger and Zheng10] consider S2a-reducibility and not Solovay reducibility as the standard reducibility for investigating the c.a. reals.
1.2 The Barmpalias–Lewis-Pye limit theorem on left-c.e. reals: two versions
Using the index approach of Calude et al., Kučera and Slaman [Reference Kucera and Slaman5] have proven that the
$\Omega $
-like reals, i.e., Martin-Löf random left-c.e. reals, form the highest Solovay degree on the set of left-c.e. reals. The core of their proof is the following assertion.
Lemma 1.6 (Kučera and Slaman, 2001).
For every two left-c.e. approximations
${a_0,a_1,\dots }$
and
${b_0,b_1,\dots }$
of a left-c.e real
$\alpha $
and a Martin-Löf random left-c.e. real
$\beta $
, respectively, there exists a constant c such that
$$ \begin{align*} \alpha - a_n < c\,(\beta - b_n) \quad\text{ holds for all } n. \end{align*} $$
For an explicit proof of the latter lemma, see Miller [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.1]. Barmpalias and Lewis-Pye [Reference Barmpalias and Lewis-Pye1] have strengthened Lemma 1.6 by showing the following theorem.
Theorem 1.7 (Barmpalias, Lewis-Pye, 2017).
For every left-c.e. real
$\alpha $
and every Martin-Löf random left-c.e. real
$\beta $
, there exists a constant
$d\geq 0$
such that, for all left-c.e. approximations
$a_0,a_1,\dots \nearrow \alpha $
and
$b_0,b_1,\dots \nearrow \beta $
, it holds that
$$ \begin{align*} \lim\limits_{n\to\infty}\frac{\alpha - a_n}{\beta - b_n} = d. \end{align*} $$
Moreover,
$d=0$
if and only if
$\alpha $
is not Martin-Löf random.
We refer to Theorem 1.7 as index form of the Barmpalias–Lewis-Pye limit theorem. We will argue in connection with Proposition 1.9 below that the index form of the Limit Theorem, which is essentially the original formulation, can be equivalently stated, with the value of d preserved, in the following rational form (which necessitates the use of nondecreasing translation functions).
Theorem 1.8 (Rational form of the Barmpalias–Lewis-Pye limit theorem).
For every left-c.e. real
$\alpha $
and every Martin-Löf random left-c.e. real
$\beta $
, there exists a constant
$d\geq 0$
such that, for every nondecreasing translation function g from
$\beta $
to
$\alpha $
, it holds that
$$ \begin{align} \lim\limits_{q\nearrow\beta}\frac{\alpha - g(q)}{\beta - q} = d. \end{align} $$
Moreover,
$d=0$
if and only if
$\alpha $
is not Martin-Löf random.
The following proposition and its proof indicate that Theorems 1.8 and 1.7 can be considered as variants of each other.
Proposition 1.9. For every left-c.e. real
$\alpha $
and every Martin-Löf random left-c.e. real
$\beta $
, Theorems 1.7 and 1.8 hold with the same constant
$d\geq 0$
.
Proof. First, we show that Theorem 1.7 follows from Theorem 1.8 with the same d. Let
$a_0,a_1,\dots $
and
$b_0,b_1,\dots $
be left-c.e. approximations of reals
$\alpha $
and
$\beta $
where
$\beta $
is Martin-Löf random. Let d be as in Theorem 1.8. Define functions f and h on
$LC(\beta )$
by
Recall that both the sequences
$a_0, a_1,\dots $
and
$b_0, b_1,\dots $
are nondecreasing (but not necessarily strictly increasing). So, by construction, f and h are nondecreasing translation functions from
$\beta $
to
$\alpha $
, and we have for all n that
${f(b_n) \le a_n \le h(b_n)}$
. As a consequence, we obtain
$$ \begin{align} d = \lim\limits_{n\to\infty}\frac{\alpha - h(b_n)}{\beta - b_n} \leq \liminf \limits_{n\to\infty}\frac{\alpha - a_n}{\beta - b_n} \leq \limsup \limits_{n\to\infty}\frac{\alpha - a_n}{\beta - b_n} \leq \lim\limits_{n\to\infty}\frac{\alpha - f(b_n)}{\beta - b_n} = d, \end{align} $$
where the equalities hold by choice of d and by applying Theorem 1.8 to h and to f. Now, in particular, the limit inferior and limit superior in (6) are both equal to d, i.e., the corresponding sequence of fractions converges to d. Theorem 1.7 follows because
$a_0,a_1,\dots $
and
$b_0,b_1,\dots $
have been chosen as arbitrary left-c.e. approximations of
$\alpha $
and
$\beta $
, respectively.
Next, we show that Theorem 1.8 follows from Theorem 1.7 with the same d. Let
$\alpha $
and
$\beta $
be left-c.e. reals where
$\beta $
is Martin-Löf random, and fix some strictly increasing left-c.e. approximation
$b_0,b_1,\dots $
of
$\beta $
. Let d be as in Theorem 1.7 and g be an arbitrary nondecreasing translation function from
$\beta $
to
$\alpha $
. For all n, let
$a_n=g(b_n)$
. Then, for all rationals q and for n such that
$b_n \le q < b_{n+1}$
, the monotonicity of g implies that
${a_n \leq g(q) \leq a_{n+1}}$
, hence, for all such q and n, it holds that
$$ \begin{align} \underbrace{\frac{\alpha-a_{n+1}}{\beta - b_n}}_{= \rho_1(n)} \le \underbrace{\frac{\alpha-g(q)}{\beta - q}}_{= \rho(q)} < \underbrace{\frac{\alpha-a_n}{\beta - b_{n+1}}}_{= \rho_2(n)}. \end{align} $$
Now, the sequences
$b_0, b_1,\dots $
and
$b_1, b_2,\dots $
are both left-c.e. approximations of
$\beta $
, and, by choice of g and by (1), the sequences
$a_1, a_2,\dots $
and
$a_0, a_1,\dots $
are both left-c.e. approximations of
$\alpha $
. Hence, by Theorem 1.7, we have
$$ \begin{align*} \lim\limits_{n\to\infty}\rho_1(n) = \lim\limits_{n\to\infty}\rho_2(n) = d. \end{align*} $$
So for given
$\varepsilon>0$
, there is some index
$n(\varepsilon )$
such that, for all
$n> n(\varepsilon )$
, the values
$\rho _1(n)$
and
$\rho _2(n)$
differ at most by
$\varepsilon $
from d. But then, by (7), for every rational q where
$b_{n(\varepsilon )} < q < \beta $
, the value
$\rho (q)$
differs at most by
$\varepsilon $
from d. Thus, we have (5), i.e., the values
$\rho (q)$
converge to d when q tends to
$\beta $
from the left. Since g was chosen as an arbitrary nondecreasing translation function from
$\beta $
to
$\alpha $
, Theorem 1.8 follows.
Monotone translation functions have been considered before by Kumabe, Miyabe, Mizusawa, and Suzuki [Reference Kumabe, Miyabe, Mizusawa and Suzuki6], who characterized Solovay reducibility on the set of left-c.e. reals in terms of nondecreasing real-valued translation functions.
It is not complicated to check that, in case a left-c.e. real is Solovay reducible to another left-c.e. real, this is always witnessed by a nondecreasing translation function while preserving any given Solovay constant.
Proposition 1.10. Let
$\alpha $
and
$\beta $
be left-c.e. reals, and let c be a real. Then
$\alpha $
is Solovay reducible to
$\beta $
with the Solovay constant c if and only if
$\alpha $
is Solovay reducible to
$\beta $
via some nondecreasing translation function g and the Solovay constant c.
Proof. For a proof of the nontrivial direction of the asserted equivalence, assume that
$\alpha {\le }_{\mathrm {S}} \beta $
with the Solovay constant c. By Proposition 1.4, choose left-c.e. approximations
$a_0,a_1,\dots \nearrow \alpha $
and
$b_0,b_1,\dots \nearrow \beta $
such that (4) holds.
Then
$\alpha $
is Solovay reducible to
$\beta $
with the Solovay constant c via the nondecreasing translation function g defined by
$g(q)=a_{\max \{n\colon b_n\leq q\}}$
.
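To illustrate the construction in the proof, here is a minimal Python sketch (the function name translation_value and the use of finite prefixes of the approximations are ours and purely illustrative) that evaluates the nondecreasing translation function on finite data:

```python
from fractions import Fraction

def translation_value(a, b, q):
    """Illustrative sketch: the value g(q) = a_{max{n : b_n <= q}} of the
    nondecreasing translation function from the proof above, computed from
    finite prefixes a, b (nondecreasing lists of Fractions) of the two
    left-c.e. approximations.  Returns None if no b_n <= q is seen."""
    best = None
    for n, bn in enumerate(b):
        if bn <= q:
            best = n                     # keep the largest index n with b_n <= q
    return a[best] if best is not None else None

# toy usage with approximations of beta = 2/3 and alpha = beta/2 = 1/3
b = [Fraction(k, 150) for k in range(101)]      # b_n -> 2/3
a = [bn / 2 for bn in b]                        # a_n = b_n / 2 -> 1/3
print(translation_value(a, b, Fraction(1, 2)))  # Fraction(1, 4)
```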
Remark. Note that strengthening Definition 1.3 by considering only nondecreasing translation functions yields a well-defined reducibility
${\le }_{\mathrm {S}}^{\mathrm {m}} $
on
$\mathbb {R}$
, called monotone Solovay reducibility. The basic properties of
${\le }_{\mathrm {S}}^{\mathrm {m}} $
have been investigated by Titov [Reference Titov12, Chapter 3]. Note that Proposition 1.10 shows that
$\leq _S^{m}$
and
${\le }_{\mathrm {S}} $
coincide on the set of left-c.e. reals.
In Theorem 1.8, requiring the function g to be nondecreasing is crucial because, for every
$\alpha $
and
$\beta $
that fulfill the conditions there, we can construct a nonmonotone translation function g such that the left limit in (5) does not exist, as we will see in the next proposition.
Proposition 1.11. Let
$\alpha ,\beta $
be two left-c.e. reals such that
$\alpha {\le }_{\mathrm {S}} \beta $
with a Solovay constant c. Then there exists a translation function g from
$\beta $
to
$\alpha $
such that
$\alpha {\le }_{\mathrm {S}} \beta $
with the Solovay constant c via g, wherein
$$ \begin{align} \liminf\limits_{q\nearrow\beta}\frac{\alpha - g(q)}{\beta - q} = 0\quad \text{and}\quad \limsup\limits_{q\nearrow\beta}\frac{\alpha - g(q)}{\beta - q}> 0. \end{align} $$
Proof. By Proposition 1.5, fix left-c.e. approximations
$a_0,a_1\dots \nearrow \alpha $
and
$b_0,b_1,\dots \nearrow \beta $
such that (4) holds.
The desired translation function g is defined by letting
$$ \begin{align*} &g(b_n+\frac{b_{n+1} - b_n}{2^k}) = a_{n+k} && \text{ for all }n,k>0\text{ in case }b_n\neq b_{n+1}, \\ &g(b_n - \frac{b_{n}-b_{n-1}}{3^k}) = a_{n+k} - c(b_{n+k} - b_n) && \text{ for all }n,k>0\text{ in case }b_{n-1}\neq b_n, \\ &g(q) = a_{\min\{n: b_n\geq q\}} && \text{ for all other rationals }q<\beta. \end{align*} $$
Obviously, g is partially computable and defined on all rationals
$< \beta $
.
So, it suffices to show that g satisfies the conditions (2) and (8) (recall that the Solovay condition (2) implies (1) in the definition of translation functions).
In order to argue that (2) holds, we consider three cases:
-
• for
$q = b_n + \frac {b_{n+1}-b_n}{2^k}$
for some
$n,k>0$
, where
$b_n\neq b_{n+1}$
, (2) is implied by
$$\begin{align*}\alpha - g(q) = \alpha - a_{n+k}\leq \alpha - a_{n+1}<c(\beta - b_{n+1})<c(\beta - q);\end{align*}$$
-
• for
$q = b_n - \frac {b_n - b_{n-1}}{3^k}$
for some
$n,k>0$
where
$b_{n-1}\neq b_n$
, (2) follows from
$$\begin{align*}\kern-6pt\alpha - g(q) = \alpha - a_{n+k} + c(b_{n+k} - b_n) < c(\beta - b_{n+k}) + c(b_{n+k} - b_n) < c(\beta - q);\end{align*}$$
-
• for all other q, (2) is implied by
$$\begin{align*}\alpha - g(q) = \alpha - a_{\min\{n:b_n\geq q\}} < c(\beta - b_{\min\{n:b_n\geq q\}})\leq c(\beta - q)\end{align*}$$
(note that, in each case, the first strict inequality follows from (4)).
Further, the left part of (8) holds since, for every n such that
$b_n\neq b_{n+1}$
, the real
$\alpha $
is an accumulation point of
$g(q)|_{[b_n, b_{n+1}]}$
because
$g(b_n+\frac {b_{n+1} - b_n}{2^k})\underset {k\to \infty }{\to }\alpha $
; hence the fraction
$\frac {\alpha - g(q)}{\beta - q}$
takes values arbitrarily close to
$0$
on every such interval
$[b_n, b_{n+1}]$
, and these intervals occur arbitrarily close to
$\beta $
.
Finally, the right part of (8) holds since, for every n such that
$b_{n-1}\neq b_n$
, the constant c is an accumulation point of
$\frac {\alpha - g(q)}{\beta - q}|_{[b_{n-1},b_n]}$
because
$$\begin{align*}\frac{\alpha - g(b_n - \frac{b_n - b_{n-1}}{3^k})}{\beta - (b_n - \frac{b_n - b_{n-1}}{3^k})} = \frac{\alpha - a_{n+k} + c(b_{n+k} - b_n)}{\beta - b_n + \frac{b_n - b_{n-1}}{3^k}} \underset{k\to\infty}{\longrightarrow}\frac{c(\beta - b_n)}{\beta - b_n} = c. \end{align*}$$
The latter proposition motivates considering Solovay reducibility via nondecreasing translation functions only, in order to extend the Barmpalias–Lewis-Pye limit theorem to
$\mathbb {R}$
.
2 The theorem
Theorem 2.1. For every real
$\alpha $
and every Martin-Löf random real
$\beta $
, there exists a constant
$d\geq 0$
such that, for every nondecreasing translation function g from
$\beta $
to
$\alpha $
, it holds that
$$ \begin{align} \lim\limits_{q\nearrow\beta}\frac{\alpha - g(q)}{\beta - q} = d. \end{align} $$
Proof. Let
$\alpha $
and
$\beta $
be two reals where
$\beta $
is Martin-Löf random, and let g be a nondecreasing translation function from
$\beta $
to
$\alpha $
.
The proof is organized as follows.
In Section 2.1, we show that
$\alpha $
is Solovay reducible to
$\beta $
via the translation function g, i.e., that the fraction in (9) is bounded from above for
$q\nearrow \beta $
. This fact can be obtained using Claims 1–3, which we will state at the beginning of Section 2.1 after introducing some notation. Claim 1 will be proved by combinatorial arguments, Claim 2 can be obtained straightforwardly, and Claim 3 will be argued similarly to the case of left-c.e. reals [Reference Barmpalias and Lewis-Pye1, Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9].
Next, in Section 2.2, we show that the left limit considered in the theorem exists by assuming the opposite, namely, that the left limit inferior of the fraction in (9) does not coincide with its left limit superior (note that, by the previous section, both of them are finite). The contradiction will be obtained rather directly from Claims 7–9, which we will also state at the beginning of that section. Claims 7 and 8 follow by arguments that are similar to the ones used in connection with the case of left-c.e. reals [Reference Kucera and Slaman5, Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9], whereas the proof of Claim 9 is rather involved and has no counterpart in the left-c.e. case.
Finally, in Section 2.3, we show that the left limit considered in the theorem does not depend on the choice of the translation function by assuming the opposite, namely, that there are two translation functions having different left limits of the fraction in (9).
The notation
In the remainder of this proof, and unless explicitly stated otherwise, the term “interval” refers to a closed interval on
$\mathbb {R}$
that is bounded by rationals. Lebesgue measure is denoted by
$\mu $
, i.e., the Lebesgue measure, or measure, for short, of an interval U is
${\mu (U)= \max U - \min U}$
.
A finite test is an empty set or a tuple
$A=(U_0,\dots ,U_m)$
with
$m\geq 0$
where the
$U_i$
are not necessarily distinct nonempty intervals. For such a finite test A, we define its covering function by
$$ \begin{align} {k_{A}}(x) = \#\{\, i\in\{0,\dots,m\}\colon x\in U_i \,\}, \end{align} $$
that is,
${k_{A}} (x)$
is the number of intervals in A that contain the real number x. Furthermore, the measure of A is
$\mu (A) = \sum _{i\in \{0,\dots ,m\}} \mu (U_i)$
.
It is easy to see that the measure of a given finite test A can be computed by integrating its covering function on the whole domain
$[0,1]$
, i.e., for every finite test A, it holds that
$$ \begin{align*} \mu(A) =\int\limits _0^1 {k_{A}({x})} dx. \end{align*} $$
Observe that, by our definition (10) of the covering function, the values of the covering functions of two tests
$([0.2, 0.3],[0.3,0.7])$
and
$([0.2,0.7])$
differ on the argument
$0.3$
. Furthermore, for a given finite test and a rational q, by adding intervals of the form
$[q,q]$
the value of the corresponding covering function at q can be made arbitrarily large without changing the measure of the test. However, these observations will not be relevant in what follows since they relate only to the value of covering functions at rationals.
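The covering function and the measure of a finite test are easily computed; the following Python sketch (the names covering_value and test_measure are ours and purely illustrative) evaluates both on the two tests just mentioned:

```python
from fractions import Fraction

def covering_value(test, x):
    """Illustrative sketch of the covering function (10): the number of
    intervals of the finite test (a list of (lo, hi) pairs) containing x."""
    return sum(1 for lo, hi in test if lo <= x <= hi)

def test_measure(test):
    """The measure of a finite test: the sum of the lengths of its intervals
    (overlapping intervals are counted with multiplicity)."""
    return sum(hi - lo for lo, hi in test)

A = [(Fraction(2, 10), Fraction(3, 10)), (Fraction(3, 10), Fraction(7, 10))]
B = [(Fraction(2, 10), Fraction(7, 10))]
print(covering_value(A, Fraction(3, 10)), covering_value(B, Fraction(3, 10)))  # 2 1
print(test_measure(A), test_measure(B))                                        # 1/2 1/2
```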
For all three sections, we fix some effective enumeration
$p_0, p_1, \dots $
without repetition of the domain of g and, for all natural n, define
$Q_n = \{p_0, \dots , p_n\}$
.
2.1 The fraction is bounded from above
First, we demonstrate that the translation function g witnesses the reducibility
$\alpha {\le }_{\mathrm {S}} \beta $
, or, equivalently, that
$$ \begin{align} \exists c \forall q<\beta \left(\frac{\alpha - g(q)}{\beta - q}<c\right). \end{align} $$
For all n and i, we will construct a finite test
$T_i^n$
based on the idea of the test construction used by Miller [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.1] for the left-c.e. case, adapted to the possibly nonmonotone enumeration of the domain of g (the adaptation is needed since Miller’s original construction may not satisfy Claim 2 stated below if the enumeration of the domain of g is not increasing). The construction is effective in the sense that it always terminates and is uniform in n and i.
For every n and i, let
$Y_i^n$
be the union of all intervals lying in the finite test
$T_i^n$
(note that
$Y_i^n$
can be represented as a disjoint union of finitely many intervals). For every i, let
$Y_i$
denote the union of the sets
$Y_i^0,Y_i^1,\dots $
.
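Computing such a representation amounts to merging overlapping intervals; a minimal Python sketch (the name union_of_intervals is ours and purely illustrative) reads:

```python
def union_of_intervals(intervals):
    """Illustrative sketch: merge a list of closed intervals, given as pairs
    (lo, hi), into the disjoint closed intervals whose union is the same set."""
    merged = []
    for lo, hi in sorted(intervals):
        if merged and lo <= merged[-1][1]:     # overlaps or touches the last piece
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged
```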
The property (11) can be obtained from the following three claims.
Claim 1. For every i and n, it holds that
$$ \begin{align*} \mu(Y_i^n) \leq 2^{-(i+1)}. \end{align*} $$
Claim 2. For every i and n, it holds that
$$ \begin{align*} Y_i^n \subseteq Y_i^{n+1}. \end{align*} $$
Claim 3. For every i, the following implication holds:
$$ \begin{align*} \beta \notin Y_i \;\Longrightarrow\; \alpha {\le}_{\mathrm{S}} \beta \text{ via } g \text{ with the Solovay constant } 2^{i+2}. \end{align*} $$
From the first two claims we easily obtain that the Lebesgue measure of the set
$Y_i$
is also bounded by
$2^{-(i+1)}$
for every i.
For all i and
$n> 0$
,
$Y_i^n\setminus Y_i^{n-1}$
is a disjoint union of finitely many intervals, where a list of these intervals is computable in i and n because the same holds for
$Y_i^n$
and
$Y_i^{n-1}$
.
Accordingly, the set
$Y_i$
is equal to the union of a set
$S_i$
of intervals with rational endpoints that is effectively enumerable in i and where the sum of the measures of these intervals is at most
$2^{-(i+1)}$
. By the latter two properties, the sequence
$S_0,S_1,\dots $
is a Martin-Löf test.
The real
$\beta $
is Martin-Löf random, hence the test
$S_0,S_1,\dots $
must fail on
$\beta $
. Therefore, we can fix an index i such that
$\beta $
is not contained in
$Y_i$
. By Claim 3, we obtain that
$\alpha {\le }_{\mathrm {S}} \beta $
via g with the Solovay constant
$2^{i+2}$
, which implies (11) directly by definition of
${\le }_{\mathrm {S}} $
.
It remains to construct the finite test
$T_i^n$
uniformly in i and n and to check that Claims 1–3 are fulfilled.
Outline of the construction and some properties of the finite test
$T_i^n$
Fix
$n,i\geq 0$
. We describe the construction of the finite test
$T_i^n$
, which is a reworked version of a construction used by Miller [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.1] in connection with left-c.e. reals.
Recall that
$p_0,\dots ,p_n$
are the first
$n+1$
elements of the fixed effective enumeration of the domain of g, and let
$q_0 < \cdots < q_n$
be these elements sorted increasingly, i.e.,
$\{p_0,\dots ,p_n\}$
is equal to
$\{q_0,\dots ,q_n\}$
. For technical reasons, let
$q_{n+1} = 1$
. In the remainder of this proof, every index refers to a number in the range between
$0$
and n, unless stated otherwise. For indices k and l, where
$k < l$
, let
$$ \begin{align*} \delta(k,l) = \frac{g(q_l)-g(q_k)}{2^{i+1}}, \end{align*} $$
and call the pair
$(k,l)$
expanding if, first,
$k<l$
and, second,
$$\begin{align*}q_k + \delta(k,l)> q_l, \quad \text{ or, equivalently, } \quad \frac{g(q_l) - g(q_k)}{q_l - q_k} > 2^{i+1}, \end{align*}$$
and nonexpanding, otherwise. For further use, observe that, since g is nondecreasing, by definition, the relations of being an expanding pair and of being a nonexpanding pair are both transitive in the sense that, if the pairs
$(k,l)$
and
$(l,m)$
are expanding, then the pair
$(k,m)$
is expanding, and the latter implication remains valid with “expanding” replaced by “nonexpanding”. The following somewhat technical observation is also straightforward.
Claim 4. For a nonexpanding pair
$(k,l)$
and an index
$m>l$
, it holds that
${q_k + \delta (k,m) \leq q_l + \delta (l,m)}$
.
Further, for all indices k and m, define the interval
$$ \begin{align} I[{k},{m}] = \begin{cases} [q_m, q_k+ \delta(k,m) ],&\text{ if } (k,m) \text{ is an expanding pair}, \\ \emptyset, &\text{ otherwise}. \end{cases} \end{align} $$
Note that the interval
$I[{k},{m}] $
has nonzero length
$\delta (k,m) - (q_m - q_k)$
in case
$(k,m)$
is an expanding pair and is empty otherwise; the latter includes the case
$m \le k$
.
For all pairs
$(k,m)$
of indices such that the interval
$I[{k},{m}] $
is nonempty, put the intersection of this interval with the unit interval
$[0,1]$
into the test
$T_i^n$
.
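The construction is effective; the following Python sketch (purely illustrative, assuming that the finitely many rationals $q_0<\cdots<q_n$ and the values of g on them are given explicitly) lists the intervals that are put into $T_i^n$:

```python
from fractions import Fraction

def build_test(q, g, i):
    """Illustrative sketch of the finite test T_i^n: q is the sorted list
    q_0 < ... < q_n of the rationals enumerated so far into dom(g), g maps
    each of them to a rational in [0,1), and i is the test index.  Returns
    the intervals I[k, m] (intersected with [0, 1]) of all expanding pairs."""
    def delta(k, m):                  # delta(k, m) = (g(q_m) - g(q_k)) / 2^(i+1)
        return (g[q[m]] - g[q[k]]) / 2 ** (i + 1)
    intervals = []
    n = len(q) - 1
    for k in range(n + 1):
        for m in range(k + 1, n + 1):
            if q[k] + delta(k, m) > q[m]:               # (k, m) is expanding
                hi = min(q[k] + delta(k, m), Fraction(1))
                intervals.append((q[m], hi))            # I[k, m] intersected with [0, 1]
    return intervals

q = [Fraction(0), Fraction(1, 4), Fraction(1, 2)]
g = {q[0]: Fraction(0), q[1]: Fraction(3, 4), q[2]: Fraction(7, 8)}
print(build_test(q, g, 0))            # [(Fraction(1, 4), Fraction(3, 8))]
```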
The following inclusion property of particular intervals follows easily from the construction.
Claim 5. For an expanding pair
$(k,l)$
and all m, it holds that
${I[{k},{m}] \supseteq I[{l},{m}] }$
.
Preliminaries for the proof of Claim 1
Let
$0=z_0<z_1<\dots <z_s$
be the indices in the range
$0,\dots ,n$
such that
$$ \begin{align} \text{the pair } (z_s,k) \text{ is expanding for every index } k \text{ with } z_s < k \leq n, \end{align} $$
and, for every
$j\in \{0,\dots ,s-1\}$
,
$$ \begin{align} &\text{the pair } (z_j,z_{j+1}) \text{ is nonexpanding, and} \end{align} $$
$$ \begin{align} &\text{the pair } (z_j,k) \text{ is expanding for every index } k \text{ with } z_j < k < z_{j+1}. \end{align} $$
Further, for technical reasons, we fix an additional index
$z_{s+1} = n+1$
(hence
$z_{s+1}-1 = n$
) and set
$q_{n+1} = 1$
and
$g(q_{n+1}) = 1$
, so (13) means just (15) for
$j = s$
.
Since, for every
$j<s$
, the pair
$(z_j,z_{j+1})$
is nonexpanding by definition, the transitivity of the nonexpanding property implies that
$$ \begin{align} \text{the pair } (z_h,z_j) \text{ is nonexpanding for all } h<j \text{ in } \{0,\dots,s\}. \end{align} $$
Claim 6. For every
$j\in \{0,\dots ,s\}$
and every real
${x\in (q_{z_j} + \delta (z_j,z_{j+1}), q_{z_{j+1}})}$
, it holds that
$x\notin I[{k},{l}] $
for all k and l in the range
$0,\dots ,n$
.
Remark. Note that, in case
$j<s$
, it holds by (14) that
${q_{z_j} + \delta (z_j,z_{j+1})\leq q_{z_{j+1}}}$
, hence the interval
${(q_{z_j} + \delta (z_j,z_{j+1}), q_{z_{j+1}})}$
used in the Claim 6 is well-defined.
In case
$j = s$
, it may occur that
$q_{z_j} + \delta (z_j,z_{j+1})> q_{z_{j+1}}$
, so, for every two reals
$a>b$
, let
$[a,b]$
conventionally denote an empty set.
Proof. By contradiction: assume that
$x\in I[{k},{m}] $
for some
$k,m$
where
${k<m}$
.
First, we fix the index
$h\in \{0,\dots ,s\}$
such that
$z_h\leq k<z_{h+1}$
. In case
$k\neq z_h$
, we know from
$k<z_{h+1}$
that
$(z_h,k)$
is an expanding pair. For that pair, Claim 5 implies that
$$ \begin{align} x\in I[{k},{m}] \subseteq I[{z_h},{m}]. \end{align} $$
In case
$k = z_h$
, (17) holds trivially.
On the other hand, it holds that
$$ \begin{align} q_{z_h} + \delta(z_h,m) \;\leq\; q_{z_h} + \delta(z_h,z_{j+1}) \;\leq\; q_{z_j} + \delta(z_j,z_{j+1}) \;<\; x. \end{align} $$
Here, the second inequality is straightforward in case
$h=j$
and follows from Claim 4 (since the pair
$(z_h,z_j)$
is nonexpanding by (16)) in case
$h<j$
.
The contradiction follows because
$q_{z_h} + \delta (z_h,m)$
in (18) is the right endpoint of the interval
$I[{z_h},{m}] $
, which contains x by (17).
The proof of Claim 1
Then, due to
$$\begin{align*}Y_i^n = \bigg( \bigcup_{k,l\in\{0,\dots,n\}}I[{k},{l}] \bigg) \cap [0,1],\end{align*}$$
Claim 6 implies that, for every
$j\in\{0,\dots,s\}$
, no point of
$Y_i^n$
lies in the interval
$(q_{z_j} + \delta (z_j,z_{j+1}), q_{z_{j+1}})$
. Hence we obtain that
$$ \begin{align*} Y_i^n\cap (q_{z_j},q_{z_{j+1}}) \subseteq (q_{z_j},\, q_{z_j} + \delta(z_j,z_{j+1})] \end{align*} $$
with the measure
$$ \begin{align} \mu\big(Y_i^n\cap (q_{z_j},q_{z_{j+1}})\big) \leq \delta(z_j,z_{j+1}) = \frac{g(q_{z_{j+1}})-g(q_{z_j})}{2^{i+1}}. \end{align} $$
Therefore, by
$$ \begin{align*} Y_i^n \subseteq [q_{z_0},q_{z_{s+1}}] = [q_{z_0},q_{z_1}] \overset{\cdot}{\cup} (q_{z_1},q_{z_2}] \overset{\cdot}{\cup} \cdots \overset{\cdot}{\cup} (q_{z_s},q_{z_{s+1}}], \end{align*} $$
where
$A\overset {\cdot }{\cup } B$
denotes the union of two disjoint intervals A and B, we obtain an upper bound for the measure of
$Y_i^n$
:
$$ \begin{align*} \mu(Y_i^n) = \sum_{j=0}^s \mu\big(Y_i^n\cap (q_{z_j},q_{z_{j+1}})\big) \leq \sum_{j=0}^s \frac{g(q_{z_{j+1}})-g(q_{z_j})}{2^{i+1}} = \frac{g(q_{z_{s+1}}) - g(q_{z_0})}{2^{i+1}} \leq \frac{1}{2^{i+1}}. \end{align*} $$
Here, the first inequality follows from (19) applied for all j from
$0$
to s, and the second one is implied by
$g(q_0)\geq 0$
and
$g(q_{z_{s+1}}) = g(q_{n+1}) = 1$
.
The proof of Claim 2
Let
$n,i\geq 0$
.
The finite test
$T_i^n$
is a subset of the finite test
$T_i^{n+1}$
since every intersection of an interval
$I[{k},{m}] $
, where
$0\leq k < m\leq n$
, with
$[0,1)$
that is added into the test
$T_i^n$
is also added into the test
$T_i^{n+1}$
. Hence we directly obtain that
$$\begin{align*}Y_i^n = \bigcup\limits_{I\in T_i^n} I \subseteq \bigcup\limits_{I\in T_i^{n+1}} I = Y_i^{n+1}.\end{align*}$$
The proof of Claim 3
Fix an index i such that
$\beta \notin Y_i$
. By Claim 2, it means inter alia that
$\beta \notin Y_i^n$
for every natural n.
We aim to show that
$\alpha {\le }_{\mathrm {S}} \beta $
via g with the Solovay constant
$c = 2^{i+2}$
by contradiction: fixing a rational
$q\in LC(\beta )$
such that
$$ \begin{align*} \alpha - g(q) \geq 2^{i+2}\,(\beta - q), \end{align*} $$
we can, by
$\mathrm {dom}(g)\supseteq LC(\beta )$
, fix an index K such that
$q = p_K$
. We know by definition of translation function that
${\lim \limits _{p\nearrow \beta }\big (g(p) - g(p_K)\big ) = \alpha - g(p_K)}$
, hence there exists
$\varepsilon>0$
such that
$$ \begin{align} g(p) - g(p_K) > 2^{i+1}\,(\beta - p_K) \quad\text{ for all } p\in(\beta-\varepsilon,\beta)\cap\mathrm{dom}(g). \end{align} $$
Fix an index
$M>K$
such that
$p_M\in (\beta -\varepsilon ,\beta )$
. Note that (20) implies in particular that
$g(p_M)-g(p_K)>0$
, hence
$p_K<p_M$
because the function g is nondecreasing.
Let
$\{q_0 < \cdots < q_M\}$
be the set
$\{p_0,\dots ,p_M\}$
sorted increasingly, and let
${k,m\in \{0,\dots ,M\}}$
denote two indices such that
$q_k = p_K$
and
$q_m = p_M$
. In particular, we have
${q_k=p_K<p_M=q_m}$
, hence
${k<m}$
.
To obtain a contradiction with
$\beta \notin Y_i^M$
and conclude the proof of Claim 3, and thus also of (11), it suffices to show that
$\beta $
lies within one of the intervals of the finite test
$T_i^M$
, namely, in
${I[{k},{m}] \cap [0,1)}$
.
Indeed,
$\beta \in [0,1)$
holds obviously, and
$\beta \in I[{k},{m}] $
holds by (12) since
$$ \begin{align*} q_m < \beta < q_k + \delta(k,m), \end{align*} $$
where the right inequality is implied by (20) for
$p=q_m$
.
2.2 The left limit exists
In this section, we show that, for q converging to
$\beta $
from below, the fraction
$\frac {\alpha - g(q)}{\beta - q}$
converges, i.e., that
$$ \begin{align} \exists\lim_{q\nearrow \beta}\frac{\alpha - g(q)}{\beta - q}, \end{align} $$
by contradiction. For all
$q\in LC(\beta )$
, the fraction
$\frac {\alpha - g(q)}{\beta - q}$
is obviously positive and, by the previous section, bounded; consequently, supposing that the left limit in (21) does not exist, we can fix two rational constants c and d where
$$ \begin{align} c<d, \quad d-c <1, \text{ and } \liminf\limits_{q\nearrow \beta}\frac{\alpha - g(q)}{\beta - q}<c<d<\limsup\limits_{q\nearrow \beta}\frac{\alpha - g(q)}{\beta - q} \end{align} $$
and the rational
$$ \begin{align*} e = d - c. \end{align*} $$
For a given finite subset Q of the domain of g, we will construct a finite test
$M(Q) $
by an extension of a construction used by Miller [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.2] in the left-c.e. case (the extension is needed since Miller’s original construction may not satisfy Claim 9 stated below if the enumeration of the domain of g is not increasing). The construction is effective in the sense that it always terminates and yields the test
$M(Q) $
in case it is applied to a finite subset of
$\mathrm {dom}(g)$
.
Further, for every finite subset Q of the domain of g and every real p, we define two functions
${\tilde {k}_{Q}({p})} $
and
${K_{Q}({p})} $
from
$[0,1]$
to
$\mathbb {N}$
by
$$ \begin{align*} {\tilde{k}_{Q}({p})} = {k_{M(Q)}({p})} \qquad\text{and}\qquad {K_{Q}({p})} = \max\big\{\, {\tilde{k}_{H}({p})} \colon H\subseteq Q \,\big\}. \end{align*} $$
That is, for a given real p and a given finite subset Q of
$\mathrm {dom}(g)$
, the function
${\tilde {k}_{Q}({p})} $
returns the number of intervals containing p in the finite test
$M(Q) $
defined on Q, and the function
${K_{Q}({p})} $
returns the maximal number of intervals containing p among the finite tests
$M(H) $
defined on all possible subsets H of Q.
The desired contradiction can be obtained from the following three claims.
Claim 7. Let
$Q_0 \subseteq Q_1 \subseteq \cdots $
be a sequence of finite sets that converges to the domain of g. Then it holds that
$$ \begin{align*} \lim\limits_{n\to\infty} {K_{Q_n}({e\beta})} = \infty. \end{align*} $$
Claim 8. For every finite subset Q of the domain of g, it holds that
$$ \begin{align} \int\limits _0^1 {\tilde{k}_{Q}({x})} dx = \mu\big(M(Q) \big) \leq g(\max Q) - g(\min Q). \end{align} $$
Claim 9. For every finite subset Q of the domain of g and for every nonrational real p in
$[0,e]$
, it holds that
Recall that
$p_0, p_1, \dots $
is an effective enumeration without repetition of the domain of g and
$Q_n = \{p_0, \dots , p_n\}$
for
$n=0,1, \dots $
. We consider a special type of step function with domain
$[0,1]$
that is given by a partition of the unit interval into finitely many intervals with rational endpoints such that the function is constant on the corresponding open intervals but may have arbitrary values at the endpoints. For the scope of this proof, a designated interval of such a step function is an interval that is the closure of a maximum contiguous open interval on which the function attains the same value. That is, the designated intervals form a partition of the unit interval except that two designated intervals may share an endpoint. Observe that, for every finite subset H of the domain of g, the corresponding cover function
${\tilde {k}_{H}({\cdot })} $
is such a step function with values in the natural numbers, and the same holds for the function
${K_{Q_n}} (\cdot )$
since
$Q_n$
has only finitely many subsets. Furthermore, for given n, the designated intervals of the function
${K_{Q_n}({\cdot })} $
together with the endpoints and function value of every interval are given uniformly effective in n because g is computable and the construction of
$M(Q_n)$
is uniformly effective in n.
For all natural numbers i and n, consider the step function
${K_{Q_n}} $
and its designated intervals. For every such interval, call its intersection with
$[0, e]$
its restricted interval. Let
$X^n_i$
be the union of all restricted designated intervals where on the corresponding designated interval the function
${K_{Q_n}} $
attains a value that is strictly larger than
$2^{i+2}$
. Let
$X_i$
be the union of the sets
$X^0_i, X^1_i, \dots $
.
By our assumption that the values of g are in
$[0,1)$
and by (23), for all n, the integral of
${\tilde {k}_{Q_n}({p})} $
from
$0$
to
$1$
is at most
$1$
, hence by (24), the integral of
${K_{Q_n}({p})} $
from
$0$
to e is at most
$2$
. Consequently, each set
$X^n_i$
has Lebesgue measure of at most
$2^{-(i+1)}$
. The latter upper bound then also holds for the Lebesgue measure of the set
$X_i$
for every i since, by the maximization in the definition of
${K_{Q_n}} $
and by
$Q_n \subseteq Q_{n+1}$
, we have
$$ \begin{align*} X^0_i \subseteq X^1_i \subseteq \cdots. \end{align*} $$
By construction, for all i and
$n>0$
, the difference
$X^n_i \setminus X^{n-1}_i$
is equal to the union of finitely many intervals that are mutually disjoint except possibly for their endpoints, and a list of these intervals is uniformly computable in i and n since the functions
${K_{Q_n}} $
are uniformly computable in n. Accordingly, the set
$X_i$
is equal to the union of a set
$U_i$
of intervals with rational endpoints that is effectively enumerable in i and where the sum of the measures of these intervals is at most
$2^{-(i+1)}$
. By the two latter properties, the sequence
$U_0, U_1, \dots $
is a Martin-Löf test. By Claim 7, the values
${K_{Q_n}({e {{\beta }}})} $
tend to infinity, where
$e {{\beta }}< e$
, hence for all i, the Martin-Löf random real
$e {{\beta }}$
is contained in some interval in
$U_i$
, a contradiction. This concludes the proof that Claims 7–9 together imply that the left limit (21) exists.
It remains to construct the finite test
$M(Q) $
for a given finite subset Q of the domain of g and to check that Claims 7–9 are fulfilled.
The intervals that are used
First, we define two partial computable functions
$\gamma $
and
$\delta $
that have the same domain as g:
$$ \begin{align*} \gamma(q) = g(q) - c\,q \qquad\text{and}\qquad \delta(q) = g(q) - d\,q. \end{align*} $$
Due to
$c < d$
, the following claim is immediate.
Claim 10. Whenever
$g(q)$
is defined, we have
$$ \begin{align*} \delta(q) < \gamma(q) \qquad\text{and}\qquad \gamma(q) - \delta(q) = (d-c)\,q. \end{align*} $$
In particular, the partial function
$\gamma - \delta $
is strictly increasing on its domain, hence, for every sequence
$q_0 < q_1 < \dots $
of rationals on
$[0,\beta )$
that converges to
$\beta $
, the values
$g(q_i)$
are defined, and, therefore, the values
$\gamma (q_i) - \delta (q_i)$
converge strictly increasingly to
$(d-c)\beta $
.
Now, for given rationals p and q, we define the interval
$$ \begin{align*} R[{p},{q}] = [\gamma(p) - \delta(p),\ \gamma(q) - \delta(p)] = [e\,p,\ \gamma(q) - \delta(p)]. \end{align*} $$
From this definition and the definitions of
$\gamma $
and
$\delta $
, the following claim is immediate. Note that Assertion (iii) in the claim relates to expanding an interval at the right endpoint.
Claim 11.
-
(i) Any interval of the form
$R[{p},{q}] $
has the left endpoint
$e {{p}}$
. -
(ii) Consider an interval of the form
$R[{p},{q}] $
. In case
$\gamma (p) \le \gamma (q)$
, the interval has length
$\gamma (q)- \gamma (p)$
, otherwise, the interval is empty. In particular, any interval of the form
$R[{p},{p}] $
has length
$0$
. -
(iii) Let
$R[{p},{q}] $
be a nonempty interval, and assume
$\gamma (q) \le \gamma (q^{\prime })$
. Then the interval
$R[{p},{q}] $
is a subset of the interval
$R[{p},{q^{\prime }}] $
, both intervals have the same left endpoint
$e {{p}}$
, and they differ in length by
$\gamma (q^{\prime }) - \gamma (q)$
.
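For concreteness, the intervals of the form $R[{p},{q}]$ can be computed as follows (a purely illustrative Python sketch; the representation of the empty interval by None is ours):

```python
from fractions import Fraction

def R(p, q, g, c, d):
    """Illustrative sketch of the interval R[p, q]: g is a dict with rational
    values and c < d are the rationals from (22); gamma(x) = g(x) - c*x and
    delta(x) = g(x) - d*x.  None stands for the empty interval."""
    gamma = lambda x: g[x] - c * x
    delta = lambda x: g[x] - d * x
    lo, hi = gamma(p) - delta(p), gamma(q) - delta(p)   # left endpoint is e*p
    return (lo, hi) if lo <= hi else None

# toy usage
g = {Fraction(1, 4): Fraction(1, 2), Fraction(1, 2): Fraction(3, 4)}
print(R(Fraction(1, 4), Fraction(1, 2), g, Fraction(1, 4), Fraction(1, 2)))
```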
By choice (22) of c and d, the real
$\beta $
is an accumulation point of both the sets
$$ \begin{align} S &= \{q < \beta \colon \frac{\alpha - g(q)}{\beta - q}> d \} = \{q < \beta \colon \delta(q) < \alpha - d\beta \}, \end{align} $$
$$ \begin{align} T &= \{q < \beta \colon \frac{\alpha - g(q)}{\beta - q} < c \} \; = \{q < \beta \colon \gamma(q)> \alpha - c\beta \}. \end{align} $$
The following properties of sets S and T, which have already been used in the left-c.e. case [Reference Barmpalias and Lewis-Pye1, Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9], will be crucial in the proof of Claim 7.
Claim 12.
-
1. The sets S and T are disjoint.
-
2. For all
$q\in S$
and
$q^{\prime }\in T$
, the interval
$R[{q},{q^{\prime }}] $
contains
$e {{\beta }}$
.
Proof. The first statement is straightforward from the left parts of (25) and (26), respectively. By definition, the interval
$R[{q},{q^{\prime }}] $
has the left endpoint
$e {{q}}$
and the right endpoint
$\gamma (q^{\prime })-\delta (q)$
. By definition of the sets S and T, on the one hand, we have
$q < \beta $
, hence
$e {{q}}<e {{\beta }}$
, on the other hand, we have
$$ \begin{align*} \gamma(q^{\prime}) - \delta(q) > (\alpha - c\beta) - (\alpha - d\beta) = (d-c)\,\beta = e\beta. \end{align*} $$
Outline of the construction of the finite test
$M(Q) $
We fix a nonempty finite subset
${Q=\{q_0 < \cdots < q_n\}}$
of the domain of g. Here, the notation used to describe Q has its obvious meaning, i.e.,
$Q =\{q_0, \dots , q_n\}$
, and
$q_i < q_{i+1}$
for all i. Note that—in contrast to Section 2.1—
$q_0,\dots ,q_n$
do not need to be the first
$n+1$
elements of the effective enumeration of
$\mathrm {dom}(g)$
. We describe the construction of the finite test
$M(Q) $
, which is an extended version of a construction used by Miller [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.2] in connection with left-c.e. reals. Using the notation defined in the previous paragraphs, for all i in
$\{0, \dots , n\}$
, let
$$ \begin{align*} \delta_{{i}} &= \delta(q_i) = g(q_i) - d q_i, \\ \gamma_{{i}} &= \gamma(q_i) = g(q_i) - cq_i, \\ J[{i},{j}] &= R[{q_i},{q_j}] = [\gamma(q_i) - \delta(q_i), \gamma(q_j) - \delta(q_i)] = [e {{q_i}}, \gamma_{{j}} - \delta_{{i}}]. \end{align*} $$
The properties of the intervals of the form
${R[{p},{q}] }$
extend to the intervals
${J[{i},{j}] }$
: for example, any two nonempty intervals of the form
${J[{i},{j}] }$
and
${J[{i},{j^{\prime }}] }$
have the same left endpoint, i.e.,
${\min J[{i},{j}] }$
and
${\min J[{i},{j'}] }$
are the same for all i, j, and
$j'$
.
The test
$M(Q) $
is constructed in successive steps
$j=0, 1, \dots , n$
, where, at each step j, intervals
$U_{0}^{j} , \dots , U_{n}^{j} $
are defined. Every such interval has the form
$$ \begin{align*} U_{i}^{j} = J[{i},{k}] \end{align*} $$
for some index
$k\in \{0, \dots , n\}$
, where
$\boldsymbol {r}^{j}({\cdot }) $
is an index-valued function that maps every index i to the index k such that
$J[{i},{k}] = U_{i}^{j} $
.
At step
$0$
, for
$i=0, \dots , n$
, we set the values of the function
$\boldsymbol {r}^{0}({i}) $
by
$$ \begin{align*} \boldsymbol{r}^{0}({i}) = i \end{align*} $$
and initialize the intervals
$U_{i}^{0} $
as zero-length intervals
$$ \begin{align*} U_{i}^{0} = J[{i},{i}] = [\,e\,q_i,\ e\,q_i\,]. \end{align*} $$
In the subsequent steps, every change of an interval amounts to an expansion at the right end in the sense that, for all indices i, the intervals
$U_{i}^{0} , \dots , U_{i}^{n} $
share the same left endpoint, while their right endpoints are nondecreasing. More precisely, as we will see later, for
$i=0, \dots , n$
, we have
$$ \begin{align*} e {{q_i}} &= \min U_{i}^{0} = \cdots = \min U_{i}^{n} ,\\ e {{q_i}} &= \max U_{i}^{0} \le \cdots \le \max U_{i}^{n} , \end{align*} $$
and thus
$U_{i}^{0} \subseteq \cdots \subseteq U_{i}^{n} $
. After concluding step n, we define the finite test
$$ \begin{align*} M(Q) = \big(U_{0}^{n}, \dots , U_{n}^{n}\big). \end{align*} $$
In case the right endpoints of two intervals of the form
$U_{i}^{j-1} $
and
$U_{i}^{j} $
coincide, we say that the interval with index i remains unchanged at step j. Similarly, we will speak informally of the interval with index i, or
$U_{i}^{} $
, for short, to refer to the sequence
$U_{i}^{0} , \dots , U_{i}^{n} $
in the sense of one interval that is successively expanded.
For technical reasons, for an empty set
$\emptyset $
, we define
$M(\emptyset ) = \emptyset $
.
A single step of the construction and the index stair
During step
${j>0}$
, we proceed as follows. Let
$t_0$
be the largest index among
$\{0,\dots ,j-1\}$
such that
$\gamma _{{t_0}}> \gamma _{{j}}$
, i.e., let
$$ \begin{align*} t_0 = \max\{\,x\in\{0,\dots,j-1\}\colon \gamma_{x} > \gamma_{j}\,\} \end{align*} $$
in case such index exists and
$t_0=-1$
otherwise.
Next, define indices
$s_1, t_1, s_2, t_2, \dots $
inductively as follows. For
$h= 1, 2, \dots $
, assuming that
$t_{h-1}$
is already defined, where
$t_{h-1}<j-1$
, let
$$ \begin{align} s_h &= \max\Big(\operatorname*{\text{arg min}}\limits_{t_{h-1} < x \leq j-1} \delta_{x}\Big), \end{align} $$
$$ \begin{align} t_h &= \max\Big(\operatorname*{\text{arg max}}\limits_{s_h \leq x \leq j-1} \gamma_{x}\Big). \end{align} $$
That is, the operator
$\operatorname *{\text {arg min}}$
yields a set of indices x such that
$\delta _{{x}}$
is minimum among all considered values, and
$s_h$
is chosen as the largest index in this set, and similarly for
$\operatorname *{\text {arg max}}$
and the choice of
$t_h$
.
Since we assume that
$t_{h-1}<j-1$
, the minimization in (28) is over a nonempty set of indices, hence
$s_h$
is defined and satisfies
$s_h\leq j-1$
by definition. Therefore, the maximization in (29) is over a nonempty index set, hence, also
$t_h$
is defined.
The inductive definition terminates as soon as we encounter an index
$l\ge 0$
such that
$t_l = j-1$
, which will eventually be the case by the previous discussion and because, obviously, the values
$t_0, t_1, \dots $
are strictly increasing. For this index l, we refer to the finite sequence
$(t_0 ,s_1, t_1, \dots , s_l, t_l)$
(or, for short,
$(t_0,s_1,t_1,\dots )$
in case the value of l is not important) as index stair of step j. For example, in case
$l=1$
, the index stair is
$(t_0, s_1, t_1)$
, and in case
$l=0$
, the index stair is
$(t_0)$
. Note that
$l=0$
holds if and only if even
$s_1$
could not be defined, where the latter, in turn, holds if and only if
$t_0$
is equal to
$j-1$
.
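The following Python sketch (purely illustrative, and relying on the reading of the definitions of $t_0$, $s_h$, and $t_h$ given above) computes the index stair of step j from the values $\gamma_0,\dots,\gamma_n$ and $\delta_0,\dots,\delta_n$:

```python
def index_stair(gamma, delta, j):
    """Illustrative sketch: the index stair (t_0, s_1, t_1, ..., s_l, t_l) of
    step j, computed from the lists gamma = (gamma_0, ..., gamma_n) and
    delta = (delta_0, ..., delta_n)."""
    bigger = [x for x in range(j) if gamma[x] > gamma[j]]
    t = bigger[-1] if bigger else -1                  # t_0
    stair = [t]
    while t != j - 1:
        xs = range(t + 1, j)                          # indices t_{h-1} < x <= j - 1
        s = max(x for x in xs if delta[x] == min(delta[y] for y in xs))
        ys = range(s, j)                              # indices s_h <= x <= j - 1
        t = max(y for y in ys if gamma[y] == max(gamma[z] for z in ys))
        stair += [s, t]                               # append s_h and t_h
    return stair
```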
Remark. For the reader familiar with Miller’s proof of [Reference Miller, Day, Fellows, Greenberg, Khoussainov, Melnikov and Rosamond9, Lemma 1.2] for the left-c.e. case: the initial fragment
$(t_0,s_1,t_1)$
of an index stair consists of indices
$t,s,x$
used by Miller. Thus, this test construction can be considered as an elaborated variant of Miller’s construction.
Next, for
$i=1, \dots , n$
, we set the values of
$\boldsymbol {r}^{j}({i}) $
and define the intervals
$U_{i}^{j} $
. For a start, in case
$l\ge 1$
, let
$$ \begin{align*} \boldsymbol{r}^{j}({s_1}) = j, \quad\text{i.e.,}\quad U_{s_1}^{j} = J[{s_1},{j}], \end{align*} $$
and call this a nonterminal expansion of the interval
$U_{s_1}^{} $
at step j. In case
$l \ge 2$
, in addition, let for
$h=2, \dots , l$
$$ \begin{align*} \boldsymbol{r}^{j}({s_h}) = t_{h-1}, \quad\text{i.e.,}\quad U_{s_h}^{j} = J[{s_h},{t_{h-1}}], \end{align*} $$
and call this a terminal expansion of the interval
$U_{s_h}^{} $
at step j.
For all remaining indices, the interval with index i remains unchanged at step j, i.e., for all
$i\in \{0, \dots , n\} \setminus \{s_1, \dots , s_l\}$
, let
$$ \begin{align} \boldsymbol{r}^{j}({i}) = \boldsymbol{r}^{j-1}({i}), \quad\text{i.e.,}\quad U_{i}^{j} = U_{i}^{j-1}. \end{align} $$
The choice of term “terminal expansion” is motivated by the fact that, if a terminal expansion occurs for the interval
$U_{i}^{} $
at step j, then, at all subsequent steps
$j+1,\dots ,n$
, the interval remains unchanged, as we will see later.
We conclude step j by defining for
$i=0,\dots ,n$
the half-open interval
$$ \begin{align} V_{i}^{j} = U_{i}^{j} \setminus U_{i}^{j-1} = \big(\max U_{i}^{j-1},\ \max U_{i}^{j}\,\big]. \end{align} $$
That is, during step j, the interval with index i is expanded by adding at its right end the half-open interval
$V_{i}^{j} $
, i.e., we have
$$ \begin{align*} U_{i}^{j} = U_{i}^{j-1} \overset{\cdot}{\cup} V_{i}^{j}. \end{align*} $$
This includes the degenerate case where the interval with index i is not changed, hence,
$V^j_i$
is empty and has length
$0$
.
In what follows, in connection with the construction of a test of the form
$M(Q) $
, when appropriate, we will occasionally write
$t_0^j$
for the value of
$t_0$
chosen during step j and similarly for other values like
$s_h$
in order to distinguish the values chosen during different steps of the construction.
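Combining the pieces, a minimal Python sketch of the whole construction of $M(Q)$ (again purely illustrative and based on the reading of the construction given above; it uses the index_stair sketch from the previous code block):

```python
def build_M(Q, g, c, d):
    """Illustrative sketch of the construction of M(Q): Q is a sorted list
    q_0 < ... < q_n of rationals in the domain of g, g is a dict with rational
    values, and c < d are the rationals from (22).  Returns the final
    intervals U_0^n, ..., U_n^n as pairs (left endpoint, right endpoint)."""
    n = len(Q) - 1
    gamma = [g[q] - c * q for q in Q]
    delta = [g[q] - d * q for q in Q]
    r = list(range(n + 1))                   # step 0: r^0(i) = i, so U_i^0 = J[i, i]
    for j in range(1, n + 1):                # steps j = 1, ..., n
        stair = index_stair(gamma, delta, j)
        ts, ss = stair[0::2], stair[1::2]    # (t_0, ..., t_l) and (s_1, ..., s_l)
        if ss:
            r[ss[0]] = j                     # nonterminal expansion of U_{s_1}
        for h in range(1, len(ss)):
            r[ss[h]] = ts[h]                 # terminal expansion: r^j(s_{h+1}) = t_h
    # U_i^n = J[i, r(i)] = [gamma_i - delta_i, gamma_{r(i)} - delta_i]
    return [(gamma[i] - delta[i], gamma[r[i]] - delta[i]) for i in range(n + 1)]
```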
The proof of Claim 7
After the construction of the tests of the form
$M(Q) $
has been specified, we can already demonstrate Claim 7. Let
$Q_0 \subseteq Q_1 \subseteq \dots $
be a sequence of sets that converges to the domain of g as in the claim assumption. Any finite subset H of the
$\mathrm {dom}(g)$
will be a subset of
$Q_n$
for all sufficiently large indices n, where then, for all such n, it holds that
${\tilde {k}_{H}({e {{\beta }}})} \le {K_{Q_n}} (e {{\beta }})$
by definition of
${K_{Q_n}} $
. Consequently, in order to show Claim 7, i.e., that the values
${K_{Q_n}({e {{\beta }}})} $
tend to infinity, it suffices to show that the function
$H \mapsto {\tilde {k}_{H}} (e {{\beta }})$
is unbounded on the finite subsets H of the domain of g.
Recall that, according to (25) and (26), the subsets S and T of
$\mathrm {dom}(g)$
contain only rationals
$q < \beta $
. Let
$r_0 < r_1 < \dots $
be a sequence of rationals that satisfies
$$ \begin{align} r_{2i}\in T, \qquad r_{2i+1}\in S, \qquad\text{and}\qquad \gamma(r_{2i+1}) \leq \gamma(r_{2i+2}) < \gamma(r_{2i}) \qquad\text{for all } i\geq 0. \end{align} $$
Such a sequence can be obtained by the following nonconstructive inductive definition. Let
$r_0$
be an arbitrary number in T. Assuming that
$r_{2i}$
has already been defined, let
$r_{2i+1}$
be equal to some r in S that is strictly larger than
$r_{2i}$
. Note that such r exists since
$r_{2i} < \beta $
, and
$\beta $
is an accumulation point of S. Furthermore, assuming that
$r_{2i}$
and
$r_{2i+1}$
have already been defined, let
$r_{2i+2}$
be equal to some r in T that is strictly larger than
$r_{2i+1}$
and such that the second inequality in (37) holds. Note that such r exists because, by definition of T, we have
$ \gamma (r_{2i})> \alpha - c \beta $
, while
$\beta $
is also an accumulation point of T, and
$\gamma (r)$
converges to
$\alpha - c \beta $
when r tends nondecreasingly to
$\beta $
. Finally, observe that the first inequality in (37) holds automatically for
$r_{2i+1}$
in S and
$r_{2i+2}$
in T because, by Claim 12(1), the set S is disjoint from T, hence, by definition of T, we have
$$ \begin{align*} \gamma(r_{2i+1}) \leq \alpha - c\beta < \gamma(r_{2i+2}). \end{align*} $$
Now, let H be equal to
$\{r_0, r_1, \dots , r_{2k}\}$
, and consider the construction of
$M(H)$
. For the remainder of this proof, we will use the indices of the
$r_j$
in the same way as the indices of the
$q_j$
are used in the description of the construction above. For example, for
$i=0, \dots , k-1$
, during step
$2i+2$
of the construction of
$M(H)$
, the index
$t_0$
is chosen as the maximum index z in the range
$0, \dots , 2i+1$
such that
$\gamma (r_{2i+2}) < \gamma (r_{z})$
. By (37), this means that, in step
$2i+2$
, the index
$t_0$
is set equal to
${2i}$
and — since
$2i+1$
is the unique index strictly between
$2i$
and
$2i+2$
—the index stair of this step is
$({2i}, {2i+1}, {2i+1})$
. Accordingly, by construction, the interval
$U_{2i+1}^{2i+2} $
coincides with the interval
$R[{r_{2i+1}},{r_{2i+2}}] $
. By Claim 12(2), this interval, and thus also its superset
$U_{2i+1}^{2k} $
, contains
$e {{\beta }}$
. The latter holds for all k different values of i, hence
${\tilde {k}_{H}({e {{\beta }}})} \ge k$
. This concludes the proof of Claim 7 since k can be chosen arbitrarily large.
Some properties of the intervals
$U_{i}^{j} $
We gather some basic properties of the points and intervals that are used in the construction.
Claim 13. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g. Consider some step j of the construction of
$M(Q) $
, and let
$(t_0, s_1,t_1,\dots ,s_l,t_l)$
be the corresponding index stair. Then we have
$\gamma _{{j}} < \gamma _{{t_0}}$
in case
$t_0\neq -1$
.
In case the index
$s_1$
could not be defined, i.e., in case
$l=0$
, we have
$t_0=j-1$
. Otherwise, i.e., in case
$l>0$
, we have
$$ \begin{align} t_0 < s_1 \leq t_1 < s_2 \leq t_2 < \cdots < s_l \leq t_l = j-1, \quad\text{where } s_h < t_h \text{ for all } h < l, \end{align} $$
$$ \begin{align} \delta_{s_1} < \delta_{s_2} < \cdots < \delta_{s_l} < \gamma_{t_l} < \gamma_{t_{l-1}} < \cdots < \gamma_{t_1} \leq \gamma_{j}. \end{align} $$
Proof. The assertion on the relative size of
$\gamma _{{j}}$
and
$\gamma _{{t_0}}$
is immediate by definition of
$t_0$
. In case
$s_1$
cannot be defined, the range between
$t_0$
and j must be empty, and
$t_0=j-1$
follows. Next, we assume
$l>0$
and demonstrate (38) and (39). By definition of the values
$s_h$
and
$t_h$
, it is immediate that we have
$s_h \le t_h < s_{h+1}$
for all
$h\in \{1,\dots ,l-1\}$
and have
$s_l \le t_l = j-1$
. In order to complete the proof of (38), assume
$s_h = t_h$
for some h. Then we have
where the inequality holds true because
$\gamma _{{t_h}} \ge \gamma _{{j-1}}$
and
$\delta _{{s_h}} \le \delta _{{j-1}}$
hold for all h. So we obtain
$s_h = t_h = j-1$
, and thus
$h = l$
because, otherwise, i.e., in case
$s_h < j-1$
, we would have
$q_{s_h}< q_{j-1}$
.
By definition of
$s_1$
and l, it is immediate that, in case
$l=0$
, we have
$t_0=j-1$
.
It remains to show (39) in case
$l>0$
. The inequality
$\gamma _{{t_1}} \le \gamma _{{j}}$
holds because its negation would contradict the choice of
$t_0$
in the range
$0, \dots , j-1$
as largest index with maximum
$\gamma $
-value, as we have
$t_0 < t_1 < j$
by (38). In order to show
$\delta _{{s_l}} < \gamma _{{t_l}}$
, it suffices to observe that we have
$\delta _{{s_l}} \le \delta _{{t_l}}$
by choice of
$s_l$
and
$s_l \le t_l < j$
and know that
$\delta _{{t_l}} < \gamma _{{t_l}}$
from Claim 10. In order to show the remaining strict inequalities, fix h in
$\{1, \dots , l-1\}$
. By choice of
$s_h$
, we have
$\delta _{{s_h}} < \delta _{{x}}$
for all x that fulfill
$s_h < x \leq j-1$
, and since
$s_{h+1}$
is among these x, it holds
$\delta _{s_h} < \delta _{s_{h+1}}$
. By a similar argument, it follows that
$\gamma _{{t_{h+1}}} < \gamma _{{t_{h}}}$
.
Claim 14. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and consider the construction of
$M(Q) $
. Let i be in
$\{0, \dots , n\}$
. Then it holds that
$$ \begin{align} U_{i}^{0} = U_{i}^{1} = \cdots = U_{i}^{i}. \end{align} $$
Furthermore, for all steps
$j \ge i$
of the construction, it holds that
Proof. The equalities in (40) hold because the index stair of every step
${j \le i}$
contains only indices that are strictly smaller than j, and thus also than i, hence, by (35), the interval with index i remains unchanged at all such steps.
Next, we demonstrate (41) by induction over all steps
$j \ge i$
. The base case
${j=i}$
follows from (40) and because, by definition, we have
$U_{i}^{0} = J[{i},{i}] $
. At the step
$j>i$
, we consider its index stair
$(t_0, s_1, t_1,\dots , s_l, t_l)$
. Observe that all the indices that occur in this index stair are strictly smaller than j. The induction step now is immediate by distinguishing the following three cases. In case
$i=s_1$
, we have
$U_{i}^{j} = J[{i},{j}] $
. In case
$i=s_h$
for some
$h>1$
, we have
$U_{i}^{j} = J[{i},{t_{h-1}}] $
. In case i differs from all indices of the form
$s_h$
, by (35), the interval with index i remains unchanged at step j, and we are done by the induction hypothesis.
Claim 15. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and consider the construction of
$M(Q) $
. Let
$j\geq 1$
be a construction step where at least the index
$s_1$
could be defined, and let
$(t_0,s_1,t_1, \dots , s_l,t_l)$
be the index stair of this step. Then, for
$h=1, \dots , l$
, we have
$$ \begin{align} U_{s_h}^{j-1} = J[{s_h},{t_h}]. \end{align} $$
Consequently, for
$i=0, \dots , n$
, we have
$$ \begin{align*} U_{i}^{0} \subseteq U_{i}^{1} \subseteq \cdots \subseteq U_{i}^{n}. \end{align*} $$
Proof. In order to prove the claim, fix some h in
$\{1,\dots ,l\}$
. In case
$s_h = t_h$
, by (38), we have
$h=l$
and
$s_h = t_h = j-1$
, hence, (42) holds true because, by construction and (40), we have
$$ \begin{align*} U_{s_h}^{j-1} = U_{j-1}^{j-1} = U_{j-1}^{0} = J[{j-1},{j-1}] = J[{s_h},{t_h}]. \end{align*} $$
So we can assume the opposite, i.e., that
$s_h$
and
$t_h$
differ. We then obtain
$$ \begin{align} t_{h-1} \;\leq\; t_0^{t_h} \;<\; s_h \;<\; t_h \;<\; j, \end{align} $$
where
$t_0^{t_h}$
, as usual, denotes the first entry in the index stair of step
$t_h$
. Here, the last two strict inequalities are immediate by Claim 13 since
$s_h$
differs from
$t_h$
. In case the first strict inequality was false, again, by Claim 13, we would have
$s_h \le t_0^{t_h} < t_h < j$
as well as
$\gamma _{{t_h}} < \gamma _{{t_0^{t_h}}}$
, which together contradict the choice of
$t_h$
. Finally, the first inequality obviously holds in case
$h=1$
and
$t_{0}=-1$
. Otherwise, we have
$\gamma _{{t_{h-1}}}> \gamma _{{t_{h}}}$
by (39) as well as
$t_{h-1} < t_{h}$
, hence, by definition, the value
$t_0^{t_h}$
will not be chosen strictly smaller than
$t_{h-1}$
.
By (43), it follows that
$$ \begin{align*} \{\,x\colon t_0^{t_h} < x \leq t_h - 1\,\} \;\subseteq\; \{\,x\colon t_{h-1} < x \leq j-1\,\}. \end{align*} $$
By definition, the index
$s_1^{t_h}$
is chosen as the largest x in the former set that minimizes
$\delta _{{x}}$
, while
$s_h$
is chosen from the latter set by the same condition, i.e., as the largest x that minimizes
$\delta _{{x}}$
. Again, by (43), the index
$s_h$
is also in the former set, therefore, it must be the largest index minimizing
$\delta _{{x}}$
there. So we have
$s_1^{t_h} = s_h$
, hence
$U_{s_h}^{t_h} = J[{s_h},{t_h}] $
follows from construction.
Next, we argue that
$U^{j-1}_{s_h} = U^{t_h}_{s_h}$
by demonstrating that
$$ \begin{align*} \boldsymbol{r}^{t_h}({s_h}) = \boldsymbol{r}^{t_h+1}({s_h}) = \cdots = \boldsymbol{r}^{j-1}({s_h}), \end{align*} $$
i.e., that at all steps
$y=t_h+1, \dots , j-1$
, the interval
$U_{s_h}^{} $
remains unchanged. For every such step y, by definition of
$t_{h}$
, we have
$\gamma _{{y}} < \gamma _{{t_{h}}}$
, hence
$s_h < t_h \le t_0^y$
by choice of
$t_0^y$
. Consequently, the index
$s_h$
does not occur in the index stair of step y, and we are done by (35).
We conclude the proof of the claim by showing for
$i=1, \dots , n$
the inequality
$$ \begin{align*} \max U_{i}^{j-1} \leq \max U_{i}^{j}, \end{align*} $$
which then implies
$U_{i}^{0} \subseteq \cdots \subseteq U_{i}^{n} $
because, by construction, the latter intervals all share the same left endpoint
$\min U_{i}^{0} = e {{q_i}}$
, and j is an arbitrary index in
$\{1, \dots , n\}$
.
For indices i that are not equal to some
$s_h$
, the interval i remains unchanged at step j, and we are done. So we can assume
$i = s_h$
for some h in
$\{1, \dots , l\}$
; thus,
$\max U_{i}^{j-1} = \gamma _{{t_h}}- \delta _{{s_h}}$
follows from (42). The value
$\gamma _{{t_h}}$
is strictly smaller than both values
$\gamma _{{j}}$
and
$\gamma _{{t_{h-1}}}$
by choice of
$t_0$
and
$t_{h-1}$
. So we are done because, by construction, in case
$h=1$
, we have
$\max U_{i}^{j} = \gamma _{{j}}- \delta _{{s_h}}$
, while, in case
$h>1$
, we have
$\max U_{i}^{j} = \gamma _{{t_{h-1}}}- \delta _{{s_h}}$
.
As a corollary of Claim 15, we obtain that, when constructing a test of the form
$M(Q) $
, any terminal expansion of an interval at some step is, in fact, terminal in the sense that the interval will remain unchanged at all larger steps.
Claim 16. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and consider the construction of
$M(Q) $
. Let
$j\geq 1$
be a step of the construction, where the index
$s_2$
could be defined, and let
$(t_0,s_1,t_1, \dots , s_l,t_l)$
be the index stair of this step. Then, for every
$h=2, \dots ,l$
, it holds that
$\boldsymbol {r}^{j}({s_h}) = \boldsymbol {r}^{n}({s_h}) $
, and therefore, that
$U_{s_h}^{j} = U_{s_h}^{n} $
.
Proof. For a proof by contradiction, we assume that the claim assertion is false, i.e., we can fix some
$h\ge 2$
such that the values
$\boldsymbol {r}^{j}({s_h}) $
and
$\boldsymbol {r}^{n}({s_h}) $
differ. Let k be the least index in
$\{j+1,\dots , n\}$
such that the values
$\boldsymbol {r}^{k-1}({s_h}) $
and
$\boldsymbol {r}^{k}({s_h}) $
differ, and let
$(t_0^k, s^k_1, t^k_1, \dots )$
be the index stair of step k. Since the interval with index
$s_h$
does not remain unchanged at step k, we must have
$s_h = s^k_x$
for some
$x\geq 1$
. In order to obtain the desired contradiction, we distinguish the cases
$x=1$
and
$x>1$
. In case
$x=1$
, by construction, we have
where all relations are immediate by choice of the involved indices except the nonstrict inequality. The latter inequality holds by choice of
$t_0^k$
because, by the chain of relations on the left, we have
$t_0^k < j$
, and thus
$\gamma _{{j}} \le \gamma _{{k}}$
, while
$\gamma _{{i}} \le \gamma _{{j}}$
holds for
$i=t_0 +1, \dots , j-1$
by choice of
$t_0$
. Now, we obtain as a contradiction that
$s^k_1=s_h$
is chosen in the range
$t^k_0 +1, \dots , k-1$
as largest index that has minimum
$\delta $
-value, where this range includes
$s_1$
, hence
$\delta _{{s^k_1}} \le \delta _{{s_1}}$
, while
$\delta _{{s_1}} < \delta _{{s_h}}$
by
$h\ge 2$
.
In case
$x>1$
, we obtain
$$ \begin{align} t_{h-1} = \boldsymbol{r}^{j}({s_h}) = \boldsymbol{r}^{k-1}({s_h}) = t^k_x, \end{align} $$
which contradicts
$t_{h-1}< s_h = s^k_x \le t^k_x$
. The equalities in (44) follow, from left to right, from
$h\ge 2$
, from the minimality condition in the choice of k and, finally, from
$s_h=s^k_x$
and Claim 15.
The explicit description of the intervals of the form
$U_{s_h}^{j-1} $
according to Claim 15 now yields an explicit description of the endpoints of the half-open intervals of the form
$V_{i}^{j} $
, which implies, in turn, that all such intervals occurring at the same step are mutually disjoint, and the sum of their measures is equal to
$\gamma _{{j}} - \gamma _{{j-1}}$
.
Claim 17. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and consider the construction of
$M(Q) $
. Let
$j>0$
be a step of the construction.
If
$\gamma _{{j-1}}\leq \gamma _{{j}}$
, then it holds for the index stair
$(t_0,s_1,t_1, \dots , s_l,t_l)$
of this step that
$l>0$
, i.e., that
$s_1$
can be defined, and we have
$$ \begin{align} V_{s_1}^{j} &= \big(\gamma_{t_1} - \delta_{s_1},\ \gamma_{j} - \delta_{s_1}\big], \end{align} $$
$$ \begin{align} V_{s_h}^{j} &= \big(\gamma_{t_h} - \delta_{s_h},\ \gamma_{t_{h-1}} - \delta_{s_h}\big] \quad\text{ for } h = 2, \dots , l, \end{align} $$
$$ \begin{align} V_{i}^{j} &= \emptyset \quad\text{ for all } i \in \{0,\dots,n\} \setminus \{s_1,\dots,s_l\}. \end{align} $$
In particular, the half-open intervals
$V_{0}^{j} , \dots , V_{n}^{j} $
are mutually disjoint, and the sum of their Lebesgue measures can be bounded as follows:
$$ \begin{align} \sum_{i=0}^n \mu(V_{i}^{j} ) = \sum_{h=1}^l \mu(V_{s_h}^{j} ) = \gamma_{{j}} - \gamma_{{j-1}}. \end{align} $$
If
$\gamma _{{j-1}}>\gamma _{{j}}$
, then the index stair of this step has the form
$(j-1)$
, i.e.,
$t_0 = j-1$
,
$l=0$
, all the intervals
$V_{0}^{j} , \dots , V_{n}^{j} $
are empty, i.e.,
$$ \begin{align} V_{0}^{j} = \cdots = V_{n}^{j} = \emptyset, \end{align} $$
and the sum of their Lebesgue measures is equal to zero
$$ \begin{align} \sum_{i=0}^n \mu(V_{i}^{j} ) = 0. \end{align} $$
Proof. If
$\gamma _{{j-1}}\leq \gamma _{{j}}$
, then we have
$t_0 \neq j-1$
, hence the set
$\{x: t_0 < x\leq j-1\}$
used in (28) to define
$s_1$
contains at least one index, namely,
$j-1$
, and therefore,
$s_1$
can be defined.
If
$\gamma _{{j-1}}>\gamma _{{j}}$
, then we have
$t_0 = j-1$
, hence the set
$\{x: t_0 < x\leq j-1\}$
is empty, and
$s_1$
cannot be defined.
Recall that, by construction, the intervals
$U_{i}^{0} , \dots , U_{i}^{n} $
are all nonempty and have all the same left endpoint
$\gamma _{{i}}-\delta _{{i}}$
; thus, we have
This implies (47) if
$\gamma _{{j-1}}\leq \gamma _{{j}}$
and (49) if
$\gamma _{{j-1}}>\gamma _{{j}}$
since, for i not in
$\{s_1, \dots , s_l\}$
, the interval with index i remains unchanged at step j, hence
$V_{i}^{j} $
is empty.
In case
$\gamma _{{j-1}}>\gamma _{{j}}$
, we obtain (50) directly from (49) by
$$\begin{align*}\sum_{i=0}^n \mu(V_{i}^{j} ) = \sum_{i=0}^n \mu(\emptyset) = 0,\end{align*}$$
so, from now on, we assume that
$\gamma _{{j-1}}\leq \gamma _{{j}}$
and, as we have seen before,
$l>0$
.
In order to obtain (45) and (46) in this case, it suffices to observe that
$\max U_{s_h}^{j} $
is equal to
$\gamma _{{j}} - \delta _{{s_1}}$
in case
$h=1$
and is equal to
$\gamma _{{t_{h-1}}} - \delta _{{s_h}}$
in case
$h\ge 2$
, respectively, while
$\max U_{s_h}^{j-1} = \gamma _{{t_h}}-\delta _{{s_h}}$
for
$h=1, \dots , l$
by Claim 15.
Next, we show that the half-open intervals
$V_{0}^{j} , \dots , V_{n}^{j} $
are mutually disjoint. These intervals are all empty except for
$V_{s_1}^{j} , \dots , V_{s_l}^{j} $
. In case the latter list contains at most one interval, we are done. So we can assume
$l\ge 2$
. Disjointness of
$V_{0}^{j} , \dots , V_{n}^{j} $
then follows from:
These inequalities hold because, for
$h= 2, \dots , l$
, by Claim 13, we have
${\gamma _{{t_{h-1}}}>\gamma _{{t_{h}}}}$
and
$\delta _{{s_{h-1}}}<\delta _{{s_{h}}}$
, which together with (45) and (46) yields
Since the intervals
$V_{0}^{j} , \dots , V_{n}^{j} $
are mutually disjoint, the Lebesgue measure of their union is equal to
$$ \begin{align*} \sum_{i=0}^n \mu(V_{i}^{j} ) =\sum_{h=1}^l \mu(V_{s_h}^{j} ) &=\mu(V_{s_1}^{j} ) \;\; + \sum_{h=2}^l \mu(V_{s_h}^{j} )\\ &= (\gamma_{{j}} - \gamma_{{t_1}}) + \sum_{h=2}^l (\gamma_{{t_{h-1}}} - \gamma_{{t_{h}}}) = \gamma_{{j}} - \gamma_{{t_l}} = \gamma_{{j}} - \gamma_{{j-1}}, \end{align*} $$
where the last two equalities hold by evaluating the telescoping sum and because
$t_l$
is equal to
$j-1$
by Claim 13, respectively.
The proof of Claim 8
Using the results on the intervals
$V_{i}^{j} $
in Claim 17, we can now easily demonstrate Claim 8. We have to show for every subset
$Q=\{q_0 <\cdots < q_n\}$
of the domain of g that
$$\begin{align*}\mu\big(M(Q) \big) \leq g(q_n) - g(q_0).\end{align*}$$
This inequality holds true because we have
$$ \begin{align*} \mu\big(M(Q) \big) &= \sum_{U \in M(Q) } \mu(U) = \sum_{i=0}^{n} \mu(U_{i}^{n} ) = \sum_{i=0}^{n} \sum_{j=1}^{n} \mu(V_{i}^{j} ) = \sum_{j=1}^{n} \sum_{i=0}^{n} \mu(V_{i}^{j} ) \\ &= \sum_{j=1}^{n} \big( \max\{\gamma_{{j}} - \gamma_{{j-1}},0\}\big) \leq g(q_n) - g(q_0). \end{align*} $$
In the first line, the first equality holds by definition of
$\mu \big (M(Q) \big )$
, while the second and the third equalities hold by construction of
$M(Q) $
and by (36), respectively.
In the second line, the equality holds because, for every j, we have
$$\begin{align*}\sum_{i=0}^{n} \mu(V_{i}^{j} ) = \max\{\gamma_{{j}} - \gamma_{{j-1}},0\}\end{align*}$$
due to the following argument: in case
$\gamma _{{j-1}}\leq \gamma _{{j}}$
, we obtain from Claim 17, (48), that
$\sum _{i=0}^n \mu (V_{i}^{j} ) = \gamma _{{j}} - \gamma _{{j-1}} \geq 0$
, and in case
$\gamma _{{j-1}}> \gamma _{{j}}$
, we obtain from Claim 17, (50), that
$\sum _{i=0}^n \mu (V_{i}^{j} ) = 0$
.
Finally, the inequality in the second line holds because the term
${g(q_n)-g(q_0)}$
can be rewritten as a telescoping sum
$$\begin{align*}g(q_n)-g(q_0) = \sum_{j=1}^{n} \big(g(q_j) - g(q_{j-1})\big),\end{align*}$$
and, for every j from
$1$
to n, we have
$$\begin{align*}\max\{\gamma_{{j}} - \gamma_{{j-1}},0\} \leq g(q_j) - g(q_{j-1})\end{align*}$$
due to the following argument: in case
$\gamma _{{j-1}} \leq \gamma _{{j}}$
, we have
$$\begin{align*}\gamma_{{j}} - \gamma_{{j-1}} = \big(g(q_j) - c\, q_{j}\big) - \big(g(q_{j-1}) - c\, q_{j-1}\big) \leq g(q_j) - g(q_{j-1}),\end{align*}$$
where the equality holds since
$\gamma _{{k}} = \gamma (q_k) = g(q_k) - c q_{k}$
for every k in the range
${0,\dots ,n}$
and the right inequality is implied by
$q_{j-1}<q_{j}$
. In case
$\gamma _{{j-1}}>\gamma _{{j}}$
, we directly have
$$\begin{align*}\max\{\gamma_{{j}} - \gamma_{{j-1}},0\} = 0 \leq g(q_j) - g(q_{j-1}),\end{align*}$$
where the right inequality is implied by monotonicity of g since
${q_{j-1} < q_j}$
.
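As an aside, the bound just derived lends itself to a quick numerical check. The following minimal Python sketch is an illustration only, not part of the proof, and uses made-up data; it assumes, as above, finitely many rationals
$q_0 < \cdots < q_n$
, nondecreasing values
$g(q_0) \le \cdots \le g(q_n)$
, a nonnegative constant c, and
$\gamma _{{k}} = g(q_k) - c q_k$
, and verifies on random instances that the sum of the positive increments of
$\gamma $
never exceeds
$g(q_n)-g(q_0)$
.
```python
import random

def increment_bound_holds(qs, gs, c, tol=1e-12):
    # gamma_k = g(q_k) - c * q_k, as in the argument above
    gammas = [g - c * q for q, g in zip(qs, gs)]
    lhs = sum(max(gammas[j] - gammas[j - 1], 0.0) for j in range(1, len(qs)))
    rhs = gs[-1] - gs[0]
    return lhs <= rhs + tol  # tolerance only guards against rounding

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    qs = sorted(random.random() for _ in range(n + 1))   # q_0 < ... < q_n
    gs = [0.0]
    for _ in range(n):
        gs.append(gs[-1] + random.random())              # nondecreasing g
    c = random.random()                                  # nonnegative constant
    assert increment_bound_holds(qs, gs, c)
print("bound confirmed on all sampled instances")
```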
Preliminaries for the proof of Claim 9
The following claim asserts that, when adding to a finite subset Q of the domain of g one more rational that is strictly larger than all members of Q, the cover function of the test corresponding to Q increases at most by one on all nonrational arguments.
Claim 18. Let Q be a finite subset of the domain of g. Then, for every real
$p\in [0,1]$
, it holds that
Proof. Let
$Q = \{q_0 < \cdots < q_n\}$
be a finite subset of the domain of g. We consider the constructions of the tests
$M(Q\setminus \{q_n\}) $
and
$M(Q) $
and denote the intervals constructed in the latter test by
$U_{i}^{j} $
, as usual. The steps
$0$
through n of both constructions are essentially identical up to the fact that, in the latter construction, in addition, the interval
$U_{n}^{0} $
is initialized as
$[e {{q_n}},e {{q_n}}]$
in step
$0$
and then remains unchanged. Accordingly, the test
$M(Q\setminus \{q_n\}) $
consists of the intervals
$U_{0}^{n-1} , \dots , U_{n-1}^{n-1} $
; therefore, the first inequality in (52) holds true because the test
$M(Q) $
is then obtained by expanding these intervals. More precisely, in the one additional step of the construction of
$M(Q) $
, these intervals and the interval
$U_{n}^{n-1} =U_{n}^{0} $
are expanded by letting
The intervals
$V_{0}^{n} , \dots , V_{n}^{n} $
are mutually disjoint by Claim 17. Consequently, the cover functions of both tests can differ at most by one, hence also the second inequality in (52) holds true.
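The counting principle used here is elementary and can be illustrated in isolation. The toy Python sketch below assumes, purely for illustration, that a cover function counts how many intervals of a test contain a given point; the names and the sample intervals are made up. It checks that enlarging each interval of a test on the right by pieces that are mutually disjoint raises this count at any point by at most one.
```python
def count_closed(intervals, x):
    # number of closed intervals [a, b] containing x
    return sum(1 for a, b in intervals if a <= x <= b)

def count_pieces(pieces, x):
    # number of half-open pieces (a, b] containing x
    return sum(1 for a, b in pieces if a < x <= b)

old = [(0.1, 0.3), (0.2, 0.5), (0.6, 0.7)]           # intervals of the smaller test
pieces = [(0.3, 0.35), (0.5, 0.55), (0.7, 0.9)]      # mutually disjoint added pieces
new = [(a, right) for (a, b), (_, right) in zip(old, pieces)]  # enlarged intervals

for x in (i / 1000 for i in range(1001)):
    assert count_closed(old, x) <= count_closed(new, x)
    # each enlarged interval is the union of the old one and its disjoint piece
    assert count_closed(new, x) <= count_closed(old, x) + count_pieces(pieces, x)
    assert count_pieces(pieces, x) <= 1               # the added pieces are disjoint
    assert count_closed(new, x) <= count_closed(old, x) + 1
```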
The following three technical claims will be used in the proof of Claim 9.
Claim 19. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and let
$p\in (0,1]$
be a real number. Let
$i,j,k$
be indices such that
${q_0 \le q_i<q_j<q_k < p}$
,
Let
$Q_i=\{q_0,\dots ,q_i\}$
and
${Q_k=\{q_0,\dots ,q_i,\dots ,q_j,\dots ,q_k\}}$
. Then the following strict inequality holds:
Proof. Let
$s=\max \operatorname *{\text {arg min}}\{\delta _{{x}}:i<x<k\}$
, and let
The following inequalities are immediate by definition:
except the two strict upper bounds for
$\gamma _{{s}}$
. The first of these bounds, i.e.,
$\gamma _{{s}}< \gamma _{{i}}$
, follows from:
where the inequalities hold, from left to right, by
$q_s < q_k <p$
, by (53), and by (54). By an essentially identical argument, this chain of relations remains valid when
$\gamma _{{i}}$
is replaced by
$\gamma _{{k}}$
, which shows the second bound, i.e.,
$\gamma _{{s}}< \gamma _{{k}}$
.
We denote the intervals that occur in the construction of
$M(Q) $
by
$U_{i}^{j} $
, as usual. As in the proof of Claim 18, we can argue that the construction of the test
$M(Q_i) $
is essentially identical to initial parts of the construction of
$M(Q_k) $
and of
$M(Q) $
, and that a similar remark holds for the tests
$M(Q_k) $
and
$M(Q) $
. Accordingly, we have
For
$x=1, \dots , i$
, the interval
$U_{x}^{i} $
is a subset of
$U_{x}^{k} $
by
$i<k$
and Claim 15. Hence it suffices to show
because the latter statement implies by
$i < s < k$
that
We will show (55) by proving that
$e {{p}}$
is strictly larger than the left endpoint and is strictly smaller than the right endpoint of the interval
$U_{s}^{k} $
. The assertion about the left endpoint, which is equal to
$\gamma _{{s}}-\delta _{{s}}=e {{q_s}}$
, holds true because the inequalities
$s < k$
and
$q_k < p$
imply together that
$q_s < p$
.
In order to demonstrate the assertion about the right endpoint, we distinguish two cases.
Case 1.
$\gamma _{{i'}}>\gamma _{{k'}}$
. In this case, let
$(t_0,s_1, t_1, \dots )$
be the index stair of the step
$k'$
. Then we have
where all inequalities are immediate by choice of
$i^{\prime }$
and
$k^{\prime }$
except the second and the third one. Both inequalities follow from the definition of
$t_0$
: the second one together with the case assumption, the third one because, by
$\gamma _{{s}} < \gamma _{{k^{\prime }}}$
and by choice of
$k^{\prime }$
, no value among
$\gamma _{{s}}, \dots , \gamma _{{k^{\prime }-1}}$
is strictly larger than
$\gamma _{{k^{\prime }}}$
.
By (56), it is immediate that the set
${\{t_0+1,\dots ,k'-1\}}$
contains s and is a subset of the set
${\{i+1,\dots ,k-1\}}$
. By definition, the indices
$s_1$
and s minimize the value of
$\delta _{{j}}$
among the indices j in the former and in the latter set, respectively, hence we have
$s=s_1$
. By construction, in step
$k^{\prime }$
, the right endpoint of the interval
$U_{s}^{k'} $
is then set to
$\gamma _{{k^{\prime }}} - \delta _{{s}}$
. So we are done with Case 1 because we have
where the first inequality holds by assumption of the claim, the second one holds by (54), and the last one holds by
$k' \le k$
and Claim 15.
Case 2.
$\gamma _{{i'}}\leq \gamma _{{k'}}$
. In this case, let
and let
$(t_0,s_1, t_1, \dots )$
be the index stair of the step r. By choice of s and by
$r \le k$
, all values among
$\delta _{{s+1}}, \dots , \delta _{{r-1}}$
are strictly larger than
$\delta _{{s}}$
, hence we have
$s_1 \le s$
by choice of
$s_1$
. Accordingly, the index
is well-defined. Next, we argue that, actually, it holds that
$s_m = s$
. Otherwise, i.e., in case
$s_m<s$
, by choice of
$s_m$
and since s is chosen as largest index in the range
$i+1, \dots , k-1$
that has minimum
$\delta $
-value, we must have
$s_m \le i$
, and thus
Therefore, the index
$i'$
belongs to the index set used to define
$t_m$
according to (29), while the values
$\gamma _{{i'+1}}, \dots , \gamma _{{r-1}}$
are all strictly smaller than
$\gamma _{{i'}}$
. The latter assertion follows, for the indices in the considered range that are strictly smaller than, equal to, and strictly larger than s, from choice of
$i'$
, from (54), and from choice of r, respectively. It follows that
$t_m \leq i'$
, hence,
$s_{m+1}$
exists and is equal to s by minimality of
$\delta _{{s}}$
and by choice of
$s_{m+1}$
in the range
${t_m +1, \dots , r-1}$
, which contains s by
${i^{\prime } < s <r}$
. But, by definition of m, we have
${s < s_{m+1}}$
, a contradiction. Consequently, we have
${s_m = s}$
.
Observe that we have
where the inequalities hold, from left to right, by assumption of the claim, by (54), and by choice of r.
In case
$m=1$
, we are done because then we have by construction
hence
$e {{p}}$
is indeed strictly smaller than the right endpoint of the interval
$U_{s}^{k} $
.
So, from now on, we can assume
$m>1$
. Then
$s_{m-1}$
and
$t_{m-1}$
are defined, and the upper bound of the interval
$U_{s}^{r} $
is set equal to
$\gamma _{{t_{m-1}}}-\delta _s$
by (33). Consequently, in case
$\gamma _{{i'}}\leq \gamma _{{t_{m-1}}}$
, both of (57) and (58) hold true with
$\gamma _{{r}}$
replaced by
$\gamma _{{t_{m-1}}}$
, and we are done by essentially the same argument as in case
$m=1$
.
We conclude the proof of the claim assertion by demonstrating the inequality
${\gamma _{{i'}}\leq \gamma _{{t_{m-1}}}}$
. The index s is chosen in the range
$i+1, \dots , k-1$
as largest index with minimum
$\delta $
-value. The latter range contains the range
$i+1, \dots , r-1$
because we have
$i < s < r \le k$
. The index
$s_{m-1}$
differs from
${s=s_m}$
and is chosen as the largest index with minimum
$\delta $
-value among indices that are less than or equal to
$r-1$
, hence
${s_{m-1} \le i}$
. By
$i \le i^{\prime } < s <r$
, the index
$i'$
belongs to the range
$s_{m-1}, \dots , r-1$
, from which
$t_{m-1}$
is chosen as largest index with maximum
$\delta $
-value according to (29), hence we obtain that
${\gamma _{{i'}}\leq \gamma _{{t_{m-1}}}}$
.
Claim 20. Let Q be a nonempty finite set of rationals, and let p be a nonrational real in
$[0,1]$
. In case
$p>\max Q$
, it holds that
Proof. The inequality
${\tilde {k}_{Q}({e {{p}}})} \leq {K_{Q}({e {{p}}})} $
is immediate by definition of
${K_{Q}({e {{p}}})} $
. We show the reverse inequality
${\tilde {k}_{Q}({e {{p}}})} \geq {K_{Q}({e {{p}}})} $
by induction on the size of Q.
In the base case, let Q be empty or a singleton set. If Q is empty, the induction claim holds because then Q is its only subset; if Q is a singleton, it holds because
${K_{Q}} $
is equal to the maximum of
${\tilde {k}_{Q}} $
and
${\tilde {k}_{\emptyset }} $
, where the latter function is identically
$0$
.
In the induction step, let Q be of size at least
$2$
. For a proof by contradiction, assume that the induction claim does not hold true for Q, i.e., that there exists a subset H of Q such that
Then we have the following chain of inequalities:
where the first and the third inequalities hold true by Claim 18, the second one holds by the induction hypothesis for the set
$Q\setminus \{\max Q\}$
, and the fourth one by (59). The first and the last values in the chain (60) are identical, and thus the chain remains true when we replace all inequality symbols by equality symbols, i.e., we obtain
Since
${\tilde {k}_{H}({e {{p}}})} $
is strictly larger than
${\tilde {k}_{H\setminus \{\max Q\}}({e {{p}}})} $
, the set H must contain
$\max Q$
, hence
$\max Q$
and
$\max H$
coincide, and H has size at least two.
Now, let
$Q=\{q_0, \dots , q_n\}$
, where
$q_0 < \cdots < q_n$
, and let
$H = \{q_{z({0})}, \dots , q_{z({n_H})}\}$
, where
$z({0}) < \dots < z({n_H})$
. Furthermore, let
$Q_i=\{q_0, \dots , q_i\}$
for all
$i=0, \dots , n$
, and let
$H_i=\{q_{z({0})}, \dots , q_{z({i})}\}$
for all
$i=0, \dots , n_H$
. So, the set Q has size
$n+1$
, its subset H has size
$n_H+1$
, and the function
${z}$
transforms indices with respect to H into indices with respect to Q. For example, since the maxima of Q and H coincide, we have
$z({n_H})=n$
.
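For readers who prefer a concrete picture, here is a minimal Python sketch of this index bookkeeping, with made-up values of Q and H (the names Q, H, and z are simply the ones from the text, instantiated on toy data): z sends a position within H to the position of the same rational within Q, and, since the maxima coincide, the largest index of H is sent to the largest index of Q.
```python
Q = [0.1, 0.25, 0.4, 0.6, 0.75, 0.9]   # q_0 < ... < q_n with n = 5
H = [0.25, 0.6, 0.9]                   # a subset of Q with max H = max Q, so n_H = 2

def z(i):
    # position in Q of the i-th smallest element of H
    return Q.index(H[i])

assert [z(i) for i in range(len(H))] == [1, 3, 5]
assert z(len(H) - 1) == len(Q) - 1     # z(n_H) = n, as noted above
```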
In what follows, we consider the construction of
$M(H)$
. The index stairs that occur in this construction contain indices with respect to H, i.e., for example, the index
$t_0$
refers to
$q_{z({t_0})}$
. A similar remark holds for the intervals that occur in the construction of
$M(H)$
, i.e., for such an interval
$U_{s}^{t} $
, we have
However, as usual, for a given index i, we write
$\gamma _{{i}}$
for
$\gamma (q_i)$
and
$\delta _{{i}}$
for
$\delta (q_i)$
.
For every interval of the form
$U_{s}^{t} $
that occurs in some step of the construction of
$M(H)$
, the left endpoint
$e {{q_{z({s})}}}$
of this interval is strictly smaller than
$e {{p}}$
by assumption of the claim, hence, for every such interval, it holds that
Let
$(t_0,s_1,t_1, \dots ,s_l)$
be the index stair of step
$n_H$
of the construction of
$M(H)$
, i.e., of the last step, and recall that these indices are chosen with respect to H, e.g., the index
$t_0$
stands for
$q_{z({t_0})}$
. By the third equality in (61), for some index
$h\in \{1,\dots ,l\}$
, the interval
$V_{s_h}^{n_H} $
added during this step contains
$e {{p}}$
, that is,
By the explicit descriptions for the left and right endpoint of
$V_{s_h}^{n_H} $
according to Claim 17, we obtain
So, in the last step of the construction of
$M(H)$
, the real
$e {{p}}$
is covered via the expansion of the interval with index
$s_h$
. We argue next that, in the construction of
$M(H)$
, the last step before step
$n_H$
, in which
$e {{p}}$
is covered by the expansion of some interval, must not be larger than
$s_h$
, i.e., we show
For a proof by contradiction, assume that this equation is false. Then there is a step x of the construction of
$M(H)$
with index stair
$(t_0',s^{\prime }_1,t^{\prime }_1,\dots ,s^{\prime }_{l'},t^{\prime }_{l'})$
and some index i in
$\{1, \dots , l'\}$
such that
Observe that the indices in this index stair are indices with respect to the set
$H_x $
but coincide with indices with respect to the set H because
$H_x$
is an initial segment of H in the sense that
$H_x$
contains the least
$x+1$
members of H. In particular, the index transformation via the function
${z}$
works also for the indices in this index stair, for example, the index
$t_0'$
refers to
$q_{z({t_0'})}$
.
We have
$s^{\prime }_{i} \le x$
because, otherwise, the interval
$V_{s^{\prime }_{i}}^{x} $
would be empty by Claim 14. Furthermore, the indices
$s_h$
and
$s^{\prime }_{i}$
must be distinct because
$e {{p}}$
is contained in both of the intervals
$V_{s_h}^{n_H} $
and
$V_{s^{\prime }_{i}}^{x} $
, while the former interval is disjoint from the interval
$V_{s_h}^{x} $
by
$V_{s_h}^{x} \subseteq U_{s_h}^{x} \subseteq U_{s_h}^{n_H-1} $
and
$U_{s_h}^{n_H-1} \cap V_{s_h}^{n_H} = \emptyset $
.
Next, we argue that
In the chain on the left, the last two inequalities hold by
$t_0 < t_h < n_H$
and by definition of
$t_0$
. The first inequality holds because, otherwise, in step x, the index
$t_i'> t_0'$
would have been chosen in place of
$t_0'$
. The second inequality holds by choice of
$t_h$
as largest index in the range
$s_h, \dots , n_H-1$
that has maximum
$\gamma $
-value and because this range contains x. The single inequality on the right then follows from the latter inequality since
$\gamma _{{z({t_{h}})}} < \gamma _{{z({t_{h-1}})}}$
holds by definition of index stair. From (66), we now obtain
Here, the first inequality is implied by the right part of (66) and choice of
$t_0'$
. The last inequality holds by definition of index stair. The remaining inequality holds because
$s_h$
and
$s^{\prime }_1$
are chosen as largest indices with minimum
$\delta $
-value in the ranges
$t_{h-1}+1, \dots , n_H-1$
and
$t_0'+1, \dots , x-1$
, respectively, where the latter range is a subset of the former one by the just demonstrated first inequality and since x is in H.
Now, we obtain as a contradiction to (62) that
$e {{p}}$
is in
$U_{s_h}^{n_H-1} $
since we have
Here, the first inequality holds because
$e {{p}}$
is in
$U_{s^{\prime }_i}^{x} $
by choice of i and x. The second inequality holds because, by construction,
$\max U_{s^{\prime }_i}^{x} $
is equal to
$\gamma _{{z({x})}} - \delta _{{z({s^{\prime }_i})}}$
in case
$i=1$
and is equal to
$\gamma _{{z({t^{\prime }_{i-1}})}} - \delta _{{z({s^{\prime }_i})}}$
in case
$i>1$
, where
$\gamma _{{z({t^{\prime }_{i-1}})}} \le \gamma _{{z({x})}}$
by (39) since
$t^{\prime }_{i-1}$
lies in the index stair of step x and
$i-1\geq 1$
. The third inequality holds by (66) and (67), and the final equality holds by Claim 15. This concludes the proof of (65).
By (65), during the steps
$s_h+1, \dots , n_H-1$
, none of the expansions of any interval covers
$e {{p}}$
. Now, let y be the minimum index in the range
$t_{h-1}, \dots , s_h$
such that, during the steps
$y+1, \dots , s_h$
, none of the expansions of any interval covers
$e {{p}}$
, i.e.,
Note that y is an index with respect to the set H. We demonstrate that the index y satisfies
For further use, note that inequality (68) implies that y and
$s_h$
are distinct because, otherwise, since we have
$\max Q < p$
, we would obtain the contradiction:
Now, we show (68). Assuming
$y = t_{h-1}$
, the inequality is immediate by (63) and choice of
$t_0$
in case
$h=1$
and by (64) in case
$h>1$
. So, in the remainder of the proof of (68), we can assume
$t_{h-1}<y$
.
By choice of y, we have
${\tilde {k}_{H_{y-1}}({e {{p}}})} \neq {\tilde {k}_{H_{y}}({e {{p}}})} $
, which implies
${\tilde {k}_{H_{y-1}}({e {{p}}})} < {\tilde {k}_{H_{y}}({e {{p}}})} $
by Claim 15. Consequently, for the index stair
$(t_0", s^{\prime \prime }_1,t^{\prime \prime }_1,\dots ,s^{\prime \prime }_{l"},t^{\prime \prime }_{l"})$
of step y of the construction of the test
$M(H) $
, there exists an index
$j\in \{1,\dots ,l"\}$
such that
$e {{p}}$
is in
$V_{s^{\prime \prime }_{j}}^{y} $
. Thus, in particular, it holds that
because, by construction, the value
$\max U_{s^{\prime \prime }_{j}}^{y} $
is equal to
$\gamma _{{z({y})}} - \delta _{{z({s^{\prime \prime }_{j}})}}$
in case
$j=1$
and is equal to
$\gamma _{{z({t^{\prime \prime }_{j-1}})}} - \delta _{{z({s^{\prime \prime }_{j}})}}$
in case
$j>1$
, where
$\gamma _{{z({t^{\prime \prime }_{j-1}})}}\le \gamma _{{z({y})}}$
. By (69), it is then immediate that, in order to demonstrate (68), it suffices to show that
The latter inequality follows in turn if we can show that
because the indices
$s_h$
and
$s^{\prime \prime }_j$
are chosen as largest indices with minimum
$\delta $
-value in the ranges
$t_{h-1}+1, \dots , n_H-1$
and
$t^{\prime \prime }_{j-1}+1, \dots , y-1$
, respectively, where the latter range is a subset of the former.
We conclude the proof of (70), and thus also of (68), by showing (71). The second to last inequality holds by choice of y, and all other inequalities hold by definition of index stair, except for the first one. Concerning the latter, by our assumption
$t_{h-1}< y$
, by
$y < n_H$
, and by choice of
$t_{h-1}$
, we obtain that
$\gamma _{{z({y})}} < \gamma _{{z({t_{h-1}})}}$
, which implies that
$t_{h-1} \le t_0"$
by choice of
$t_0"$
.
Now, we can conclude the proof of the claim. For the set Q and the indices
${z({y})< z({s_h}) < z({n_H})=n}$
, by (63), (64), and (68), all assumptions of Claim 19 are satisfied, hence the claim yields that
So we obtain the contradiction
where the relations follow, from left to right, by (61), by (65), by the induction hypothesis for the set
$Q_{z({s_h})}$
, and by (72).
Claim 21. Let
$Q= \{q_0 < \cdots < q_n\}$
be a subset of the domain of g, and for
${z= 0, \dots , n}$
, let
$Q_z= \{q_0, \dots , q_z\}$
. Let p be a nonrational real such that, for some index x in
$\{1, \dots , n\}$
, it holds that
$p\in [0,q_x]$
and
Then it holds that
Proof. We denote the intervals considered in the construction of the test
$M(Q) $
by
$U_{i}^{j} $
, as usual. Again, we can argue that the construction of a test of the form
$M(Q_z) $
, where
$z \le n$
, is essentially identical to an initial part of the construction of
$M(Q) $
, and that accordingly such a test
$M(Q_z) $
coincides with
$(U_{0}^{z} , \dots , U_{z}^{z} )$
.
Let
$(t_0 ,s_1, t_1, \dots ,s_l, t_l)$
be the index stair of step x in the construction of
$M(Q_n) $
. By (73), there is an index h in
$\{1, \dots , l\}$
such that
$e {{p}}$
is in
$V_{s_h}^{x} $
, hence
Here, the two equalities hold by definition of the interval and by Claim 15, respectively. The strict inequality holds because
$e {{p}}$
is assumed not to be in
$U_{s_h}^{x-1} $
. The last inequality holds because
$e {{p}}$
is assumed to be in
$U_{s_h}^{x} $
, while, by construction, the right endpoint of the latter interval is equal to
$\gamma _{{x}}-\delta _{{s_h}}$
in case
$h=1$
and is equal to
$\gamma _{{t_{h-1}}} - \delta _{{s_h}}$
with
$\gamma _{{t_{h-1}}} \leq \gamma _{{x}}$
otherwise.
For a proof by contradiction, we assume that the conclusion of the claim is false. So we can fix an index
$y\in \{x+1,\dots ,n\}$
such that
Let
$(t_0',s^{\prime }_1,t^{\prime }_1,\dots ,s^{\prime }_{l'},t^{\prime }_{l'})$
be the index stair of step y of the construction of
$M(Q_n) $
. By essentially the same argument as in the case of (74), we can fix an index i in
$\{1, \dots , l'\}$
such that
$e {{p}}$
is in
$V_{s^{\prime }_{i}}^{y} $
, and therefore, that
By assumption, the real p is in
$[0,q_x]$
, and together with (74) and (75), we obtain
$q_{s_h}<p \le q_x$
and
$q_{s^{\prime }_{i}}<p \le q_x$
. Consequently, we have
(where the left inequality also follows from definition of index stair). In particular, we have
$t_0' < x$
, which implies by
$x<y$
and by choice of
$t_0'$
that
In order to derive the desired contradiction, we distinguish the three cases that are left open by (76) for the relative sizes of the indices
$s_h$
,
$s^{\prime }_{i}$
, and x.
Case 1.
$s_h < s^{\prime }_{i} < x$
. Since
$s_h$
is chosen in the range
$t_{h-1}+1, \dots , x-1$
as largest index with minimum
$\delta $
-value, we obtain by case assumption that
Furthermore, it holds that
Here, the first and the third inequalities hold by definition of index stair. The last two inequalities hold by (76) and by choice of y, respectively. The remaining second inequality holds because, otherwise, i.e., in case
$t^{\prime }_{i-1}< s_h$
, the range
${t^{\prime }_{i-1}+1, \dots , y-1}$
, from which
$s^{\prime }_{i}$
is chosen as largest index with minimum
$\delta $
-value, would contain
$s_h$
, which contradicts (77).
Now, we obtain a contradiction, which concludes Case 1. Due to
$t_0 < t^{\prime }_{i-1} <x$
and definition of
$t_0$
, we have
$\gamma _{{t^{\prime }_{i-1}}}\leq \gamma _{{x}}$
. The latter inequality contradicts the fact that
$t^{\prime }_{i-1}$
is chosen in the range
$s^{\prime }_{i-1}+1, \dots , y-1$
as largest index with maximum
$\gamma $
-value, where the latter range contains x by (78) and
$s^{\prime }_{i-1} < s^{\prime }_{i}$
.
Case 2.
$s_h = s^{\prime }_{i} < x$
. In this case, we have
which cannot hold since
$V_{s_h}^{x} $
and
$V_{s_h}^{y} $
are disjoint by Claim 17.
Case 3.
$s^{\prime }_{i} < s_h < x$
. In this case, we have
Here, the first inequality holds since
$s^{\prime }_{i}$
is chosen as the largest index with minimum
$\delta $
-value from a range which, by case assumption, contains
$s_h$
. The second inequality holds since
$t^{\prime }_{i}$
is chosen as the largest index with maximum
$\gamma $
-value in the range
$s^{\prime }_{i}+1, \dots , y-1$
which contains x by case assumption and
$x<y$
.
Now, we obtain a contradiction, which concludes Case 3, since we have
where the inequalities hold, from left to right, by (74), by (79), and by (75).
So, in all three cases, we obtain a contradiction, which concludes the proof of the claim.
The proof of Claim 9
Let
$Q= \{q_0 < \cdots < q_n\}$
, where
$q_n<1$
, be a subset of the domain of g. For
$z= 0, \dots , n$
, let
$Q_z= \{q_0, \dots , q_z\}$
, and let p be an arbitrary nonrational real in
$[0,1]$
. In order to demonstrate Claim 9, it suffices to show
Since p was chosen as an arbitrary nonrational real in
$[0,1]$
, this easily implies the assertion of Claim 9, i.e., that
${K_{Q}({p'})} \le {\tilde {k}_{Q}({p'})} + 1$
for all nonrational
$p'$
in
$[0, e]$
.
By construction, for all subsets H of Q, all intervals in the test
$M(H) $
have left endpoints of the form
$\gamma (q_i)-\delta (q_i) = e {{q_i}}$
. Consequently, in case
$p<q_0$
, none of such intervals contains
$e {{p}}$
, hence
${K_{Q_n}({e {{p}}})} = 0$
, and we are done.
So, from now on, we can assume
$q_0 < p$
. Then, among
$q_0, \dots , q_n$
, there is a maximum value that is smaller than p, and we let
be the corresponding index. It then holds that
where the equality is implied by choice of j and Claim 20, and the inequalities hold by Claim 18.
Fix some subset H of Q that realizes the value
${K_{Q}({e {{p}}})} $
in the sense that
Next, we show that, for the set H, we have
In case
${\tilde {k}_{H}({e {{p}}})} \le {\tilde {k}_{H\cap Q_j}({e {{p}}})} $
, we are done. Otherwise, let x be the least index in the range
$j+1, \dots , n$
such that
${\tilde {k}_{H\cap Q_{x}}({e {{p}}})} $
differs from
${\tilde {k}_{H\cap Q_{x-1}}({e {{p}}})} $
. Then (82) follows from:
where the equalities hold, from left to right, by choice of x, by Claim 18, by Claim 21, and since H is a subset of Q.
Now, we have
where the relations hold, from left to right, by choice of H, by (82), because
${H\cap Q_j}$
is a subset of
$Q_j$
, and by (81).
This concludes the proof of (80) and thus also of Claim 9 and, finally, of (21).
2.3 The left limit is unique
At this point, we have demonstrated that, for every nondecreasing translation function g from a Martin-Löf random real
$\beta $
to a real
$\alpha $
, the left limit
$\lim \limits _{q\nearrow \beta }\frac {\alpha - g(q)}{\beta - q}$
exists and is finite. It remains to show that this left limit does not depend on the choice of the translation function from
$\beta $
to
$\alpha $
. For a proof by contradiction, assume that there exist two translation functions f and g from
$\beta $
to
$\alpha $
such that the values
$\lim \limits _{q\nearrow \beta }\frac {\alpha - f(q)}{\beta - q}$
and
$\lim \limits _{q\nearrow \beta }\frac {\alpha - g(q)}{\beta - q}$
differ. By symmetry, without loss of generality, we can then pick rationals c and d such that
$$ \begin{align} \lim\limits_{q\nearrow \beta}\frac{\alpha - g(q)}{\beta - q} < c < d < \lim\limits_{q\nearrow \beta}\frac{\alpha - f(q)}{\beta - q}. \end{align} $$
By (83), for every rational
$q < \beta $
that is close enough to
$\beta $
, it holds that
$$\begin{align*}\frac{\alpha - g(q)}{\beta - q} < c \quad \text{and}\quad d < \frac{\alpha - f(q)}{\beta - q}. \end{align*}$$
Fix some rational
$p < \beta $
such that the two latter inequalities are both true for all rationals q in the interval
$[p, \beta )$
. We then have for all such q
$$ \begin{align} 0 < d-c <\frac{\big(\alpha - f(q)\big) - \big(\alpha - g(q)\big)}{\beta - q} = \frac{g(q)-f(q)}{\beta - q}, \end{align} $$
and consequently, letting
$e = d-c$
,
$$ \begin{align} 0 < \beta - q < \frac{g(q)-f(q)}{e}, \end{align} $$
where the lower bound is immediate by
$q < \beta $
, and the upper bound follows by multiplying the first and the last terms in (84) by
$\beta -q$
and rearranging. Let
For every q in D, define the intervals
Fix some effective enumeration
$q_0, q_1, \dots $
of D and, along this enumeration, whenever
$I_q$
is disjoint from all previously selected intervals
, put
$U_q$
into the test. This will be a well-defined Solovay test that fails on
$e {{\beta }}$
(so we obtain a contradiction with the Martin-Löf randomness of
$\beta $
because e is rational) since, after selecting finitely many
$I_q$
, there is always an interval
$[b,\beta )$
disjoint from them, and by
$\lim \limits _{q\nearrow \beta } f(q) = \alpha $
and
$\lim \limits _{q\nearrow \beta } \big (g(q)-f(q)\big ) = 0$
, there must be another suitable
$q\in [b,\beta )$
selected by the test. By (85), this interval contains
$\beta $
. Finally, the measure of the constructed test is bounded by
${1-p}$
from above (and thus finite) since
${\mu (I_q) = \mu (U_q)}$
for every
${q\in D}$
, and all selected
$I_q$
lie in
$[p,1]$
and are mutually disjoint by the test construction.
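As an aside, the selection procedure just described is a simple greedy filter along the enumeration of D. The Python sketch below only illustrates that filter; the concrete intervals
$I_q$
and
$U_q$
are the ones defined above, so in the sketch they enter as hypothetical callbacks I and U, and intervals are represented as half-open pairs solely for the disjointness check.
```python
def disjoint(i1, i2):
    # half-open intervals [a, b) are disjoint iff one ends before the other starts
    return i1[1] <= i2[0] or i2[1] <= i1[0]

def build_test(enumeration_of_D, I, U):
    # Greedy filter: walk along the (possibly infinite) enumeration of D and
    # select q whenever I(q) is disjoint from every previously selected I(q');
    # for each selected q, the interval U(q) is emitted into the test.
    selected = []
    for q in enumeration_of_D:
        interval = I(q)
        if all(disjoint(interval, other) for other in selected):
            selected.append(interval)
            yield U(q)
```
Keeping the list of previously selected
$I_q$
suffices here because disjointness is only ever checked against those intervals, exactly as in the construction above.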
This concludes the proof of the uniqueness of the left limit, as well as the whole proof of Theorem 2.1.
3 Conclusion and further extensions
Theorem 2.1 can be interpreted as an indication that the existence of the left limit in (9) is not an exceptional feature of left-c.e. Martin-Löf random reals but rather an inherent property of Solovay reducibility to arbitrary Martin-Löf random reals via nondecreasing translation functions.
This suggests that the understanding of Solovay reducibility of an effective real
$\alpha $
to another effective real
$\beta $
as the existence of a “faster” approximation of
$\alpha $
compared to a fixed approximation of
$\beta $
can be captured in terms of monotone translation functions. A characterization of this kind for the S2a-reducibility was found in 2024 by Kumabe, Miyabe, and Suzuki [Reference Kumabe, Miyabe and Suzuki7, Theorem 3.7].
Theorem 3.1 (Kumabe et al., 2024).
Let
$\alpha $
and
$\beta $
be two c.a. reals. Then
$\alpha {\le }_{\mathrm {S}}^{\mathrm {2a}} \beta $
if and only if there exist a lower semi-computable Lipschitz function
${f:\mathbb {R}\to \mathbb {R}}$
and an upper semi-computable Lipschitz function
${h: \mathbb {R}\to \mathbb {R}}$
such that
$f(x)\leq h(x)$
for all
$x\in \mathbb {R}$
and
${f(\beta ) = h(\beta ) = \alpha }$
.
We conjecture that the Barmpalias–Lewis-Pye limit theorem can be generalized to the set of c.a. reals for the S2a-reducibility.
Conjecture 3.2. Let
$\alpha $
be a c.a. real and
$\beta $
be a Martin-Löf random c.a. real that fulfills
$\alpha {\le }_{\mathrm {S}}^{\mathrm {2a}} \beta $
via the functions f and h as in Theorem 3.1. Then there exists a constant d such that
where d does not depend on the choice of f and h witnessing the reducibility
$\alpha {\le }_{\mathrm {S}}^{\mathrm {2a}} \beta $
. Moreover,
$d=0$
if and only if
$\alpha $
is not Martin-Löf random.
By Merkle and Titov [Reference Merkle and Titov8, Corollary 2.10], the set of Schnorr random reals is closed upwards under Solovay reducibility via total translation functions. We still do not know whether the Barmpalias–Lewis-Pye limit theorem or the Kučera–Slaman theorem can be adapted for Schnorr randomness.
Acknowledgements
The main result of this article, Theorem 2.1, is a somewhat strengthened version of the main result of my doctoral dissertation. I would like to thank my advisor Wolfgang Merkle for supervising the dissertation and for discussions that helped me to improve its presentation.
Funding
The author is supported in part by the ANR project FLITTLA ANR-21-CE48-0023.









