1 Introduction
The concern of this article is the interplay of two fundamental structures arising in classical computability, the lattice of computably (recursively) enumerable (c.e.) sets and Turing reducibility
$\leqslant _T$
. Turing [Reference Turing18] defined
$A\leqslant _T B$
as the most general notion of reducibility between problems coded as sets of non-negative integers. Reducibilities give rise to partial orderings which measure relative computational complexity. In the classic paper [Reference Post11], Post suggested that the study of the lattice of c.e. sets was fundamental in computability theory. In the words of Soare [Reference Soare17, p. viii]
“Post [Reference Post11] stripped away the formalism associated with the development of recursive function theory in the 1930’s and revealed in a clear informal way the essential properties of r.e. sets and their role in Gödel’s incompleteness theorem”.
Computably enumerable sets are the halting sets of Turing machines, i.e., the domains of partial computable functions. They thus represent natural “semi-decidable” problems, such as instances of Hilbert’s 10th problem, the Entscheidungsproblem, the Post Correspondence Problem, sets of consequences of formal systems, and many others.
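To make the notion concrete, a c.e. set can be enumerated by dovetailing: run the machine on every input, for more and more steps, and output an input once its computation is seen to halt. The following Python sketch is a toy illustration only (the names `dovetail_enumerate` and `halts_within` are ours, and the "machine" is modelled by a step-bounded halting predicate rather than an actual Turing machine).

```python
# Toy illustration (not part of the paper's formalism): a c.e. set is the
# halting set of a machine.  We model the machine by a predicate
# halts_within(x, s) saying that the computation on input x halts within
# s steps, and enumerate its domain by dovetailing inputs and step bounds.

def dovetail_enumerate(halts_within, rounds):
    """Yield, without repetition, the inputs seen to halt within `rounds` stages."""
    seen = set()
    for s in range(rounds):           # stage s of the enumeration
        for x in range(s + 1):        # try inputs 0..s for s steps each
            if x not in seen and halts_within(x, s):
                seen.add(x)
                yield x

# Example machine: halts on input x after x steps, exactly when x is even.
halts = lambda x, s: x % 2 == 0 and s >= x

print(sorted(dovetail_enumerate(halts, 20)))  # [0, 2, 4, ..., 18]
```

The point of the dovetailing is that no single divergent computation can block the enumeration of the rest of the domain.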
The interplay of these two basic objects, Turing reducibility and c.e. sets, has a long and rich history. The c.e. sets under union and intersection form a lattice, denoted by
${\mathcal E}$
, and a common object of study is
${\mathcal E}^*$
, which is
${\mathcal E}$
modulo the congruence
$=^*$
, where
$A=^*B$
means that the symmetric difference of A and B is finite. The Turing degrees of c.e. sets form an upper semilattice, denoted by
${\mathcal R}$
. Ever since the groundbreaking paper of Post, there has been a persistent intuition that structural properties of c.e. sets have reflections in their degrees, and vice versa. In particular, definability in
${\mathcal E}$
should be linked with information content as measured by Turing reducibility.
The simplest possible illustration of this is the fact that the complemented members of
${\mathcal E}$
are exactly the members of
${\mathbf 0}$
, the degree of the computable sets. One of the main avenues of attack has been to link properties of c.e. sets with the jump operator,
$A'=\{e\mid \Phi _e^A(e)\operatorname {\mathrm {\downarrow }}\}$
, the halting problem relative to A. The jump operator gives one way of understanding information content of members of
${\mathcal R}$
. A deep example is Martin’s result [Reference Martin10] that the Turing degrees of maximal sets are exactly the high c.e. Turing degrees. Here a c.e. set A is high if
$\emptyset''\equiv _{\text{T}} A'$
, i.e., if the jump operator does not distinguish between A and the halting problem
$\emptyset '$
. A coinfinite c.e. set A is maximal if it represents a co-atom in
${\mathcal E^*}$
, that is, if for every c.e. set
$W\supseteq A$
, either
$A =^* W$
or
$A=^* \mathbb N$
.
On the other hand, we call A low if
$A'\equiv _{\text{T}} \emptyset '$
, that is, if the jump operator does not distinguish between A and the computable sets. Such sets have relatively little information content, so we would guess that low sets should perhaps share some properties with the computable sets.
This intuition has seen many realizations in the study of the Turing degrees of low c.e. sets. One such example is Robinson’s [Reference Robinson12] result that if we have c.e. sets
$B\leqslant _T A$
with B low, then we can split
$A=A_1\sqcup A_2$
as two c.e. sets with
$A_1\oplus B$
Turing incomparable with
$A_2\oplus B$
. This result shows that splitting and density can be combined above low c.e. degrees, something that, famously, Lachlan [Reference Lachlan8] showed fails in general.
How is this intuition reflected in the properties of low c.e. sets in
${\mathcal E}^*$
? Soare [Reference Soare16] showed that if A is low then
$\mathcal {L}^*(A)\cong {\mathcal E}^*$
. Here
$\mathcal {L}^*(A)$
denotes the lattice of c.e. supersets of A modulo
$=^*$
. Thus it is impossible to distinguish A from
$\emptyset $
using properties of its lattice of supersets. We remark that Soare proved that the isomorphism is effective, in that there are computable functions f and g with
$W_e\mapsto W_{f(e)}$
(from
$\mathcal {L}^*(A)$
to
${\mathcal E}^*$
) and
$W_e\mapsto W_{g(e)}$
for the return map, inducing the isomorphism.
Note that Martin’s Theorem says that if a c.e. degree is high then it contains a maximal set A. If A is maximal then
$\mathcal {L}^*(A)$
is the two-element lattice—as far as possible from
$\mathcal {E}^*$
. It is natural to wonder what level of computational power stops c.e. sets of a given degree from resembling computable sets. A natural demarcation seems to be at low
$_2$
. Recall that a set A is low
$_2$
if the double jump does not distinguish between A and the computable sets:
$A''\equiv _{\text{T}} \emptyset''$
.
Shoenfield [Reference Shoenfield13] proved that if a c.e. Turing degree is not low
$_2$
, then it contains a c.e. set B that has no maximal superset. Hence, in particular,
$\mathcal {L}^*(B)\not \cong {\mathcal E}^*$
. It follows that if there is a collection
${\mathcal J}$
of degrees characterized by their jumps, such that deg
$(A)\in {\mathcal J}$
implies
$\mathcal {L}^*(A)\cong {\mathcal E}^*$
, then
${\mathcal J}$
must be a subclass of the low
$_2$
degrees.
Maass [Reference Maass9] extended Soare’s result to prove that for a c.e. set A,
$\mathcal {L}^*(A)$
is effectively isomorphic to
${\mathcal E}^*$
if and only if A is semilow
$_{1.5}$
. Here a c.e. set A is called semilow
$_{1.5}$
if
$\{e:W_e\subseteq ^* A\}$
is
$\Sigma _2$
. Being semilow
$_{1.5}$
is a “pointwise” variation of lowness. It is known that a c.e. degree that is not low contains a c.e. set which is not semilow
$_{1.5}$
. In particular, there are low
$_2$
sets which are not semilow
$_{1.5}$
(Downey, Jockusch, and Schupp [Reference Downey, Jockusch and Schupp3, Theorem 1.5]).
The following conjecture has been around for some time.
Conjecture 1.1 (Soare and others).
If A is a coinfinite
$\text {low}_2$
c.e. set then
${\mathcal {L}^*(A)\cong {\mathcal E}^*}$
.
This conjecture is stated as an open question in Soare [Reference Soare17]. We remark that Conjecture 1.1 has been claimed as a theorem in several places, notably as being due to Harrington, Lachlan, Maass, and Soare as stated in Harrington–Soare [Reference Harrington and Soare5, Theorem 1.8], as well as in lectures by Soare in the late 1990s. But despite the best efforts of computability-theorists in the last 30 or so years, no proof has appeared, so it is surely an open question by this stage.
Notice that for this conjecture to hold, we will need some method of constructing the isomorphism other than an effective isomorphism. If Conjecture 1.1 is true, then methods along the lines of the powerful
$\Delta _3$
methods introduced by Harrington and Soare [Reference Harrington and Soare5] or Cholak [Reference Cholak1] will likely be needed. Cholak [Reference Cholak1] gave some partial evidence for the validity of the conjecture by showing that every semilow
$_2$
c.e. set A with the outer splitting property satisfies
$\mathcal {L}^*(A)\cong {\mathcal E}^*$
. It is not important for our story what these properties are, save to say that they give weaker guessing methods than semilow
$_{1.5}$
-ness, but which are sufficient given the
$\Delta _3$
isomorphism method. In this article we make a modest contribution supporting Conjecture 1.1. We will soon explain what we mean by guessing, and how our work fits into the programme of establishing the conjecture.
Lachlan [Reference Lachlan6] proved that if A is low
$_2$
then A has a maximal superset (i.e.,
$\mathcal {L}^*(A)$
contains a co-atom). This is currently the strongest general result about all low
$_2$
c.e. sets. It is likely not too difficult to modify Lachlan’s technique to extend this result to k-quasimaximal sets, c.e. sets which are the intersections of exactly k maximal sets, by simultaneously constructing k maximal supersets
$M_1,\dots ,M_k$
of A such that
$M_i\neq ^*M_j$
for
$i\ne j$
. If Q is k-quasimaximal then
$\mathcal {L}^*(Q) $
is a k-atom Boolean algebra.
Lachlan [Reference Lachlan7] proved that
$\mathcal {L}^*(A)$
is a Boolean algebra if and only if A is hyperhypersimple. Hyperhypersimple sets were defined, if not constructed, by Post [Reference Post11]. Recall that a coinfinite c.e. set A is called simple if its complement
${A^\complement } = \mathbb N\smallsetminus A$
contains no infinite c.e. subset. A much stronger property is that A is hyperhypersimple, meaning that there is no collection of infinitely many finite, pairwise disjoint, uniformly c.e. sets such that each element of the collection intersects the complement of A. However, Lachlan’s characterisation in terms of Boolean algebras is used more often. A hyperhypersimple set A is atomless if
$\mathcal {L}^*(A)$
is the atomless Boolean algebra. In a talk in 2006, Cholak pointed out the next test case for the low
$_2$
conjecture: atomless hyperhypersimple sets. In this article we affirm Cholak’s conjecture.
Theorem 1.2. Every coinfinite
$\text {low}_2\!$
c.e. set has an atomless hyperhypersimple superset.
We now try to explain why this is an important test case for the full low
$_2$
conjecture. By necessity, this explanation is somewhat technical as it revolves around issues of
$\Delta _3$
guessing and arguments akin to those of the deep paper [Reference Harrington and Soare5] by Harrington and Soare.
The reader might recall Friedberg’s [Reference Friedberg4] construction of a maximal set M. For each e, we need to ensure that if
$W_e\supseteq M$
, then either
$W_e=^* M$
or
$W_e=^*\mathbb N$
, whilst still keeping M coinfinite. At each stage s we will have listed in order the elements of the complement
$M^\complement _s=\{b_{i,s}\mid i\in \omega \}$
. To make
${M}$
coinfinite, we make sure that for each e,
$\lim _s b_{e,s}=b_e$
exists. For the maximality requirement, the idea is to make, for all
$n\geqslant e$
, the
$n{}^{\text {th}}$
member of the complement of M a member of
$W_e$
, as follows. Assuming that
$b_{e,s},\dots ,b_{n-1,s}\in W_{e,s}$
but
$b_{n,s}\not \in W_{e,s-1}$
, if we see some
$x=b_{m,s}\in W_{e,s}$
for
$m\geqslant n$
, we make
$x=b_{n,s+1}$
by enumerating the elements
$b_{n,s},\dots ,b_{m-1,s}$
into
$M_{s+1}$
, causing
$b_{n,s+1}\in W_{e}.$
If this happens for each
$n\geqslant e$
then
$W_e\cup M=^* {\mathbb N}.$
On the other hand, if we fail to keep finding such
$b_{m,s}$
, from some point n onwards, then
$W_e\cup M=^* M$
.
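The "dumping" move just described can be sketched in a few lines of Python. This is a toy, single-requirement simulation under our own naming (`dump_step`, with the complement kept as a sorted list and stage approximations of $W_e$ as plain sets); it is not the construction itself, which must also handle interactions between requirements.

```python
# A minimal sketch (toy, single requirement) of Friedberg's dumping move:
# the complement of M is kept as an increasing list b_0 < b_1 < ...; when
# some b_m with m >= n is seen in W_e while b_n is not yet in W_e, we dump
# b_n, ..., b_{m-1} into M, so that the n-th complement element lands in W_e.

def dump_step(complement, M, W_e, n):
    """Try to make complement[n] a member of W_e; return True if balls moved."""
    for m in range(n, len(complement)):
        if complement[m] in W_e:
            if m == n:
                return False           # already satisfied, nothing to dump
            M.update(complement[n:m])  # enumerate b_n, ..., b_{m-1} into M
            del complement[n:m]        # they leave the complement
            return True
    return False                       # no candidate seen yet; keep waiting

complement = list(range(10))           # b_i = i in this toy example
M = set()
W_e = {3, 5, 8}                        # a stage approximation of W_e
for n in range(len(complement)):
    dump_step(complement, M, W_e, n)

print(complement)   # an initial segment of the complement now lies in W_e
print(sorted(M))
```

After the loop, every complement element below the last seen member of $W_e$ has been pushed into $W_e$, exactly as in the verification sketched above.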
Of course, we have to worry about conflicts between requirements: the requirement for
$W_e$
might want to make
$b_{n,s}\in W_e$
, and some other
$W_{e'}$
might want to put this into M to cause
$b_{n,t}\in W_{e'}$
, so we need some way to reconcile such conflicting requests. This is done using the Friedberg–Muchnik priority method. In modern terminology, if
$e'>e$
, then the requirement dealing with
$W_{e'}$
will guess the eventual behaviour of
$W_e$
, and therefore be able to align with it: either wait for elements of
$M^\complement $
to enter
$W_e$
before attempting to find elements of the complement in
$W_{e'}$
; or only start acting at a stage after which no more elements of
$M^\complement $
enter
$W_e$
To implement this, Friedberg’s brilliant idea was to use e-states.
The e-state of b at stage s is the binary string that records the indices
$d\leqslant e$
such that
$b\in W_{d,s}$
. Friedberg’s idea is to put, for each
$n\geqslant e$
,
$b_{n,s}$
into the lexicographically greatest possible e-state. This means that almost all of the complement of M will be in the same e-state, and hence be in
$W_e$
or out of
$W_e$
. In our constructions below, we use the modern tree-of-strategies terminology, rather than explicitly using e-states. However, the notion of e-states appears necessary when constructing isomorphisms of lattices of c.e. sets, and so will be useful for the current discussion.
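For illustration, the e-state bookkeeping is easy to make concrete. The following sketch (our own naming; stage approximations of the sets $W_{d,s}$ are modelled as plain Python sets) computes e-states as binary strings, on which Python's string comparison coincides with the lexicographic order used above.

```python
# Sketch: the e-state of a number b at stage s, recorded as a binary string
# over the indices d <= e, using stage approximations W[d] for W_{d,s}.

def e_state(b, e, W):
    """Binary string sigma with sigma[d] = '1' iff b is in W_{d,s}."""
    return ''.join('1' if b in W[d] else '0' for d in range(e + 1))

W = [{0, 1, 4}, {1, 2}, {1, 4, 5}]    # toy approximations of W_0, W_1, W_2
print(e_state(1, 2, W))               # '111'
print(e_state(4, 2, W))               # '101'

# String comparison is lexicographic, matching the preference on e-states:
assert e_state(1, 2, W) > e_state(4, 2, W)
```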
Suppose that we try to emulate Friedberg’s method to construct a maximal
${M\supset A}$
, where the opponent is controlling the c.e. set A. It is within the opponent’s power to take elements from the complement
${M}^\complement _s$
such as
$b_{i,s}$
, and declare that they are in M, since they are in
$A_{s+1}$
, and
$M\supseteq A$
. The danger in implementing the strategy above for dealing with
$W_e$
is that we could believe that we see infinitely many elements of
$M^\complement $
in
$W_e$
, and keep dumping elements not yet seen in
$W_e$
into M, but then the elements of
$W_e$
go into A. This would result in M being cofinite. We should think of the elements of A as “phantom elements”, ones that shouldn’t really be considered, except that we cannot know this in advance. The way around this is for the requirement to guess whether there are infinitely many numbers that are in
$W_e$
and not in A, i.e., if
$W_e\smallsetminus A$
is infinite. This statement is
$\Pi _2(A)$
; since
$A $
is
$\text {low}_2$
, this means that this statement is
$\Delta _3$
.
Lachlan’s proof in [Reference Lachlan6] (see also [Reference Soare17, Theorem XI.5.1]
) is a reasonably delicate construction that uses a version of
$\Delta _3$
guessing. Lachlan’s original proof seemed ad hoc and combinatorial. As part of our article we give a new thematic proof of Lachlan’s Theorem based around
$\Delta _3$
guessing on an
$\omega $
-branching priority tree. As both the “yes” and “no” answer to the question whether
$W_e\smallsetminus A$
is infinite are
$\Sigma _3$
(as
$\Delta _3 = \Sigma _3\cap \Pi _3$
), the outcomes are pairs
$(\text {"yes"},n)$
and
$(\text {"no"},n)$
, where n is the witness for the
$\Sigma _3$
predicate holding. When considering more than one requirement, we are guessing the answers for a Boolean combination of
$\Pi _2(A)$
questions; but these too will be
$\Delta _3$
, so will be subject to the same guessing procedure.
$\Delta _3$
priority arguments are much less common in computability theory than the usual infinite injury arguments (such as the Thickness Lemma, or the Minimal Pair argument) which can be performed on finitely branching trees. Harrington and Soare [Reference Harrington and Soare5], Downey and Greenberg [Reference Downey and Greenberg2], and Shore and Slaman [Reference Shore and Slaman14, Reference Shore and Slaman15] are some examples of
$\Delta _3$
arguments.
The method of
$\Delta _3$
guessing does not appear to be sufficient for results stronger than the construction of a maximal superset. Suppose that we are trying to show that
$\mathcal {L}^*(A)\cong \mathcal {E}^*$
. For each e and each e-state
$\sigma $
, we need to ensure that infinitely many elements of
$\mathbb N$
have e-state
$\sigma $
(with respect to the list of c.e. subsets of
$\mathbb N$
) if and only if infinitely many elements of
$A^\complement $
have e-state
$\sigma $
, with respect to a listing of the c.e. supersets of A. The e-states correspond to measures of intersections and non-intersections of c.e. sets.
The method of
$\Delta _3$
guessing will allow us to correctly guess which e-states we should try to fill. But now we have the more complicated task of actually finding potential elements and enumerating them into the correct sets, so that we can match e-states as required. The difference between the maximal set construction and the more general construction is that in the former, for each e, there will be exactly one e-state that we need to populate. So there, the only difficulty is in guessing which e-state will be populated; once this is decided, all elements of the complement of M will be enumerated into the same sets (as usual, except for finitely many). If there is more than one e-state to populate, say
$\sigma $
and
$\tau $
, then given an element b that at a stage s seems to be in the complement of the set we are building, we need to decide whether to enumerate b into sets so as to make the e-state of b either
$\sigma $
or
$\tau $
. We need to ensure that both
$\sigma $
and
$\tau $
will be populated by infinitely many “true” elements, elements that are not in A. The question is: given such a b at stage s, should we believe that this is the true situation; and, moreover, what do we do if we act on false beliefs?
The key to all proofs we mentioned (Soare, Maass, Cholak, etc.), where
$\mathcal {L}^*(A)$
is shown to be isomorphic to
${\mathcal E}^*$
, is some kind of guessing procedure to understand when elements seem to be truly outside A. If A is low, semilow, or even semilow
$_{1.5}$
we have a “pointwise” testing process where we can guess whether individual elements are in
$A^\complement $
, and get the answer more-or-less immediately. For example, suppose that A is low. Then using, for example, the Robinson trick, we can ask
$A'$
, and hence
$\emptyset '$
, whether some
$z\in {A}^\complement _s$
(in some desirable e-state) is actually outside A, and must eventually get a true “yes” if there are infinitely many such potential z. In his proof that semilow
$_{1.5}$
c.e. sets have lattices of supersets effectively isomorphic to
${\mathcal E}^*$
[Reference Maass9], Maass points out that indeed, it is not enough to know that there are infinitely many elements in some state, but we also need to capture them. Maass’s outer splitting property, mentioned above, is an elaboration on the lowness guessing technique, which works if A is semilow
$_{1.5}$
.
The key problem appears to be that of splitting e-states. For some e-state
$\sigma $
, we have somehow guaranteed that infinitely many true elements (elements outside A) have e-state
$\sigma $
. Suppose that
$\Delta _3$
guessing tells us that both
$\sigma \hat {\,\,} 0$
and
$\sigma \hat {\,\,} 1$
are
$(e+1)$
-states that need “filling”. In the construction, we are given an infinite set E of elements in state
$\sigma $
, and we know that infinitely many of these are outside A. The problem is to split E into two sets
$E_0$
and
$E_1$
, while ensuring that both
$E_0\smallsetminus A$
and
$E_1\smallsetminus A$
are infinite. We would then enumerate the elements of
$E_1$
into the
$(e+1){}^{\text {th}}$
c.e. superset of A, making them have
$(e+1)$
-state
$\sigma \hat {\,\,} 1$
, while keeping the elements of
$E_0$
outside that set, making them have
$(e+1)$
-state
$\sigma \hat {\,\,} 0$
. This is precisely what is required in the construction of an atomless hyperhypersimple superset. As Cholak pointed out,
the dynamics of Lachlan’s construction do not seem to be modifiable to obtain this.
In our main construction in Section 3, we introduce a novel method for performing this splitting. This method does not rely solely on
$\Delta _3$
guessing; we also use domination properties of
$\text {low}_2$
sets, namely, the fact that if A is
$\text {low}_2$
, then there is a
$\emptyset '$
-computable function that dominates all functions computable from A.
To end this discussion, we should mention why our new technique does not seem to immediately solve the original problem of showing that
$\mathcal {L}^*(A)\cong \mathcal {E}^*$
. The issue is very delicate, and as is often the case, relies on discrepancies in timing. When constructing an atomless hyperhypersimple superset H, we are, in some sense, controlling the supersets of H, which means that all e-states have equal status (this is by necessity an imprecise simplification). When constructing an isomorphism, the opponent has it within their power to shift elements from one e-state to another. There is a difference between what are colloquially known as “low” and “high” e-states. The opponent can move elements from low to high e-states. Unlike the low guessing construction, or the construction relying on the outer splitting property, our splitting method is not immediate: we have to wait an unknown amount of time until we get a certification that certain elements are outside A, and for finitely many elements, such certification may never happen. While we are waiting for certification, the opponent may shift elements out of a desirable low state. With our new splitting method, we are thus able to split high states, but not low states. In the hyperhypersimple construction, all states are high.
In addition to his characterisation of hyperhypersimple sets, Lachlan also classified the isomorphism types of the resulting Boolean algebras: he showed that the Boolean algebras
$\mathcal {L}^*(A)$
for hyperhypersimple sets A are precisely the
$\Sigma _3$
-Boolean algebras—quotients of the computable, atomless Boolean algebra by
$\Sigma _3$
ideals. We show in this article that Theorem 1.2 can be extended to obtain all such algebras as the lattice of supersets of a superset of any given
$\text {low}_2$
c.e. set.
Theorem 1.3. Suppose that A is a coinfinite,
$\text {low}_2$
c.e. set. Let B be a
$\Sigma _3$
Boolean algebra. Then there is a c.e. superset
$H\supseteq A$
with
$\mathcal {L}^*(H)\cong B$
.
2 Maximal supersets
As mentioned above, in order to present our proof of Theorem 1.2, it would be useful to first give a “modern” or at least “thematic” proof of Lachlan’s theorem, using
$\Delta _3$
guessing on a priority tree.
Theorem 2.1 (Lachlan [Reference Lachlan6]).
Every
$\mathrm{low}_2\!$
coinfinite c.e. set has a maximal superset.
2.1 Discussion
We are given a coinfinite,
$\text {low}_2$
c.e. set A; we enumerate a maximal c.e. set
$M\supseteq A$
.
This proof follows, to a certain extent, the
$\Delta _3$
automorphism machinery of Harrington and Soare [Reference Harrington and Soare5]. The construction is performed on a tree of strategies, similarly to many infinite injury priority arguments. Nodes on the tree represent guesses about the eventual behaviour of some aspects of the construction, and use their guesses to meet a requirement that they are assigned to. In the current construction, nodes of length
$e+1$
make guesses about how the
$e{}^{\text {th}}$
c.e. set
$W_e$
interacts with the construction, and attempt to meet the
$e{}^{\text {th}}$
maximality requirement: either
$W_e\cup M =^*\mathbb N$
, or
$W_e\cup M=^* M$
. We will identify a true path, the path of nodes whose guesses are correct; nodes on the true path will be successful in meeting their requirements.
In some ways, though, the usage of the tree of strategies is quite different from most constructions. For one, at a stage s of the construction, we do not define a path of “accessible” nodes, those whose guesses appear to be correct at that stage. Further, there will not be much explicit interaction between strategies: they will not impose restraint, or cause initialisation of other strategies. We will not use the terminology of relative priority.
More importantly, we use the tree as the hardware of a pinball machine. We view the priority tree as growing downwards; so the root of the tree is the “top” node. During the construction, we place some balls at the root of the tree. These balls represent numbers that at a stage s of the construction are not in
$A_s$
(the stage s approximation to the given c.e. set A). At any given stage, there will be only finitely many balls on the machine, but we will ensure that every element of the complement
$A^\complement $
of A is placed on the machine at some stage. Once placed at the root, we allow balls to move between nodes on the tree. Some balls may be enumerated into M, at which time they are removed from the machine. This includes all the elements of A, since
$A\subseteq M$
. We will ensure that balls that are never enumerated into M only move during finitely many stages of the construction, so they eventually arrive at a permanent “resting place”. The main mechanics of the construction are the determination of which balls move, where they move to, and which balls are removed from the machine.
We remark that the term “pinball machine” was earlier used by Lerman to organise priority arguments (often used for embedding lattices into the c.e. degrees). The Harrington–Soare pinball machine that we use here is different in that the balls move from the root towards other nodes, the opposite of the direction of travel in Lerman’s machines.
As mentioned in the introduction, another aspect of the Harrington–Soare machinery is the employment of
$\Delta _3$
-guessing. This is the main way that we use, in this construction, the assumption that A is
$\text {low}_2$
. In a typical
$\Pi _2$
argument (such as the minimal pair construction), during the construction we guess the outcome of
$\Pi _2$
questions based on what we see at each stage. In the current construction, we ask more complicated questions, namely,
$\Pi _2(A)$
questions, and we use the fact that
$\Pi _2(A)\subseteq \Delta _3$
. Rather than observing the construction and making guesses accordingly, we are given an approximation to the answer to a
$\Delta _3$
statement, that may or may not be aligned with what is measured at a given stage; we just know that in the limit, the approximation will give us the correct answer. As we shall detail below, we will rely on the recursion theorem to ensure that we indeed approximate the correct answers. The guessing process for
$\Delta _3$
facts is more complicated than that of
$\Pi _2$
facts. Namely, to guess membership in a
$\Delta _3$
set S, we need to guess an answer (
$n\in S$
or
$n\notin S$
), and to guess an existential witness for the
$\Sigma _3$
predicate being guessed. This implies that the tree of strategies needs to be infinite-branching.
2.1.1 The requirements.
Let us consider now the goals of the construction. As discussed, the balls that are never removed from the machine will be precisely the elements of
$M^\complement $
. Each such ball will eventually settle as a “resident” of some node
$\beta $
on the tree. For a node
$\alpha $
on the tree, we will let
$Y {(\alpha) }$
denote the collection of balls which are permanent residents of nodes
$\beta \succcurlyeq \alpha $
. The rules about ball movement will ensure that
$Y {(\alpha) }$
is a d.c.e. set. To enter
$Y {(\alpha) }$
, a ball x will need to first pass through
$\alpha $
, i.e., be a resident of
$\alpha $
at some stage. It may later move to extensions of
$\alpha $
. It is possible that it will be removed from
$Y {(\alpha) }$
by being pulled by some node, that lies to the left of
$\alpha $
on the tree. In any case, balls can be removed from
$Y {(\alpha) }$
by enumerating them into M. We will ensure, however, that
$Y {(\alpha) }\cup M$
is in fact a c.e. set, though not uniformly in
$\alpha $
. This is because either only finitely many balls outside M will ever enter
$Y {(\alpha) }$
, or only finitely many balls outside M ever leave
$Y {(\alpha) }$
, or
$Y {(\alpha) }= \emptyset $
.
We will ensure the following:
-
(a) For each node
$\beta $ on the true path,
$Y {(\beta)} =^* {M}^\complement $ . That is, all but finitely many elements of
${M}^\complement $ will have passed through
$\beta $ at some stage and settled at
$\beta $ or some extension of
$\beta $ , i.e., below
$\beta $ (recall that our trees grow downwards).
-
(b) For each e, if
$\beta $ is the node on the true path of length
$e+1$ , then
$Y {(\beta)} $ is either almost contained in, or almost disjoint from,
$W_e$ .
-
(c) We also need to ensure that M is coinfinite.
For (a), we will ensure that only finitely many balls have permanent residence to the left of
$\beta $
, to the right of
$\beta $
, or above
$\beta $
.
For (b), we will ensure that either all balls that ever pass through
$\beta $
are already seen to be in
$W_e$
when they do, or that we know that only finitely many elements of
$W_e$
will ever be available to pass through
$\beta $
. In the first case, the node
$\beta $
will ensure that all but finitely many balls outside
$W_e$
will be enumerated into M (we will say that they are eliminated from the machine).
For (c), we will ensure that infinitely many nodes
$\beta $
on the true path hold on to balls and ensure that they are not eliminated, i.e., keep them out of M. Similar action will ensure that each ball moves only finitely often, i.e., eventually settles at some node.
2.2 Setup
We now go into the details.
2.2.1 The tree.
We let the tree of strategies be the collection of all sequences of the symbols
${\texttt {fin}}_n$
and
$\infty _n$
. These symbols are called outcomes. We write
$\alpha \preccurlyeq \beta $
to indicate that
$\alpha $
is a prefix of
$\beta $
.
In addition to extension, we will use the Kleene–Brouwer ordering on the tree. To define this we fix an ordering of the outcomes, say

We say that
$\alpha $
lies to the left of
$\beta $
, and write
$\alpha <_L\beta $
, if
$\alpha $
and
$\beta $
are incomparable (neither is a prefix of the other), and
$\alpha (k)< \beta (k)$
for the least k such that
${\alpha (k)\ne \beta (k)}$
. We write
$\alpha \leqslant \beta $
if either
$\alpha <_L\beta $
or
$\beta \preccurlyeq \alpha $
. As this is the only linear ordering of strategies that we will use, we use the notation
$\alpha \leqslant \beta $
rather than
$\alpha \leqslant _{\text {KB}} \beta $
.
We let
${\lambda }$
denote the root of the tree (the empty sequence).
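The orderings just defined can be sketched directly in code. In the sketch below (our own naming throughout), nodes are tuples of outcomes; the linear order placed on the individual outcomes is an illustrative choice only, since the construction fixes its own ordering of the symbols ${\texttt{fin}}_n$ and $\infty_n$.

```python
# Sketch of the Kleene-Brouwer-style ordering on strategy nodes:
# alpha <= beta iff alpha lies to the left of beta, or beta is a prefix
# of alpha.  An outcome is a pair ('infty', n) or ('fin', n); the rank
# below is one illustrative way to linearly order the outcomes.

def outcome_rank(o):
    kind, n = o
    return 2 * n + (0 if kind == 'infty' else 1)

def left_of(alpha, beta):
    """alpha <_L beta: incomparable, and alpha has the smaller outcome
    at the first place where the two nodes differ."""
    for a, b in zip(alpha, beta):
        if a != b:
            return outcome_rank(a) < outcome_rank(b)
    return False                      # one node is a prefix of the other

def kb_leq(alpha, beta):
    is_prefix = len(beta) <= len(alpha) and alpha[:len(beta)] == beta
    return left_of(alpha, beta) or is_prefix

root = ()
a = (('infty', 0),)
b = (('fin', 0), ('infty', 2))
assert kb_leq(a, b)                   # a lies to the left of b
assert kb_leq(b, root)                # every node is <= the root
```

Note that, as in the text, extensions of a node are *below* it in this order, so the root is the greatest element.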
2.2.2 Notation for the pinball machine.
As discussed, at each stage, finitely many balls x will reside at some nodes of the machine. We let

denote the collection of x that reside at the node
$\alpha $
at the beginning of stage s. We let
$Y {(\alpha)}_{s}$
be the collection of all balls that at the beginning of stage s reside at
$\alpha $
or below
$\alpha $
, i.e., at some extension of
$\alpha $
:

So
$Y {{(\lambda)}}_{s}$
is the collection of balls that are on the machine at the beginning of stage s.
2.2.3 True stage, and delayed, enumerations.
Since the given set A is
$\text {low}_2$
, it is not high. C.e. sets that are not high have “true stage” enumerations with respect to any given computable growth rate: for any computable function f, there is a computable enumeration
$(A_r)$
of A such that for infinitely many stages r, the first
$f(r)$
many elements of the stage r complement
${A}^\complement _r$
are “correct”: they are elements of the complement
${A}^\complement $
(see [Reference Soare17, Lemma XI.1.6]).
We fix an enumeration
$(\tilde {A}_r)$
of A, that is true with respect to the function
$f(r) = r^2$
. During the construction, though, it will be useful to delay this enumeration. We will define, during the construction, an enumeration
$(A_s)$
of A, such that for all s there is some r such that
$A_s = \tilde A_r$
. We will write
$n(s)=r$
.
Let s be a stage, and suppose that
$n(s)=r$
. We will let the construction run: balls will move around, or be enumerated into M, and it will be useful to keep track of these actions as happening during several stages: s,
$s+1$
,
$s+2$
, and so on. While we are doing this, we keep our enumeration of A fixed: that is,
$\tilde A_r = A_s = A_{s+1} = A_{s+2} = \dots $
. After finitely many stages, we will reach a stage
$t\geqslant s$
at which we decide that there is nothing we want to do. We then declare t to be a “new balls” stage, and set
$A_{t+1} = \tilde {A}_{r+1}$
,
i.e., set
$n(t+1) = r+1$
.
Thus, the function
$s\mapsto n(s)$
will be weakly increasing and onto; we will have
$n(s+1)\ne n(s)$
exactly when s is a new balls stage, in which case
$n(s+1) = n(s)+1$
. What is important is that the enumeration
$(\tilde A_r)$
is given to us, whereas the enumeration
$(A_s)$
is defined by us during the construction: based on how the construction develops, we decide whether to declare a “new balls” stage, i.e., when to increase n by 1 and thus enumerate more numbers into A. We start with
$n(0)=0$
. Observe that for any stage s,
$n(s)$
is the number of new balls stages
$t<s$
. We will ensure that there are infinitely many new balls stages, so that indeed
$\bigcup _s A_s = A$
.
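The delaying mechanism just described is simple enough to sketch directly. In this Python sketch (the class name `DelayedEnumeration` and its methods are ours, for illustration), the given enumeration $(\tilde A_r)$ is modelled as a list of sets, and the counter `n` plays the role of $n(s)$: it is incremented only at "new balls" stages.

```python
# Sketch of the delayed enumeration: we are handed a fixed enumeration
# (tilde_A_r) of A and release its stages one at a time, advancing the
# counter n(s) only when the construction declares a "new balls" stage.

class DelayedEnumeration:
    def __init__(self, tilde_A):      # tilde_A[r] models \tilde{A}_r
        self.tilde_A = tilde_A
        self.n = 0                    # n(s) for the current stage

    def current(self):
        return self.tilde_A[self.n]   # A_s = \tilde{A}_{n(s)}

    def new_balls_stage(self):
        self.n += 1                   # n(s+1) = n(s) + 1

tilde_A = [set(), {3}, {3, 7}]        # a toy enumeration of A
enum = DelayedEnumeration(tilde_A)
assert enum.current() == set()        # several stages may pass ...
enum.new_balls_stage()                # ... until a new balls stage occurs
assert enum.current() == {3}
```

Between consecutive new balls stages the approximation to A is frozen, which is exactly what allows the construction to finish its pending ball movements before more numbers enter A.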
Definition 2.2. We let
$Q_s$ denote the set of the first $n(s)^2$ many elements of the complement ${A}^\complement_s$.
A stage s is A-true if
$Q_s \subseteq {A}^\complement $
.
Thus, if
$s<t$
are successive new balls stages, then
$Q_{s+1} = Q_{s+2} = \cdots = Q_t$
, and
$s+1$
is A-true if and only if all the stages
$s+1,s+2,\dots , t$
are A-true. A stage s is A-true if and only if the stage
$r = n(s)$
is true for the enumeration
$(\tilde A_r)$
with respect to
$r\mapsto r^2$
. By assumption, there are infinitely many such r. As we will ensure that
$s\mapsto n(s)$
is onto, there will be infinitely many A-true stages s.
We will ensure that at every stage s,
$Y {{(\lambda)}}_{s} = Q_s\smallsetminus M_s$
. So at a new balls stage s, we will place all the elements of
$Q_{s+1}$
that have not yet been put on the machine, at the root. As stages go by, some of these elements may be enumerated into M.
The reason for using true stages for the function
$f(r) = r^2$
is the following. If s is a new balls stage then
$|Q_{s+1}\smallsetminus Q_s| \geqslant (n(s)+1)^2 - n(s)^2> n(s)$
.
Thus, at a new balls stage s we will be adding more than
$n(s)$
-many new balls to the root of the machine; and we will use the fact that
$n(s)\to \infty $
. As time goes by, we will be getting balls at the root in increasing number, and infinitely often, all of these balls will be A-correct.
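The growth here is just the gap between consecutive squares: assuming, as the choice of $f$ suggests, that the pool of guaranteed-correct balls at stage s has size $n(s)^2$, moving from $r$ to $r+1$ enlarges a pool of $r^2$ balls to one of $(r+1)^2$, a gain of at least $2r+1 > r$ even if none of the old balls were lost. A one-line check (illustrative only):

```python
def min_new_balls(r):
    # Gap between consecutive squares: (r+1)^2 - r^2 = 2r + 1 > r.
    return (r + 1) ** 2 - r ** 2
```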
2.2.4
$\Delta _3$
guessing.
By assumption, A is
$\text {low}_2$
, i.e.,
$\Pi _2(A)\subseteq \Delta _3$
. Thus, any Boolean combination of
$\Pi _2(A)$
statements is equivalent to a
$\Sigma _3$
statement. Since there is a universal
$\Pi _2(A)$
set, this is effective: given
$\psi $
, a Boolean combination of
$\Pi _2(A)$
statements, we can effectively find a
$\Sigma _3$
statement
$\chi $
such that
$\psi $
holds if and only if
$\chi $
holds.
In turn, from
$\chi $
we can effectively produce a uniformly computable collection of nondecreasing sequences
$\bar \ell (\psi ,n) = (\ell (\psi ,n)_{s})_{s\in \mathbb N}$
for
$n\in \mathbb N$
such that:
-
•
$\psi $ holds if and only if for some n,
$\bar \ell (\psi ,n)$ is unbounded.
To see this, write
$\chi $
as
$\exists n\forall x\exists y\,\theta (n,x,y)$
. Then we let
$\ell (\psi ,n)_{s}$
be the greatest
$x<s$
such that for all
$x'\leqslant x$
, there is some
$y<s$
such that
$\theta (n,x',y)$
holds.
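This translation is concrete enough to sketch. In the fragment below, `theta` is an arbitrary computable predicate standing in for $\theta$; returning $-1$ plays the role of "no such $x$ yet", and these names are illustrative rather than the paper's notation.

```python
def ell(theta, n, s):
    """Greatest x < s such that every x' <= x has a witness y < s
    for theta(n, x', y); -1 if even x' = 0 has no witness below s.
    The value is nondecreasing in s, since witnesses persist."""
    best = -1
    for x in range(s):
        if any(theta(n, x, y) for y in range(s)):
            best = x
        else:
            break  # a gap at x caps the value until a witness appears
    return best
```

With $\theta(n,x,y)$ holding only for $x<n$, the sequence is bounded for each n; with a witness at every x, it is unbounded, matching the bullet above.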
Shortly, for each node
$\alpha $
on the tree of strategies, we will formulate an “
$\alpha $
question”
$\psi (\alpha )$
, which will be a
$\Pi _2(A)$
statement. In other words, the collection of nodes
$\alpha $
such that
$\psi (\alpha )$
holds is
$\Pi _2(A)$
. Using the transformation above for the statements
$\psi (\alpha )$
and
$\lnot \psi (\alpha )$
(both are Boolean combinations of
$\Pi _2(A)$
statements), we obtain families of sequences
$\bar \ell (\psi (\alpha ),n)$
and
$\bar \ell (\lnot \psi (\alpha ),n)$
. By the recursion theorem, we know a computable index for the construction, and so can have access to these families of sequences during the construction.Footnote
8
2.2.5 The
$\alpha $
question.
Definition 2.3. Let
$\alpha $
be a node. The statement
$\psi (\alpha )$
is:
For every
$k\in \mathbb N$
there is an A-true stage s for which
$\left| Y {(\alpha)}_{s}\cap W_{|\alpha |,s} \right| \geqslant k$
.
As just discussed, during the construction we have access to the sequences
$\bar \ell (\psi (\alpha ),n)$
and
$\bar \ell (\lnot \psi (\alpha ),n)$
. Now for each node
$\beta $
we define a (single) sequence
$\bar \ell (\beta )$
by induction on the length
$|\beta |$
:
-
•
$\ell ({\lambda })_{s}= s$ ;
-
•
$\ell (\beta \hat {\,\,}\infty _n)_{s} = \min \{ \ell (\beta )_{s}, \ell (\psi (\beta ),n)_{s} \}$ ;
-
•
$\ell (\beta \hat {\,\,}{\texttt {fin}}_n)_{s} = \min \{ \ell (\beta )_{s}, \ell (\lnot \psi (\beta ),n)_{s} \}$ .
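The three clauses amount to a fold along the node. The sketch below assumes an illustrative map `ell_base(outcome, s)` returning the stage-s value of the base sequence attached to a single outcome (that is, of $\bar \ell (\psi (\beta ),n)$ or $\bar \ell (\lnot \psi (\beta ),n)$, depending on the outcome's kind); the names are not the paper's.

```python
def ell_node(beta, ell_base, s):
    """beta is a tuple of outcomes.  ell(lambda)_s = s, and each
    successive outcome can only lower the value; parts (a) and (b)
    of Lemma 2.4 hold directly by this shape."""
    value = s
    for outcome in beta:
        value = min(value, ell_base(outcome, s))
    return value
```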
Thus, via the sequences
$\bar \ell (\gamma )$
, the children
$\gamma $
of
$\beta $
together try to answer the
$\beta $
question.
Lemma 2.4. For all
$\alpha $
:
-
(a) for all s,
$\ell (\alpha )_{s}\leqslant s$ ;
-
(b) if
$\alpha \preccurlyeq \beta $ then
$\ell (\alpha )_{s}\geqslant \ell (\beta )_{s}$ ;
-
(c) if
$\bar \ell (\alpha )$ is unbounded then there is some child
$\beta $ of
$\alpha $ such that
$\bar \ell (\beta )$ is unbounded.
Proof. Mostly immediate; (c) holds because for every
$\alpha $
, one of
$\psi (\alpha )$
and
$\lnot \psi (\alpha )$
holds.
Also, by definition,
$\bar \ell ({\lambda })$
is unbounded. We can therefore define the following.
Definition 2.5. The true path is the path of nodes
$\alpha $
such that
$\bar \ell (\alpha )$
is unbounded, but for every
$\beta <_L\alpha $
,
$\bar \ell (\beta )$
is bounded.
In other words,
${\lambda }$
is on the true path, and if
$\alpha $
is on the true path, then the child of
$\alpha $
on the true path is the leftmost child
$\beta $
for which
$\bar \ell (\beta )$
is unbounded. This is the leftmost child that guesses a correct witness for the
$\Sigma _3$
version of
$\psi (\alpha )$
or its negation. Lemma 2.4(c) implies that the true path is infinite.
2.2.6 Pulling and eliminating.
We will now discuss the heart of the construction: how to decide where balls move, and which are enumerated into M. The following terminology will be useful:
-
• a fin -node is a nonzero node
$\beta $ whose last entry is
${\texttt {fin}}_n$ for some n (that is,
$\beta = (\beta ^-)\hat {\,\,} {\texttt {fin}}_n$ for some n); and similarly,
-
• an
$\infty $ -node is a nonzero node whose last entry is
$\infty _n$ for some n.
Here note that for a nonzero node
$\alpha $
, we let
$\alpha ^-$
denote
$\alpha $
’s parent, the result of removing the last entry of
$\alpha $
.
The next definition will govern the movement and enumeration of balls. We will explain the details after we give the definition. The rough idea, though, is the following. If
$\beta $
is an
$\infty $
-node of length
$e+1$
, then we will ensure that
$Y {(\beta)} \subseteq W_e$
by only allowing elements that have already entered
$W_e$
to enter
$Y {(\beta)} $
. On the other hand, if
$\beta $
is a
${\texttt {fin}}$
-node, then we get
$Y {(\beta)} \cap W_e =^* \emptyset $
“for free”, by the fact that the
$\beta ^-$
question
$\psi (\beta ^-)$
fails. As discussed, in the first case, an
$\infty $
-node
$\beta $
will want to enumerate into M any balls it sees that are outside
$W_{e,s}$
. If
$\beta $
is correct about its guess, then this will not cause
$\mathbb N\subseteq ^* M$
, because
$\psi (\beta ^-)$
holds: sufficiently many balls will not be enumerated into M. The complexity of the definitions comes from the need to deal with nodes to the left of the true path, that have wrong opinions.
For any node
$\beta $
, let
$Y {(<_{L}\!\beta)}_{s} = \bigcup \left \{ Y {(\gamma)}_{s} \,:\, \gamma <_L \beta \right \}$
and let
$Y {(\leqslant \!\beta)}_{s} = \bigcup \left \{ Y {(\gamma)}_{s} \,:\, \gamma \leqslant \beta \right \}$
.
Here recall that
$\gamma \leqslant \beta $
denotes the Kleene–Brouwer ordering, not the lexicographic ordering.
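For concreteness, the Kleene–Brouwer comparison on finite sequences can be sketched as follows, modelling the linearly ordered outcomes at each level by integers (smaller meaning further to the left); this is an illustration, not the paper's notation.

```python
def kb_leq(gamma, beta):
    """gamma <= beta in the Kleene-Brouwer ordering: gamma extends
    beta, or gamma branches strictly left at the first disagreement."""
    for g, b in zip(gamma, beta):
        if g != b:
            return g < b            # leftward branching
    return len(gamma) >= len(beta)  # an extension lies below its prefix
```

So a node is KB-below its parent and below everything to its right, matching "move to the left, or drop from a parent to a child".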
Definition 2.6. A ball x is pullable by a nonzero node
$\beta $
at stage s if:
-
(i)
$x\in Y {(\beta ^-)}_{s}\smallsetminus Y {(\leqslant \!\beta) }_{s}$ ;
-
(ii)
$x> |\beta |$ ;
-
(iii)
$|Y {(\beta)}_{s}| < \ell (\beta )_{s}$ ; and
-
(iv) if
$\beta $ is an
$\infty $ -node then
$x\in W_{|\beta ^-|,s}$ .
A ball x is eliminable by a nonzero node
$\beta $
at stage s if:
-
(v)
$x\in Y {(\beta ^-)}_{s}\smallsetminus Y {(\leqslant \! \beta)}_{s}$ ;
-
(vi)
$x \ne \min Y {(\beta ^-)}_{s}$ ; and
-
(vii)
$x< \ell (\beta )_{s}$ .
Let us explain. (i) and (v) mean that x resides at
$\beta $
’s parent
$\beta ^-$
, or at or below one of the siblings of
$\beta $
that lie to the right of
$\beta $
. These are the regions from which
$\beta $
is allowed to pull. Thus, balls on the machine will only move downward in the Kleene–Brouwer ordering: either to the left, or drop from a parent to a child.
(ii) will help ensure that each ball outside M eventually stops moving: it cannot keep being pulled by longer and longer nodes. (vi) will help show that M is coinfinite. A node
$\beta ^-$
on the true path will guard one ball (the smallest that it can see), and ensure that it is not enumerated into M.
The fact that an
$\infty $
-node
$\beta $
is only allowed to pull balls in
$W_e$
is in conflict with the requirement that
$Y {(\beta)} =^* {M}^\complement $
. It is simple to reconcile the conflict: if at a stage s, x is in a region from which
$\beta $
can pull, and
$x\notin W_{e,s}$
, then
$\beta $
can simply enumerate x into M and remove it from the machine. The danger, of course, is that numbers go into
$W_e$
slowly, and we may be too hasty in enumerating them into M, risking
$M=^*\mathbb N$
. To avoid this, we use the sequences
$\bar \ell (\gamma )$
, which will ensure that each node
$\gamma $
to the left of the true path eventually stops enumerating into M numbers from
${A}^\complement $
. If
$\beta $
is an
$\infty $
-node on the true path, we need to ensure that
$Y {(\beta)} $
is infinite, i.e., that it sees enough numbers in
$W_e$
sufficiently early. We will show that this follows from
$\psi (\beta ^-)$
being true.
Thus, (iii) and (vii) ensure that nodes to the left of the true path eventually stop acting: they stop eliminating or pulling balls
$x\in {A}^\complement $
. (Balls
$x\in A$
are “phantom” balls; the right way to think about them is as if they were never there, and so pulling them does not count as acting.) This uses the fact that if
$\gamma $
lies to the left of the true path, then
$\bar \ell (\gamma )$
is bounded. Of course, these restrictions also restrain the action of a node
$\beta $
on the true path. Here, we note the difference between (iii) and (vii), i.e., the difference between the conditions
$x<\ell (\beta )_{s}$
and
$|Y {(\beta)}_{s}|<\ell (\beta )_{s}$
. The issue is one of timing.
The main step of the verification will be to ensure that if
$\beta $
lies on the true path, then the number of balls in
$Y {(\beta)}$
during A-true stages is unbounded (this is Lemma 2.14 below). Suppose that this is known for
$\beta ^-$
(either by induction, or directly from
$\psi (\beta ^-)$
, if
$\beta $
is an
$\infty $
-node). We need to ensure that if the opportunity arises, in a true stage, then
$\beta $
will act by pulling balls. It could be that at such stages,
$\ell (\beta )_{s}$
is too small, compared to the size of the balls that need pulling. This is why we do not require that
$x< \ell (\beta )_{s}$
when deciding whether to pull x or not; we need to consider the number of balls we currently have, i.e.,
$|Y {(\beta)}_{s}|$
.
On the other hand, there is no reason for haste in eliminating balls; we just need to ensure that in the limit, all but finitely many “deviant” balls do end up in M (these are the balls outside
$W_e$
, when
$\beta $
ensures that
${M}^\complement \subseteq ^* W_e$
). So it is legitimate for us to require the stronger condition
$x< \ell (\beta )_{s}$
when considering elimination. And in fact, it is important that we do so. The reason is a little delicate. Suppose that
$\alpha $
, the node on the true path, is a
${\texttt {fin}}$
-node, but that
$\gamma $
is an
$\infty $
-child of
$\alpha ^-$
that lies to the left of
$\alpha $
. Since
$\gamma $
is allowed to only pull elements of
$W_e$
(again
$e = |\alpha ^-|$
), and
$W_e$
may be small (indeed, may be empty), it may be the case that
$Y {(\gamma)}$
never reaches the size
$\lim _s \ell (\gamma )_{s}$
. Thus,
$\gamma $
is always “hungry” for balls. If we allowed
$\gamma $
to eliminate balls whenever
$|Y {(\gamma)}_{s}| < \ell (\gamma )_{s}$
, it may never stop doing so. For pulling balls, the weaker condition
$|Y {(\gamma)}_{s}| < \ell (\gamma )_{s}$
is sufficient, since if
$\gamma $
keeps pulling balls
$x\in {A}^\complement $
, we will argue that eventually the size of
$Y {(\gamma)}$
reaches
$\lim _s \ell (\gamma )_{s}$
, and
$\gamma $
will stop acting.
We will need the following for the construction.
Lemma 2.7. If x is a ball on the machine at some stage s, and is pullable by some node
$\beta $
at stage s, then there is a
$\leqslant $
-least such
$\beta $
.
Proof. There are only finitely many nodes
$\alpha $
such that
$x\in Y {(\alpha)}_{s}$
, namely, those
$\alpha \preccurlyeq \gamma $
where
$x\in S {(\gamma)}_{s}$
. The ball x is pullable only by children of such nodes
$\alpha $
, and ones which are
$\leqslant \gamma $
. These nodes are well-ordered by the Kleene–Brouwer ordering, with order-type
$\omega $
(finitely many to the left of
$\gamma $
, and the children of
$\gamma $
).
2.3 Construction
At the beginning of stage s, we already have specified
$M_s$
and
$Y {(\alpha)}_{s}$
for all
$\alpha $
. We start with
$M_0 = \emptyset $
, and no ball is on the machine at the beginning of stage
$0$
.
At stage s there are three possibilities.
-
(a) If there is a ball on the machine which is pullable by some node, then for each such ball x, let
$\beta $ be the
$\leqslant $ -least node by which x is pullable; we move x to
$\beta $ (by setting
$x\in S{(\beta)}_{s+1}$ ).Footnote 9
-
(b) If no ball on the machine is pullable by some node, but some ball on the machine is eliminable by some node, then we enumerate each such x into
$M_{s+1}$ and remove it from the machine.
-
(c) If no ball on the machine is either pullable or eliminable, then s is declared to be a new balls stage. Recall that this means that we set
$n(s+1)=n(s)+1$ , i.e., that possibly
$A_{s+1}\ne A_s$ .
-
(i) We let
$M_{s+1} = M_s\cup A_{s+1}$ , and remove any
$x\in A_{s+1}$ from the machine.
-
(ii) We place any
$x\in Q_{s+1}\smallsetminus M_{s+1}$ , which is not already on the machine, at the root of the machine, i.e., we put it into
$S {{(\lambda)}}_{s+1}$ .
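Schematically, one stage of the construction looks as follows. This is a toy sketch: `pullable`, `eliminable`, and `least_puller` are parameters standing in for Definition 2.6 and Lemma 2.7, and the state class keeps only the data needed to illustrate the case split; none of these names are from the paper.

```python
class MachineState:
    """Toy pinball-machine state: balls reside at nodes (tuples);
    M collects balls enumerated out of the machine."""
    def __init__(self, nodes):
        self.nodes = nodes        # finite stand-in for the strategy tree
        self.position = {}        # ball -> node at which it resides
        self.M = set()
        self.new_balls_stages = 0

    def move(self, x, node):
        self.position[x] = node

    def enumerate_into_M(self, x):
        self.M.add(x)
        del self.position[x]      # balls in M leave the machine

    def new_balls_stage(self):
        self.new_balls_stages += 1  # the real construction advances n(s)


def run_stage(state, pullable, eliminable, least_puller):
    """One stage: pulling takes priority over elimination, and a stage
    with no action at all becomes a 'new balls' stage."""
    balls = list(state.position)
    pulled = [x for x in balls if any(pullable(x, b) for b in state.nodes)]
    if pulled:                                     # case (a)
        for x in pulled:
            state.move(x, least_puller(x))
        return "pull"
    elim = [x for x in balls if any(eliminable(x, b) for b in state.nodes)]
    if elim:                                       # case (b)
        for x in elim:
            state.enumerate_into_M(x)
        return "eliminate"
    state.new_balls_stage()                        # case (c)
    return "new balls"
```

Only a stage with no action at all advances the enumeration of A, which is the timing point stressed above.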
2.4 Verification
Lemma 2.8. For all s,
$Y {{(\lambda)}}_{s} = Q_s\smallsetminus M_s$
.
Proof. By induction on s. If
$x\in Y {{(\lambda)}}_{s+1}\smallsetminus Y {{(\lambda)}}_{s}$
then
$x\in Q_{s+1}\smallsetminus M_{s+1}$
; if
$x\in Q_s\smallsetminus Q_{s+1}$
then
$x\in A_{s+1}$
and so
$x\in M_{s+1}$
and so
$x\notin Y {{(\lambda)}}_{s+1}$
.
2.4.1 Finite ball movement.
As mentioned above, we say that x is a permanent resident of a node
$\beta $
if
$x\in S{(\beta)}_{s}$
for all but finitely many stages s. We let
$S{(\beta)}$
be the set of permanent residents of
$\beta $
.
Lemma 2.9. If
$x\in \bigcup _s Y {{(\lambda)}}_{s}$
and
$x\notin M$
then x is a permanent resident of some node.
Proof. Since
$x\notin M$
, for all but finitely many s, we have
$x\in Y {{(\lambda)}}_{s}$
. By (ii) of Definition 2.6, if x resides at some node
$\beta $
at some stage, then
$|\beta |\leqslant x$
. Among such nodes, x only moves downward in the Kleene–Brouwer ordering. Since the subtree of nodes of length
$\leqslant x$
is well-founded (has no infinite path), the Kleene–Brouwer ordering restricted to such nodes is well-founded, and so, x cannot move infinitely often.
Lemma 2.9 implies the following.
Lemma 2.10. There are infinitely many new balls stages.
Thus, as
$n(s)\to \infty $
, every
$x\notin M$
is eventually put on the machine and is never removed from the machine. This shows the following.
Lemma 2.11. Every
$x\notin M$
is a permanent resident of some node.
We let
$Y {(\alpha) } = \lim _s Y {(\alpha)}_{s} = \bigcup _{\beta \succcurlyeq \alpha }S{(\beta)}$
; so
$Y {{(\lambda)}} = {M}^\complement $
.
2.4.2 True path.
Recall that the true path (Definition 2.5) is the collection of nodes
$\alpha $
such that
$\bar \ell (\alpha )$
is unbounded, but for all
$\gamma <_L\alpha $
,
$\bar \ell (\gamma )$
is bounded. We observed that the true path is an infinite path of the tree.
Lemma 2.12. If
$\alpha $
lies on the true path, then there are only finitely many stages at which some node
$\gamma <_L\alpha $
either pulls or eliminates a ball
$x\in {A}^\complement $
.
Proof. This is proved by induction on the length of
$\alpha $
; it is vacuously true for
$\alpha = {\lambda }$
. Suppose that this is known for
$\alpha ^-$
. Let
$l = \sup \left \{ \ell (\beta )_{s} \,:\, s\in \mathbb N \text { and } \beta \text { is a child of } \alpha ^- \text { with } \beta <_L\alpha \right \}$
; this is finite, since each such
$\bar \ell (\beta )$
is bounded, and only finitely many children of
$\alpha ^-$
lie to the left of
$\alpha $
. By Lemma 2.4(b),
$\ell (\gamma )_{s}\leqslant l$
for every
$\gamma <_L\alpha $
extending
$\alpha ^-$
and every stage s.
Then by (vii) of Definition 2.6, no number
$x\geqslant l$
is ever eliminated by any
$\gamma <_L\alpha $
extending
$\alpha ^-$
(regardless of whether it is an element of
${A}^\complement $
or not). By induction (from left to right) on the children
$\beta $
of
$\alpha ^-$
which lie to the left of
$\alpha $
, we show that
$\beta $
pulls only finitely many
$x\in {A}^\complement $
. Suppose that this is true for all
$\gamma <_L\beta $
. Then after some stage, every ball
$x\notin A$
which is pulled by
$\beta $
, is not later pulled by a node to the left of
$\beta $
, and is not eliminated by any descendant of
$\beta $
, and so is an element of
$Y {(\beta)}$
. So if
$\beta $
pulls infinitely many
$x\in {A}^\complement $
, then
$Y(\beta)$
is infinite. But then,
$|Y {(\beta)}_{s}|\geqslant l$
for all but finitely many stages s. By (iii) of Definition 2.6, at such stages,
$\beta $
pulls no balls, a contradiction.
As a result:
Lemma 2.13. If
$\alpha $
lies on the true path, then
$Y {(<_{L}\!\alpha)}$
is finite.
The main lemma is the following.
Lemma 2.14. If
$\alpha $
lies on the true path, then for every k there is some A-true stage s such that
$|Y {(\alpha)}_{s}|\geqslant k$
.
Proof. We prove this by induction on the length of
$\alpha $
. First we consider
$\alpha = {\lambda }$
. If s is a new balls stage and
$s+1$
is A-true, then, as discussed above, at least
$n(s)$
many new balls from
$Q_{s+1}$
are placed in
$S {{(\lambda)}}_{s+1}$
; and
$n(s)\to \infty $
.
Suppose that
$\alpha \ne {\lambda }$
lies on the true path, and that the lemma holds for
$\alpha ^-$
. Let
$s_0$
be a stage after which no
$\gamma <_L\alpha $
pulls or eliminates any ball
$x\in {A}^\complement $
; in particular, no
$\gamma <_L\alpha $
pulls any ball during an A-true stage
$s>s_0$
. Further, if
$s>s_0$
is A-true then
$Y {(<_{L}\!\alpha)}_{s} = Y {(<_{L}\!\alpha)}$
, as all balls on the machine at stage s are from
${A}^\complement $
. Let
$m = |Y {(<_{L}\!\alpha)}|$
, which by Lemma 2.13 is finite.
Let
$k\in \mathbb N$
. There are two cases.
First, suppose that
$\alpha $
is a
${\texttt {fin}}$
-node. By induction, there is an A-true stage
$s>s_0$
for which
$|Y {(\alpha ^-)}_{s}| \geqslant k+m+1$
.
Choose such s sufficiently large so that
$\ell (\alpha )_{s}>k$
. If
$|Y {(\alpha)}_{s}|\geqslant k$
then we are done. Otherwise,
$|Y {(\alpha)}_{s}|<\ell (\alpha )_{s}$
. Every
$x\in Y {(\alpha ^-)}_{s}\smallsetminus Y {(\leqslant \!\alpha)}_{s}$
larger than
$|\alpha |$
is pullable by
$\alpha $
at stage s. Now
$Y {(\alpha ^-)}_{s}$
contains at most one
$x\leqslant |\alpha |$
, namely,
$x= |\alpha |$
(as every
$x\in Y {(\alpha ^-)}_{s}$
has size
$> |\alpha ^-|$
)Footnote
10
; and as discussed,
$|Y {(<_{L}\!\alpha)}_{s}|=m$
. Hence, there remain at least
$k-|Y {(\alpha)}_{s}|$
many balls in
$Y {(\alpha ^-)}_{s}$
that are pullable by
$\alpha $
. Since
$s>s_0$
, and s is A-true, no node to the left of
$\alpha $
pulls such balls at stage s. Hence, at stage s, all balls pullable by
$\alpha $
will actually be pulled by
$\alpha $
, and moved to
$Y {(\alpha)}_{s+1}$
. Also, no balls already in
$Y {(\alpha)}_{s}$
will be pulled to the left. Since balls are pulled at stage s, no balls are eliminated (anywhere on the machine) at stage s—we only eliminate balls if no balls are pullable. So
$Y {(\alpha)}_{s}\subseteq Y {(\alpha)}_{s+1}$
, and overall, we see that
$|Y {(\alpha)}_{s+1}|\geqslant k$
. Since s is not a new balls stage,
$s+1$
is also A-true, and so is as required.
If
$\alpha $
is an
$\infty $
-node, then the argument is the same, except that we need an A-true stage
$s>s_0$
such that
$\ell (\alpha )_{s}>k$
and such that
$|Y {(\alpha ^-)}_{s}\cap W_{|\alpha ^-|,s}| \geqslant k+m+1$
,
since
$\alpha $
is allowed to pull only balls from
$W_{|\alpha ^-|,s}$
. However, since
$\bar \ell (\alpha )$
is unbounded,
$\psi (\alpha ^-)$
holds, which gives us exactly what we need. Note that in this case, we do not need to use the inductive hypothesis on
$\alpha ^-$
, nor is it sufficient for our purposes.Footnote
11
Remark 2.15. During the proof of Lemma 2.14, we relied on our stipulation that we do not eliminate any balls during a stage at which some balls are pullable. So we showed that we can get
$|Y {(\alpha)}_{s+1}|\geqslant k$
, but it is possible that immediately after that, some balls from
$Y {(\alpha)}_{s+1}$
are eliminated, reducing the size of the set. It seems quite silly that Lemma 2.14 would depend on such an unimportant timing trick.
Indeed, it does not. We could allow balls to be pulled and eliminated at the same stage. But then, in order to prove Lemma 2.14, we would need to first show that if more and more balls are supplied to
$Y {(\alpha)}$
, then many of these balls would not be eliminated. We will now do this; it is merely a convenience for our presentation, to delay this part of the argument, instead of essentially proving Lemmas 2.14 and 2.16 together.
2.4.3 M is not everything.
Lemma 2.16. For every
$\alpha $
which lies on the true path,
$Y {(\alpha)} \ne \emptyset $
.
Proof. Let
$\alpha $
be a node which lies on the true path; let
$s_0$
be a stage after which no ball
$x\notin A$
is pulled or eliminated by any node
$\gamma <_L\alpha $
. Let x be the smallest number in
${A}^\complement $
for which there are stages
$t\geqslant s>s_0$
such that:
-
–
$x\in Y {(\alpha)}_{t}$ ;
-
– s is A-true and
$x \leqslant \max Y {(\alpha)}_{s}$ .
Such x exists by Lemma 2.14 (we can take
$t=s$
). Let stages
$t \geqslant s> s_0$
witness x.
By induction on stages
$r\geqslant t$
, we argue that
$x\in Y {(\alpha)}_{r}$
and that
$x = \min Y {(\alpha)}_{r}$
. The minimality of x ensures that
$x = \min Y {(\alpha)}_{t}$
. Let
$r\geqslant t$
, and suppose that
$x = \min Y {(\alpha)}_{r}$
. By Definition 2.6(vi), at stage r, x is not eliminable by any child of
$\alpha $
; since
$r>s_0$
, x is not eliminated by any node at stage r. Similarly, since
$r>s_0$
, x is not pulled by any node to the left of
$\alpha $
at stage r. Hence,
$x\in Y {(\alpha)}_{r+1}$
. Any
$y<x$
on the machine at stage
$r+1$
is an element of
${A}^\complement $
, as s was A-true, and so by minimality of x, will not enter
$Y {(\alpha)}_{r+1}$
.
As a result:
Lemma 2.17. M is coinfinite.
Proof. Since the true path is infinite, for every finite
$D\subseteq {M}^\complement $
there is some
$\alpha $
on the true path such that
$D\cap Y {(\alpha) } = \emptyset $
;
$Y {(\alpha) } \subseteq {M}^\complement $
and so by Lemma 2.16,
${M}^\complement \ne D$
.
2.4.4 Maximality.
Lemma 2.18. For every
$\alpha $
on the true path,
$Y {(\alpha) } =^* {M}^\complement $
.
Proof. We know this holds for
$\alpha = {\lambda }$
, and so it suffices to show that if
$\alpha \ne {\lambda }$
lies on the true path then
$Y {(\alpha) } =^* Y {(\alpha ^-)}$
. By Lemma 2.13,
$Y {(<_{L}\!\alpha)}$
is finite, so it suffices to show that all but finitely many elements of
$Y {(\alpha ^-)}\smallsetminus Y {(<_{L}\!\alpha)}$
are in
$Y {(\alpha) }$
. Let
$x\in Y {(\alpha ^-)}\smallsetminus Y {(<_{L}\!\alpha)}$
, and suppose that
$x> \min Y {(\alpha ^-)}$
.
If s is sufficiently late, then
$\ell (\alpha )_{s}>x$
,
$x\in Y {(\alpha ^-)}_{s}\smallsetminus Y {(<_{L}\!\alpha)}_{s}$
, and
$x>\min Y {(\alpha ^-)}_{s}$
. Then x is eliminable by
$\alpha $
at s. Hence, x must in fact be pulled by
$\alpha $
at stage s.
Remark 2.19. Recall that the reason for eliminating balls is when we want
${M}^\complement \subseteq ^* W_e$
; we then need to enumerate the complement of
$W_e$
into M. However, this was not incorporated into the definition of “eliminable”, and it seems that Lemma 2.18 relies on elements of
$W_e$
being eliminable by
${\texttt {fin}}$
-nodes (when
$|Y {(\alpha)}_{s}|\geqslant \ell (\alpha )_{s}$
). This is not necessary. In Definition 2.6, we could add the clause “
$\beta $
is an
$\infty $
-node and
$x\notin W_{|\beta ^-|,s}$
” to the definition of eliminability.Footnote
12
But then we would need to separate the proof of Lemma 2.18 into cases. If
$\alpha $
is an
$\infty $
-node, then the proof is as above. If it is a
${\texttt {fin}}$
-node, then we argue that as
$\bar \ell (\alpha )$
is unbounded, it will eventually pull balls x as above.
The proof of Theorem 2.1 is concluded with the following lemma.
Lemma 2.20. M is maximal.
Proof. Let
$e\in \mathbb N$
; let
$\alpha $
be the node on the true path of length
$e+1$
.
First, suppose that
$\alpha $
is an
$\infty $
-node. Every ball pulled by
$\alpha $
is already an element of
$W_e$
, so
$Y {(\alpha)}\subseteq W_e$
. By Lemma 2.18,
${M}^\complement \subseteq ^* W_e$
.
Next, suppose that
$\alpha $
is a
${\texttt {fin}}$
-node. Since
$\bar \ell (\alpha )$
is unbounded, we know that
$\psi (\alpha ^-)$
fails: there is some k such that for every A-true stage s,
$|Y {(\alpha ^-)}_{s}\cap W_{e,s}| \leqslant k$
.
Then
$|Y {(\alpha ^-)}\cap W_e| \leqslant k$
. For otherwise, there would be a set
$D\subseteq Y {(\alpha ^-)}\cap W_e$
of size
$k+1$
. But then, for all but finitely many stages s,
$D\subseteq Y {(\alpha ^-)}_{s}\cap W_{e,s}$
, so
$|Y {(\alpha ^-)}_{s}\cap W_{e,s} | \geqslant k+1$
for some A-true stage s, which is not the case. It follows that
${M}^\complement \cap W_e =^*\emptyset $
.
3 Atomless supersets
We modify the construction above to prove Theorem 1.2: every coinfinite
$\text {low}_2$
c.e. set has an atomless, hyperhypersimple superset.
The new ingredient is a process for splitting a stream of balls in two, each infinite outside of A.
3.0.1 Boolean algebras and binary trees.
Recall that we can generate Boolean algebras from trees. If
$T\subseteq 2^{<\omega }$
is a tree, then
$B(T)$
is the quotient of the free Boolean algebra with generators
$\sigma \in T$
, modulo the relations:
-
• every
$\tau \in T$ is the join of its children;
-
• if
$\sigma ,\tau \in T$ are incomparable then
$\sigma \wedge \tau = 0_{B(T)}$ .
Equivalently, we can think of
$B(T)$
as the collection of finite unions of the sets
$[\sigma ]\cap [T]$
, for all
$\sigma \in T$
, ordered by set inclusion; in this version,
$\tau $
is identified with
$[\tau ]\cap [T]$
.
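In this second picture, both defining relations can be verified at any finite depth. The sketch below (illustrative names, not the paper's notation) identifies a node $\sigma$ of the full binary tree with its set of length-k extensions, a finite stand-in for $[\sigma ]\cap [T]$ when $T = 2^{<\omega }$.

```python
from itertools import product

def cone(sigma, k):
    """All binary strings of length k extending sigma (a string)."""
    return {sigma + ''.join(bits)
            for bits in product('01', repeat=k - len(sigma))}
```

Each node is the disjoint union of the cones of its two children, and incomparable nodes have disjoint cones: exactly the two relations above.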
Note that this implies that if
$\tau $
is a leaf (or more generally, if
$\tau $
is not extendible to an infinite path on T), then
$\tau = 0_{B(T)}$
; and that
${\lambda }=1_{B(T)}$
. Every countable Boolean algebra can be presented in such a way (up to isomorphism) by considering a tree of nonzero finite Boolean combinations of any sequence of generators.Footnote
13
For our purposes in this section, we rely on the fact that the atomless Boolean algebra is
$B(2^{<\omega })$
, the one generated by the full binary tree.Footnote
14
Thus, given a coinfinite
$\text {low}_2$
c.e. set A, we will enumerate a superset
$H\supseteq A$
, and construct a tree of sets
$\langle {Z(\rho )\,:\, \rho \in 2^{<\omega }}\rangle $
satisfying:
-
(i)
$Z({\lambda })=^* {H}^\complement $ ;
-
(ii) for each
$\rho $ ,
$Z(\rho )$ is infinite;
-
(iii) for each
$\rho $ ,
$H\cup Z(\rho )$ is c.e.;
-
(iv) for all
$\rho \in 2^{<\omega }$ ,
$Z(\rho \hat {\,\,} 0)$ and
$Z(\rho \hat {\,\,} 1)$ are disjoint, and
$Z(\rho ) =^* Z(\rho \hat {\,\,} 0)\cup Z(\rho \hat {\,\,} 1)$ ;
-
(v) for every c.e. set
$W\supseteq H$ there is a finite set
$D\subset 2^{<\omega }$ such that
$W\smallsetminus H=^*\bigcup \left \{ Z(\rho ) \,:\, \rho \in D \right \}$ .
This suffices: to see that
$\mathcal {L}^*(H)$
is a Boolean algebra, it suffices to show that it is complemented (see [Reference Soare17, Section X.2]). Indeed, if
$W\supseteq H$
is c.e., by (v), let
$D\subseteq 2^{<\omega }$
be a finite set such that
$W\smallsetminus H =^* \bigcup \left \{ Z(\rho ) \,:\, \rho \in D \right \}$
. By (iv), we may assume that all elements of D have the same length, say k. Let
$E = \{0,1\}^k\smallsetminus D$
; then
$H\cup \bigcup \left \{ Z(\rho ) \,:\, \rho \in E \right \}$
is a complement of W in
$\mathcal {L}^*(H)$
; by (iii), it is c.e. Conditions (ii) and (iv) also ensure that
$\mathcal {L}^*(H)$
has no atoms.
As we are using the
$\Delta _3$
machinery, the sets
$Z(\rho )\cup H$
will not be uniformly c.e.; rather, they will be
$=^*$
to sets whose indices we can read off the true path of the construction.
3.1 The setup
We will use the same general mechanism as above. We will have an infinite-branching tree of strategies, that will serve as a pinball machine on which we move balls around. When numbers are enumerated into H, they will be removed from the machine; the complement
${H}^\complement $
will consist of the balls that have permanent residence at some node on the tree.
Toward enumerating the sets
$Z(\rho )$
, each ball x on the machine at some stage s will have a label
$\rho \in 2^{<\omega }$
. As with residence on the tree, the label of a ball
$x\notin H$
will stabilise (indeed, the label can change only when x moves). If the final label of x is
$\rho $
, then we will put x into
$Z(\rho ')$
for all
$\rho '\preccurlyeq \rho $
.
We will use the notation
$S {(\alpha)}_{s}$
,
$Y {(\alpha)}_{s}$
,
$Y {(<_{L}\!\alpha)}_{s}$
etc. as above. We will use similar notation that also specifies labels. Namely, for a node
$\alpha $
on the tree and
$\rho \in 2^{<\omega }$
, we will let
$S {(\alpha ,\rho)}_{s}$
denote the collection of all balls x which at the beginning of stage s, reside at
$\alpha $
and have label
$\rho $
We will let
$Y {(\alpha ,\rho)}_{s} = \bigcup \left \{ S {(\alpha ',\rho ')}_{s} \,:\, \alpha '\succcurlyeq \alpha \text { and } \rho '\succcurlyeq \rho \right \}$
.
That is,
$Y {(\alpha ,\rho)}_{s}$
consists of the balls x which at the beginning of stage s,
-
• reside at some
$\alpha '\succcurlyeq \alpha $ , and
-
• have a label
$\rho '\succcurlyeq \rho $ .
So as discussed, the elements of
$Y {(\alpha ,\rho)}_{s}$
are those elements in
$Y {(\alpha)}_{s}$
that at stage s we intend to put in
$Z(\rho )$
. So at the end of the verification, we will let
$Z(\rho ) = Y {({\lambda },\rho)}$
, the collection of all balls
$x\in Y {{(\lambda)}}$
whose permanent label extends
$\rho $
. As in the previous construction, if
$\alpha $
is on the true path, then
$Y {(\alpha) } =^* {H}^\complement $
, so
$Z(\rho )=^* Y {(\alpha ,\rho)}$
.
3.1.1 The requirements.
In the current construction, we will have two kinds of requirements. The analogues of the maximality requirements from the previous construction are the requirements for meeting (v) above. For each
$\rho \in 2^{<\omega }$
of length e, we will ensure that either
$Z(\rho )\subseteq ^* W_e$
or
$Z(\rho )\cap W_e=^*\emptyset $
. Thus,
${D = \left \{ \rho \in \{0,1\}^e \,:\, Z(\rho )\subseteq ^* W_e \right \}}$
will show that (v) holds for
$W_e$
. The way to meet these requirements will be very similar to the previous construction, except that the
$\alpha $
question will be more complicated; for each
$\rho \in \{0,1\}^e$
, we will need to guess whether we will see enough balls to make
$Z(\rho )\subseteq W_e$
while ensuring that
$Z(\rho )$
is infinite. The mechanism of pulling and eliminating will be used to achieve that, but we will see that the definitions of “pullable” and “eliminable” need to be more complicated.
The other requirements are new: they ensure that (iv) above holds. Namely, for each
$\rho $
, we need to ensure that
$Z(\rho \hat {\,\,} 0)$
and
$Z(\rho \hat {\,\,} 1)$
form a splitting of
$Z(\rho )$
(up to finite differences), into two infinite sets. In the construction, we will have nodes devoted to such a requirement. The task for such a node is to take balls with label
$\rho $
, and decide whether to extend their label to either
$\rho \hat {\,\,} 0$
or
$\rho \hat {\,\,} 1$
. The difficulty, of course, is that during the construction, we do not know whether a ball x is in A or not. It would be very bad if all but finitely many balls x that we direct to
$\rho \hat {\,\,} 0$
, for example, will end up in A, as that would make
$Z(\rho \hat {\,\,} 0)$
finite. Unlike the other kind of requirement,
$\Delta _3$
guessing is not sufficient for this task: we know that there will be infinitely many
$x\notin A$
with label
$\rho $
. We need to somehow obtain individual balls that we have a good reason to guess are not in A.
Our main contribution is precisely this: a new method for certifying that groups of balls are not in A. In fact, as we will explain later, to perform this certification and splitting it will be notationally convenient to have not just one level of nodes devoted to each requirement, but two; nodes and their children together will perform these tasks.
So we will have two kinds of nodes.
-
(i) Decision nodes, of length
$3e$ , whose task, for each
$\rho \in \{0,1\}^e$ , is to decide
$W_e$ on
$Y {(\alpha ,\rho)}$ . We call a node of length
$3e$ an e-decision node.
-
(ii) Splitting nodes and their children, of lengths
$3e+1$ and
$3e+2$ . The children of a decision node
$\alpha $ of length
$3e+1$ will pull balls with labels
$\rho $ of length e, and decide to extend their label to either
$\rho \hat {\,\,} 0$ or
$\rho \hat {\,\,} 1$ . We call a node of length
$3e+1$ a parent e-splitting node, and a node of length
$3e+2$ a child e-splitting node.
3.1.2 True balls rather than true stages.
As above, we will make use of a true-stage enumeration
$(\tilde A_r)$
of A with respect to the function
$f(r)=r^2$
. We will use the same mechanism as above to slow down this enumeration to an enumeration
$(A_s)$
defined during the construction, again letting
$n(s)=r$
when
$A_s = \tilde A_r$
As in the previous construction, if we move or enumerate balls, or perform any other action, at a stage s, then we will set
$A_{s+1}=A_s$
(by setting
$n(s+1)=n(s)$
). When no other action is taken at stage s, we will declare s to be a new balls stage, and set
$n(s+1)=n(s)+1$
.
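The slow-down mechanism can be illustrated by a small sketch (in Python, with names of our own choosing; this is an illustration only, not part of the construction):

```python
def slow_enumeration(tilde_A, actions):
    """Slow down a true-stage enumeration (tilde_A_r) to (A_s):
    n(s+1) = n(s) when an action is taken at stage s,
    and n(s+1) = n(s) + 1 at a new balls stage.
    `tilde_A` is a list of sets; `actions[s]` is True when some
    action (moving or enumerating balls) is taken at stage s."""
    n = 0
    A, ns = [tilde_A[0]], [0]
    for acted in actions:
        if not acted:          # a "new balls stage": advance the fast enumeration
            n += 1
        A.append(tilde_A[n])
        ns.append(n)
    return A, ns
```

Thus, stages at which the machine acts simply repeat the current approximation, while new balls stages step through the true-stage enumeration.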
As in the very first part of the proof of Lemma 2.14, this ensures that infinitely often, the root of the tree receives large collections of balls which are all outside A.
However, unlike the previous construction, there will be an element of delay in ball movement. In order to ensure that both
$Y {(\alpha ,\rho \hat {\,\,} 0)}$
and
$Y {(\alpha ,\rho \hat {\,\,} 1)}$
receive many balls outside A, a child of a splitting node
$\alpha $
will hold on to many balls in
$Y {(\alpha ,\rho)}$
until it receives confirmation from
$\emptyset '$
that these balls are indeed outside A. (Infinitely often, this confirmation will be incorrect, but infinitely often it will be correct, and this will be sufficient for our purposes; we will see that what is important, is that each individual attempt at certification makes only finitely many mistakes.) The stages at which such confirmation is received do not need to line up with the A-true stages. So we will not be able to directly prove Lemma 2.14 for the current construction. Rather, instead of looking for “completely true” stages, at which every ball on the machine is correct, we will simply ensure that we get more and more true balls.
Definition 3.1. A number
$x\in {A}^\complement _s$
is A-true at stage s if
$$A_s\restriction {\,{x+1}} = A\restriction {\,{x+1}}.$$
That is, not only will x not enter A in the future, but no number
$y\leqslant x$
currently outside A will enter A in the future. If we get enough of these, we do not really care that larger balls are not A-true at the same stage.
Definition 3.2. As in the previous construction, we let
$Q_s$
be the set of the
$n(s)^2$
smallest elements of
${A}^\complement _s$
. We let
$C_s$
denote the collection of
$x\in Q_s$
which are A-true at stage s.
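Definitions 3.1 and 3.2 can be made concrete by a toy sketch (Python, simulating only a finite portion of an enumeration; the names `a_true`, `Q`, `C`, and the use of the last set of the list as a stand-in for A are ours):

```python
def a_true(enum, s, x):
    """x is A-true at stage s: x is outside A_s, and no y <= x outside A_s
    later enters A.  `enum` is a nondecreasing list of sets; in this toy
    simulation, enum[-1] plays the role of A."""
    A_s, A = enum[s], enum[-1]
    if x in A_s:
        return False
    # equivalently: A_s and A agree below x+1
    return all((y in A_s) == (y in A) for y in range(x + 1))

def Q(enum, s, n_of_s):
    """The n(s)^2 smallest elements currently outside A_s (Definition 3.2)."""
    out, y = [], 0
    while len(out) < n_of_s ** 2:
        if y not in enum[s]:
            out.append(y)
        y += 1
    return out

def C(enum, s, n_of_s):
    """The elements of Q_s that are A-true at stage s (Definition 3.2)."""
    return [x for x in Q(enum, s, n_of_s) if a_true(enum, s, x)]
```

Note that, as in the definition, a small ball entering A later destroys the A-truth of every larger ball currently outside A.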
3.1.3 The questions and the tree.
A decision node
$\alpha $
needs to decide, for each
$\rho $
of length e, whether to make
$Z(\rho )$
almost contained in
$W_e$
or almost disjoint from
$W_e$
. This means that the
$\alpha $
question is more complicated. Let
$\alpha $
be an e-decision node. For each set
$D\subseteq \{0,1\}^e$
, the statement
$\psi (\alpha ,D)$
says:
D is the set of
$\rho \in \{0,1\}^{e}$
for which for every k there is some s such that
$$|Y {(\alpha ,\rho)}_{s}\cap W_{e,s}\cap C_s|\geqslant k.$$
The outcomes of
$\alpha $
are
$$D_n$$
for each
$D\subseteq \{0,1\}^{e}$
and each
$n\in \mathbb N$
, ordered in order-type
$\omega $
in some way. Note that
$\psi (\alpha ,D)$
will hold for precisely one
$D\subseteq \{0,1\}^e$
, and we will ensure that
$Z(\rho )\subseteq ^* W_e$
if and only if
$\rho \in D$
(and otherwise,
$Z(\rho )\cap W_e =^*\emptyset $
).
Since each
$\psi (\alpha ,D)$
is a finite Boolean combination of
$\Pi _2(A)$
statements, it is equivalent to a
$\Sigma _3$
statement, so as in the previous construction, we obtain uniformly computable, nondecreasing sequences
$\bar \ell (\psi (\alpha ,D),n)$
, such that
$\psi (\alpha ,D)$
holds if and only if for some n,
$\bar \ell (\psi (\alpha ,D),n)$
is unbounded. As above, we use these to define sequences
$\bar \ell (\alpha )$
for nodes
$\alpha $
; these will only be used for nodes
$\alpha $
that are the children of decision nodes. So we recursively define:
-
• If
$\alpha = (\alpha ^-)\hat {\,\,} D_n$ is the child of a decision node
$\alpha ^-$ , then we let
$\ell (\alpha )_{s}$ be the minimum of
$\ell (\psi (\alpha ^-,D),n)_{s}$ , and
$\ell (\beta )_{s}$ for any
$\beta \prec \alpha $ that is also the child of a decision node.
It is more difficult to explain now why we need both parent and child splitting nodes. If
$\alpha $
is a parent e-splitting node, then its children are
$\alpha \hat {\,\,} n$
for all
$n\in \mathbb N$
, ordered naturally. If
$\beta $
is such a child, then
$\beta $
has a unique child
$\beta ^+$
on the tree (which will in turn be an
$e+1$
-decision node). The outcomes n of
$\alpha $
do not quite represent guesses about the behaviour of
$\alpha $
; we will discuss them later.
3.1.4 Updating previous definitions.
We update notation from the previous construction. For a node
$\alpha $
and
$\rho \in 2^{<\omega }$
, and a stage s, we let:
-
•
$Y {(<_{L}\!\alpha ,\rho)}_{s}$ be the collection of balls that at the beginning of stage s, reside at some node
$\beta $ that lies to the left of
$\alpha $ , and have label extending
$\rho $ . That is,
$Y {(<_{L}\!\alpha ,\rho)}_{s} = \bigcup _{\beta <_L\alpha } Y {(\beta ,\rho)}_{s} = Y {(<_{L}\!\alpha)}_{s}\cap Y {({\lambda },\rho)}_{s}$ ;
-
•
$Y {(\leqslant \!\alpha ,\rho)}_{s} = Y {(<_{L}\!\alpha ,\rho)}_{s}\cup Y {(\alpha ,\rho)}_{s}$ .
Definition 3.3. Let
$\alpha = (\alpha ^-)\hat {\,\,} D_n$
be a child of an e-decision node, and let
${\rho \in \{0,1\}^{e}}$
.
A ball x is pullable by
$(\alpha ,\rho )$
at stage s if the following all hold:
-
(i)
$x\in Y {(\alpha ^-,\rho)}_{s}\smallsetminus Y {(\leqslant \! \alpha ,\rho)}_{s}$ ;
-
(ii)
$x> |\alpha |$ ;
-
(iii) either:
-
•
$|Y {(\alpha ,\rho)}_{s}| < \ell (\alpha )_{s}$ , or
-
•
$Y {(\alpha ,\rho)}_{s}\ne \emptyset $ and
$x< \max Y {(\alpha ,\rho)}_{s}$ ;
-
-
(iv) if
$\rho \in D$ then
$x\in W_{e,s}$ .
A ball x is eliminable by
$(\alpha ,\rho )$
at stage s if:
-
(v)
$x\in Y {(\alpha ^-,\rho)}_{s}\smallsetminus Y {(\leqslant \!\alpha ,\rho)}_{s}$ ;
-
(vi)
$x\ne \min Y {(\alpha ^-,\rho)}_{s}$ ; and
-
(vii)
$x< \ell (\alpha )_{s}$ .
We need to comment on the new part of this definition, namely, the second part of (iii). The issue is that, as mentioned above, we will not be able to directly show that
$Y {(\alpha)}_{s}$
is “full” during A-true stages (in the sense of Lemma 2.14), only that it gets “filled” by A-true balls (see Proposition 3.13(b) below). Consider a node
$\alpha $
on the true path (a child of a decision node). At a stage s, the parent
$\alpha ^-$
may have many A-true balls. At the same stage,
$\alpha $
already has larger balls and so does not feel “hungry” (
$|Y {(\alpha ,\rho)}_{s}| \geqslant \ell (\alpha )_{s}$
). But because s is not an A-true stage, the balls that
$\alpha $
has may be false. The problem is solved by allowing
$\alpha $
to pull balls smaller than ones it already has, even if it does not feel “hungry”. This modification is sufficiently tame that nodes to the left of the true path will again pull only finitely many balls from
${A}^\complement $
.Footnote
15
3.1.5 Certification.
Since A is
$\text {low}_2$
and c.e., some
$\emptyset '$
-computable function dominates all A-computable functions (this follows from Martin’s characterisation [Reference Martin10] of the high degrees as those that compute functions that dominate all computable functions).
Definition 3.4. We fix
$\varphi $
to be a
$\emptyset '$
-computable function that dominates all A-computable functions. We also fix a computable approximation
$(\varphi _s) = (\varphi _0,\varphi _1,\dots )$
of
$\varphi $
.
Let
$\alpha $
be a parent e-splitting node, and fix
$\rho \in \{0,1\}^e$
. The very rough idea of splitting is that
$\alpha $
will define blocks of balls, and hold on to these balls until
$\varphi $
gives confirmation to release these balls to nodes below. The
$k{}^{\text {th}}$
block will contain at least
$2k$
many balls. For now, let us forget about
$\alpha $
’s children, and imagine that released balls are passed to the next splitting node (if
$\alpha $
did not have multiple children then there would be a unique
$(e+1)$
-splitting node extending
$\alpha $
). When the
$k{}^{\text {th}}$
block of balls is certified by
$\varphi $
, then k of the balls in that block will receive the label
$\rho \hat {\,\,} 0$
, and the rest, the label
$\rho \hat {\,\,} 1$
.
For this purpose, we will define an A-computable function
$f^{\alpha ,\rho }$
. An input
$k\in \mathbb N$
for
$f^{\alpha ,\rho }$
indicates an attempt to capture and test the
$k{}^{\text {th}}$
block of balls. Suppose that at some stage r, for all
$m<k$
, the
$m{}^{\text {th}}$
-block of balls is currently defined, and that
$\alpha $
holds
$2k$
many balls with label
$\rho $
(that are not associated with any existing block). We would then declare these balls to constitute the
$k{}^{\text {th}}$
block, and define
$f^{\alpha ,\rho }_{r+1}(k)> \varphi _{r}(k)$
. The
$A_{r+1}$
-use
$u = u^{\alpha ,\rho }_{r+1}(k)$
of this computation will bound at least
$2k$
-many elements of the block.
We then wait for a stage
$s>r$
at which one of two things happens. The first is that
${A_s\restriction {\,{u}}\ne A_{r+1}\restriction {\,{u}}}$
, in which case
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\uparrow }}$
. This A-change likely involves some elements of the block entering A. We will then wait for
$\alpha $
to obtain new balls, that will allow us to redefine a new
$k{}^{\text {th}}$
block.
Otherwise, we hope to see
$\varphi _s(k)> f^{\alpha ,\rho }_s(k) = f^{\alpha ,\rho }_{r+1}(k)$
. This means that
${\varphi _s(k)> \varphi _r(k)}$
. We will regard this as evidence that elements from the block will not later enter A, and say that the block is certified. We then process the block as described above, splitting it evenly between
$\rho \hat {\,\,} 0$
and
$\rho \hat {\,\,} 1$
, and passing the balls in the block to the next node below.
Of course, it is possible that neither of these events happens: the computation
$f^{\alpha ,\rho }_{r+1}(k)$
is never undefined, but
$\varphi (k)$
never exceeds the value of that computation. Therefore, while we wait, we will try to define the
$(k+1){}^{\text {th}}$
-block, and so on.
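Abstracting away the tree machinery, the bookkeeping for a single pair of a node and a label can be sketched as follows (a toy model in Python; all names here are ours, for illustration only):

```python
# Toy model of one certification loop.  `state` maps k to the stored value of
# the computation f(k); `phi_s_k` is the current approximation phi_s(k).

def step_define(state, k, phi_s_k):
    """Define f(k) above the current guess phi_s(k), as in the construction:
    f(k) := phi_s(k) + 1."""
    state[k] = phi_s_k + 1
    return state

def step_injury(state, k):
    """An A-change below the use destroys the computation: f(k) is undefined."""
    state.pop(k, None)
    return state

def certified(state, k, phi_s_k):
    """The k-th block is certified once phi_s(k) strictly exceeds the stored
    value f(k); since f(k) was set above the old approximation, this forces
    phi_s(k) to have increased in the meantime."""
    return k in state and phi_s_k > state[k]
```

In the real construction, defining f(k) is tied to capturing a block of at least 2k balls, and an injury to f(k) dissolves the corresponding block.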
The argument that this works will be by contradiction. We want to argue that infinitely many blocks will be released, and infinitely many of those will be A-correct, i.e., only contain balls from
${A}^\complement $
. Assuming this is not the case, we will want to show that
$f^{\alpha ,\rho }$
is a total function. If this is shown, then we obtain the desired contradiction from the fact that
$\varphi $
dominates
$f^{\alpha ,\rho }$
, showing that for almost all k, the correct
$k{}^{\text {th}}$
block will in fact be certified and released. To show that
$f^{\alpha ,\rho }$
is total (under the assumption for contradiction), we use the fact that
$\varphi _s(k)$
changes only finitely many times. This implies that the
$k{}^{\text {th}}$
block will be released only finitely many times, and so eventually, an A-correct
$k{}^{\text {th}}$
block will be defined, showing that
$f^{\alpha ,\rho }(k)\operatorname {\mathrm {\downarrow }}$
.
Thus, the point of the certification process is the delay it imposes on the release of balls. If we did not wait for certification, then it is possible that we would keep redefining and releasing some
$k{}^{\text {th}}$
block, none of whose versions is A-correct. It would then be possible that of the blocks that we release, all but finitely many of the balls that we target to
$\rho \hat {\,\,} 0$
, say, end up in A, resulting in
$Z(\rho \hat {\,\,} 1)=^*Z(\rho )$
, so the splitting requirement is not met.
We now come to the need for multiple children. The reason is delicate. We want to argue that an A-correct definition of some
$f^{\alpha ,\rho }(k)$
will eventually be made. Imagine the following sequence of events:
-
(1)
$f^{\alpha ,\rho }(k)$ is defined at some stage.
-
(2) While waiting for certification, we also define
$f^{\alpha ,\rho }(k+1)$ (after all, we do not know where
$\varphi $ starts dominating).
-
(3) The
$(k+1){}^{\text {th}}$ block is certified and released, but not the
$k{}^{\text {th}}$ one.
-
(4) Some x in the
$k{}^{\text {th}}$ block enters A, making both
$f^{\alpha ,\rho }(k)$ and
$f^{\alpha ,\rho }(k+1)$ undefined.
Now, all we know, by induction, is that an analogue of Lemma 2.14 holds for
$\alpha $
: for all m, there is some stage s at which
$C_s\cap Y {(\alpha ,\rho)}_{s}$
has size at least m. If s is such a stage after these events unfolded, then it is possible that the bulk of
$C_s\cap Y {(\alpha ,\rho)}_{s}$
is actually lying well below
$\alpha $
—it is part of the
$(k+1)$
-block that was released at step (3) above. Since these balls are not currently residing at
$\alpha $
, we cannot use them to define a new
$k{}^{\text {th}}$
-block. Unless…we pull them back up to
$\alpha $
from wherever they are currently residing.
Now, in all similar constructions, pulling balls from below is a source of innumerable problems. Just as a simple example, it becomes much more difficult to argue that balls outside H will eventually reach a permanent residence, since pulling balls back up is increasing in the Kleene–Brouwer ordering. More seriously, it will be difficult to show that the sets
$H\cup Y(\gamma ,\rho )$
, for
$\gamma $
on the true path, are c.e. So we will not do this. A work-around is to allow
$\alpha $
to have infinitely many children
$\alpha \hat {\,\,} n$
. While waiting for certification, each block will “reside” at one of the children. Indeed the block waiting at a child
$\beta $
will simply be
$S {(\beta ,\rho)}_{s}$
.
While the
$k{}^{\text {th}}$
block is waiting at
$\alpha \hat {\,\,} n$
, all blocks defined later will reside at children of
$\alpha $
that lie to the right of
$\alpha \hat {\,\,} n$
, namely, children
$\alpha \hat {\,\,} m$
for various
$m>n$
. When the
$k{}^{\text {th}}$
block is dissolved (at step (4) above), it is now fine for
$\alpha \hat {\,\,} n$
to pull all balls from the right, even those that were released to lower nodes, and use them to constitute a new
$k{}^{\text {th}}$
block.
One can think of
$\alpha \hat {\,\,} n$
as representing the guess that
$\varphi $
starts majorising
$f^{\alpha ,\rho }$
from input n. This is not exactly right but the intuition is not far off.
Another point to make is that for distinct
$\rho $
, the function
$\varphi $
may start dominating the various functions
$f^{\alpha ,\rho }$
at different locations. The argument above will show that for each
$\rho \in \{0,1\}^e$
, there is some child
$\beta $
that releases
$\rho $
-blocks infinitely often. However, to obtain a child of
$\alpha $
on the true path, we need the same
$\beta $
to work for all
$\rho $
. Thus, in the definition of certification below, we will let a child
$\beta $
hold on to its blocks, until all of them (one for each
$\rho $
) are certified, and only then will it release them all.
We can now give the details. Let
$\alpha $
be a parent e-splitting node; let
$\rho \in \{0,1\}^e$
. As discussed, we will define a function
$f^{\alpha ,\rho }$
, with intended oracle A. So at various stages s, for various k, we may have
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
or not, and if it is defined, then we will declare a use
$u = u^{\alpha ,\rho }_s(k)$
for this computation. The usual rule applies: if
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
, with use u, and
$A_s\restriction {\,{u}} =A_{s+1}\restriction {\,{u}}$
, then
$f^{\alpha ,\rho }_{s+1}(k)\operatorname {\mathrm {\downarrow }} = f^{\alpha ,\rho }_s(k)$
with the same use. If, on the other hand,
$A_s\restriction {\,{u}} \ne A_{s+1}\restriction {\,{u}}$
, then we declare that
$f^{\alpha ,\rho }_{s+1}(k)\operatorname {\mathrm {\uparrow }}$
. Note that this will only happen if s is a new balls stage, and during such stages, we do not define new computations
$f^{\alpha ,\rho }_{s+1}(k)$
.
We will define
$f^{\alpha ,\rho }_s(k)$
only for
$k\geqslant 1$
, since there is no point in dealing with a block of size 0.Footnote
16
We will ensure that if
$k\geqslant 1$
and
$f^{\alpha ,\rho }_s(k+1)\operatorname {\mathrm {\downarrow }}$
then
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
as well, that is, the domain of
$f^{\alpha ,\rho }_s$
is a (finite) initial segment of
$\mathbb N\smallsetminus \{0\}$
(see Lemma 3.15 below).
When we define a computation
$f^{\alpha ,\rho }_r(k)$
(at some stage
$r-1$
), then we will declare that the
$k{}^{\text {th}}$
block is currently waiting at some child
$\beta $
of
$\alpha $
; we will record this by setting
$k^{\rho }_s(\beta ) = k$
(note that
$\alpha $
is determined by
$\beta $
). This notation implies that at most one
$\rho $
-block is waiting at
$\beta $
at a given stage.
If
$k^{\rho }_s(\beta )\operatorname {\mathrm {\downarrow }}=k$
then
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
. Thus, if
$k^{\rho }_s(\beta )\operatorname {\mathrm {\downarrow }}=k$
but
$f^{\alpha ,\rho }_{s+1}(k)\operatorname {\mathrm {\uparrow }}$
then we declare that
$k^{\rho }_{s+1}(\beta )\operatorname {\mathrm {\uparrow }}$
. That is, if the
$k{}^{\text {th}}$
block is waiting at
$\beta $
at (the beginning of) stage s, but is dissolved when passing from stage s to stage
$s+1$
, then no block is waiting at
$\beta $
at stage
$s+1$
.
If
$\beta $
releases balls at stage s, then we will also set
$k^{\rho }_{s+1}(\beta )\operatorname {\mathrm {\uparrow }}$
, even though
$f^{\alpha ,\rho }_{s+1}(k)\operatorname {\mathrm {\downarrow }}$
; at stage
$s+1$
, the block does not wait at
$\beta $
anymore.
We will ensure that if
$k^{\rho }_s(\beta )\operatorname {\mathrm {\downarrow }}$
, then for every child
$\gamma $
of
$\alpha $
that lies to the left of
$\beta $
, we have
$k^{\rho }_s(\gamma )\operatorname {\mathrm {\downarrow }}$
as well (and
$k^{\rho }_s(\gamma )<k^{\rho }_s(\beta )$
). Again, see Lemma 3.15.
Definition 3.5. Let
$\alpha $
be a parent e-splitting node, let
$\rho \in \{0,1\}^{e}$
, and let
$\beta $
be a child of
$\alpha $
. Let s be a stage.
-
(a) A ball x is pullable by
$(\beta ,\rho )$ at stage s if:
-
(i)
$x\in Y {(\alpha ,\rho)}_{s}\smallsetminus Y {(\leqslant \!\beta ,\rho)}_{s}$ ; and
-
(ii)
$k^\rho _s(\beta )\operatorname {\mathrm {\uparrow }}$ (that is, no
$\rho $ -block is waiting at
$\beta $ at stage s).
-
-
(b) Let
$k\geqslant 1$ . We say that
$(\beta ,\rho ,k)$ is ready for definition at stage s if:
-
(iii)
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\uparrow }}$ , and
$f^{\alpha ,\rho }_s(k-1)\operatorname {\mathrm {\downarrow }}$ ;
-
(iv)
$k^{\rho }_s(\beta )\operatorname {\mathrm {\uparrow }}$ ; and
-
(v) letting
$v = u^{\alpha ,\rho }_s(k-1)$ if
$k>1$ ,
$v=0$ otherwise,
$S {(\beta ,\rho)}_{s}$ contains at least
$2k$ many balls greater than v.
-
-
(c) We say that
$\beta $ is certified at stage s, if for all
$\rho \in \{0,1\}^e$ :
-
(vi)
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$ ; and
-
(vii)
$\varphi _s(k)> f^{\alpha ,\rho }_s(k)$ , where
$k=k^\rho _s(\beta )$ .
-
Lemma 3.6. Suppose that x is pullable by some node at stage s. Then:
-
(a) For each node
$\beta $ , there is at most one label
$\rho $ such that x is pullable by
$(\beta ,\rho )$ .
-
(b) There is a
$\leqslant $ -least node
$\beta $ such that x is pullable by
$(\beta ,\rho )$ for some
$\rho $ .
Proof. For (a), suppose that x is pullable by
$(\beta ,\rho )$
. Then either
$\beta $
is a child of an e-decision node or a child e-splitting node,
$|\rho |=e$
, and
$x\in Y {(\beta ^-,\rho)}$
. So
$\rho $
is the initial segment of length e of x’s label at stage s.
(b) is as in Lemma 2.7.
Lemma 3.7. Let
$\alpha $
be a parent e-splitting node, and let
$\rho \in \{0,1\}^e$
. Let s be a stage. Suppose that no balls are pullable by any node at stage s. Then there is at most one child
$\beta $
of
$\alpha $
and one k such that
$(\beta ,\rho ,k)$
is ready for definition at stage s.
Proof. The number k is unique by (iii) of Definition 3.5. Say
$\beta <\beta '$
are children of
$\alpha $
. If
$k^{\rho }_s(\beta )\operatorname {\mathrm {\uparrow }}$
, then every
$x\in Y {(\beta ',\rho)}_{s}$
is pullable by
$(\beta ,\rho )$
; so by assumption on s,
$Y {(\beta ',\rho)}_{s} =\emptyset $
, whence
$(\beta ',\rho ,k)$
cannot be ready for definition at s by (v).
3.2 Construction
We start with
$H_0 =\emptyset $
and no ball on the machine.
At stage s we operate according to the first case that applies.
-
(a) There is a ball on the machine which is pullable by some
$(\beta ,\rho )$ . For each such ball x, let
$\beta $ be the
$\leqslant $ -least such. We move x to reside at
$\beta $ at stage
$s+1$ , and we set x’s label at stage
$s+1$ to be
$\rho $ .
-
(b) There is a ball x which is eliminable by some pair
$(\beta ,\rho )$ . We enumerate all such x into
$H_{s+1}$ and remove them from the machine.
-
(c) There is some child e-splitting node that is certified at stage s. In this case, for each such
$\beta $ for which there is no
$\gamma <_L\beta $ that is also certified at s, for each
$\rho \in \{0,1\}^e$ , letting
$k = k^\rho _s(\beta )$ , we:
-
• move all
$x\in S {(\beta ,\rho)}_{s}$ to the unique child
$\beta ^+$ of
$\beta $ ;
-
• for all children
$\gamma \geqslant \beta $ of
$\alpha $ , we set
$k^{\rho }_{s+1}(\gamma )\operatorname {\mathrm {\uparrow }}$ ;
-
• for the k smallest balls
$x\in S {(\beta ,\rho)}_{s}$ , we change their label to
$\rho \hat {\,\,} 0$ ; for all other balls just moved to
$\beta ^+$ , we change the label to
$\rho \hat {\,\,} 1$ .
-
-
(d) There is some child e-splitting node
$\beta $ , some
$\rho \in \{0,1\}^e$ , and some
$k\geqslant 1$ , such that
$(\beta ,\rho ,k)$ is ready for definition at stage s. For each such pair
$(\beta ,\rho )$ , for the unique such k, letting
$\alpha = \beta ^-$ , we define
$$\begin{align*}f^{\alpha,\rho}_{s+1}(k) = \varphi_s(k)+1. \end{align*}$$
Let $v = u^{\alpha ,\rho }_s(k-1)$ if
$k>1$ , and
$v=0$ otherwise. Let
$x_1,x_2,\dots ,$ enumerate (in order) the elements of
$S {(\beta ,\rho)}_{s}$ that are greater than v. We let the
$A_{s}$ -use
$u^{\alpha ,\rho }_{s+1}(k)$ of the new computation be
$x_{2k}+1$ . We set
$k^{\rho }_{s+1}(\beta ) = k$ .
-
(e) If no case above applies, then s is declared to be a new balls stage.
-
(i) We let
$H_{s+1} = H_s\cup A_{s+1}$ , and remove any
$x\in A_{s+1}$ from the machine.
-
(ii) We place any
$x\in Q_{s+1}\smallsetminus H_{s+1}$ not already on the machine at the root of the machine, and give it the empty string as a label.
-
(iii) For any child e-splitting node
$\beta $ , a child of some
$\alpha $ , if
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }} = k$ but
$f^{\alpha ,\rho }_{s+1}(k)\operatorname {\mathrm {\uparrow }}$ , then
$k^\rho _{s+1}(\beta )\operatorname {\mathrm {\uparrow }}$ .
-
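Schematically, the priority ordering of cases (a)–(e) amounts to the following dispatch loop (a Python sketch with stub methods; the names are ours and the case bodies are elided):

```python
# Schematic skeleton of the stage-by-stage dispatch: at each stage we act
# according to the first case (a)-(e) whose precondition holds.

def stage(machine):
    for applies, act in [
        (machine.has_pullable,   machine.pull),       # case (a)
        (machine.has_eliminable, machine.eliminate),  # case (b)
        (machine.has_certified,  machine.release),    # case (c)
        (machine.has_ready,      machine.define_f),   # case (d)
    ]:
        if applies():
            act()
            return False          # an action was taken: n(s+1) = n(s)
    machine.new_balls()           # case (e): s is a new balls stage
    return True                   # n(s+1) = n(s) + 1
```

The returned flag reflects the slow-down rule: only stages at which no other action is taken advance the true-stage enumeration.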
3.3 Verification
We start with some observations on ball movements and labels, that follow from the instructions.
Lemma 3.8. Let
$\alpha $
be a node, s be a stage, and suppose that
$x\in S {(\alpha)}_{s}$
.
-
(a) If
$t>s$ and
$x\in S{(\beta)}_{t}$ , then
$\beta \leqslant \alpha $ (in the Kleene–Brouwer ordering).
-
(b) If
$t>s$ and
$x\in S {(\alpha)}_{t}$ (that is, x has not moved between stages s and t), then x’s label at stage s is the same as x’s label at stage t.
-
(c) If
$\alpha $ is an e-decision or e-splitting node (parent or child), then x’s label at stage s has length e.
-
(d) If
$t>s$ and
$x\in Y {(\alpha)}_{t}$ then x’s label at stage t extends x’s label at stage s.
The proof of Lemma 2.8 gives the following.
Lemma 3.9. For all s,
$Y {{(\lambda)}}_{s} = Q_s\smallsetminus H_s$
.
Lemma 3.10. If a ball x resides at a node
$\alpha $
at some stage, then
$|\alpha | \leqslant x+2$
.
Proof. If
$\alpha $
is the root then this is immediate. Otherwise, let
$\eta \preccurlyeq \alpha $
be the longest node which is the child of a decision node. Each node only receives balls that have already passed through their parent. Hence, there is a stage at which x was pulled by
$\eta $
. By Definition 3.3(ii),
$x>|\eta |$
; and
$|\alpha |\leqslant |\eta |+2$
.
By Lemmas 3.8(a) and 3.10, as in the proof of Lemma 2.9, we get the following.
Lemma 3.11. If
$x\in \bigcup _s Y {{(\lambda)}}_{s}$
and
$x\notin H$
then x is a permanent resident of some node.
Lemma 3.12. There are infinitely many new balls stages.
Proof. The point is that if not, then case (d) of the construction (defining new blocks) can happen at most finitely often. In more detail: suppose, for a contradiction, that
$s_0$
is the last new balls stage. There is some stage
$s_1>s_0$
after which no balls are moved on the machine, or eliminated from the machine. If a new computation
$f^{\alpha ,\rho }_{s+1}(k)$
is defined at some stage
$s>s_1$
, then this computation is never made undefined. For every child
$\beta $
of
$\alpha $
, for all
$t>s$
, if
$S {(\beta ,\rho)}_{t}\ne \emptyset $
then
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}$
(at stage s, the fact that a new computation is defined implies that there are no children of
$\alpha $
which have balls, but insufficiently many to form a block: such balls would be pulled away earlier than stage s). Hence, no further values of
$f^{\alpha ,\rho }$
will be defined after stage s.
Since every element of
${A}^\complement $
is added to the root of the tree at some stage, it follows that every
$x\in {H}^\complement $
is a permanent resident of some node.
3.3.1 Defining the true path.
We define the true path inductively. The root
${\lambda }$
is declared to lie on the true path. Let
$\alpha $
be a node on the true path.
-
• If
$\alpha $ is a decision node, then the leftmost child
$\beta $ of
$\alpha $ with
$\bar \ell (\beta )$ unbounded is on the true path.
-
• If
$\alpha $ is a parent splitting node, then the leftmost child
$\beta $ of
$\alpha $ that releases balls infinitely often, lies on the true path.
-
• If
$\alpha $ is a child splitting node, then its unique child is also on the true path.
As above, if
$\alpha $
is a decision node, then as one of the statements
$\psi (\alpha ,D)$
is true, one of
$\alpha $
’s children will lie on the true path. On the other hand, we will need to work to show that if
$\alpha $
is a parent splitting node on the true path, then one of its children lies on the true path. For now, we do not know that the true path is infinite.
In fact, to show that a parent splitting node
$\alpha $
which lies on the true path has a child on the true path, we will need to already know that the analogues of Lemmas 2.12 and 2.14 hold for
$\alpha $
. Thus, we will need to prove both by simultaneous induction.
Recall (Definition 3.2) that we let
$C_s$ denote
the collection of balls
$x\in Q_s$
which are A-true at stage s.
Proposition 3.13. Suppose that a node
$\alpha $
lies on the true path.
-
(a) There is a stage after which for every node
$\gamma $ that lies to the left of
$\alpha $ ,
$\gamma $ does not eliminate any balls from
${A}^\complement $ , nor is any ball from
${A}^\complement $ moved to
$\gamma $ .Footnote 17
-
(b) Suppose that
$\alpha $ is an e-splitting or decision node. Then for all
$\rho \in \{0,1\}^e$ , for every k, there is some stage s such that
$|Y {(\alpha ,\rho)}_s\cap C_s|\geqslant k$ .
The proof of Proposition 3.13 will take some work. As discussed, we prove it by induction on the length of
$\alpha $
. For now, we note:
Lemma 3.14. Let
$\alpha $
be a node; suppose that there is a stage after which no ball is moved to a node
$\gamma <_L\alpha $
. Then
$Y {(<_{L}\!\alpha)}$
is finite.
Proof. Every ball in
$Y {(<_{L}\!\alpha)}$
is in
${A}^\complement $
, and is at some stage moved to a node
$\gamma $
that lies to the left of
$\alpha $
. By assumption, there are only finitely many such stages, and at each stage, only finitely many balls are on the machine.
3.3.2 Case I: The root
We start the inductive verification of Proposition 3.13 by verifying that it holds for the root. (a) is vacuous in this case; (b) is as in Lemma 2.14. We note that this is the only part of the proof that relies on our using a true-stage enumeration of A.
3.3.3 Case II: Decision nodes
Suppose that
$\alpha $
is an e-decision node that lies on the true path, and suppose that Proposition 3.13 holds for
$\alpha $
. Let
$\beta $
be
$\alpha $
’s child that lies on the true path; we show that Proposition 3.13 holds for
$\beta $
as well. Let
$D\subseteq \{0,1\}^e$
such that
$\beta = \alpha \hat {\,\,} D_n$
for some n. That is,
$\rho \in D$
if and only if for all k there is some s such that
$|Y {(\alpha ,\rho)}_{s}\cap W_{e,s}\cap C_s|\geqslant k$
.
To show that (a) holds for
$\beta $
, we only need a minor modification of the proof of Lemma 2.12, reflecting the new second part of Definition 3.3(iii).
Let l be the maximum of
$\lim _s \ell (\gamma )_s$
, where
$\gamma $
is a child of
$\alpha $
that lies to the left of
$\beta $
. Again, for any such
$\gamma $
and any
$\varepsilon \succcurlyeq \gamma $
that is a child of some decision node,
$\lim _s \ell (\varepsilon )_s \leqslant l$
. By Definition 3.3(vii), no such
$\varepsilon $
ever eliminates any
$x\geqslant l$
.
Let
$\gamma $
be a child of
$\alpha $
that lies to the left of
$\beta $
, and suppose (by induction) that every child
$\delta $
of
$\alpha $
that lies to the left of
$\gamma $
eventually stops pulling any
$x\in {A}^\complement $
(and so, from some stage, no ball from
${A}^\complement $
is moved to any node that lies to the left of
$\gamma $
).
Let
$\rho \in \{0,1\}^e$
. For a contradiction, suppose that
$(\gamma ,\rho )$
pulls infinitely many balls from
${A}^\complement $
. By our assumption, almost all such balls remain in
$Y {(\gamma)}$
. By Lemma 3.8(d), the balls pulled by
$(\gamma ,\rho )$
that remain in
$Y {(\gamma)}$
also remain in
$Y {(\gamma ,\rho)}$
. Let s be a late stage at which
$Y {(\gamma ,\rho)}_{s}$
contains at least l many elements of
${A}^\complement $
; these balls will not be eliminated, so for all
$t\geqslant s$
,
$|Y {(\gamma ,\rho)}_{t}|\geqslant l$
. Let
$a = \max Y {(\gamma ,\rho)}_{s}$
. Then by induction on
$t\geqslant s$
we show that
$a = \max Y {(\gamma ,\rho)}_{t}$
: if this holds for t, then by (iii) of Definition 3.3, the only numbers pulled by
$(\gamma ,\rho )$
are
$<a$
.
The proof that (b) holds for
$\beta $
is similar to that of Lemma 2.14, but here we use the new (iii) of Definition 3.3. Fix
$\rho \in \{0,1\}^e$
. By (a), let
$s_0$
be a stage after which no ball from
${A}^\complement $
is moved to, or eliminated by, any node
$\gamma <_L\beta $
.
By Lemma 3.14,
$Y {(<_{L}\!\beta ,\rho)}$
is finite; let
$m = |Y {(<_{L}\!\beta ,\rho)}|$
. If
$s>s_0$
and
${x\in Y {(<_{L}\!\beta)}_{s}\cap C_s}$
, then as x will not be later eliminated (and
$x\notin A$
),
$x\in Y {(<_{L}\!\beta)}$
. By Lemma 3.8(b), for all
$s>s_0$
,
$Y {(<_{L}\!\beta ,\rho)}_{s}\cap C_s \subseteq Y {(<_{L}\!\beta ,\rho)}$
. Hence, for all
$s>s_0$
,
$|Y {(<_{L}\!\beta ,\rho)}_{s}\cap C_s| \leqslant m$
.
Let
$k\in \mathbb N$
. Suppose that
$\rho \notin D$
. By induction, there is some
$s>s_0$
such that
$|Y {(\alpha ,\rho)}_{s}\cap C_s| \geqslant k+m+3$
and such that
$\ell (\beta )_{s}>k$
. Let

Since
$x\geqslant |\alpha |-2$
for all
$x\in Y {(\alpha)}_{s}$
(Lemma 3.10),
$|R|\geqslant k - |Y {(\beta ,\rho)}_{s}\cap C_s|$
. So as in the maximal set construction, we will be done once we show that either
$|Y {(\beta ,\rho)}_{s}\cap C_s|\geqslant k$
, or that every
$x\in R$
is pullable by
$(\beta ,\rho )$
at stage s (note that in the latter case, s will not be a new balls stage, so
$C_s = C_{s+1}$
). Suppose that
$|Y {(\beta ,\rho)}_{s}\cap C_s|< k$
. The new part is that it is possible, in this case, that
$|Y {(\beta ,\rho)}_{s}|\geqslant \ell (\beta )_{s}$
. If
$|Y {(\beta ,\rho)}_{s}|< \ell (\beta )_{s}$
then certainly all balls in R are pullable. If not, though, since
$\ell (\beta )_{s}>k$
, there must be some
$z\in Y {(\beta ,\rho)}_{s}$
which is not in
$C_s$
. However,
$C_s\smallsetminus H_s$
is an initial segment of the balls on the machine at stage s. So all balls in R are smaller than z. By the new part of Definition 3.3(iii), all balls in R are pullable by
$(\beta ,\rho )$
at stage s.
The case
$\rho \in D$
is the same, using the fact that, since
$\psi (\alpha ,D)$
holds, there is some
${s>s_0}$
such that
$\ell (\beta )_{s}>k$
, and
$|Y {(\alpha ,\rho)}_{s}\cap C_s\cap W_{e,s}| \geqslant k+m+3$
. This completes the proof that Proposition 3.13 holds for
$\beta $
.
3.3.4 Case III: Splitting nodes
For the rest of the proof of Proposition 3.13, let
$\alpha $
be a parent e-splitting node that lies on the true path, and suppose that the proposition holds for
$\alpha $
. We show that
$\alpha $
has a child
$\beta $
on the true path, and that the proposition holds for the unique child
$\beta ^+$
of
$\beta $
(and hence also for
$\beta $
).
Until the end of the proof of Proposition 3.13, we fix a stage
$s_0$
witnessing that Proposition 3.13(a) holds for
$\alpha $
.
Lemma 3.15. Let
$\rho \in \{0,1\}^e$
; let s be a stage.
-
(a) Let
$k> 1$ . If
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$ then
$f^{\alpha ,\rho }_{s}(k-1)\operatorname {\mathrm {\downarrow }}$ .
-
(b) If
$\beta $ is a child of
$\alpha $ and
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$ , then
$f^{\alpha ,\rho }_s(k^\rho _s(\beta ))\operatorname {\mathrm {\downarrow }}$ .
-
(c) If
$\beta $ is a child of
$\alpha $ and
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$ , then for every child
$\gamma <\beta $ of
$\alpha $ ,
$k^\rho _s(\gamma )\operatorname {\mathrm {\downarrow }}$ and
$k^\rho _s(\gamma ) < k^\rho _s(\beta )$ .
Proof. (a) follows from the fact that when we define a new computation
$f^{\alpha ,\rho }_{s+1}(k)$
, we have
$f^{\alpha ,\rho }_s(k-1)\operatorname {\mathrm {\downarrow }}$
, and we set
$u^{\alpha ,\rho }_{s+1}(k)>u^{\alpha ,\rho }_{s}(k-1)$
. Also, we note that such a stage s is not a new balls stage, so
$A_s=A_{s+1}$
, so
$u^{\alpha ,\rho }_{s}(k-1) = u^{\alpha ,\rho }_{s+1}(k-1)$
.
(b) is by our stipulation that when an A-change causes a computation
$f^{\alpha ,\rho }_s(k)$
to become undefined at stage
$s+1$
, and
$k = k^{\rho }_s(\beta )$
, then we set
$k^\rho _{s+1}(\beta )\operatorname {\mathrm {\uparrow }}$
.
(c) follows from the previous two parts, and the fact that a new computation is always set up at the leftmost “free” child (a child with
$k^\rho _s(\beta )\operatorname {\mathrm {\uparrow }}$
); see the proof of Lemma 3.7.
Recall that we say that a computation
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
is A-correct if
$A_s\restriction {\,{u}} = A\restriction {\,{u}}$
, where
$u=u^{\alpha ,\rho }_s(k)$
is the use of the computation
$f^{\alpha ,\rho }_s(k)$
. Also recall that
$f^{\alpha ,\rho }_s(k)$
is A-correct if and only if for all
$t\geqslant s$
,
$f^{\alpha ,\rho }_t(k)\operatorname {\mathrm {\downarrow }}$
.
Lemma 3.16. Let
$\beta $
be a child of
$\alpha $
; let
$\rho \in \{0,1\}^e$
; let s be a stage. Suppose that
$k=k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$
, and suppose that the computation
$f^{\alpha ,\rho }_s(k)$
was defined after stage
$s_0$
, and is A-correct. Then
$S {(\beta ,\rho)}_{s}$
contains
$2k$
many balls
$x< u^{\alpha ,\rho }_s(k)$
.
Proof. Let r be the stage at which the computation was defined. Let X be the set of
$x\in S {(\beta ,\rho)}_{r}$
such that
$x<u^{\alpha ,\rho }_{r+1}(k)$
. Then by the construction,
$|X|\geqslant 2k$
, and by assumption that the computation is A-correct,
$X\subseteq {A}^\complement $
. By induction on
$t\in [r,s]$
we see that
$X\subseteq S {(\beta ,\rho)}_{t}$
. Suppose that this holds for t, and
$t<s$
. Since
$t>s_0$
, no ball from X is pulled by a node to the left of
$\alpha $
. If
$\gamma <\beta $
is a child of
$\alpha $
, and any ball from X is moved to
$Y {(\gamma)}_{t+1}$
, then
$k^\rho _{t+1}(\gamma )\operatorname {\mathrm {\uparrow }}$
; by Lemma 3.15,
$k^\rho _{t+1}(\beta )\operatorname {\mathrm {\uparrow }}$
. But then, since
$f^{\alpha ,\rho }_t(k)$
is A-correct, there is no stage
$w>t$
at which
$k^\rho _w(\beta )=k$
, contradicting the hypothesis of this lemma. Hence,
$X\subseteq S{(\beta ,\rho)}_{t+1}$
.
The following follows from the construction.
Lemma 3.17. Let
$\beta $
be a child of
$\alpha $
, and let
$\rho \in \{0,1\}^e$
. Let
$s\geqslant s_0$
be a stage.
-
(a) Suppose that for all
$t\geqslant s$ ,
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}$ . Then
$f^{\alpha ,\rho }_s(k^\rho _s(\beta ))$ is A-correct, and for all
$t\geqslant s$ ,
$k^\rho _t(\beta )=k^\rho _s(\beta )$ , and
$\beta $ does not release balls at stage t.
-
(b) Suppose that
$s\geqslant s_0$ ,
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$ , and
$f^{\alpha ,\rho }_s(k^\rho _s(\beta ))$ is A-correct. Suppose, further, that for all children
$\gamma \leqslant \beta $ of
$\alpha $ , for all
$t\geqslant s$ ,
$\gamma $ does not release balls at stage t. Then for all
$t\geqslant s$ ,
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}$ .
Definition 3.18. Let
$\beta $
be a child of
$\alpha $
, and let
$\rho \in \{0,1\}^e$
. We write
$k^\rho (\beta )\operatorname {\mathrm {\downarrow }}$
if
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$
for all but finitely many s.
The following is the main lemma.
Lemma 3.19. Let
$\beta $
be a child of
$\alpha $
, and suppose that for every every child
$\gamma <\beta $
of
$\alpha $
, for every
$\rho \in \{0,1\}^e$
,
$k^\rho (\gamma )\operatorname {\mathrm {\downarrow }}$
. Then for all
$\rho \in \{0,1\}^{e}$
, there are infinitely many stages t such that
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}$
and
$f^{\alpha ,\rho }_t(k^\rho _t(\beta ))$
is A-correct.
An immediate consequence is the following.
Lemma 3.20. Let
$\beta $
be a child of
$\alpha $
, and suppose that for every child
$\gamma <\beta $
of
$\alpha $
, for every
$\rho \in \{0,1\}^e$
,
$k^\rho (\gamma )\operatorname {\mathrm {\downarrow }}$
. Then either:
-
1. for all
$\rho \in \{0,1\}^{e}$ ,
$k^\rho (\beta )\operatorname {\mathrm {\downarrow }}$ ; or
-
2.
$\beta $ releases balls infinitely often; in fact, for all
$\rho \in \{0,1\}^{e}$ there are infinitely many t such that
$\beta $ releases balls at stage t, and
$f^{\alpha ,\rho }_t(k^\rho _t(\beta ))$ is A-correct.
Proof. Let
$s_1\geqslant s_0$
be a stage witnessing the assumption on the children
$\gamma <\beta $
of
$\alpha $
: for all
$s\geqslant s_1$
, for all
$\rho \in \{0,1\}^e$
,
$k^\rho _s(\gamma )\operatorname {\mathrm {\downarrow }}$
.
Let
$\rho \in \{0,1\}^e$
. Let
$s\geqslant s_1$
be a stage such that
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$
, and
$f^{\alpha ,\rho }_s(k^\rho _s(\beta ))$
is A-correct. If
$\beta $
does not release any balls after stage s, then by Lemma 3.17(b), stage s shows that
$k^\rho (\beta )\operatorname {\mathrm {\downarrow }}$
. So if
$\beta $
releases balls only finitely many times, (1) holds for
$\beta $
.
Suppose that
$\beta $
releases balls infinitely often; let t be the least stage
$\geqslant s$
at which
$\beta $
releases balls. The assumption that
$f^{\alpha ,\rho }_s(k^\rho _s(\beta ))$
is A-correct, and
$s\geqslant s_1$
, imply that
$k= k^\rho _t(\beta ) = k^\rho _s(\beta )$
and that
$f^{\alpha ,\rho }_t(k)\operatorname {\mathrm {\downarrow }}$
is A-correct, so t is one of the stages as required for (2).
Proof of Lemma 3.19.
Let
$s_1\geqslant s_0$
be a stage witnessing the assumption on the children
$\gamma <\beta $
of
$\alpha $
. Fix some
$\rho \in \{0,1\}^{e}$
. Let
$s_2\geqslant s_1$
be any stage. We show that there is some
$t\geqslant s_2$
such that
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}$
and
$f^{\alpha ,\rho }_t(k^\rho _t(\beta ))$
is A-correct. Suppose, for a contradiction, that this is not the case. In other words:
-
⊗1 If
$s\geqslant s_2$ and
$k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}=k$ , then there is some
$t>s$ such that
$f^{\alpha ,\rho }_t(k)\operatorname {\mathrm {\uparrow }}$ .
Let m be the least such that there is some
$s\geqslant s_2$
such that
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$
. For all
$k<m$
,
$f^{\alpha ,\rho }_{s_2}(k)\operatorname {\mathrm {\downarrow }}$
is A-correct. Hence, our assumption for contradiction implies:
-
⊗2 For all
$s\geqslant s_2$ , either
$k^\rho _s(\beta )\operatorname {\mathrm {\uparrow }}$ , or
$k^\rho _s(\beta )\geqslant m$ .
We claim:
-
⊗3 There are infinitely many stages s at which
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$ .
Suppose, for a contradiction, that this is not the case. Thus,
$f^{\alpha ,\rho }(m)\operatorname {\mathrm {\downarrow }}$
. Let r be least such that
$f^{\alpha ,\rho }_r(m)$
is A-correct, i.e., the correct computation
$f^{\alpha ,\rho }(m)$
is defined at stage
$r-1$
. By the choice of m,
$r>s_2$
. The computation is established at some child
$\gamma $
of
$\alpha $
:
$k^\rho _r(\gamma )=m$
. Since
$r>s_1$
,
$\gamma \geqslant \beta $
. By Lemma 3.15, if
$\gamma>\beta $
then
$k^\rho _r(\beta )\operatorname {\mathrm {\downarrow }}<m$
, contradicting
$\otimes _2$
. Hence,
$\gamma =\beta $
. But then, by
$\otimes _1$
, there is some
$s>r$
such that
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$
, giving the desired contradiction that establishes
$\otimes _3$
.
We also observe:
-
⊗4 If
$s\geqslant s_2$ and
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$ then
$k^\rho _s(\beta )\operatorname {\mathrm {\uparrow }}$ .
This follows from Lemma 3.15 and
$\otimes _2$
: if
$k=k^\rho _s(\beta )\operatorname {\mathrm {\downarrow }}$
then
$f^{\alpha ,\rho }_s(k)\operatorname {\mathrm {\downarrow }}$
and
$k\geqslant m$
, so
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\downarrow }}$
.
There are only finitely many stages s such that
$k^\rho _s(\beta ) = m$
and
$\beta $
releases balls at stage s (this is the main point of using the certification process). To see this, let s be such that
$k^\rho _s(\beta )=m$
and
$\beta $
releases balls at stage s. Let r be the stage at which the computation
$f^{\alpha ,\rho }_s(m)$
was defined. Then
$\varphi _r(m) < f^{\alpha ,\rho }_s(m) < \varphi _s(m)$
, in particular,
$\varphi _r(m)\ne \varphi _s(m)$
. Thus, there is at most one such stage s after the last stage at which
$\varphi _t(m)$
changes.
Let
$s_3\geqslant s_2$
be a stage after which there are no such stages s; by
$\otimes _3$
, we may further assume that
$f^{\alpha ,\rho }_{s_3}(m)\operatorname {\mathrm {\uparrow }}$
.
-
⊗5 For all
$s\geqslant s_3$ , either
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$ or
$k^\rho _s(\beta )=m$ .
We prove
$\otimes _5$
by induction on
$s\geqslant s_3$
. By assumption, this holds for
$s=s_3$
. Let
$s\geqslant s_3$
. First, suppose that
$f^{\alpha ,\rho }_s(m)\operatorname {\mathrm {\uparrow }}$
. By
$\otimes _4$
,
$k^\rho _s(\beta )\operatorname {\mathrm {\uparrow }}$
. Hence, if a new computation
$f^{\alpha ,\rho }_{s+1}(m)$
is defined at stage s, it is established at
$\beta $
, setting
$k^\rho _{s+1}(\beta )=m$
. If not, then
$f^{\alpha ,\rho }_{s+1}(m)\operatorname {\mathrm {\uparrow }}$
. Next, suppose that
$k^\rho _s(\beta )=m$
. Since
$s\geqslant s_3$
,
$\beta $
does not release any balls at stage s. Hence, either
$k^\rho _{s+1}(\beta )=m$
, or
$f^{\alpha ,\rho }_{s+1}(m)\operatorname {\mathrm {\uparrow }}$
.
Putting
$\otimes _5$
and the choice of
$s_3$
together, we obtain:
-
⊗6
$\beta $ does not release balls at any stage
$s\geqslant s_3$ .
By choice of
$s_0$
and
$s_1$
, after stage
$s_3$
, no ball from
${A}^\complement $
is moved to
$\beta ^+$
, or from
$\beta ^+$
to any node to its left. So for all
$s\geqslant s_3$
,
$Y {(\beta ^+,\rho)}_{s} = Y {(\beta ^+,\rho)}$
, so
$Y {(\beta ^+,\rho)}$
is finite. Let
$N = |Y {(\beta ^+,\rho)}|$
. Let
$v = u^{\alpha ,\rho }(m-1)$
if
$m>1$
,
$v=0$
otherwise.
By the assumption that Proposition 3.13(b) holds for
$\alpha $
, let
$s\geqslant s_3$
be a stage at which
$|Y {(\alpha ,\rho)}_s\cap C_s|\geqslant N+ 2m+v$
. Let X be the set of
$x\in Y {(\alpha ,\rho)}_{s}\cap C_s$
such that
$x\notin Y {(\beta ^+)}_{s}$
and
$x> v$
.
Then
$|X|\geqslant 2m$
, and for all
$t\geqslant s$
,
$X\subseteq Y {(\alpha ,\rho)}_{t}\smallsetminus Y {(\beta ^+)}_{t}$
. By
$\otimes _3$
and
$\otimes _4$
, let
$t>s$
be a stage such that
$k^\rho _t(\beta )\operatorname {\mathrm {\uparrow }}$
. Then by Definition 3.5, either
$X\subseteq S {(\beta ,\rho)}_{t}$
, or balls are pulled by
$\beta $
at stage t, resulting in
$X\subseteq S {(\beta ,\rho)}_{t+1}$
. In either case, we obtain a stage
$r\geqslant s_3$
at which
$f^{\alpha ,\rho }_r(m)\operatorname {\mathrm {\uparrow }}$
,
$k^\rho _r(\beta )\operatorname {\mathrm {\uparrow }}$
, and
$X\subseteq S {(\beta ,\rho)}_{r}$
. So at stage r we define a new computation
$f^{\alpha ,\rho }_{r+1}(m)$
and set
$k^\rho _{r+1}(\beta )=m$
. The use
$u^{\alpha ,\rho }_{r+1}(m)$
is defined to be
$x+1$
for some
$x\in X$
. Since
$X\subseteq C_r$
, this shows that
$f^{\alpha ,\rho }_{r+1}(m)$
is A-correct. This contradicts
$\otimes _3$
, which finishes the proof of Lemma 3.19.
Lemma 3.21. Some child of
$\alpha $
lies on the true path.
Recall that this means that some child of
$\alpha $
releases balls infinitely often.
Proof. Suppose, for a contradiction, that no child of
$\alpha $
releases balls infinitely often. By induction on the children of
$\alpha $
, from left to right, we see, using Lemma 3.20, that for every child
$\beta $
of
$\alpha $
, for every
$\rho \in \{0,1\}^e$
,
$k^\rho (\beta )\operatorname {\mathrm {\downarrow }}$
. Fixing
$\rho $
, since the map
$\beta \mapsto k^\rho (\beta )$
is injective (Lemma 3.15), we see that
$f^{\alpha ,\rho }$
is total.
Since each
$f^{\alpha ,\rho }$
is A-computable,
$\varphi $
dominates
$\max _{\rho } f^{\alpha ,\rho }$
. Say this domination starts at
$k^*$
. Let
$\beta $
be a child of
$\alpha $
that lies sufficiently to the right, so that for all
$\rho \in \{0,1\}^e$
,
$k^\rho (\beta )\geqslant k^*$
. Let s be sufficiently late so that for each
$\rho $
, for all
$t\geqslant s$
,
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }} = k^\rho (\beta )$
, and
$\varphi _t(k^\rho (\beta ))> f^{\alpha ,\rho }(k^\rho (\beta ))$
. But then,
$\beta $
is certified at each stage
$t\geqslant s$
, and so, will release balls—contradicting Lemma 3.17.
Let
$\beta $
be the child of
$\alpha $
that lies on the true path.
Lemma 3.22. Proposition 3.13 holds for both
$\beta $
and
$\beta ^+$
.
Proof. By Lemma 3.20, for every child
$\gamma <\beta $
of
$\alpha $
, for all
$\rho $
,
$k^\rho (\gamma )\operatorname {\mathrm {\downarrow }}$
. By Lemma 3.17, eventually, each such child stops pulling any balls (whether from
${A}^\complement $
or not), and does not release any balls. Thus, for each such
$\gamma $
, only finitely many balls ever enter
$Y {(\gamma)}$
, and each such ball is eventually removed from the machine, or stops moving. This gives part (a) of Proposition 3.13 for
$\beta ^+$
(and hence for
$\beta $
as well).
Part (b) follows from Lemma 3.20 as well. Fix
$\rho \in \{0,1\}^e$
. Let t be a stage. If
$k^\rho _t(\beta )\operatorname {\mathrm {\downarrow }}=k$
,
$f^{\alpha ,\rho }_t(k)$
is A-correct, and
$\beta $
releases balls at stage t, then by Lemma 3.16,
$|S {(\beta ^+,\rho )}_{t+1}\cap C_{t+1}|\geqslant 2k$
. By construction, in fact, for each
$i=0,1$
,
$|S {(\beta ^+,\rho \hat {\,\,} i)}_{t+1}\cap C_{t+1}|\geqslant k$
.
If
$t'>t$
,
$k^\rho _{t'}(\beta )\operatorname {\mathrm {\downarrow }} = k'$
, and
$f^{\alpha ,\rho }_{t'}(k')$
is A-correct, then
$k'>k$
, showing that these numbers k go to
$\infty $
.
Corollary 3.23. The true path is infinite, and Proposition 3.13 holds for every
$\alpha $
on the true path.
3.3.5 The rest
The rest is now straightforward. For each
$\rho \in 2^{<\omega }$
, let
$Z(\rho )$
be the collection of balls
$x\notin H$
whose permanent label extends
$\rho $
.
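Concretely, once each ball outside H has settled on a permanent label, the sets
$Z(\rho )$
are just the preimage sets of the label map under extension. A minimal sketch (the function name and dictionary encoding are ours, for illustration only):

```python
def Z(rho, labels):
    """Balls (keys of `labels`) whose permanent label (a binary string) extends rho."""
    return {x for x, lab in labels.items() if lab.startswith(rho)}
```

With this reading, Lemma 3.28 below is the observation that extending
$\rho $
by one bit splits
$Z(\rho )$
into two disjoint pieces; in the actual construction the equality holds only up to finite difference, since finitely many balls may behave exceptionally.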
Lemma 3.24. Let
$\alpha $
on the true path be an e-decision or splitting node. For every
$\rho \in \{0,1\}^{e}$
,
$Y {(\alpha ,\rho)}\ne \emptyset $
.
Proof. Very much like the proof of Lemma 2.16, except that replacing true stages by
$C_s$
actually makes things slightly simpler. We may assume that
$\alpha $
is an e-decision node. Let
$s_0$
be a stage after which nodes that lie to the left of
$\alpha $
do not eliminate any balls, or pull any balls in
${A}^\complement $
. Let

x exists by Proposition 3.13. Say
$s_1>s_0$
and
$x\in C_{s_1}\cap Y {(\alpha ,\rho)}_{s_1}$
. By induction on
$t\geqslant s_1$
we see that
$x = \min Y {(\alpha ,\rho)}_{t}$
. For
$t = s_1$
this is because
$C_{s_1}$
is an initial segment of
$Q_{s_1}$
. Suppose that
$x = \min Y {(\alpha ,\rho)}_{t}$
. Then x is not eliminable by any node
$\gamma \succcurlyeq \alpha $
, and so
$x\in Y {(\alpha ,\rho)}_{t+1}$
. Let
$y<x$
. If
$y\in Y {(\alpha ,\rho)}_{t+1}$
, then as
$C_{s_1}\subseteq C_{t+1}$
, we have
$y\in C_{t+1}$
, contradicting the minimality of x.
Corollary 3.25. Let
$\alpha $
on the true path be an e-decision or splitting node. For every
$\rho \in \{0,1\}^{e}$
,
$Y {(\alpha ,\rho)}$
is infinite.
It follows that each
$Z(\rho )$
is infinite.
Lemma 3.26. For every
$\alpha $
on the true path,
$Y {(\alpha) } =^* {H}^\complement $
.
Proof. Like the proof of Lemma 2.18. The new part is when
$\alpha $
is a child e-splitting node; we need to show that every ball that lies to the right of
$\alpha $
but below
$\alpha ^-$
is eventually pullable by
$\alpha $
. This follows from the fact that
$\alpha $
releases balls infinitely often; if
$\alpha $
releases balls at stage s, then
$k^\rho _{s+1}(\alpha )\operatorname {\mathrm {\uparrow }}$
for each
$\rho $
, and then every such ball x is pullable by
$\alpha $
at stage
$s+1$
.
As a result, for all
$\rho $
, if
$\alpha $
is an e-decision or splitting node on the true path, then
$Z(\rho ) =^* Y {(\alpha ,\rho)}$
.
Lemma 3.27. For every
$\rho $
,
$H\cup Z(\rho )$
is c.e.
Proof. Let
$\alpha $
be an e-decision node on the true path, where
$|\rho | = e$
. Then
$Y {(\alpha ,\rho)}$
is c.e.: from some stage onwards, any ball outside H which enters
$Y {(\alpha) }$
with label
$\rho $
, will stay in
$Y {(\alpha ,\rho)}$
.
Lemma 3.28. For every
$\rho $
,
$Z(\rho ) =^* Z(\rho \hat {\,\,} 0)\cup Z(\rho \hat {\,\,} 1)$
, and
$Z(\rho \hat {\,\,} 0)\cap Z(\rho \hat {\,\,} 1) = \emptyset $
.
Proof. If
$\alpha $
is an e-decision node on the true path with
$e= |\rho |+1$
, then
$Y {(\alpha ,\rho)} = Y {(\alpha ,\rho \hat {\,\,} 0)} \cup Y {(\alpha ,\rho \hat {\,\,} 1)}$
.
Lemma 3.29. For every e,
$W_e\cap {H}^\complement $
is the union of finitely many sets
$Z(\rho )$
, up to finite difference.
Proof. Like the proof of Lemma 2.20. Let
$\alpha $
on the true path be a child of an e-decision node. Then for all
$\rho \in \{0,1\}^{e}$
, either
$Y {(\alpha ,\rho)}\subseteq W_e$
, or
$Y {(\alpha ,\rho)}\cap W_e =^* \emptyset $
. The result follows since
$Z(\rho )=^* Y(\alpha ,\rho )$
for each such
$\rho $
, and
$Z(\rho )$
for
$\rho \in \{0,1\}^e$
partition
${H}^\complement $
(up to finite difference).
Corollary 3.30. H is atomless hyperhypersimple.
4 Other Boolean algebras
In this section, we explain how to modify the proof of Theorem 1.2 to show:
Theorem 4.1. Let A be
$\text {low}_2$
and coinfinite. For any
$\Sigma _3$
-Boolean algebra B there is some c.e.
$H\supseteq A$
such that
$\mathcal {L}^*(H)\cong B$
.
Let B be a
$\Sigma _3$
-Boolean algebra. This means that it is the quotient of the (standard computable copy of the) atomless Boolean algebra by a
$\Sigma _3$
ideal. An equivalent characterisation is in terms of Boolean algebras generated by trees. Recall that for a tree
$T\subseteq 2^{<\omega }$
we defined the Boolean algebra
$B(T)$
, generated from the elements of T according to the rules: (i) every
$\tau \in T$
is the join of its children and (ii) if
$\sigma ,\tau \in T$
are incomparable then
$\sigma \wedge \tau = 0_{B(T)}$
. We noted that this implies that if
$\tau $
is non-extendible on T, then
$\tau = 0_{B(T)}$
(and the atoms of
$B(T)$
are
$\tau $
where
$\tau $
isolates a unique path of T). Note that there are other ways of producing Boolean algebras from trees, for example, ones in which leaves of the tree are atoms. However, the advantage of the representation that we chose is:
-
• The
$\Sigma _3$ Booleans algebras are precisely the algebras
$B(T)$ , where T is
$\Delta _3$ .
(If we used the other representation, we would need
$\Sigma _3$
trees). The idea of the modified construction is to add nodes that decide, for each
$\rho \in \{0,1\}^e$
, whether
$\rho \in T$
or not, again utilising
$\Delta _3$
guessing. A child
$\beta $
of such
$\alpha $
that guesses that
$\rho \notin T$
will attempt to enumerate all of
$Y {(\alpha ,\rho)}$
into H. The technique is identical to eliminating balls to decide
$W_e$
on
$Y {(\alpha ,\rho)}$
.
4.1 The construction
Most of the construction is as above, so we explain the new ingredients. We now have four kinds of nodes:
-
• e-tree nodes, of length
$4e$ ;
-
• e-decision nodes, of length
$4e+1$ ;
-
• parent and child e-splitting nodes, of lengths
$4e+2$ and
$4e+3$ .
For an e-tree node
$\alpha $
and
$E\subseteq \{0,1\}^e$
, the statement
$\psi (\alpha ,E)$
is:
$E = T\cap \{0,1\}^e.$
This is a finite Boolean combination of
$\Delta _3$
statements. Again the children are
$\alpha \hat {\,\,} E_n$
for
$E\subseteq \{0,1\}^e$
and
$n<\omega $
. We define
$\bar \ell (\alpha \hat {\,\,} E_n) = \bar \ell (\psi (\alpha ,E),n)$
as above.
Definition 4.2. Let
$\beta = \alpha \hat {\,\,} E_n$
be a child of an e-tree node
$\alpha $
, and let
$\rho \in \{0,1\}^e$
. We say that a ball x is pullable by
$(\beta ,\rho )$
at a stage s if:
-
(i)
$x\in Y {(\alpha ,\rho)}_{s}\smallsetminus Y {(\leqslant \!\beta ,\rho)}_{s}$ ;
-
(ii)
$\rho \in E$ ; and
-
(iii) either:
-
•
$|Y {(\beta ,\rho)}_{s}|< \ell (\beta )_{s}$ , or
-
•
$Y {(\beta ,\rho)}_{s}\ne \emptyset $ and
$x< \max Y {(\beta ,\rho)}_{s}$ .
-
We say that x is eliminable by
$(\beta ,\rho )$
at stage s if:
-
(iv)
$x\in Y {(\alpha ,\rho)}_{s}\smallsetminus Y {(\leqslant \!\beta ,\rho)}_{s}$ ;
-
(v)
$\rho \notin E$ ;
-
(vi)
$x\ne \min Y {(\alpha ,\rho)}_{s}$ ; and
-
(vii)
$x<\ell (\alpha )_{s}$ .
We need to modify Definition 3.5(c) so that we require certification only for those
$\rho \in \{0,1\}^e$
that are guessed to be on the tree T. If
$\alpha $
is an e-decision, e-splitting, or
$(e+1)$
-tree node, then we let
$E(\alpha )$
denote the set
$E\subseteq \{0,1\}^e$
such that for some n,
$\varepsilon \hat {\,\,} E_n\preccurlyeq \alpha $
, where
$\varepsilon $
is the e-tree node preceding
$\alpha $
. We let
$E(\lambda ) = \{\lambda \}$
. Then in the new version of Definition 3.5(c), we require that the conditions hold for those
$\rho \in E(\beta )$
.
The rest of the construction follows the atomless hyperhypersimple construction above, verbatim.
4.2 The verification
For the verification, we only need to replace the second part of Proposition 3.13. The new version of Proposition 3.13(b) replaces
$\{0,1\}^e$
by
$E(\alpha )$
:
-
• If
$\alpha $ lies on the true path, then for all
$\rho \in E(\alpha )$ , for all k, there is some s such that
$|Y {(\alpha ,\rho)}_{s}\cap C_s|\geqslant k$ .
The rest of the verification follows without change, showing that if
$\rho \in T$
then
$Z(\rho )$
is infinite, while if
$\rho \notin T$
then
$Z(\rho )$
is finite.
Funding
The authors thank the Marsden Fund of New Zealand.