We define $\Psi$-autoreducible sets given an autoreduction procedure $\Psi$. Then, we show that for any $\Psi$, a measurable class of $\Psi$-autoreducible sets has measure zero. Using this, we show that the classes of cototal, uniformly introenumerable, introenumerable, and hyper-cototal enumeration degrees all have measure zero.
By analyzing the arithmetical complexity of the classes of cototal sets and cototal enumeration degrees, we show that weakly 2-random sets cannot be cototal and weakly 3-random sets cannot be of cototal enumeration degree. We then see that this result is optimal by showing that there exists a 1-random cototal set and a 2-random set of cototal enumeration degree. For uniformly introenumerable degrees and introenumerable degrees, we utilize $\Psi$-autoreducibility again to show the optimal result that no weakly 3-random set can have introenumerable enumeration degree. We also show that no 1-random set can be introenumerable.
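For context, the classical notion of autoreducibility, which the paper's $\Psi$-autoreducibility generalizes, can be stated as follows (this is the standard definition due to Trakhtenbrot, not the paper's generalized version):

```latex
% Classical autoreducibility: $A$ is autoreducible if some Turing functional
% $\Phi$ computes each bit of $A$ from $A$ without querying that bit,
% equivalently, from the oracle $A$ with position $n$ removed:
\[
  A \text{ is autoreducible} \iff
  \exists \Phi \;\forall n \; \bigl[\, A(n) = \Phi^{A \setminus \{n\}}(n) \,\bigr].
\]
```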
Suppose that we have a method which estimates the conditional probabilities of some unknown stochastic source, and we use it to guess which of the outcomes will happen. We want to make a correct guess as often as possible. Which estimators are good for this? In this work, we consider estimators given by a familiar notion of universal coding for stationary ergodic measures, while working in the framework of algorithmic randomness; i.e., we are particularly interested in prediction of Martin-Löf random points. We outline the general theory and exhibit some counterexamples. Completing a result of Ryabko from 2009, we also show that a universal probability measure in the sense of universal coding induces a universal predictor in the prequential sense. Surprisingly, this implication holds true provided the universal measure does not ascribe too low conditional probabilities to individual symbols. As an example, we show that the Prediction by Partial Matching (PPM) measure satisfies this requirement with a wide margin.
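As a rough illustration of the kind of estimator involved, here is a minimal sketch of a PPM-style context predictor over a binary alphabet. The class name and parameters are our own and this is not the exact PPM measure analyzed in the paper; the point of the sketch is that the additive (Laplace) smoothing guarantees no symbol ever receives a vanishing conditional probability, which is the property the abstract emphasizes.

```python
# Minimal sketch of a PPM-style predictor (illustrative, not the exact
# PPM measure from the paper). It predicts the next symbol from the
# longest previously seen context, with Laplace smoothing so every
# symbol keeps a strictly positive conditional probability.
from collections import defaultdict


class SimplePPM:
    def __init__(self, max_order=2, alphabet=(0, 1)):
        self.max_order = max_order
        self.alphabet = alphabet
        # counts[context][symbol] = times `symbol` followed `context`
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, history):
        """Distribution over the next symbol, using the longest
        previously seen context; Laplace smoothing keeps all
        probabilities strictly positive."""
        for order in range(min(self.max_order, len(history)), -1, -1):
            ctx = tuple(history[len(history) - order:])
            if order == 0 or ctx in self.counts:
                c = self.counts[ctx]
                total = sum(c.values()) + len(self.alphabet)
                return {a: (c[a] + 1) / total for a in self.alphabet}

    def update(self, history, symbol):
        # Record the observed symbol under every context length.
        for order in range(min(self.max_order, len(history)) + 1):
            ctx = tuple(history[len(history) - order:])
            self.counts[ctx][symbol] += 1


# Feed the model an alternating sequence and ask for its next prediction.
model = SimplePPM(max_order=2)
hist = []
for s in [0, 1, 0, 1, 0, 1, 0, 1]:
    model.update(hist, s)
    hist.append(s)
p_next = model.predict(hist)  # the history ends in (0, 1)
```

After seeing the alternating pattern, the predictor assigns most of its mass to 0 (the symbol that continues the pattern) while, thanks to the smoothing, still giving 1 a positive probability.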
The $\Omega$ numbers—the halting probabilities of universal prefix-free machines—are known to be exactly the Martin-Löf random left-c.e. reals. We show that one cannot uniformly produce, from a Martin-Löf random left-c.e. real $\alpha$, a universal prefix-free machine $U$ whose halting probability is $\alpha$. We also answer a question of Barmpalias and Lewis-Pye by showing that given a left-c.e. real $\alpha$, one cannot uniformly produce a left-c.e. real $\beta$ such that $\alpha - \beta$ is neither left-c.e. nor right-c.e.
Recently, a connection has been established between two branches of computability theory, namely algorithmic randomness and algorithmic learning theory. Learning-theoretic characterizations of several notions of randomness were discovered. We study such characterizations based on the asymptotic density of positive answers. In particular, this note provides a new learning-theoretic definition of weak 2-randomness, solving a problem posed by Zaffora Blando (Rev. Symb. Log. 2019). The note also highlights the close connection between these characterizations and the problem of convergence on random sequences.
The aim of this paper is to shed light on our understanding of large scale properties of infinite strings. We say that one string $\alpha$ has weaker large scale geometry than that of $\beta$ if there is a color-preserving bi-Lipschitz map from $\alpha$ into $\beta$ with small distortion. This definition allows us to define a partially ordered set of large scale geometries on the class of all infinite strings. This partial order compares large scale geometries of infinite strings; as such, it provides an algebraic tool for the classification of global patterns. We study properties of this partial order. We prove, for instance, that it has a greatest element and also possesses infinite chains and antichains. We also investigate the sets of large scale geometries of strings accepted by finite state machines such as Büchi automata. We provide an algorithm that describes the large scale geometries of strings accepted by Büchi automata. This connects the work with complexity theory. We also prove that the quasi-isometry problem is a $\Sigma_2^0$-complete set, thus providing a bridge with computability theory. Finally, we build algebraic structures that are invariants of large scale geometries. We invoke asymptotic cones, a key concept in geometric group theory, defined via the model-theoretic notion of ultraproduct. In part, we study asymptotic cones of algorithmically random strings, thus connecting the topic with algorithmic randomness.
We present an overview of higher randomness and its recent developments. After an introduction, we provide in the second section some background on higher computability, presenting in particular $\Pi^1_1$ and $\Sigma^1_1$ sets from the viewpoint of the computability theorist. In the third section we give an overview of the different higher randomness classes: $\Delta^1_1$-randomness, $\Pi^1_1$-Martin-Löf randomness, higher weak-2-randomness, higher difference randomness, and $\Pi^1_1$-randomness. We then move on to study each of these classes, separating them and inspecting their respective lowness classes. We pay particular attention to $\Pi^1_1$-Martin-Löf randomness and $\Pi^1_1$-randomness: the former is the higher analogue of the best-known and most studied class in classical algorithmic randomness. We show in particular how to lift the main classical randomness theorems to the higher setting by incorporating continuity into higher reductions and relativisations. The latter, as we will see, presents many remarkable properties and has no analogue in classical randomness. Finally, in the eighth section, we study randomness along a higher hierarchy of complexity of sets, motivated by the notion of higher weak-2-randomness. We show that this hierarchy eventually collapses.
In this introductory survey, we provide an overview of the major developments of algorithmic randomness with an eye towards the historical development of the discipline. First we give a brief introduction to computability theory and the underlying mathematical concepts that later appear in the survey. Next we selectively cover four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent and important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a fully-fledged subbranch of computability theory; and (4) the early 2010s, in which ties between algorithmic randomness and other subfields of mathematics were discovered. The aim of this survey is to provide a point of entry for newcomers to the field and a useful reference for practitioners.
The halting probability of a Turing machine was introduced by Chaitin, who also proved that it is an algorithmically random real number and named it Omega. Since his seminal work, many popular expositions have appeared, mainly focusing on the metamathematical or philosophical significance of this number (or debating against it). At the same time, a rich mathematical theory exploring the properties of Chaitin's Omega has been brewing in various technical papers, which quietly reveals the significance of this number to many aspects of contemporary algorithmic information theory. The purpose of this survey is to expose these developments and tell a story about Omega which outlines its multi-faceted mathematical properties and roles in algorithmic randomness.
This is a survey of constructive and computable measure theory with an emphasis on the close connections with algorithmic randomness. We give a brief history of constructive measure theory from Brouwer to the present, emphasizing how Schnorr randomness is the randomness notion implicit in the work of Brouwer, Bishop, Demuth, and others. We survey a number of recent results showing that classical almost everywhere convergence theorems can be used to characterize many of the common randomness notions including Schnorr randomness, computable randomness, and Martin-Löf randomness. Last, we go into more detail about computable measure theory, showing how all the major approaches are basically equivalent (even though the definitions can vary greatly).
In this survey, we lay out the central results in the study of algorithmic randomness with respect to biased probability measures. The first part of the survey covers biased randomness with respect to computable measures. The central technique in this area is the transformation of random sequences via certain randomness-preserving Turing functionals, which can be used to induce non-uniform probability measures. The second part of the survey covers biased randomness with respect to non-computable measures, with an emphasis on the work of Reimann and Slaman on the topic, as well as the contributions of Miller and Day in developing Levin's notion of a neutral measure. We also discuss blind randomness as well as van Lambalgen's theorem for both computable and non-computable measures. As there is no currently-available source covering all of these topics, this survey fills a notable gap in the algorithmic randomness literature.
Ergodic theory is concerned with dynamical systems: collections of points together with a rule governing how the system changes over time. Much of the theory concerns the long-term behavior of typical points: how points behave over time, ignoring anomalous behavior from a small number of exceptional points. Computability theory has a family of precise notions of randomness: a point is "algorithmically random" if no computable test can demonstrate that it is not random. These notions capture something essential about the informal notion of randomness: algorithmically random points are precisely the ones that have typical orbits in computable dynamical systems. For computable dynamical systems, with or without assumptions of ergodicity, the measure-zero sets of exceptional points for various theorems (such as Poincaré's Recurrence Theorem or the pointwise ergodic theorem) are precisely the sets of Schnorr or Martin-Löf random points identified in algorithmic randomness.
We discuss the different contexts in which relativization occurs in randomness and the effect that the relativization chosen has on the results we can obtain. We study several characterizations of the K-trivials in terms of concepts ranging from cuppability to density, and we consider a uniform relativization for randomness that gives us more natural results for computable randomness, Schnorr randomness, and Kurtz randomness than the classical relativization does (the relativization for Martin-Löf randomness is unaffected by this change). We then evaluate the relativizations we have considered and suggest some avenues for further work.
Algorithmic randomness lies at the intersection between computability theory and probability theory. In order to fully explore this interaction, one naturally needs a computable version of measurable functions. While several such notions appear in the literature, most of them do not interact well with algorithmic randomness because they are only defined up to a null set. Therefore, we need a computable notion of measurable function which is well defined on algorithmically random points, and this is precisely what layerwise computability provides. This article is a survey of this notion. We give the main definitions, the most important properties, and several applications of this notion. We prioritize motivating this framework and explaining its salient features.
The field of algorithmic randomness studies the various ways to define the inherent randomness of individual points in a space (for example, the Cantor space or Euclidean space). Classically, this quest seems quixotic. However, the theory of computing allows us to give mathematically meaningful notions of random points. In the past few decades, algorithmic randomness has been extended to make use of resource-bounded (e.g., time or space) computation. In this survey we cover these developments as well as their applications to other parts of mathematics.
Numerous learning tasks can be described as the process of extrapolating patterns from observed data. One of the driving intuitions behind the theory of algorithmic randomness is that randomness amounts to the absence of any effectively detectable patterns: it is thus natural to regard randomness as antithetical to inductive learning. Osherson and Weinstein [11] draw upon the identification of randomness with unlearnability to introduce a learning-theoretic framework (in the spirit of formal learning theory) for modelling algorithmic randomness. They define two success criteria—specifying under what conditions a pattern may be said to have been detected by a computable learning function—and prove that the collections of data sequences on which these criteria cannot be satisfied correspond to the set of weak 1-randoms and the set of weak 2-randoms, respectively. This learning-theoretic approach affords an intuitive perspective on algorithmic randomness, and it invites the question of whether restricting attention to learning-theoretic success criteria comes at an expressivity cost. In other words, is the framework expressive enough to capture most core algorithmic randomness notions and, in particular, Martin-Löf randomness—arguably, the most prominent algorithmic randomness notion in the literature? In this article, we answer the latter question in the affirmative by providing a learning-theoretic characterisation of Martin-Löf randomness. We then show that Schnorr randomness, another central algorithmic randomness notion, also admits a learning-theoretic characterisation in this setting.
We show that for each computable ordinal $\alpha > 0$ it is possible to find in each Martin-Löf random $\Delta^0_2$ degree a sequence $R$ of Cantor-Bendixson rank $\alpha$, while ensuring that the sequences that inductively witness $R$'s rank are all Martin-Löf random with respect to a single countably supported and computable measure. This is a strengthening, for random degrees, of a recent result of Downey, Wu, and Yang, and can be understood as a randomized version of it.
We investigate the strength of a randomness notion $\mathcal{R}$ as a set-existence principle in second-order arithmetic: for each $Z$ there is an $X$ that is $\mathcal{R}$-random relative to $Z$. We show that the equivalence between 2-randomness and being infinitely often $C$-incompressible is provable in $RCA_0$. We verify that $RCA_0$ proves the basic implications among randomness notions: 2-random $\Rightarrow$ weakly 2-random $\Rightarrow$ Martin-Löf random $\Rightarrow$ computably random $\Rightarrow$ Schnorr random. Also, over $RCA_0$ the existence of computable randoms is equivalent to the existence of Schnorr randoms. We show that the existence of balanced randoms is equivalent to the existence of Martin-Löf randoms, and we describe a sense in which this result is nearly optimal.
We study genericity and randomness with respect to ITTMs, continuing the work initiated by Carl and Schlicht. To do so, we develop a framework for studying randomness in the constructible hierarchy. We then answer several of Carl and Schlicht's questions. We also ask a new question on the equality of two classes of randoms. Although the natural intuition would dictate that the two classes are distinct, we show that things are not as simple as they seem. In particular, we show that the categorical analogues of these two classes coincide, contradicting the natural intuition. Even though we are not able to answer the question for randomness in this article, we delineate and sharpen its contours.
In this paper, we study the power and limitations of computing effectively generic sequences using effectively random oracles. Previously, it was known that every 2-random sequence computes a 1-generic sequence (as shown by Kautz) and that every 2-random sequence forms a minimal pair in the Turing degrees with every 2-generic sequence (as shown by Nies, Stephan, and Terwijn). We strengthen these results by showing that every Demuth random sequence computes a 1-generic sequence and that every Demuth random sequence forms a minimal pair with every pb-generic sequence (where pb-genericity is an effective notion of genericity strictly between 1-genericity and 2-genericity). Moreover, we prove that for every comeager $\mathcal{G} \subseteq 2^\omega$, there is some weakly 2-random sequence $X$ that computes some $Y \in \mathcal{G}$, a result that allows us to provide a fairly complete classification of how various notions of effective randomness interact in the Turing degrees with various notions of effective genericity.
We study randomness beyond $\Pi^1_1$-randomness and its Martin-Löf type variant, which was introduced in [16] and further studied in [3]. Here we focus on a class strictly between $\Pi^1_1$ and $\Sigma^1_2$ that is given by the infinite time Turing machines (ITTMs) introduced by Hamkins and Kidder. The main results show that the randomness notions associated with this class have several desirable properties, which resemble those of classical randomness notions such as Martin-Löf randomness and randomness notions defined via effective descriptive set theory such as $\Pi^1_1$-randomness. For instance, mutual randoms do not share information and a version of van Lambalgen's theorem holds.
Towards these results, we prove the following analogue to a theorem of Sacks. If a real is infinite time Turing computable relative to all reals in some given set of reals with positive Lebesgue measure, then it is already infinite time Turing computable. As a technical tool towards this result, we prove facts of independent interest about random forcing over increasing unions of admissible sets, which allow efficient proofs of some classical results about hyperarithmetic sets.