Christopher P. Porter


The focus of my research is the theory of algorithmic randomness. My overall goals in this work are to understand the various roles that randomness plays in mathematical, computational, and statistical practice; to identify formal limitations to the task of defining randomness, manifested in artifacts of definitions of randomness that yield new instances of incompleteness, undecidability, and independence results; and to determine both the extent to which commonly-held intuitions about the concept of randomness are illuminated by the theory of algorithmic randomness and the extent to which these intuitions are challenged by the theory.


Here is a current list of my papers that are either published or submitted for publication:


• Randomness for computable measures and initial segment complexity (with Rupert Hölzl). Submitted.

According to the Levin-Schnorr theorem, a sequence X\in2^{\omega} is Martin-Löf random with respect to the Lebesgue measure if and only if \mathrm{K}(X\upharpoonright n)\geq n-O(1) for every n, where \mathrm{K} denotes prefix-free Kolmogorov complexity. Roughly, this means that the Martin-Löf random sequences are precisely those sequences with high initial segment complexity. It is well-known that the Levin-Schnorr theorem can be extended to proper sequences, that is, sequences that are random with respect to some computable measure on 2^\omega. However, in this more general setting the initial segment complexity of sequences that are random with respect to different computable measures can vary widely.
In this paper, we study the various growth rates of proper sequences. In particular, we show that the initial segment complexity of a proper sequence X is bounded from below by a computable function if and only if X is random with respect to some computable, continuous measure. We also identify a global computable lower bound for the initial segment complexity of all \mu-random sequences for a computable, continuous measure \mu. Furthermore, we show that there are proper sequences with extremely slow-growing initial segment complexity: there is a proper sequence the initial segment complexity of which is infinitely often below every computable function, and even a proper sequence the initial segment complexity of which is dominated by all computable functions. Lastly, we prove various facts about the Turing degrees of such sequences and show that they are useful in the study of certain classes of pathological measures on 2^\omega, namely diminutive measures and trivial measures.
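For reference, the extension of the Levin-Schnorr theorem to an arbitrary computable measure \mu mentioned above is standardly stated as follows (a sketch in my notation, where [\sigma] denotes the set of sequences extending the string \sigma):

```latex
% Levin-Schnorr theorem, general form: for a computable measure \mu on
% 2^\omega, a sequence X is Martin-Löf random with respect to \mu iff
% its initial segments are incompressible relative to their \mu-measure.
\[
  X \in \mathrm{MLR}_{\mu}
  \iff
  \mathrm{K}(X \upharpoonright n) \geq -\log \mu\big([X \upharpoonright n]\big) - O(1)
  \quad \text{for all } n.
\]
% Taking \mu to be the Lebesgue measure gives
% -\log \mu([X \upharpoonright n]) = n, recovering the statement quoted above.
```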

• The interplay of classes of algorithmically random objects (with Quinn Culver). Accepted for publication in the Journal of Logic and Analysis.

We study algorithmically random closed subsets of 2^\omega, algorithmically random continuous functions from 2^\omega to 2^\omega, and algorithmically random Borel probability measures on 2^\omega, especially the interplay between these three classes of objects. Our main tools are preservation of randomness and its converse, the no randomness ex nihilo principle, which together say that given an almost-everywhere defined computable map between an effectively compact probability space and an effective Polish space, a real is Martin-Löf random for the pushforward measure if and only if its preimage is random with respect to the measure on the domain. These tools allow us to prove new facts, some of which answer previously open questions, and to reprove some known results more simply.
Our main results are the following. First we answer an open question of Barmpalias, Brodhead, Cenzer, Remmel, and Weber by showing that \mathcal{X}\subseteq2^\omega is a random closed set if and only if it is the set of zeros of a random continuous function on 2^\omega. As a corollary we obtain the result that the collection of random continuous functions on 2^\omega is not closed under composition. Next, we construct a computable measure Q on the space of measures on 2^\omega such that \mathcal{X}\subseteq2^\omega is a random closed set if and only if \mathcal{X} is the support of a Q-random measure. We also establish a correspondence between random closed sets and the random measures studied by Culver in previous work. Lastly, we study the ranges of random continuous functions, showing that the Lebesgue measure of the range of a random continuous function is always contained in (0,1).
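The two principles named above can be summarized schematically (my paraphrase; T is an almost-everywhere defined computable map, \mu a computable measure on its domain, and \mu_T the pushforward measure given by \mu_T(A) = \mu(T^{-1}(A))):

```latex
% Preservation of randomness: randomness is carried forward by T.
\[
  X \in \mathrm{MLR}_{\mu}
  \;\Longrightarrow\;
  T(X) \in \mathrm{MLR}_{\mu_T}
\]
% No randomness ex nihilo: every random element of the pushforward
% arises as the image of a random element of the domain.
\[
  Y \in \mathrm{MLR}_{\mu_T}
  \;\Longrightarrow\;
  \exists\, X \in \mathrm{MLR}_{\mu} \text{ such that } T(X) = Y .
\]
```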

• On analogues of the Church-Turing thesis in algorithmic randomness. Accepted for publication in the Review of Symbolic Logic.

In this article, I consider the status of several statements analogous to the Church-Turing thesis that assert that some definition of algorithmic randomness captures the intuitive conception of randomness. I argue that we should not only reject the theses that have appeared in the algorithmic randomness literature, but more generally that we ought not evaluate the adequacy of a definition of randomness on the basis of whether it captures the so-called intuitive conception of randomness in the first place. Instead, I argue that a more promising alternative is to evaluate the adequacy of a definition of randomness on the basis of whether it captures what I refer to as a “notion of almost everywhere typicality.” In support of my main claims, I appeal to recent work showing the connection between definitions of algorithmic randomness and certain “almost everywhere” theorems from classical mathematics.

• Deep \Pi^0_1 classes (with Laurent Bienvenu). Accepted for publication in the Bulletin of Symbolic Logic.

A set of infinite binary sequences \mathcal{C}\subseteq2^\omega is negligible if there is no partial probabilistic algorithm that produces an element of this set with positive probability. The study of negligibility is of particular interest in the context of \Pi^0_1 classes. In this paper, we introduce the notion of depth for \Pi^0_1 classes, which is a stronger form of negligibility. Whereas a negligible \Pi^0_1 class \mathcal{C} has the property that one cannot probabilistically compute a member of \mathcal{C} with positive probability, a deep \Pi^0_1 class \mathcal{C} has the property that one cannot probabilistically compute an initial segment of a member of \mathcal{C} with high probability. That is, the probability of computing a length n initial segment of a deep \Pi^0_1 class converges to 0 effectively in n.
We prove a number of basic results about depth, negligibility, and a variant of negligibility that we call \mathit{tt}-negligibility, and we provide a number of examples of deep \Pi^0_1 classes that occur naturally in computability theory and algorithmic randomness. We also study deep classes in the context of mass problems, examine the relationship between deep classes and certain lowness notions in algorithmic randomness, and establish a connection between members of deep classes and the amount of mutual information with Chaitin's \Omega.
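One way to make the effective convergence condition precise is via the a priori probability \mathbf{M}, the universal left-c.e. continuous semi-measure; the following is my paraphrase, writing T_n for the set of length-n strings extendible to a member of \mathcal{C}:

```latex
% \mathcal{C} is deep if the probability of computing a length-n initial
% segment of a member of \mathcal{C} tends to 0 effectively in n, i.e.,
% there is a computable order h (unbounded and nondecreasing) such that
\[
  \mathbf{M}(T_n) \;=\; \sum_{\sigma \in T_n} \mathbf{M}(\sigma)
  \;\leq\; 2^{-h(n)}
  \quad \text{for all } n .
\]
% Negligibility only requires \mathbf{M}(T_n) \to 0; depth demands an
% effective, computably witnessed rate of convergence.
```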

• Randomness and semi-measures (with Laurent Bienvenu, Rupert Hölzl, and Paul Shafer). Forthcoming in the Notre Dame Journal of Formal Logic.

A semi-measure is a generalization of a probability measure obtained by relaxing the additivity requirement to super-additivity. We introduce and study several randomness notions for left-c.e. semi-measures, a natural class of effectively approximable semi-measures induced by Turing functionals. Among the randomness notions we consider, the generalization of weak 2-randomness to left-c.e. semi-measures is the most compelling, as it best reflects Martin-Löf randomness with respect to a computable measure. Additionally, we analyze a question of Shen's on the behavior of Turing functionals that induce the same left-c.e. semi-measure, a positive answer to which would also have yielded a reasonable randomness notion for left-c.e. semi-measures. Unfortunately, we find that the answer is negative, except in some special cases.
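Concretely, the super-additivity relaxation mentioned above amounts to the following conditions on a function \rho from binary strings to [0,1] (a standard formulation, stated here in my notation):

```latex
% A (continuous) semi-measure \rho on 2^{<\omega} satisfies
\[
  \rho(\varepsilon) \leq 1
  \qquad \text{and} \qquad
  \rho(\sigma) \;\geq\; \rho(\sigma 0) + \rho(\sigma 1)
  \quad \text{for all strings } \sigma,
\]
% whereas a probability measure requires \rho(\varepsilon) = 1 and
% equality in the second condition. The "mass deficit" in a semi-measure
% reflects the possibility that a Turing functional fails to produce
% further output bits on some oracles.
```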

• Algorithmically random functions and effective capacities (with Doug Cenzer). Theory and Applications of Models of Computation, 12th Annual Conference, TAMC 2015, Singapore, May 18-20, 2015, Proceedings (Lecture Notes in Computer Science), pp. 23-37, 2015.

We continue the investigation of algorithmically random functions and closed sets, and in particular the connection with the notion of capacity. We study notions of random continuous functions given in terms of a family of computable measures called symmetric Bernoulli measures. We isolate one particular class of random functions that we refer to as online random functions F, where the value of y(n) for y=F(x) may be computed from the values of x(0),\dotsc,x(n). We show that online random functions are neither onto nor one-to-one. We give a necessary condition on the members of the ranges of online random functions in terms of initial segment complexity and the associated computable capacity. Lastly, we introduce the notion of an online partial Martin-Löf random function on 2^\omega and give a family of online partial random functions the ranges of which are precisely the random closed sets introduced by Barmpalias, Brodhead, Cenzer, Dashti, and Weber.

• Demuth's path to algorithmic randomness (with André Nies and Antonín Kučera). The Bulletin of Symbolic Logic, 21(3), September 2015, pp. 270-305.

Osvald Demuth (1936-1988) studied constructive analysis from the viewpoint of the Russian school of constructive mathematics. In the course of his work he introduced various notions of effective null set which, when phrased in classical language, yield a number of major algorithmic randomness notions. In addition, he proved several results connecting constructive analysis and randomness that were rediscovered only much later.
In this paper, we trace the path that took Demuth from his constructivist roots to his deep and innovative work on the interactions between constructive analysis, algorithmic randomness, and computability theory. We will focus specifically on (i) Demuth's work on the differentiability of Markov computable functions and his study of constructive versions of the Denjoy alternative, (ii) Demuth's independent discovery of the main notions of algorithmic randomness, as well as the development of Demuth randomness, and (iii) the interactions of truth-table reducibility, algorithmic randomness, and semigenericity in Demuth's work. 

• Trivial measures are not so trivial. Theory of Computing Systems, Volume 56, Issue 3, April 2015.

Although algorithmic randomness with respect to various non-uniform computable measures is well-studied, little attention has been paid to algorithmic randomness with respect to computable trivial measures, where a measure \mu on 2^\omega is trivial if the support of \mu consists of a countable collection of sequences. In this article, it is shown that there is much more structure to trivial computable measures than has been previously suspected.
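For instance (an illustrative example of my own, not drawn from the paper), the simplest trivial computable measure is a Dirac measure concentrated on a single computable sequence X:

```latex
% \delta_X assigns to each cylinder [\sigma] measure 1 or 0:
\[
  \delta_X([\sigma]) =
  \begin{cases}
    1 & \text{if } \sigma \prec X,\\
    0 & \text{otherwise,}
  \end{cases}
\]
% and, more generally, \mu = \sum_{i} 2^{-(i+1)} \delta_{X_i} is a trivial
% computable measure whenever (X_i) is a uniformly computable sequence
% of sequences; its support is the countable set \{X_0, X_1, \dotsc\}.
```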

• Kolmogorov on the role of randomness in probability theory. Mathematical Structures in Computer Science, invited publication in special issue, “Developments of the concepts of randomness, statistic, and probability,” 24, June 2014.

In this article, I discuss the extent to which Kolmogorov drew upon von Mises' work in addressing the problem as to why probability is applicable to events in the real world, which I refer to as the applicability problem. In particular, I highlight the role of randomness in Kolmogorov's account, and I argue that this role differs significantly from the role that randomness plays in von Mises' account.

• Strong reductions in effective randomness (with Laurent Bienvenu). Theoretical Computer Science, Vol. 459, November 2012, pp. 55-68.

We study generalizations of Demuth's Theorem, which states that the image of a Martin-Löf random real under a truth-table reduction is either computable or Turing equivalent to a Martin-Löf random real. We show that Demuth's Theorem holds for Schnorr randomness and computable randomness (answering a question of Franklin), but that it cannot be strengthened by replacing the Turing equivalence in the statement of the theorem with wtt-equivalence. We also provide some additional results about the Turing and tt-degrees of reals that are random with respect to some computable measure.


Here is a list of projects that are currently in progress:

• The interplay between Demuth randomness and effective genericity (with Laurent Bienvenu).

• The V'yugin algebra of non-negligible classes (with Rupert Hölzl).

• Random members of a \Pi^0_1 class (with Douglas Cenzer).

• Rank and randomness (with Douglas Cenzer and Rupert Hölzl).


You can read my dissertation here:

Mathematical and philosophical perspectives on algorithmic randomness. Doctoral dissertation, University of Notre Dame, 2012.

The mathematical portion of my dissertation is a study of the various interactions between definitions of algorithmic randomness, Turing functionals, and non-uniform probability measures on Cantor space. Chapters 1 and 2 introduce the main results and the relevant background for the remaining chapters. In Chapter 3, I examine the connection between Turing functionals and a number of different definitions of randomness, culminating in a number of characterizations of these definitions of randomness in terms of a priori complexity, a notion of initial segment complexity given in terms of Turing functionals. In Chapter 4, I investigate possible generalizations of Demuth’s Theorem, an important theorem in algorithmic randomness concerning the behavior of random sequences under truth-table reducibility. One technique developed in this chapter, that of inducing a probability measure by means of a special type of truth-table functional that I call a tally functional, proves to be very useful. I use this technique to study randomness with respect to trivial computable measures in both Chapters 5 and 6.

In the philosophical portion of my dissertation, I consider the problem of producing a correct definition of randomness, as introduced in Chapter 7. Some have claimed that one definition of randomness in particular, Martin-Löf randomness, captures the so-called intuitive conception of randomness, a claim known as the Martin-Löf-Chaitin Thesis. However, others have offered alternative definitions as capturing our intuitions of randomness, giving rise to the problem of determining which, if any, of these definitions is correct.

Prior to evaluating the Martin-Löf-Chaitin Thesis and related randomness-theoretic theses, the historical Chapters 8 and 9 discuss two roles of definitions of randomness, both of which motivated much early work in the development of algorithmic randomness: the resolutory role of randomness (as part of Richard von Mises’ frequentist approach to probability), which is successfully filled by a definition of randomness that allows for the solution of problems in von Mises’ theory of probability, and the exemplary role of randomness (identified by the probability theorist Jean Ville), which is successfully filled by a definition of randomness that counts as random those sequences that exemplify the properties typically held by sequences chosen at random.

In Chapter 10, I lay out the status of the Martin-Löf-Chaitin Thesis, discussing the evidence that has been offered in support of it, as well as the arguments that have been raised against it. In Chapter 11, I argue that the advocate of a claim like the Martin-Löf-Chaitin Thesis faces what I call the Justificatory Challenge: she must present a precise account of the so-called intuitive conception of randomness, so as to justify the claim that her preferred definition of randomness is the correct one while blocking the claim of correctness made on behalf of alternative definitions of randomness. Lastly, in Chapter 12, I present two further roles for definitions of randomness to play (beyond those discussed in Chapters 8 and 9), which I call the calibrative role of randomness and the limitative role of randomness, both of which can be successfully filled by multiple definitions of randomness. Definitions filling the calibrative role allow us to calibrate the level of randomness necessary and sufficient for certain “almost-everywhere” results in classical mathematics to hold, while definitions filling the limitative role illuminate a phenomenon known as the indefinite contractibility of the notion of randomness. Moreover, I argue that in light of the fact that many definitions can successfully fill these two roles, we should accept what I call the No-Thesis Thesis: there is no definition of randomness that (i) yields a well-defined, definite collection of random sequences and (ii) captures everything that mathematicians have taken to be significant about the concept of randomness.