It has been almost two decades since John Norton first introduced his material theory of induction in this journal. His new book, the first volume in the new open-access series of the British Society for the Philosophy of Science, summarizes and extends much of the work he has published on the topic since.
What this book does not offer is a solution to Hume’s problem of induction. In the book’s epilogue, Norton briefly sketches the “material dissolution” he proposed in earlier work but defers further elaboration to the future. He urges the reader first to engage with the central claim of the current book: that inductive inferences are warranted, not by formal schemas or rules, but by background facts.
The first two chapters set out the basis for this claim. Chapter 1 introduces the distinction between the material approach and formal approaches to induction and presents in some detail a historical illustration (Marie Skłodowska-Curie’s inference to the general crystallographic form of radium). Norton discusses how the material theory, but no formal theory, provides a successful analysis of the warrant for this inference. This is already an instantiation of the first two of three arguments for the material theory that are systematically presented in chapter 2: the failure of all attempts at a universally applicable formal theory of induction and the successful accommodation of exemplars of good inductive inferences by the material theory.
The third, more “foundational” argument further aims to clarify why the material theory is right: because inductive inference is powered by local facts, a claim that is established in two steps. First, all inductive inference is ampliative, so that each such inference must fail in some “inhospitable” world, and the warrant for a successful inference lies in the factual circumstance of a hospitable world. Second, there is no such universally applicable warranting fact (no meaningful “principle of the uniformity of nature”), so that we cannot after all recover a universal scheme by relying on one such universal fact.
The remainder of the book is divided into two parts. The first part substantiates the first two arguments by showing the failure of formal schemes and the success of a material analysis for various types of inductive argument. Chapter 3 does so for replicability of experiments, chapter 4 for analogical reasoning, and chapters 8 and 9 for inference to the best explanation. Chapters 6 and 7 argue that a preference for simplicity in induction is a manifestation of background facts, and chapter 5 reasons that there is no role for epistemic values other than as surrogates for background facts.
The second part of the book is fully devoted to what Norton views as the main attempt at a universal formal scheme of induction, the Bayesian approach. Chapter 10 lays out Norton’s case against its universality, which consists of two components. The first is a rehearsal of the foundational argument of chapter 2 that inductive inferences are powered by local facts. The lesson Norton draws is that Bayesianism or probabilism, like any “calculus of induction,” must have boundaries on its domain of application or warrant. As examples of what lies outside these inevitable boundaries, Norton exhibits various relations of inductive support that cannot be represented probabilistically. These include relations of “completely neutral support” or indifference and, in dedicated chapters, the objective support relations in infinite lotteries (chapter 13), in problems with continuum outcome spaces and with outcomes corresponding to nonmeasurable sets (chapter 14), in indeterministic systems (chapter 15), and in quantum systems (chapter 16).
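To get a feel for the kind of case at issue, consider the infinite lottery (the following is the standard puzzle, not a reconstruction of Norton’s own chapter): a lottery over countably many tickets in which, intuitively, every ticket enjoys exactly the same support. No countably additive probability function can represent this symmetry, for if each ticket is assigned the same value p, then

\[
\sum_{n=1}^{\infty} P(\text{ticket } n) \;=\;
\begin{cases}
\infty & \text{if } p > 0, \\
0 & \text{if } p = 0,
\end{cases}
\]

and in neither case does the total come to 1, as normalization requires. The symmetric support relation of the fair infinite lottery thus resists probabilistic representation, illustrating the kind of boundary at issue in chapter 13.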
The second component of Norton’s case against Bayesianism is a rebuttal of “proofs offered in the literature as demonstrating the necessity of probabilities,” that is, arguments for probabilism. The general counterargument is that all such proofs must be circular, because they are deductive arguments for a contingent conclusion (the necessity of probabilities) and hence must contain premises at least as strong. Norton goes on to exhibit this circularity for the Dutch book argument; the axiomatic approaches by Cox and Jaynes; and, in chapter 11, contemporary accuracy arguments. A final objection that Norton raises against Bayesianism is the arbitrariness introduced by the choice of subjective priors. Norton diagnoses this as a form of “incompleteness” of the calculus: priors must introduce inductive content that goes beyond the evidence. In chapter 12, he presents an incompleteness theorem showing that no formal calculus satisfying a property of stability under disjunctive refinements can be neutral in this sense.
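To see the circularity charge at work in the simplest case, recall the textbook Dutch book construction (a standard illustration, not an excerpt from the book). Suppose an agent assigns credence 0.6 to a proposition A and also 0.6 to its negation, violating additivity. If credences are treated as fair betting prices, she will pay 0.6 for a bet that returns 1 if A is true and 0.6 for a bet that returns 1 if A is false. Exactly one bet pays out, whatever happens, so she is certain to lose

\[
(0.6 + 0.6) - 1 = 0.2 .
\]

The argument concludes that only probabilistically coherent credences are immune to such sure losses; on Norton’s counterargument, the contingent inductive force of that conclusion must already be built into the premises, presumably those that tie credences to fair betting prices.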
The book’s project is thus to a significant extent a negative one. The material theory is what remains once all (universality-aspiring) formal approaches fail. However, the more principled arguments that Norton offers for the necessary inadequacy of formal approaches are also the least persuasive. For instance, the particular cases of objective support relations that escape probabilistic representation are interesting and should give pause to anyone with hope for a universal probabilism in this sense. But it is unclear how these cases illustrate the foundational argument of chapter 2, which shows a necessary restriction, not in the kind of inductive problems to which any particular formal approach is at all applicable, but in the circumstances in which any particular inductive inference is successful or warranted. More important, this foundational argument only clearly applies to a picture of formal approaches to which few would still subscribe, namely, as aspiring to some universally applicable extrapolation rule. (Arguments against this aspiration are indeed a staple of the literature both in philosophy [e.g., van Fraassen 1989, chap. 6] and in computer science [e.g., the “no-free-lunch theorems”].)
It is, in particular, hard to recognize the picture that Norton is attacking in the modern subjective Bayesian philosophy of science literature. That some problems escape a Bayesian treatment is perfectly consonant with the eclectic outlook presented in the book by Sprenger and Hartmann (2019). Even if this constitutes a challenge for more monotheistic accounts, such as Howson and Urbach’s (2006), the foundational point that each induction must rely on factual assumptions is here, as elsewhere, simply acknowledged and embraced. Howson (2000) gives a book-length treatment of the need for inductive assumptions that are codified in the prior: exactly Norton’s point that the prior must introduce inductive content. The picture of subjective Bayesianism that arises is of a general formal approach that for each specific application requires domain-specific input. It is unclear why, in the Bayesian case, Norton must reject such input out of hand as “merely expressions of prejudice,” rather than welcoming this requirement as allowing us to bring the relevant facts into the analysis. Of course, one might have good reasons to think that such a Bayesian analysis is unhelpful or ill motivated (and here Norton’s critique of the various arguments for probabilism is more pertinent), but these go beyond the idea that Bayesianism, qua formal theory, must be blind to material facts.
In general, modern formal approaches to inductive inference explicitly make room for context-dependent inductive assumptions. This in itself does not infringe on their formal character or generality. For instance, the standard theoretical framework for machine learning methods, statistical learning theory, provides general formal success guarantees for certain learning algorithms, that is, for formal inductive methods (see Harman and Kulkarni 2006). Like Bayesian methods, these algorithms need, on each application, to be provided with context-dependent assumptions (here, a hypothesis class), and the relevant success guarantees are likewise relative to these assumptions. But the guarantees are still general in that we can instantiate them with a large class of possible hypothesis classes. This is not to say that the theory is universal in scope: everything further relies on the assumption that the data are independent and identically distributed (i.i.d.) samples from some distribution. All of this means that, in line with Norton, any application of the algorithm will give good results (it is in this sense warranted) only if the particular inductive problem in fact matches both the particular inductive assumptions and the i.i.d. assumption. But the point is that this in no way excludes a general formal theory that justifies the use of certain general learning algorithms and not others. In contrast, on Norton’s outlook, the very existence of research in the mathematical theory of statistics and machine learning seems either a mystery or a case of mass delusion.
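To make the shape of such a guarantee concrete, here is a standard uniform convergence bound (a stock result of the theory, not an example rehearsed in the book): for a finite hypothesis class H and a sample S of m labeled examples drawn i.i.d. from a distribution D, Hoeffding’s inequality combined with a union bound gives, for any δ > 0,

\[
\Pr_{S \sim D^m}\!\left[\,\forall h \in H:\; \bigl|\mathrm{err}_D(h) - \mathrm{err}_S(h)\bigr| \le \sqrt{\frac{\ln|H| + \ln(2/\delta)}{2m}}\,\right] \;\ge\; 1 - \delta,
\]

where err_D(h) is the true error of hypothesis h and err_S(h) its error on the sample. The bound is formal and general, holding for any finite H and any distribution D; yet applying it to a particular problem requires the material judgment that H is a suitable class and that the data really are i.i.d. draws from some D, exactly the interplay of formal guarantees and context-dependent assumptions just described.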
The way forward is plausibly an account that can do justice both to the material and to the formal aspects of the warrant for inductive inferences, and indeed to further aspects like epistemic goals (cf. Steel 2005). Norton is (of course) right that no purely formal calculus cuts it. And Norton’s theory is certainly not a “Very Bad Thing that must be stopped,” as he worries some people think. The book’s in-depth focus on the factual aspects of induction makes it a unique and valuable contribution to the literature. Norton’s material analyses of the assumptions at play in various forms and instances of inductive reasoning, many on the basis of real episodes from the sciences, are important and stimulating reading for anyone concerned with any of these types of reasoning. But the book’s overall spirit that formal theories of induction are a bad thing that must be stopped is something this positive contribution could do without.
Acknowledgment
This work was supported by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) (project 437206810).