
A Note on Parsimony

Published online by Cambridge University Press:  14 March 2022

Everett J. Nelson*
Affiliation: University of Washington, Seattle, Washington

Extract

In this paper I wish to offer a suggestion in support of the thesis that if a given set of facts is explained by two rival explanations A and B, where A consists of a single hypothesis H1, and B consists of at least two independent hypotheses H2 and H3, then, other things being equal, A is more probable than B. That this view is true is seldom questioned, though I have never seen any reason given for it, which would justify the methodological value so many philosophers attribute to it. It is not my purpose to discuss simplicity of explanation in general, but simply to point out that the Multiplicative Axiom of the calculus of probabilities justifies the above thesis. I shall call this thesis the Principle of Parsimony.
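The step appealed to here can be written out in the slash notation used in the notes below; the inequality chain is a reconstruction of the argument under the stated assumptions of independence and "other things being equal," not a quotation from the paper:

```latex
% Explanation B conjoins two independent hypotheses H2 and H3.
% By the Multiplicative Axiom (note 1), relative to the evidence h:
\[
\frac{H_2 H_3}{h} \;=\; \frac{H_2}{H_3 h}\cdot\frac{H_3}{h}
\;=\; \frac{H_2}{h}\cdot\frac{H_3}{h}
\quad\text{(independence, note 4)} ,
\]
% and since each factor is at most 1, the conjunction is no more probable
% than either hypothesis taken alone:
\[
\frac{H_2 H_3}{h} \;\le\; \frac{H_2}{h} ,
\qquad
\frac{H_2 H_3}{h} \;\le\; \frac{H_3}{h} .
\]
% Other things being equal -- H1/h comparable to H2/h and H3/h taken
% singly -- the single-hypothesis explanation A is therefore at least as
% probable as B.
```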

Type: Research Article
Copyright © Philosophy of Science Association 1936


References

1 J. M. Keynes, in A Treatise on Probability, p. 210, states this axiom as follows: ab/h = a/bh · b/h = b/ah · a/h, which may be read this way: the probability that the conjunctive proposition ab is true, relative to evidence h, equals the probability of a on evidence "b and h" multiplied by the probability of b on evidence h, and likewise with a and b interchanged.
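For concreteness, a worked instance of the axiom with illustrative numbers of my own (not Keynes's):

```latex
% Suppose, on evidence h:  a/h = 0.5,  b/h = 0.4,  ab/h = 0.2.
% Then  a/bh = 0.2 / 0.4 = 0.5  and  b/ah = 0.2 / 0.5 = 0.4,  and indeed
\[
\frac{ab}{h} \;=\; \frac{a}{bh}\cdot\frac{b}{h} \;=\; 0.5 \times 0.4
\;=\; 0.4 \times 0.5 \;=\; \frac{b}{ah}\cdot\frac{a}{h} \;=\; 0.2 .
\]
```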

2 Cf. William Savery, "Chance and Cosmogony," Philosophical Review, March 1932. This is one of the few attempts to state and justify this principle. I am quite aware that the word "parsimony" is often ambiguously applied to various types of simplicity. It is of course incumbent on those giving it another meaning to define it precisely, and if they attribute methodological significance to it, to validate it.

3 In general terms, p1, p2, …, pn are constituents of one hypothesis only if, for every constituent p, there is another constituent or a combination of other constituents q such that p/qh is greater than p/h.

4 In general terms, p1 is independent of the set p2, p3, …, pn only if p1 is independent of every one of these latter propositions singly and of every combination of them.
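Notes 3 and 4 may be put symbolically as follows; this is a formalization of the wording above, not the author's own notation:

```latex
% Note 3 (constituents of one hypothesis): for every constituent p there is
% some other constituent, or conjunction of other constituents, q with
\[
\frac{p}{qh} \;>\; \frac{p}{h} .
\]
% Note 4 (independence): p1 is independent of the set p2, ..., pn only if,
% for every q that is one of p2, ..., pn or a conjunction of them,
\[
\frac{p_1}{qh} \;=\; \frac{p_1}{h} .
\]
```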

5 A comparatively simple illustration is found in a circumstantial evidence case in court, in which the prosecution attempts to explain all the facts by an hypothesis connecting them with one individual, the accused, while the defense explains some of the facts by one hypothesis, and other facts by another hypothesis independent of the first.

6 We may make this conclusion more general by saying that of two rival hypotheses, one will be more probable than the other if the propositions p1, q1 of the first, and the propositions p2, q2 of the second, are such that p1/q1h is greater than p2/q2h, other things being equal. This can of course be further generalized for any number of rival hypotheses consisting of any number of propositions.

If no two meaningful hypotheses be completely independent, because of some principle of induction (which is needed to justify all probable judgments), then this general formulation, and not the specific one in the text, would be applicable.
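A compact rendering of the comparison in note 6, reading "other things being equal" as requiring the remaining factors to match (my gloss, not the author's):

```latex
% By the Multiplicative Axiom, for each rival hypothesis:
\[
\frac{p_1 q_1}{h} \;=\; \frac{p_1}{q_1 h}\cdot\frac{q_1}{h}
\qquad\text{and}\qquad
\frac{p_2 q_2}{h} \;=\; \frac{p_2}{q_2 h}\cdot\frac{q_2}{h} ,
\]
% so if  p1/q1h > p2/q2h  while  q1/h = q2/h  (other things being equal),
% the first hypothesis is the more probable of the two.
```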