Prediction, Probability, and Pragmatics
Published online by Cambridge University Press: 01 January 2020
Extract
Along with such criteria as truth, comprehensiveness, explanatory adequacy, and simplicity (as well as others), philosophers of science usually also mention predictive accuracy as a criterion of theory choice. But while philosophers have devoted attention to the logical structure of scientific prediction, little attention seems to have been paid to the difficult question of what precisely constitutes predictive accuracy, at least ‘predictive accuracy’ in the sense in which I will discuss it here.
In this paper I will discuss the role of predictive accuracy in theory choice. But before that, I will address the problem of what constitutes predictive accuracy more generally, independently of its role in theory choice. I will approach the problem of predictive accuracy from a pragmatic point of view, and then try to assess the role of predictive accuracy in theory choice from that perspective.
- Type: Research Article
- Copyright © The Authors 2000
References
2 I will not assume, or otherwise consider, anything about the details of the mechanisms or procedures by which predictors come up with their predictions. For example, I will not consider the question of how a predictor may improve his or her or its performance by learning from past successes and failures. Instead, only probabilistic relations among predictions and the events predicted will be considered; these probabilistic relations may, for example, have already settled down as a result of a predictor's adjustments of method in the course of learning.
3 In what follows, if, say, ‘X’ and ‘Y’ are understood as denoting events, then expressions like ‘∼X’, ‘X & Y’, and ‘X iff Y’ should be understood as denoting, respectively, the event of X not happening, the event of both X and Y happening, and the event of either X & Y or ∼X & ∼Y happening. Also, relative frequency here can be understood either as actual relative frequency (if the concern is the actual predictive success of a predictor) or as hypothetical relative frequency in a hypothetical infinite population of predictions and events (if the concern has to do with the predictor's ability, which is a dispositional notion). One might wonder why I don't also allow a propensity interpretation. A propensity relation between two events is a (statistical) relation of cause and effect. But I want to consider the probabilistic relations between, on the one hand, events of predictions being made and, on the other hand, the events about which the predictor makes predictions. And the events of neither of these two kinds confer propensities upon events of the other kind to occur. The predictions do not cause the events predicted, nor do the latter cause the former. Rather, the causal structure of prediction involves common causes of things of these two kinds. See E. Eells, Rational Decision and Causality (New York and Cambridge, UK: Cambridge University Press 1982), 210–11. And the structure can be very complex, involving several possible causally prior states that confer differing propensities on the various events of the two kinds. So I believe that an analysis of predictive accuracy in terms of propensities would be possible, though it would have to be much more complex. Thus it is for simplicity that I adopt here a relative frequency conception. But if it is true that propensities determine the values of hypothetical relative frequencies (as J. Fetzer seems to suggest in Scientific Knowledge [Dordrecht: Reidel 1981], 111), a propensity interpretation would be allowable here. Also, a subjective interpretation of probability (as degrees of belief in the appropriate corresponding propositions) could be appropriate for an agent's assessment of a predictor's accuracy.
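To make the relative-frequency reading concrete, the following is a minimal sketch of how the relevant quantities might be computed from a predictor's finite track record. The data, the Boolean encoding, and the names are my own illustrative assumptions, not anything taken from the paper.

```python
# Illustrative sketch (assumed data, not from the paper): estimate the probabilistic
# relations between a predictor's predictions (P) and the events predicted (R) as
# relative frequencies over a finite track record of (prediction, outcome) pairs.

track_record = [
    (True, True), (True, False), (False, False), (True, True),
    (False, True), (False, False), (True, True), (False, False),
]  # each pair is (P: rain was predicted, R: it rained)

def rel_freq(pairs, event):
    """Relative frequency of an event, given as a Boolean function of a (P, R) pair."""
    return sum(1 for p, r in pairs if event(p, r)) / len(pairs)

pr_p_and_r = rel_freq(track_record, lambda p, r: p and r)            # Pr(P & R)
pr_p_iff_r = rel_freq(track_record, lambda p, r: p == r)             # Pr(P iff R)
pr_r_given_p = pr_p_and_r / rel_freq(track_record, lambda p, r: p)   # Pr(R/P)

print(pr_p_and_r, pr_p_iff_r, pr_r_given_p)
```

On the hypothetical relative-frequency reading, the same calculation is understood as applied to an infinite hypothetical population of such pairs rather than to an actual finite record.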
4 In what follows I will mainly be concerned with cases in which none of these probabilities is 0 or 1. But in the case of comparing two scientific theories using these quantities, one might urge that if Pr(P & ∼R) and Pr(∼P & R) aren't both equal to 0 for both theories, then both theories should simply be rejected. That is, if a theory, together with true statements of initial conditions and auxiliary assumptions, will sometimes imply a falsehood, then we should simply reject the theory. Two points should be made about this. First, it has of course been recognized for some time that theories aren't (some even say shouldn't be) rejected simply because of (rare) conflict between observational consequences and the facts; the problem may lie, for example, with the auxiliary assumptions. And second, theory comparison with respect to predictive accuracy will have to be carried out in terms of subjective (or inductive or epistemic) probability, the probabilities that are available to us, since we may not know the true relative frequencies; and surely a non-zero probability, in this sense, of P & ∼R or ∼P & R doesn't even imply that the relevant theory will ever conflict with the facts, let alone that we should reject the theory.
5 The distinction between pragmatic and epistemic concepts is not meant to coincide with a difference between goal-directedness and non-goal-directedness; rather, it is a question of the kinds of goals. I follow some rough customary philosophical usage in counting truth, comprehensiveness, explanatory value, and perhaps also simplicity as among epistemic goals, while pragmatic goals may include monetary profit, food, shelter, prestige, and so on. Examples involving each kind of goal will be discussed below.
6 This is a reflection, by the way, of the nonequivalence of two ways of characterizing the degree to which there is a probabilistic dependence between two events. Using events P and R in connection with some predictor as an example, two such ways are the differences Pr(R/P) − Pr(R/∼P) and Pr(P/R) − Pr(P/∼R) (and other functions of the relevant probabilities have been suggested).
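As a concrete illustration of this non-equivalence, the two differences already come apart in a simple case. The joint distribution below is my own choice of numbers, not an example from the paper.

```python
# Illustrative sketch (assumed joint distribution, not the paper's example):
# the two candidate measures of probabilistic dependence between a rain
# prediction P and the event of rain R need not agree.

joint = {
    ("P", "R"): 0.18, ("P", "~R"): 0.02,    # Pr(P & R), Pr(P & ~R)
    ("~P", "R"): 0.32, ("~P", "~R"): 0.48,  # Pr(~P & R), Pr(~P & ~R)
}

pr_p = joint[("P", "R")] + joint[("P", "~R")]   # Pr(P)
pr_r = joint[("P", "R")] + joint[("~P", "R")]   # Pr(R)

# First measure: Pr(R/P) - Pr(R/~P)
diff_1 = joint[("P", "R")] / pr_p - joint[("~P", "R")] / (1 - pr_p)

# Second measure: Pr(P/R) - Pr(P/~R)
diff_2 = joint[("P", "R")] / pr_r - joint[("P", "~R")] / (1 - pr_r)

print(diff_1, diff_2)  # approximately 0.5 versus 0.32 with these numbers
```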
7 With different initial values for the probability of rain and for the accuracy of the predictor in the first sense, the discrepancy between the two senses of accuracy can be even more dramatic. This example, by the way, doesn't show that the two criteria of accuracy are inconsistent. This will be discussed below. The point here is just that the two are not equivalent.
8 Isaac Levi showed me, in correspondence, a more extreme version of this example, which shows that satisfaction of the first criterion of predictive accuracy does not imply satisfaction of the second. And in his ‘Newcomb's Many Problems’ (Theory and Decision 6 [1975] 161–75), he argues that the prima facie paradoxicalness of Newcomb's ‘paradox’ depends on incorrectly construing predictive accuracy in such a way that the second suggested criterion characterizes it. See, however, E. Eells, ‘Newcomb's Many Solutions’ (Theory and Decision 10 [1984] 59–105), who argues that the prima facie paradoxicalness of Newcomb's paradox relies on no such thing, but only on the requirement that there be a positive correlation between the prediction and the event predicted.
9 That is, you should be indifferent if you knew all the probabilities (and, of course, your utilities). The probabilities here are assumed to be objective relative frequencies, which you may not know. The expected values here are objectively expected utilities, where the probabilities are objective. What's rational, many philosophers have argued, is to do the act that has the greatest subjective expected utility, where the probabilities are subjective, i.e., are your own assessments. But use of objectively expected utility is appropriate here, for a predictor is accurate to the degree to which following his advice increases the objective probability of the good outcomes accruing to you. The degree to which following his advice increases the subjective probability of this is the degree to which you believe him to be accurate. So subjective probabilities and subjectively expected utilities would be appropriate for an agent's assessment of the value of a predictor. It is also worth mentioning that the desirabilities should be thought of as subjective, i.e., as your own assessments, at least in so far as it's possible that you are mistaken about what the proper preparations for rain and for no rain are. This is to ensure that the desirability of preparing as you do for rain when it rains is greater than the desirability of not so preparing when it rains; and the same for when it doesn't rain.
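For concreteness, here is a minimal sketch of the kind of objective expected-utility comparison this note describes. The probabilities, desirabilities, act labels, and policies are my own assumptions for illustration, chosen only so that the desirability constraint stated above is respected.

```python
# Illustrative sketch (assumed numbers and names, not the paper's): objectively
# expected utility of acting on the predictor's advice versus ignoring it.

# Objective probabilities (relative frequencies) over predictions P and rain R.
pr = {("P", "R"): 0.18, ("P", "~R"): 0.02, ("~P", "R"): 0.32, ("~P", "~R"): 0.48}

# Desirabilities (your own assessments), respecting the constraint in the note:
# preparing for rain when it rains is better than not so preparing, and likewise
# preparing for no rain when it doesn't rain is better than preparing for rain then.
des = {
    ("prepare_for_rain", "R"): 10, ("prepare_for_rain", "~R"): 2,
    ("prepare_for_dry", "R"): 0, ("prepare_for_dry", "~R"): 12,
}

def expected_utility(policy):
    """Objectively expected utility of a policy mapping each prediction to an act."""
    return sum(p * des[(policy[pred], rain)] for (pred, rain), p in pr.items())

follow_advice = {"P": "prepare_for_rain", "~P": "prepare_for_dry"}
ignore_advice = {"P": "prepare_for_dry", "~P": "prepare_for_dry"}  # always act as if dry

print(expected_utility(follow_advice), expected_utility(ignore_advice))  # roughly 7.6 vs 6.0
```

With subjective probabilities and utilities substituted for the objective ones, the same comparison would express the agent's assessment of the predictor's value rather than the predictor's actual accuracy.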
10 This is I. Levi's rationale in Gambling with Truth (New York: Knopf 1967), 78.
11 These points are consistent with Levi's approach in Gambling with Truth to the idea of content, which uses uniform regular measures over his ‘ultimate partitions.’ And I can see no reason why, in our example of rain prediction, the set {R, ∼R} cannot be taken to be an ultimate partition. In his The Enterprise of Knowledge (Cambridge, MA: The MIT Press 1980), however, he no longer requires uniformity of these measures.
12 Note that this is a kind of exception to what I said in note 2 about the procedures used by, in this case, forecasters of chances. Seidenfeld (1985) makes a related point about calibration, turning it into a criticism of this measure of predictive performance.
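Since this note gestures at calibration as the relevant measure of a chance forecaster's performance, here is a minimal sketch of one common way a calibration check might be carried out. The data and the grouping procedure are my own illustrative assumptions, not Seidenfeld's formulation or the paper's.

```python
# Illustrative sketch (assumed data and binning; not Seidenfeld's formulation):
# a simple calibration check for a forecaster of chances. Roughly, the forecaster
# is well calibrated if, among the occasions on which a given chance of rain is
# announced, rain occurs with about that relative frequency.

from collections import defaultdict

forecasts = [  # (announced chance of rain, whether it rained)
    (0.2, False), (0.2, False), (0.2, True), (0.2, False), (0.2, False),
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
]

outcomes_by_announcement = defaultdict(list)
for chance, rained in forecasts:
    outcomes_by_announcement[chance].append(rained)

for chance, outcomes in sorted(outcomes_by_announcement.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"announced {chance:.1f}: observed relative frequency {observed:.2f}")
```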