Entropy and Uncertainty

Published online by Cambridge University Press: 01 April 2022

Teddy Seidenfeld*
Affiliation:
Department of Philosophy, Carnegie-Mellon University

Abstract

This essay is, primarily, a discussion of four results about the principle of maximizing entropy (MAXENT) and its connections with Bayesian theory. Result 1 provides a restricted equivalence between the two: where the Bayesian model for MAXENT inference uses an “a priori” probability that is uniform, and where all MAXENT constraints are limited to 0–1 expectations for simple indicator variables. The other three results report on an inability to extend the equivalence beyond these specialized constraints. Result 2 establishes a sensitivity of MAXENT inference to the choice of the algebra of possibilities, even though all empirical constraints imposed on the MAXENT solution are satisfied in each measure space considered. The resulting MAXENT distribution is not invariant over the choice of measure space. Thus, old and familiar problems with the Laplacian principle of Insufficient Reason also plague MAXENT theory. Result 3 builds upon the findings of Friedman and Shimony (1971; 1973) and demonstrates the absence of an exchangeable, Bayesian model for predictive MAXENT distributions when the MAXENT constraints are interpreted according to Jaynes's (1978) prescription for his (1963) Brandeis Dice problem. Lastly, Result 4 generalizes the Friedman and Shimony objection to cross-entropy (Kullback-information) shifts subject to a constraint of a new odds-ratio for two disjoint events.
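To fix ideas, consider the optimization at stake in Jaynes's (1963) Brandeis Dice problem: maximize the Shannon entropy −Σᵢ pᵢ log pᵢ over distributions on the six faces of a die, subject to the constraint that the expected face value equal 4.5 rather than the fair value 3.5. The maximizer takes the exponential form pᵢ ∝ exp(λi), with λ chosen to satisfy the constraint. The Python sketch below is an illustration of this standard prescription, not code from the paper; it recovers Jaynes's solution by bisection on λ.

```python
# A minimal sketch of the MAXENT solution to Jaynes's (1963) Brandeis Dice
# problem: among all distributions p on faces 1..6 with mean 4.5, find the
# one maximizing Shannon entropy. The maximizer has the exponential form
# p_i = exp(lam * i) / Z(lam); we solve for lam by bisection.
import math

FACES = range(1, 7)
TARGET_MEAN = 4.5  # Jaynes's constraint: the reported average of many rolls


def maxent_dist(lam):
    """Exponential-family distribution with p_i proportional to exp(lam * i)."""
    weights = [math.exp(lam * i) for i in FACES]
    z = sum(weights)  # normalizing constant Z(lam)
    return [w / z for w in weights]


def mean(p):
    """Expected face value under distribution p."""
    return sum(i * p_i for i, p_i in zip(FACES, p))


# mean(maxent_dist(lam)) is strictly increasing in lam, so bisection applies.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean(maxent_dist(mid)) < TARGET_MEAN:
        lo = mid
    else:
        hi = mid

p = maxent_dist((lo + hi) / 2)
print([round(p_i, 4) for p_i in p])  # ≈ [0.0544, 0.0788, 0.1142, 0.1654, 0.2398, 0.3475]
print(round(mean(p), 4))             # 4.5
```

As a sanity check, setting the target mean to 3.5 drives λ to 0 and returns the uniform distribution, the MAXENT solution when the constraint has no effective bite. The results discussed in the essay concern whether such MAXENT distributions can also be obtained as Bayesian (e.g., exchangeable) predictive distributions.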

Type
Research Article
Copyright
Copyright © The Philosophy of Science Association 1986

Footnotes

I thank J. Kadane, I. Levi, and A. Shimony for their detailed, constructive comments on an earlier draft of this paper. I have also benefited from discussions with A. Denzau, C. Genest, P. Gibbons, E. Greenberg, E. Jaynes, M. Schervish, G. Tsebelis, B. Wise, the members of the Philosophy Department Colloquium at Carnegie-Mellon University, and other helpful critics at the 29th NBER-NSF Seminar on Bayesian Inference in Economics.

Support for this research came from the Department of Preventive Medicine, Washington University (St. Louis), and N.S.F. Grant #SES-8607300.

References

Burbea, J., and Rao, C. R. (1982), “On the Convexity of Some Divergence Measures Based on Entropy Functions”, IEEE Transactions on Information Theory IT-28 (3): 489–95.
Carnap, R. (1952), The Continuum of Inductive Methods. Chicago: University of Chicago Press.
Courant, R., and Hilbert, D. (1963), Methods of Mathematical Physics. Vol. 1. New York: Interscience Publishers.
Dawid, A. P.; Stone, M.; and Zidek, J. V. (1973), “Marginalization Paradoxes in Bayesian and Structural Inference”, Journal of the Royal Statistical Society Series B 35: 189–233 (with discussion).
Denzau, A. T.; Gibbons, P. C.; and Greenberg, E. (1984), “Bayesian Estimation of Proportions with an Entropy Prior”, Department of Economics, Washington University, St. Louis.
Dias, P. M., and Shimony, A. (1981), “A Critique of Jaynes' Maximum Entropy Principle”, Advances in Applied Mathematics 2: 172–211.
Fisher, R. A. (1973), Statistical Methods and Scientific Inference. Third Edition. New York: Hafner.
Frieden, B. R. (1972), “Restoring with Maximum Likelihood and Maximum Entropy”, Journal of the Optical Society of America 62: 511–18.
Frieden, B. R. (1984), “Dice, Entropy and Likelihood”, Optical Sciences Center, University of Arizona, Tucson, Arizona.
Friedman, K., and Shimony, A. (1971), “Jaynes's Maximum Entropy Prescription and Probability Theory”, Journal of Statistical Physics 3: 381–84.
Good, I. J. (1971), “46656 Varieties of Bayesians”, American Statistician 25: 62–63. (Reprinted in Good Thinking, Minneapolis: University of Minnesota Press, 1983.)
Hobson, A. (1971), Concepts in Statistical Mechanics. New York: Gordon and Breach.
Hobson, A., and Cheng, Bin-Kang (1973), “A Comparison of the Shannon and Kullback Information Measures”, Journal of Statistical Physics 7: 301–10.
Jaynes, E. T. (1957), “Information Theory and Statistical Mechanics, I and II”, Physical Review 106: 620–30; 108: 171–90. (Reprinted in Jaynes 1983.)
Jaynes, E. T. (1963), “Information Theory and Statistical Mechanics”, 1962 Brandeis Summer Institute in Theoretical Physics, Ford, K. (ed.). New York: Benjamin. (Reprinted in Jaynes 1983.)
Jaynes, E. T. (1978), “Where Do We Stand on Maximum Entropy?”, in The Maximum Entropy Formalism, Levine, R. D. and Tribus, M. (eds.). Cambridge: The MIT Press. (Reprinted in Jaynes 1983.)
Jaynes, E. T. (1979), “Concentration of Distributions at Entropy Maxima”, in Jaynes 1983.
Jaynes, E. T. (1980), “Marginalization and Prior Probabilities”, in Bayesian Analysis in Econometrics and Statistics, Zellner, A. (ed.). Amsterdam: North-Holland. (Reprinted in Jaynes 1983.)
Jaynes, E. T. (1981), “What Is the Question?”, in Bayesian Statistics, Bernardo, J. M. et al. (eds.). Valencia, Spain: University of Valencia Press. (Reprinted in Jaynes 1983.)
Jaynes, E. T. (1983), Papers on Probability, Statistics, and Statistical Physics. Rosenkrantz, R. (ed.). Dordrecht: D. Reidel Publishing.
Jaynes, E. T. (1983), “Highly Informative Priors”, in Proceedings of the Second Valencia International Meeting on Bayesian Statistics, J. M. Bernardo et al. (eds.).
Jeffreys, H. (1961), Theory of Probability. Third Edition. Oxford: Oxford University Press.
Kadane, J.; Schervish, M.; and Seidenfeld, T. (1986), “Statistical Implications of Finitely Additive Probability”, in Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti, Goel, P. K. and Zellner, A. (eds.). Amsterdam: North-Holland.
Kullback, S. (1959), Information Theory and Statistics. New York: Wiley.
Levi, I. (1981), “Direct Inference and Confirmational Conditionalization”, Philosophy of Science 48: 532–52.
Manski, C. F., and McFadden, D. (eds.) (1981), Structural Analysis of Discrete Data with Econometric Applications. Cambridge: The MIT Press.
Rosenkrantz, R. (1977), Inference, Method and Decision. Dordrecht: D. Reidel Publishing.
Rowlinson, J. (1970), “Probability, Information, and Entropy”, Nature 225: 1196–98.
Seidenfeld, T. (1979), “Why I Am Not an Objective Bayesian”, Theory and Decision 11: 413–40.
Shannon, C. (1948), “A Mathematical Theory of Communication”, Bell System Technical Journal 27: 379–423, 623–56.
Shimony, A. (1973), “Comment on the Interpretation of Inductive Probabilities”, Journal of Statistical Physics 9: 187–91.
Shore, J., and Johnson, R. (1980), “Axiomatic Derivation of the Principle of Maximum Entropy and the Principle of Minimum Cross-Entropy”, IEEE Transactions on Information Theory IT-26 (1): 26–37.
Shore, J., and Johnson, R. (1981), “Properties of Cross-Entropy Minimization”, IEEE Transactions on Information Theory IT-27 (4): 472–82.
Sudderth, W. (1980), “Finitely Additive Priors, Coherence and the Marginalization Paradox”, Journal of the Royal Statistical Society B 42: 339–41.
Tribus, M., and Rossi, R. (1973), “On the Kullback Information Measure as a Basis for Information Theory: Comments on a Proposal by Hobson and Chang”, Journal of Statistical Physics 9: 331–38.
van Fraassen, B. (1981), “A Problem for Relative Information Minimizers in Probability Kinematics”, British Journal for the Philosophy of Science 32: 375–79.
Wiener, N. (1948), Cybernetics. New York: Wiley.
Williams, P. M. (1980), “Bayesian Conditionalisation and the Principle of Minimum Information”, British Journal for the Philosophy of Science 31: 131–44.