
Statistical Dogma and the Logic of Significance Testing

Published online by Cambridge University Press:  01 April 2022

Stephen Spielman*
Affiliation:
Herbert H. Lehman College

Extract

In a recent note ([3]) Roger Carlson presented a rather negative appraisal of my treatment of the logic of Fisherian significance testing in [10]. The main issue between us is Carlson's thesis that, within the limits set by Fisher, standard significance tests are valuable tools of data analysis as they stand, i.e., without modification of the structure of the reasoning they employ. Call this the adequacy thesis. In my paper I argued that (i) the pattern of reasoning employed by tests of significance needs to be justified in spite of its unquestioned acceptance by most researchers; (ii) the best justification offered to date, that of R. A. Fisher, is seriously defective; and therefore (iii) the adequacy thesis is not justified. I proposed some alterations of the pattern of reasoning employed by tests of significance and argued that they guarantee that tests are adequate epistemological tools.
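(For orientation, the pattern of reasoning at issue is the familiar Fisherian one; the sketch that follows is a standard textbook rendering, not Spielman's or Carlson's own formulation. One observes a test statistic $t_{\mathrm{obs}}$ and computes the significance level
$$p = P(T \geq t_{\mathrm{obs}} \mid H_0),$$
the probability, under the null hypothesis $H_0$, of a result at least as extreme as the one observed. If $p$ is small, say $p \leq 0.05$, one reasons that either an exceptionally rare chance event has occurred or $H_0$ is false, and on that basis treats the data as evidence against $H_0$. The dispute concerns whether this inference, as it stands, is epistemologically sound.)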

Type
Discussion
Copyright
Copyright © Philosophy of Science Association 1978

References

[1] Berkson, J. "Tests of Significance Considered as Evidence." Journal of the American Statistical Association 37 (1942): 325–335. doi:10.1080/01621459.1942.10501760
[2] Carlson, R. Statistics. San Francisco: Holden-Day, 1973.
[3] Carlson, R. "Discussion: The Logic of Tests of Significance." Philosophy of Science 43 (1976): 116–128. doi:10.1086/288672
[4] Cramer, H. Mathematical Methods of Statistics. Princeton: Princeton University Press, 1946.
[5] Fishburn, P. Utility Theory for Decision Making. New York: Wiley, 1970. doi:10.21236/AD0708563
[6] Fisher, R. A. "Note on Dr. Berkson's Criticism of Tests of Significance." Journal of the American Statistical Association 38 (1943): 103–104. doi:10.1080/01621459.1943.10501783
[7] Lehmann, E. L. Testing Statistical Hypotheses. New York: Wiley, 1959.
[8] Neyman, J. Lectures and Conferences on Mathematical Statistics.
[9] Scheffe, H. "Statistical Inference in the Non-Parametric Case." Annals of Mathematical Statistics 14 (1943): 305–332. doi:10.1214/aoms/1177731355
[10] Spielman, S. "The Logic of Tests of Significance." Philosophy of Science 41 (1974): 211–225. doi:10.1086/288590
[11] Spielman, S. "A Refutation of the Neyman-Pearson Theory of Testing." British Journal for the Philosophy of Science 24 (1973): 201–222. doi:10.1093/bjps/24.3.201
[12] Spielman, S. "On the Infirmities of Gillies's Rule." British Journal for the Philosophy of Science 25 (1974): 261–265. doi:10.1093/bjps/25.3.261
[13] Sterling, T. D. "Publication Decisions and their Possible Effect on Inferences Drawn from Tests of Significance or Vice-Versa." Journal of the American Statistical Association 54 (1959): 30–34.