
Aspects of Truth: Statistics, Bias, and Confounding

Published online by Cambridge University Press:  21 June 2016

Leon F. Burmeister, Sam Sheps*, and David Birnbaum
Affiliation: Department of Health Care and Epidemiology, The University of British Columbia, Vancouver, British Columbia
*Correspondence: Department of Health Care and Epidemiology, 5804 Fairview Ave., Vancouver, British Columbia, V6T 1Z3

Extract

Conclusions drawn from research can be true, can be distortions resulting from bias or confounding, or can be untrue because of random chance. The essential role of epidemiology is to provide an approach to the appraisal of data concerning health and disease in order to discern the truth.
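The distortion that confounding can produce is easy to demonstrate with simulated data. The following is a minimal sketch, not taken from the article: all variable names, prevalences, and risks are hypothetical assumptions chosen only to show how a factor associated with both exposure and outcome can create a spurious crude association that disappears on stratification.

```python
# Hypothetical illustration of confounding (assumed names and rates).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder: prolonged hospital stay (assumed 40% of patients).
long_stay = rng.random(n) < 0.40

# Exposure: device use is more common among long-stay patients,
# but by construction has no effect on infection risk.
device = rng.random(n) < np.where(long_stay, 0.70, 0.20)

# Outcome: infection risk depends only on length of stay.
infection = rng.random(n) < np.where(long_stay, 0.15, 0.03)

def risk(mask):
    """Infection risk among patients selected by the boolean mask."""
    return infection[mask].mean()

# Crude comparison suggests device use roughly doubles infection risk...
print(f"crude risk ratio: {risk(device) / risk(~device):.2f}")

# ...but within each stratum of the confounder the risk ratio is about 1.
for label, stratum in [("long stay", long_stay), ("short stay", ~long_stay)]:
    rr = risk(device & stratum) / risk(~device & stratum)
    print(f"{label} risk ratio: {rr:.2f}")
```

Run as written, the crude risk ratio is close to 2, while the stratum-specific risk ratios are close to 1, which is the signature of confounding rather than a true effect of the exposure.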

The first article in this series discussed issues of validity and reliability in relation to diagnostic testing. A second article considered statistical concepts in the special case of inferring that a null hypothesis is true. This article will consider related aspects of research design, analysis, and the logic of scientific thought, while the next article in the series will discuss the role of confidence intervals in the analysis and interpretation of data.

Type: Statistics for Hospital Epidemiology
Copyright © The Society for Healthcare Epidemiology of America 1992


References

1. Abramson JH. Making Sense of Data. New York, NY: Oxford University Press; 1988.
2. Birnbaum D, Sheps SB. Validation of new tests. Infect Control Hosp Epidemiol. 1991;12:622.
3. Burmeister LF. Proving the null hypothesis. Infect Control Hosp Epidemiol. 1992;13:55-57.
4. Machin D, Campbell MJ. Statistical Tables for the Design of Clinical Trials. Oxford, England: Blackwell Scientific Publications; 1987.
5. Lachin JM. Introduction to sample size determination and power analysis for clinical trials. Controlled Clinical Trials. 1981;2:93-113.
6. Hierholzer WJ Jr. Health care data, the epidemiologist's sand: comments on the quantity and quality of data. Am J Med. 1991;91(suppl 3B):21S.
7. Sackett DL. Bias in analytic research. J Chron Dis. 1979;32:51.
8. Freeman J, Goldmann DA, McGowan JE Jr. Confounding and the analysis of multiple variables in hospital epidemiology. Infect Control Hosp Epidemiol. 1987;8:465.
9. Hanley JA, Lippman-Hand A. If nothing goes wrong, is everything all right? Interpreting zero numerators. JAMA. 1983;249:1743.
10. Rothman KJ. Modern Epidemiology. Boston, Mass: Little, Brown and Co.; 1986:115-125.
11. Gehan EA. The determination of the number of patients required in a preliminary and follow-up trial of a new chemotherapeutic agent. J Chron Dis. 1961;13:346.