Book contents
- Frontmatter
- Contents
- Preface
- 1 An Introduction to Computer-intensive Methods
- 2 Maximum Likelihood
- 3 The Jackknife
- 4 The Bootstrap
- 5 Randomization and Monte Carlo Methods
- 6 Regression Methods
- 7 Bayesian Methods
- References
- Appendix A An Overview of S-PLUS Methods Used in this Book
- Appendix B Brief Description of S-PLUS Subroutines Used in this Book
- Appendix C S-PLUS Codes Cited in Text
- Appendix D Solutions to Exercises
- Index
3 - The Jackknife
Published online by Cambridge University Press: 09 December 2009
Summary
Introduction
The jackknife was invented by Quenouille (1949) as a means of eliminating bias in an estimate. Tukey (1958) suggested that Quenouille's method could also be used as a non-parametric means of estimating the mean and variance of an estimate, and coined the term "jackknife" to signify an all-purpose statistical tool. The jackknife has proven invaluable for estimating parameters for which standard techniques are unsatisfactory. However, at the outset it must be recognized that the method is not without assumptions and should not be used without justification from either theoretical or numerical analysis. In this chapter, I shall describe the jackknife method, first in a very general sense and then through a series of examples taken from the biological literature.
The jackknife: a general procedure
Point estimation
Suppose we wish to estimate some parameter $\theta$. To do so using the jackknife method, we first estimate $\theta$ according to the appropriate algorithm (e.g., we might be estimating the coefficients of a linear regression, in which case the algorithm could be the least squares regression method): let this estimate be $\hat{\theta}$. Next we delete a single datum from the data set. This datum could be a single observation, or it could be a group of observations (e.g., in a genetical analysis there might be $n$ families, each consisting of $m$ individuals, and the datum to be dropped is a family rather than an individual).
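To make the deletion step concrete, the following is a minimal sketch in the S-PLUS/R style used in the book's appendices. The data and the choice of the sample mean as the estimator are hypothetical stand-ins, and the pseudovalue step is Tukey's standard bias-reduction formula rather than code taken from the text:

```
# Jackknife point estimation: a minimal sketch.
# Hypothetical data; the sample mean stands in for any estimator of theta.
x <- c(2.1, 3.4, 1.8, 5.0, 2.7, 4.2)   # hypothetical sample
n <- length(x)
theta.hat <- mean(x)                    # estimate from the full data set
theta.i <- numeric(n)                   # leave-one-out estimates
for (i in 1:n)
    theta.i[i] <- mean(x[-i])           # delete the ith datum and re-estimate
# Tukey's pseudovalues and the bias-reduced jackknife estimate:
pseudo <- n * theta.hat - (n - 1) * theta.i
theta.jack <- mean(pseudo)
```

Deleting a group of observations (a family, say) rather than a single one amounts to dropping the ith group in place of `x[-i]`. The pseudovalues also yield Tukey's non-parametric variance estimate of $\hat{\theta}$, `var(pseudo)/n`, which is the basis of the jackknife standard error mentioned above.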
Type: Chapter. Publisher: Cambridge University Press. Print publication year: 2006.