Book contents
- Frontmatter
- Contents
- Preface
- 1 An Introduction to Computer-intensive Methods
- 2 Maximum Likelihood
- 3 The Jackknife
- 4 The Bootstrap
- 5 Randomization and Monte Carlo Methods
- 6 Regression Methods
- 7 Bayesian Methods
- References
- Appendix A An Overview of S-PLUS Methods Used in this Book
- Appendix B Brief Description of S-PLUS Subroutines Used in this Book
- Appendix C S-PLUS Codes Cited in Text
- Appendix D Solutions to Exercises
- Index
6 - Regression Methods
Published online by Cambridge University Press: 09 December 2009
Summary
Introduction
Regression is probably one of the most powerful tools in the data analysis package of the biologist, particularly when considered within the very broad framework of general linear models. Nevertheless, there are a number of problems with the approach that can be resolved by the use of computer-intensive methods. The most difficult of these, and the focus of the present chapter, is determining which variables to include in a regression and how to include them. For example, should a predictor variable, X, be entered simply as X, or would a better fit be obtained using a polynomial form such as X², or even a more general function for which we might have no a priori justification? With a single predictor the problem is not very acute, because one can plot the data and visually inspect the pattern of covariation with the response variable, Y. But suppose the pattern is clearly non-linear and none of the usual transformations (e.g., log, square root, arcsine) linearizes the data: the computer-intensive methods outlined in this chapter can be used both to describe the pattern of covariation and to test its fit relative to other models. With multiple predictors the situation can be far more problematic if the predictors are complex functions or there are non-linear interactions between predictors.
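To make the model-comparison idea concrete, the sketch below (written in S-PLUS/R-style syntax, consistent with the code listed in Appendix C) fits a straight-line model and a quadratic model to a simulated data set and compares them with an F-test. The simulated data and object names are illustrative assumptions, not taken from the chapter itself.

```
# Minimal sketch: does a quadratic term improve the fit over a straight line?
# (Simulated data; names x, y, fit.linear, fit.quadratic are hypothetical.)
set.seed(1)                                          # reproducible example
x <- runif(50, 0, 10)                                # single predictor
y <- 2 + 0.5 * x - 0.04 * x^2 + rnorm(50, 0, 0.5)    # curved response

fit.linear    <- lm(y ~ x)                           # X entered simply as X
fit.quadratic <- lm(y ~ x + I(x^2))                  # polynomial form X + X^2

# F-test comparing the nested models: is the extra X^2 term warranted?
anova(fit.linear, fit.quadratic)
```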
- Type: Chapter
- Publisher: Cambridge University Press
- Print publication year: 2006