Book contents
- Frontmatter
- Contents
- Preface
- 1 An Introduction to Computer-intensive Methods
- 2 Maximum Likelihood
- 3 The Jackknife
- 4 The Bootstrap
- 5 Randomization and Monte Carlo Methods
- 6 Regression Methods
- 7 Bayesian Methods
- References
- Appendix A An Overview of S-PLUS Methods Used in this Book
- Appendix B Brief Description of S-PLUS Subroutines Used in this Book
- Appendix C S-PLUS Codes Cited in Text
- Appendix D Solutions to Exercises
- Index
- References
2 - Maximum Likelihood
Published online by Cambridge University Press: 09 December 2009
Summary
Introduction
Suppose that we have a model with a single parameter, θ, that predicts the outcome of an event that has some numerical value y. Further, suppose we have two choices for the parameter value, say θ1 and θ2, where θ1 predicts that the numerical value of y will occur with a probability p1 and θ2 predicts that the numerical value of y will occur with a probability p2. Which of the two choices of θ is the better estimate of the true value of θ? It seems reasonable to suppose that the parameter value that gave the highest probability of actually observing what was observed would also be the one closer to the true value of θ. For example, if p1 equals 0.9 and p2 equals 0.1, then we would select θ1 over θ2, because the model with θ2 predicts that one is unlikely to observe y, whereas the model with θ1 predicts that one is quite likely to observe y. We can extend this idea to many values of θ by writing our predictive model as a function of the parameter values, ϕ(θi) = pi, where i designates particular values of θ. More generally, we can dispense with the subscript and write ϕ(θ) = p, thereby allowing θ to take on any value. By the principle of maximum likelihood we select the value of θ that has the highest associated probability, p.
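The comparison described above can be sketched in a few lines of code. The book's own examples use S-PLUS; the following is an illustrative Python sketch instead, with hypothetical data (y = 7 successes in n = 10 Bernoulli trials) chosen purely for demonstration. It first compares two candidate values of θ by their likelihoods, then extends to many values of θ by evaluating ϕ(θ) on a grid and taking the maximizer.

```python
import math

# Hypothetical data (illustration only): y = 7 successes in n = 10 trials.
n, y = 10, 7

def likelihood(theta):
    # Binomial probability of observing exactly y successes when the
    # per-trial success probability is theta: phi(theta) = p.
    return math.comb(n, y) * theta**y * (1 - theta)**(n - y)

# Two candidate parameter values, as in the text: the one assigning
# higher probability to the observed outcome is preferred.
p1, p2 = likelihood(0.7), likelihood(0.1)
# Here p1 > p2, so theta = 0.7 is selected over theta = 0.1.

# Extending to many values of theta: evaluate the likelihood on a
# grid and select the value with the highest associated probability.
grid = [i / 100 for i in range(1, 100)]
theta_hat = max(grid, key=likelihood)
```

For this binomial sketch the grid maximizer lands at y/n = 0.7, matching the closed-form maximum likelihood estimate; in later sections the same "pick the θ with the highest p" principle is applied where no closed form exists.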
- Type: Chapter
- Publisher: Cambridge University Press
- Print publication year: 2006