Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- PART I GENESIS OF DATA ASSIMILATION
- PART II DATA ASSIMILATION: DETERMINISTIC/STATIC MODELS
- PART III COMPUTATIONAL TECHNIQUES
- PART IV STATISTICAL ESTIMATION
- 13 Principles of statistical estimation
- 14 Statistical least squares estimation
- 15 Maximum likelihood method
- 16 Bayesian estimation method
- 17 From Gauss to Kalman: sequential, linear minimum variance estimation
- PART V DATA ASSIMILATION: STOCHASTIC/STATIC MODELS
- PART VI DATA ASSIMILATION: DETERMINISTIC/DYNAMIC MODELS
- PART VII DATA ASSIMILATION: STOCHASTIC/DYNAMIC MODELS
- PART VIII PREDICTABILITY
- Epilogue
- References
- Index
14 - Statistical least squares estimation
from PART IV - STATISTICAL ESTIMATION
Published online by Cambridge University Press: 18 December 2009
Summary
This chapter provides an introduction to the principles and techniques of statistical least squares estimation of an unknown vector x ∈ ℝn when the observations are corrupted by additive random noise. While the techniques and developments in this chapter parallel those of Chapter 5, the key assumption regarding the random nature of the observations sets this chapter apart. An immediate consequence is that the estimates are random variables, and we must now contend with the additional challenge of quantifying their mean, variance, and other desirable attributes such as unbiasedness, efficiency, and consistency, to mention a few.
Section 14.1 contains the derivation of the statistical least squares estimate. An analysis of the quality of the fit between the linear model and the data is presented in Section 14.2. The Gauss–Markov theorem and its implications of optimality of the linear least squares estimates are covered in Section 14.3. A discussion of the model error and its impact on the quality of the least squares estimate is presented in Section 14.4.
Statistical least squares estimate
Consider the linear estimation problem where the unknown x ∈ ℝn and the known observation z ∈ ℝm are related as

z = Hx + v,

where H ∈ ℝm×n is a known matrix and v is the additive random noise corrupting the observations. For definiteness, it is assumed that m > n. Since the noise vector v is not observable, the following assumptions are made to render the problem tractable.
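The setup above can be illustrated with a small numerical sketch. The closed-form least squares estimate x̂ = (HᵀH)⁻¹Hᵀz is not derived in this excerpt (it is the subject of Section 14.1), but it is the standard solution for this model; the problem sizes, noise level, and trial count below are illustrative assumptions. Because z is random, the estimate is itself a random variable; repeating the experiment over independent noise realisations shows its sample mean settling near the true x, as the unbiasedness discussion suggests.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes satisfying the assumption m > n.
m, n = 20, 3
H = rng.standard_normal((m, n))          # known observation matrix
x_true = np.array([1.0, -2.0, 0.5])      # unknown vector to be estimated

def ls_estimate(z, H):
    """Least squares estimate x_hat = (H^T H)^{-1} H^T z (normal equations)."""
    return np.linalg.solve(H.T @ H, H.T @ z)

# Monte Carlo over independent zero-mean noise realisations: each draw of v
# yields a different observation z = Hx + v, hence a different estimate.
trials = 2000
estimates = np.empty((trials, n))
for t in range(trials):
    v = 0.1 * rng.standard_normal(m)     # additive random noise (assumed std)
    z = H @ x_true + v                   # observation model z = Hx + v
    estimates[t] = ls_estimate(z, H)

# The sample mean of the estimates approaches x_true when v has zero mean.
print(estimates.mean(axis=0))
```

The spread of `estimates` around its mean is exactly the kind of quantity (variance of the estimator) that the subsequent sections analyse.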
- Dynamic Data Assimilation: A Least Squares Approach, pp. 240-253. Publisher: Cambridge University Press. Print publication year: 2006