SMOOTHING-BASED INITIALIZATION FOR LEARNING-TO-FORECAST ALGORITHMS
Published online by Cambridge University Press: 23 June 2017
Abstract
Under adaptive learning, recursive algorithms are proposed to represent how agents update their beliefs over time. For applied purposes, these algorithms require initial estimates of agents' perceived law of motion. Obtaining appropriate initial estimates can become prohibitive within the usual data-availability restrictions of macroeconomics. To circumvent this issue, we propose a new smoothing-based initialization routine that optimizes the use of a training sample of data to obtain initial values consistent with the statistical properties of the learning algorithm. Our method is formulated generically to cover different specifications of the learning mechanism, such as the least-squares and the stochastic-gradient algorithms. Using simulations, we show that our method is able to speed up the convergence of initial estimates in exchange for a higher computational cost.
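For concreteness, the two benchmark learning mechanisms named in the abstract can be sketched in a few lines. The snippet below is an illustration only, not the paper's smoothing-based initialization routine: it shows one step of recursive least-squares (RLS) and stochastic-gradient (SG) learning, whose required starting values (the beliefs `phi` and, for RLS, the moment matrix `R`) are precisely what the proposed routine is designed to supply. The function names, the toy data-generating process, and the decreasing gain `1/t` are our own assumptions.

```python
import numpy as np

def rls_update(phi, R, x, y, gain):
    """One step of recursive least-squares (RLS) learning.

    phi : current belief coefficients (agents' perceived law of motion)
    R   : current estimate of the regressor second-moment matrix
    """
    R = R + gain * (np.outer(x, x) - R)
    phi = phi + gain * np.linalg.solve(R, x * (y - x @ phi))
    return phi, R

def sg_update(phi, x, y, gain):
    """One step of stochastic-gradient (SG) learning (no moment matrix)."""
    return phi + gain * x * (y - x @ phi)

# Toy simulation: agents learn y_t = x_t' phi* + noise with decreasing gain 1/t.
rng = np.random.default_rng(0)
true_phi = np.array([0.5, -0.3])
phi, R = np.zeros(2), np.eye(2)   # naive initials -- the problem the paper targets
for t in range(1, 5001):
    x = np.array([1.0, rng.normal()])
    y = x @ true_phi + 0.1 * rng.normal()
    phi, R = rls_update(phi, R, x, y, 1.0 / t)
# phi ends up near true_phi; how quickly depends on the quality of the initials.
```

With a decreasing gain, both recursions converge to the true coefficients; a poor choice of `phi` and `R` at time zero only slows the transient, which is the trade-off the initialization routine addresses.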
- Type: Articles
- Copyright © Cambridge University Press 2017
Footnotes
An earlier version of this paper was presented at the 2013 Computing in Economics and Finance conference in Vancouver. We thank our discussants for helpful comments. We also gratefully acknowledge the comments provided by one Associate Editor and two referees. Finally, we thank the Editor, Professor William A. Barnett, for his prompt and responsive handling of our submission. Any remaining errors are ours.