We first explore empirical evidence of parameter and shock uncertainties in a state-space model with Markov switching. The evidence indicates that uncertainties in the U.S. economy have been too great for monetary policy rules to be defined accurately. We then examine monetary policy rules under uncertainty with two approaches: the recursive least-squares (RLS) learning algorithm and robust control. The former allows the parameters of a given model to be learned. Yet, as our results on RLS learning in an optimal control framework indicate, the state variables do not necessarily converge, even in a nonstochastic model. The latter, by admitting uncertainty about model misspecification, provides a broader framework. Our analysis of robust control shows that robust optimal monetary policy rules respond more strongly to fluctuations in inflation and output than rules derived in the absence of uncertainty, implying that uncertainty does not necessarily call for caution.
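For reference, a standard form of the RLS learning recursion is the following sketch; the notation here (with $y_t$ the dependent variable, $x_t$ the regressor vector, $\hat{\beta}_t$ the parameter estimate, and $R_t$ the estimated second-moment matrix of the regressors) is illustrative and may differ from the exact specification used in the paper:
\[
\hat{\beta}_t = \hat{\beta}_{t-1} + t^{-1} R_t^{-1} x_t \left( y_t - x_t' \hat{\beta}_{t-1} \right),
\qquad
R_t = R_{t-1} + t^{-1} \left( x_t x_t' - R_{t-1} \right).
\]
Each period the parameter estimate is adjusted in the direction of the most recent forecast error, with a gain that declines over time as more data accumulate.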