
Linear quadratic approximation of rationally inattentive control problems

Published online by Cambridge University Press:  08 November 2023

Jianjun Miao*
Affiliation:
Department of Economics, Boston University, Boston, MA, USA
Bo Zhang
Affiliation:
Institute of Chinese Financial Studies, Southwestern University of Finance and Economics, Chengdu, China
Corresponding author: Jianjun Miao; Email: miaoj@bu.edu

Abstract

This paper proposes a linear quadratic approximation approach to dynamic nonlinear rationally inattentive control problems with multiple states and multiple controls. An efficient toolbox to implement this approach is provided. Applying this toolbox to five economic examples demonstrates that rational inattention can help explain the comovement puzzle in the macroeconomics literature.

© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

People do not pay attention to all available information because processing information is costly. Sims (1998, 2003, 2011) introduces rational inattention (RI) models that study how people should optimize when their ability to translate external data into action is constrained by a finite Shannon capacity to process information. Such models can generate sluggish and inertial responses to external information without introducing frictions like adjustment costs and thus have wide applications in macroeconomics.

Despite the rapid growth of this literature, a major hurdle for beginners is the difficulty of solving general multivariate nonlinear control problems under RI. Sims (2003, 2011) formulates such problems in the linear-quadratic Gaussian (LQG) framework. Using dynamic semidefinite programming in this framework, Miao, Wu, and Young (henceforth MWY) (2022) provide a characterization of the optimal solution under RI and both value function-based and first-order conditions-based numerical methods to compute such a solution. However, little is known about how to solve general multivariate nonlinear rationally inattentive control problems.

The goal of this paper is to fill this gap in the literature by making three contributions. First, we develop an LQ approximation approach that consists of two broad steps. In step 1, we take the decision-maker (DM)’s information structure as exogenously given and then solve the nonlinear control problem under partial information by LQ approximation. The approximated linear policy function is the same as that under full information by the certainty equivalence principle. In step 2, we use the methods of MWY (2022) to solve for the optimal information structure. Under Gaussian uncertainty, the optimal information structure can be represented by a signal vector that is a linear transformation of hidden states plus a noise. Both the linear transformation and the noise covariance matrix are endogenously chosen.

Second, we develop a Matlab toolbox to efficiently compute LQ approximations for multivariate nonlinear rationally inattentive control problems. This toolbox is robust and easy to apply in practice. It delivers the same approximated linear policy function as the perturbation approach used in the dynamic stochastic general equilibrium (DSGE) literature and implemented in a number of public packages, for example, Dynare.

Third, we apply our toolbox to five economic examples that can be formulated as rationally inattentive nonlinear control problems with multivariate states and multivariate controls. On a PC with an Intel Core i7-10700 CPU and 64 GB of memory, it takes less than one second to solve each of these examples for a wide range of parameter values. The first example illustrates the pitfalls of the ad hoc LQ approximation method. The next three examples consider a rationally inattentive social planner’s resource allocation problems with various sources of exogenous shocks. These decision problems come from the real business cycles (RBC) literature, though we do not study decentralized market equilibrium. We use these examples to illustrate how RI combined with variable capital utilization can generate comovement among investment, consumption, and labor hours in response to various shocks to the demand side of the economy. In the last example, we study a consumption/saving problem with both durable and nondurable goods similar to Luo et al. (2015). The utility function takes a general power form beyond a quadratic function. This example shows that RI can generate damped and delayed responses of both durable and nondurable consumption to income shocks.

Our paper is related to two strands of the literature. First, the LQ approximation method in step 1 of our solution approach is related to the Hamiltonian approach to the deterministic continuous-time control problem in Magill (1977). Judd (1998) points out that it may deliver an inaccurate solution if one adopts an ad hoc LQ approximation method by simply computing a quadratic approximation to the objective function and a first-order approximation to the model structural relations. Magill’s approach has been applied to optimal policy problems in discrete time by Levine et al. (2008) and Benigno and Woodford (2012). We extend Magill’s approach to incorporate partial information in discrete time.

Second, our paper is related to the RI literature in the LQG framework. Most papers in this literature focus on dynamic tracking problems in which states follow exogenous dynamics. For these problems, Sims (2003) proposes a brute force optimization method for the univariate case in the frequency domain. Peng (2005), Peng and Xiong (2006), and Maćkowiak and Wiederholt (2009) propose methods under a signal independence assumption. For a general case with one action that is driven by possibly multiple autoregressive moving average (ARMA) processes, Maćkowiak et al. (2018) develop a method based on the state space representation without any ad hoc restriction on the signal form. Afrouzi and Yang (2021) develop numerical methods to solve general multivariate tracking problems.

While Sims (2003, 2011) formulates rationally inattentive control problems with endogenous state dynamics in the LQG framework, his solution approach applies to the univariate case in which the optimal signal is equal to the state plus a noise. Luo (2008) and Luo et al. (2015) apply Sims’s approach when a multivariate consumption/saving problem can be reduced to the univariate case. As noted above, MWY (2022) propose an approach to study general multivariate rationally inattentive control problems. Unlike other approaches in the literature that only compute the steady state solution for the optimal information structure with the subjective discount factor equal to 1, both MWY (2022) and Afrouzi and Yang (2021) compute transition dynamics as well as the steady state with any discount factor between 0 and 1. For simplicity, we focus on the steady state information structure only.

For nonlinear control or tracking problems, the literature typically uses LQ approximations to transform these problems into tracking problems in the LQG framework. While this procedure is feasible in some cases [e.g. Maćkowiak and Wiederholt (2009, 2015, 2020), Maćkowiak et al. (2018), and Zorn (2018)], it is cumbersome and may not apply to more complicated nonlinear control problems in which controls affect state dynamics. For these control problems, ad hoc LQ approximations may lead to inaccurate solutions [Judd (1998)]. MWY (2022) apply the Kydland and Prescott (1982) approach to conduct LQ approximations to a nonlinear investment problem. That approach is a special case of our new approach in this paper with linear constraints only. Our new approach applies to a much wider class of economic problems beyond those in MWY (2022) and the Maćkowiak–Wiederholt studies. Moreover, our Matlab toolbox implements the approach and should be useful for researchers.

The remainder of the paper proceeds as follows. Section 2 formulates a general rationally inattentive control problem. Section 3 presents an LQ approximation approach. Section 4 describes a Matlab toolbox to implement our LQ approximation approach. Section 5 studies five examples to demonstrate the toolbox. Section 6 concludes. All proofs are relegated to an appendix.

2. Rationally inattentive control problem

In this section, we first present the standard control model under full information and then formulate the rationally inattentive control problem.

Consider an infinite-horizon discrete-time setup with time denoted by $t\geq 0.$ Let $x_{t}$ denote an $n_{x}\times 1$ vector of states at time $t$ . States evolve according to the dynamics:

(1) \begin{align} x_{t+1}=g\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ),\text{ }t\geq 0, \end{align}

where $x_{0}$ is exogenously given, $u_{t}$ is an $n_{u}\times 1$ control vector, $\epsilon _{t+1}$ is an $n_{\epsilon }\times 1$ white noise vector with an identity covariance matrix, and $g\,:\,\mathbb{R}^{n_{x}}\times \mathbb{R}^{n_{u}}\times \mathbb{R}^{n_{\epsilon }}\rightarrow \mathbb{R}^{n_{x}}.$ The state vector $x_{t}$ may consist of exogenous components such as AR(1) shocks and endogenous components such as capital.

Under full information, the DM observes the history of shocks $\epsilon ^{t}=\left \{ \epsilon _{1},\ldots,\epsilon _{t}\right \}$ at any time $t\geq 1$ and the history of states $x^{t}=\left \{ x_{0},x_{1},\ldots,x_{t}\right \} .$ A control plan $\left \{ u_{t}\right \}$ is a sequence of controls that map $x^{t}$ to $u_{t}\!\left ( x^{t}\right )$ for $t\geq 0,$ where all functions $u_{t}\!\left ( x^{t}\right )$ are measurable. Let $\Gamma _{0}$ denote the set of all such control plans. Then the DM’s objective under full information is to solve the following problem:

\begin{align*} \sup _{\left \{ u_{t}\right \} \in \Gamma _{0}}\mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}f\!\left ( x_{t},u_{t}\right ) \right ] \end{align*}

subject to (1), where $f\,:\,\mathbb{R}^{n_{x}}\times \mathbb{R}^{n_{u}}\rightarrow \mathbb{R}$ and $\beta \in \left ( 0,1\right ) .$

Next, we turn to the control problem under RI. Suppose that the initial state $x_{0}$ is random and has a probability measure denoted by $\mu _{0}\!\left ( dx_{0}\right ) .$ The DM does not fully observe the state $x_{t}$ at any time $t\geq 0.$ The DM can acquire endogenous information about the state by paying information costs. Assume that information costs are measured in utility units. The DM chooses a signal about the state $x_{t}$ with realization $s_{t}$ in some signal space $\mathbb{S}$ . The DM’s choice is a strategy pair $\left ( \left \{ u_{t}\right \},\left \{ q_{t}\right \} \right )$ composed of

  • an information strategy $\left \{ q_{t}\right \}$ consisting of a sequence of distributions $q_{t}\!\left ( ds_{t}|x^{t},s^{t-1}\right )$ for all $s^{t},$ $x^{t}$ , $t\geq 0,$ $s^{-1}=\varnothing ;$

  • a control plan $\left \{ u_{t}\right \}$ consisting of a sequence of functions $u_{t}\,:\,\mathbb{S}^{t}\rightarrow \mathbb{R}^{n_{u}},$ which deliver a control $u_{t}=u_{t}\!\left ( s^{t}\right )$ after observing a history of signals $s^{t}$ for $t\geq 0.$

Let $\Gamma$ denote the set of all such strategies $\left ( \left \{ u_{t}\right \},\left \{ q_{t}\right \} \right ) .$ Following Sims (2011), we model the information cost by the discounted mutual information. To define it formally, we need to construct the joint distribution of states and signals. The function $g$ and the distribution of $\epsilon _{t+1}$ induce a transition kernel for the state, denoted by $\pi \!\left ( dx_{t+1}|x_{t},u_{t}\right ) =\Pr \left ( g\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ) \in dx_{t+1}|x_{t},u_{t}\right ) .$ The state transition kernel $\pi$ and the strategy $\left ( \left \{ u_{t}\right \},\left \{ q_{t}\right \} \right )$ induce a sequence of joint distributions for $x^{t+1}$ and $s^{t}$ recursively

\begin{align*} \mu _{t+1}\!\left ( dx^{t+1},ds^{t}\right ) =\pi \!\left ( dx_{t+1}|x_{t},u_{t}\!\left ( s^{t}\right ) \right ) q_{t}\!\left ( ds_{t}|x^{t},s^{t-1}\right ) \mu _{t}\!\left ( dx^{t},ds^{t-1}\right ), \end{align*}

where $\mu _{0}\!\left ( dx^{0},ds^{-1}\right ) =\mu _{0}\!\left ( dx_{0}\right )$ is given and $s^{-1}=\varnothing$ . Using this sequence of distributions, we can compute the prior/predictive distributions $\mu _{t}\!\left ( dx_{t}|s^{t-1}\right )$ and the posteriors $\mu _{t}\!\left ( dx_{t}|s^{t}\right )$ , and hence we can define the discounted information cost as:

(2) \begin{align} \sum _{t=0}^{T}\beta ^{t}\mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ), \end{align}

where

(3) \begin{align} \mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) =H\!\left ( x_{t}|s^{t-1}\right ) -H\!\left ( x_{t}|s^{t}\right ). \end{align}

Here, $H\!\left ( X|Y\right )$ denotes the conditional entropy of a random variable $X$ given $Y.$ Entropy measures the amount of uncertainty. Equation (3) shows that the mutual information measures the reduction of uncertainty about the state after observing additional information. As Sims (2011) and MWY (2022) argue, introducing discounting in equation (2) ensures dynamic consistency in the choice of the optimal information structure, so that one can apply the dynamic programming method.
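For instance, in the scalar Gaussian case (anticipating equations (17) and (18) below), the period mutual information reduces to half the log ratio of the prior to the posterior variance:

\begin{align*} \mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) =\frac{1}{2}\log \frac{\sigma _{t|t-1}^{2}}{\sigma _{t|t}^{2}}, \end{align*}

where $\sigma _{t|t-1}^{2}$ and $\sigma _{t|t}^{2}$ denote the prior and posterior variances of $x_{t}$ . A signal that halves the variance therefore costs $\frac{1}{2}\log 2$ nats per period.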

Now we are ready to formulate the rationally inattentive control problem as follows:

Problem 1. (Rationally inattentive control)

(4) \begin{align} \sup _{\left ( \left \{ u_{t}\right \},\left \{ q_{t}\right \} \right ) \in \Gamma }\text{ }\mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}f\!\left ( x_{t},u_{t}\right ) \right ] -\lambda \sum _{t=0}^{\infty }\beta ^{t}\mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) \end{align}

subject to equation (1), where $\lambda \gt 0.$

The parameter $\lambda \gt 0$ transforms the discounted mutual information into utility units. If $\lambda =0,$ then information is free and thus Problem 1 reduces to the standard control problem under full information. If $\lambda \gt 0,$ acquiring information incurs information costs measured in utility units and thus reduces the DM’s utility. The parameter $\lambda$ may be interpreted as the Lagrange multiplier associated with an information processing constraint. Then, Problem 1 can be interpreted as a relaxed problem derived from a constrained optimization problem given information processing constraints [Sims (2011) and MWY (2022)]. Without additional structure, Problem 1 is hard to analyze, as one has to solve for both the optimal control $\left \{ u_{t}\right \}$ and the optimal information structure $\left \{ q_{t}\right \}$ . In the next section, we propose an LQ approximation approach.

3. Linear quadratic approximation

We first fix the information structure and solve the optimal control problem under partial information. Specifically, we take the filtration $\left \{ s^{t}\right \}$ generated by the histories of signals $s^{t}$ as given. The DM’s information set is $\left \{ s^{t}\right \}$ . Let $\Gamma _{1}$ denote the set of control plans that are adapted to the filtration $\left \{ s^{t}\right \} .$

Consider the following control problem under partial information:

\begin{align*} \sup _{\left \{ u_{t}\right \} \in \Gamma _{1}}\mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}f\!\left ( x_{t},u_{t}\right ) \right ] \end{align*}

subject to equation (1). Under the standard concavity and differentiability conditions on $f$ and $g$ , the following first-order conditions are necessary and sufficient for optimality:

(5) \begin{equation} u_{t} \,:\,\text{ }\mathbb{E}_{t}\left [ f_{u}\!\left ( x_{t},u_{t}\right ) \right ] +\Lambda _{t}^{\prime }\mathbb{E}_{t}\left [g_{u}\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ) \right ] =0, \end{equation}
(6) \begin{equation} x_{t+1} \,:\,\text{ }\Lambda _{t}=\beta \mathbb{E}_{t}\left [ f_{x}\!\left (x_{t+1},u_{t+1}\right ) +\Lambda _{t+1}^{\prime }g_{x}\!\left (x_{t+1},u_{t+1},\epsilon _{t+2}\right ) \right ], \end{equation}

where $\beta ^{t}\Lambda _{t}$ is the Lagrange multiplier associated with equation (1) and $\mathbb{E}_{t}\left [ \cdot \right ] \equiv \mathbb{E}\left [ \cdot |s^{t}\right ]$ . Here, $\left \{ \Lambda _{t}\right \}$ is adapted to the filtration $\left \{ s^{t}\right \} .$ Unlike in the full information case, the state $x_{t}$ is not observable, so equation (5) involves the conditional estimate given partial information $s^{t}.$

In a nonstochastic steady state, $\epsilon _{t}=0$ , $x_{t}=\overline{x},$ $u_{t}=\overline{u},$ and $\Lambda _{t}=\overline{\Lambda }$ for all $t.$ It follows from equations (5) and (6) that

(7) \begin{equation} f_{u}\!\left ( \bar{x},\bar{u}\right ) +\overline{\Lambda }^{\prime }g_{u}\!\left (\bar{x},\bar{u},0\right ) =0, \end{equation}
(8) \begin{equation} \beta f_{x}\!\left ( \bar{x},\bar{u}\right ) +\beta \overline{\Lambda }^{\prime }g_{x}\!\left ( \bar{x},\bar{u},0\right ) = \overline{\Lambda }. \end{equation}

Together with the steady state version of equation (1),

\begin{align*} \bar{x}=g\!\left ( \bar{x},\bar{u},0\right ), \end{align*}

equations (7) and (8) determine a steady state solution for $\left ( \overline{x},\overline{u},\overline{\Lambda }\right ) .$ Suppose that a solution exists. Then linearizing equations (1), (5), and (6) around this steady state yields the system:

(9) \begin{equation} g_{u}^{\prime }\widetilde{\Lambda }_{t}+\left ( f_{uu}+\overline{\Lambda }^{\prime }g_{uu}\right ) \widetilde{u}_{t}+\left ( f_{ux}+\overline{\Lambda }^{\prime }g_{ux}\right ) \mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ] =0, \end{equation}
(10) \begin{equation} \widetilde{\Lambda }_{t}=\beta \mathbb{E}_{t}\left [ \left ( f_{xx}+\overline{\Lambda }^{\prime }g_{xx}\right ) \widetilde{x}_{t+1}+\left ( f_{xu}+\overline{\Lambda }^{\prime }g_{xu}\right ) \widetilde{u}_{t+1}+g_{x}^{\prime }\widetilde{\Lambda }_{t+1}\right ], \end{equation}
(11) \begin{equation} \widetilde{x}_{t+1}=g_{x}\widetilde{x}_{t}+g_{u}\widetilde{u}_{t}+g_{\epsilon }\epsilon _{t+1}, \end{equation}

where all partial derivatives are evaluated at the nonstochastic steady state $\left ( \overline{x},\overline{u},0\right )$ and a variable with a tilde denotes the level deviation from its steady state, for example, $\widetilde{x}_{t}\equiv x_{t}-\overline{x}$ . As the state $x_{t}$ or $\widetilde{x}_{t}$ is unobservable, the DM must estimate it given information $s^{t}.$ We thus have $\mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ]$ in equation (9).

The above linear system can be solved by the standard method, and the solution takes a linear form. In particular, the certainty equivalence principle holds in that the optimal policy satisfies

(12) \begin{align} \widetilde{u}_{t}=-F\mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ], \end{align}

where $F$ is the same as that obtained in the deterministic linear system with $\epsilon _{t}=0$ for all $t$ .

Next, we study how the optimal information structure is determined. To apply the LQG framework of MWY (2022), we adopt the LQ approximation approach adapted from Magill (1977) and take into account the impact of partial information. This approach delivers a quadratic approximation of the objective function and a linear approximation of the constraint. To ensure the resulting LQ control problem gives a linear solution that is the same as equation (12), it is critical to approximate the Hamiltonian function defined as:

(13) \begin{align} H\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ) =f\!\left ( x_{t},u_{t}\right ) +\overline{\Lambda }^{\prime }g\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ). \end{align}

Notice that $\overline{\Lambda }$ in $H$ is the steady state Lagrange multiplier. Then, we have the following result:

Lemma 1. Suppose that $f$ and $g$ are twice continuously differentiable and $\lim _{T\rightarrow \infty }\mathbb{E}\left [ \widetilde{x}_{T}\right ] =0$ . Then given equation (1),

\begin{eqnarray*} \mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}f\!\left ( x_{t},u_{t}\right ) \right ] &\approx &\frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \widetilde{x}_{t}^{\prime },\widetilde{u}_{t}^{\prime }\right ] \left [ \begin{array}{c@{\quad}c} H_{xx} & H_{xu} \\ H_{ux} & H_{uu}\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t}\end{array}\right ] \\ &&+\frac{f\!\left ( \overline{x},\bar{u}\right ) }{1-\beta }+\frac{1}{2}\text{Tr}\!\left ( H_{\epsilon \epsilon }\right ), \end{eqnarray*}

up to second-order moments of $\left ( \widetilde{x}_{t},\widetilde{u}_{t},\epsilon _{t+1}\right ),$ where Tr $\left ( \cdot \right )$ denotes the trace operator, and $H_{xx},$ $H_{xu},$ $H_{ux},$ and $H_{uu}$ denote the second-order partial derivatives of $H$ evaluated at the nonstochastic steady state $\left ( \overline{x},\overline{u},0\right )$ .

Ignoring the constant and higher-order terms, the following lemma shows that the LQ problem gives the same solution as that obtained by linearizing the first-order conditions for the original nonlinear control problem.

Lemma 2. The following problem

\begin{align*} \max _{\left \{ \widetilde{u}_{t}\right \} \in \Gamma _{1}}\frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \widetilde{x}_{t}^{\prime },\widetilde{u}_{t}^{\prime }\right ] \left [ \begin{array}{c@{\quad}c} H_{xx} & H_{xu} \\ H_{ux} & H_{uu}\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t}\end{array}\right ] \end{align*}

subject to equation (11) gives the same solution as that delivered by the system (9), (10), and (11).

The LQ approach of Kydland and Prescott (1982) is a special case. They assume that the constraint function $g$ is linear. Then the Hessian matrix of $H$ evaluated at the steady state is the same as the Hessian matrix of the objective function $f$ evaluated at the steady state, so taking a quadratic approximation of $H$ is the same as taking one of $f.$ While one can often use a suitable change of variables to make the constraint function linear, the Hamiltonian approach is more general and more convenient.

Notice that the ad hoc LQ approach differs from our Hamiltonian approach in the second-order derivative terms of the function $g.$ The intuition comes from the linearized first-order conditions (9) and (10): the ad hoc LQ approach disregards the second-order derivative terms of $g$ in equations (9) and (10), leading to an inaccurate solution if $g$ is nonlinear.

By Lemma 2, we obtain the following LQ approximation of the rationally inattentive control problem.

Problem 2. (Rationally inattentive LQ control)

\begin{align*} \sup _{\left ( \left \{ \widetilde{u}_{t}\right \},\left \{ q_{t}\right \} \right ) \in \Gamma }\frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \widetilde{x}_{t}^{\prime },\widetilde{u}_{t}^{\prime }\right ] \left [ \begin{array}{c@{\quad}c} H_{xx} & H_{xu} \\ H_{ux} & H_{uu}\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t}\end{array}\right ] -\lambda \sum _{t=0}^{\infty }\beta ^{t}\mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) \end{align*}

subject to equation (11).

To simplify the computation of the mutual information and stay in the LQG framework of Sims (2011) and MWY (2022), we consider Gaussian shocks only.

Assumption 1. (i) The initial state $x_{0}$ is Gaussian with mean $\overline{x}_{0}$ and covariance matrix $\Sigma _{-1}.$ (ii) The innovation $\epsilon _{t}$ is identically and independently drawn from a standard Gaussian distribution.

Problem 2 corresponds to the infinite-horizon version of Problem 2 in MWY (2022) with the following notation:

(14) \begin{equation} Q \equiv -\frac{1}{2}H_{xx},\text{ }R\equiv -\frac{1}{2}H_{uu},\text{ }S\equiv -\frac{1}{2}H_{xu}, \end{equation}
(15) \begin{equation} A \equiv g_{x},\text{ }B\equiv g_{u},\text{ }W\equiv g_{\epsilon }g_{\epsilon }^{\prime }. \end{equation}
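For concreteness, this mapping is immediate to code; a minimal sketch, assuming Hxx, Hxu, and Huu hold the Hessian blocks of the Hamiltonian and gx, gu, and geps the Jacobian blocks of $g$ , all evaluated at the nonstochastic steady state (the variable names are ours for illustration):

```matlab
% Map the LQ approximation into the notation of equations (14) and (15).
Q = -0.5*Hxx;        % state weight matrix
R = -0.5*Huu;        % control weight matrix
S = -0.5*Hxu;        % cross-term matrix
A = gx;              % state transition matrix
B = gu;              % control loading matrix
W = geps*geps';      % covariance of the state transition noise
```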

As shown in Sims (2011) and Tanaka et al. (2017), the optimal information structure will be Gaussian in that the optimal estimate of $\widetilde{x}_{t}$ conditional on $s^{t}$ is Gaussian with mean zero and covariance matrix $\Sigma _{t}=\mathbb{E}\left [ \left ( \widetilde{x}_{t}-\mathbb{E}\left [ \widetilde{x}_{t}|s^{t}\right ] \right ) \left ( \widetilde{x}_{t}-\mathbb{E}\left [ \widetilde{x}_{t}|s^{t}\right ] \right ) ^{\prime }|s^{t}\right ] .$ Moreover, a linear signal $s_{t}$ of the form

(16) \begin{align} s_{t}=C_{t}\widetilde{x}_{t}+v_{t} \end{align}

can generate such a posterior covariance matrix $\Sigma _{t},$ where $v_{t}$ is a Gaussian white noise with mean zero and covariance matrix $V_{t}.$ This noise is independent of $\left \{ \epsilon _{t}\right \}$ . Since $\Sigma _{t}$ is endogenously chosen, both $C_{t}$ and $V_{t}$ are also endogenous and satisfy

\begin{align*} C_{t}^{\prime }V_{t}^{-1}C_{t}=\Sigma _{t}^{-1}-\left ( A\Sigma _{t-1}A^{\prime }+W\right ) ^{-1}. \end{align*}

Given Gaussian uncertainty, the mutual information takes an explicit form:

(17) \begin{equation} \mathcal{I}\!\left ( x_{0};\,s_{0}|s^{-1}\right ) =\mathcal{I}\!\left ( \widetilde{x}_{0};\,s_{0}|s^{-1}\right ) =\frac{1}{2}\log \det \left ( \Sigma _{-1}\right ) -\frac{1}{2}\log \det \!\left ( \Sigma _{0}\right ), \end{equation}
(18) \begin{equation} \mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) =\mathcal{I}\!\left ( \widetilde{x}_{t};\,s_{t}|s^{t-1}\right ) =\frac{1}{2}\log \det \left ( A\Sigma _{t-1}A^{\prime }+W\right ) -\frac{1}{2}\log \det \Sigma _{t}, \end{equation}

for $t\geq 1.$
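In code, equation (18) is a one-liner; a sketch (for numerical stability one would typically compute the log-determinants via a Cholesky factorization rather than det):

```matlab
% Period-t mutual information in equation (18): the entropy reduction
% from the prior covariance A*Sigma_prev*A' + W to the posterior Sigma.
mutual_info = @(Sigma_prev, Sigma, A, W) ...
    0.5*log(det(A*Sigma_prev*A' + W)) - 0.5*log(det(Sigma));
```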

Now, we have mapped Problem 2 into the framework of MWY (2022). We can then apply their method and their toolbox described in Miao and Wu (2021) to solve this problem. In the next section, we describe the details.

4. A Matlab toolbox

In this section, we describe a Matlab toolbox to implement the LQ approximation approach described in the previous section. The only inputs the user needs to provide are the state vector $x_{t}$ , the control vector $u_{t},$ the objective function $f$ , the constraint function $g$ , and the nonstochastic steady state values $\left ( \overline{x},\overline{u},\overline{\Lambda }\right )$ of $\left ( x_{t},u_{t},\Lambda _{t}\right )$ . For the example in Section 5.2, the Matlab code RBCPref_model.m describes $x_{t},$ $u_{t},$ $f,$ and $g,$ and the code RBCPref_model_ss.m solves for the steady state. We use the Matlab Symbolic Math toolbox to compute the analytical derivatives of $f$ and $g$ in the code anal_derivative.m. The code num_derivative.m evaluates the analytical derivatives at the steady state values.
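To fix ideas, here is a stylized sketch of such a model definition for the example of Section 5.2, written with the Symbolic Math Toolbox; the variable names and interface are ours for illustration and need not match the actual toolbox files:

```matlab
% Stylized model definition (hypothetical interface): declare states,
% controls, and shocks symbolically, then define f and g.
syms C N e K a z epsa epsz real
syms alph delt nu chi gam phie rhoa rhoz real

x   = [z; a; K];                        % state vector x_t
u   = [C; N; e];                        % control vector u_t
eps = [epsz; epsa];                     % shock vector epsilon_{t+1}

f   = exp(z)*log(C) - chi*N^(1+nu)/(1+nu);      % period objective
dep = delt + phie*(e^gam - 1)/gam;              % depreciation rate
g   = [rhoz*z + epsz;                           % preference shock
       rhoa*a + epsa;                           % TFP shock
       (1-dep)*K + exp(a)*(e*K)^alph*N^(1-alph) - C];  % capital

% Analytical derivatives, as in anal_derivative.m:
fx = jacobian(f, x);   fu = jacobian(f, u);
gx = jacobian(g, x);   gu = jacobian(g, u);   geps = jacobian(g, eps);
```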

The code RBCPref_Run.m is the main code that implements the following steps.

Step 1. Compute the matrices $Q,R,S,A,B,W$ in equations (14) and (15). Then use the code LQG.m to solve the Riccati equation for $P$ :

(19) \begin{align} P=Q+\beta A^{\prime }PA-\left ( \beta A^{\prime }PB+S\right ) \left ( R+\beta B^{\prime }PB\right ) ^{-1}\left ( \beta B^{\prime }PA+S^{\prime }\right ), \end{align}

and

\begin{align*} F=\left ( R+\beta B^{\prime }PB\right ) ^{-1}\left ( S^{\prime }+\beta B^{\prime }PA\right ). \end{align*}

The optimal policy is given by equation (12).
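Step 1 amounts to iterating equation (19) to a fixed point; a minimal value-iteration sketch (illustrative only; the actual LQG.m may use a different algorithm):

```matlab
% Solve the Riccati equation (19) by iteration and recover F.
P = Q;  bet = 0.99;  tol = 1e-10;  gap = inf;
while gap > tol
    Pnew = Q + bet*A'*P*A ...
         - (bet*A'*P*B + S) / (R + bet*B'*P*B) * (bet*B'*P*A + S');
    gap  = norm(Pnew - P, 'fro');
    P    = Pnew;
end
F = (R + bet*B'*P*B) \ (S' + bet*B'*P*A);   % feedback matrix in equation (12)
```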

Step 2. The optimal information structure $\left \{ \Sigma _{t}\right \} _{t=0}^{\infty }$ solves the following problem.

Problem 3. (Optimal information structure for Problem 2)

\begin{align*} \min _{\left \{ \Sigma _{t}\right \} _{t=0}^{\infty }}\text{ }\sum _{t=0}^{\infty }\beta ^{t}\left [ \mathrm{tr}\!\left ( \Omega \Sigma _{t}\right ) +\lambda \mathcal{I}\!\left ( x_{t};\,s_{t}|s^{t-1}\right ) \right ] \end{align*}

subject to equations (17) and (18),

(20) \begin{equation} \Sigma _{t}\preceq A\Sigma _{t-1}A^{\prime }+W, \end{equation}
(21) \begin{equation} \Sigma _{0}\preceq \Sigma _{-1}, \end{equation}

for $t\geq 1,$ where $\Omega$ is given by:

(22) \begin{align} \Omega =F^{\prime }(R+\beta B^{\prime }PB)F. \end{align}

For this problem to be a well-defined convex optimization problem, we make the following assumption [Afrouzi and Yang (2021) and MWY (2022)]:

Assumption 2. $W\succeq 0$ and $AA^{\prime }+W\succ 0.$

If this assumption is violated, then the matrix $A\Sigma _{t-1}A^{\prime }+W$ may be singular even though $\Sigma _{t-1}$ is nonsingular, for example, $\Sigma _{t-1}=I.$ In this case, the mutual information in equation (18) is not well defined. Moreover, this assumption is also sufficient for the convexity of Problem 3. We will focus on the steady state solution $\Sigma$ to the above problem. The Matlab code RI_SS_FOC.m computes this solution.
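Assumption 2 is easy to verify numerically before invoking RI_SS_FOC.m; a small sketch:

```matlab
% Check Assumption 2: W positive semidefinite and A*A' + W positive
% definite, so that A*Sigma_{t-1}*A' + W is always nonsingular.
tolpsd = 1e-10;
assert(min(eig((W + W')/2)) > -tolpsd, 'W is not positive semidefinite');
assert(min(eig(A*A' + W)) > tolpsd, 'A*A'' + W is not positive definite');
```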

Step 3. Use the code RI_SIG.m to compute the steady state optimal signal structure $\left ( C,V\right )$ that generates the optimal steady state posterior covariance matrix $\Sigma .$ Notice that the optimal signal structure $\left ( C,V\right )$ is derived from the following equation:

\begin{align*} C^{\prime }V^{-1}C=\Sigma ^{-1}-\left ( A\Sigma A^{\prime }+W\right ) ^{-1}, \end{align*}

and the solution is not unique.
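One valid construction, shown here only to illustrate the non-uniqueness (RI_SIG.m may use a different factorization), eigendecomposes the right-hand side, keeps the strictly positive eigenvalues, and normalizes the noise covariance to the identity:

```matlab
% Recover one signal structure (C, V) from the steady state Sigma:
% C'*inv(V)*C must equal M = inv(Sigma) - inv(A*Sigma*A' + W).
M = inv(Sigma) - inv(A*Sigma*A' + W);
M = (M + M')/2;                          % symmetrize against round-off
[U, D] = eig(M);
d = diag(D);
keep = d > 1e-8;                         % strictly positive eigenvalues
C = diag(sqrt(d(keep)))*U(:, keep)';     % signal loading (k x n_x)
V = eye(sum(keep));                      % normalize noise covariance to I
% Check: C'*(V\C) reproduces M up to the dropped (numerically zero) modes.
```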

Step 4. Use the code RI_IRF3.m to generate impulse response functions (IRFs) using the steady state Kalman filter for the state space system:

(23) \begin{align} \widetilde{x}_{t+1} & =A\widetilde{x}_{t}+B\widetilde{u}_{t}+L\epsilon _{t+1},\nonumber\\\widetilde{u}_{t} & =-F\mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ],\\s_{t} & =C\widetilde{x}_{t}+v_{t},\nonumber \end{align}

where $L\equiv g_{\epsilon }$ and $v_{t}$ is a Gaussian white noise with covariance matrix $V$ and is independent of $\left \{ \epsilon _{t}\right \} .$ The Kalman filter is given by:

(24) \begin{equation} \widehat{x}_{t} =\left ( I-KC\right ) \left ( A-BF\right ) \widehat{x}_{t-1}+K\left ( C\widetilde{x}_{t}+v_{t}\right ), \end{equation}
(25) \begin{equation} \widetilde{x}_{t+1} =A\widetilde{x}_{t}-BF\widehat{x}_{t}+L\epsilon _{t+1},\text{ }t\geq 0, \end{equation}

where $\widehat{x}_{t}=\mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ]$ and the matrix $K$ is the Kalman gain:

(26) \begin{align} K\equiv \left ( A\Sigma A^{\prime }+W\right ) C^{\prime }\left [ C\!\left ( A\Sigma A^{\prime }+W\right ) C^{\prime }+V\right ] ^{-1}. \end{align}
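A minimal IRF sketch built directly on equations (12) and (24)-(26), assuming $A$ , $B$ , $F$ , $L$ , $\Sigma$ , $C$ , and $V$ have been computed in the previous steps (illustrative only; RI_IRF3.m may differ in details):

```matlab
% Impulse responses to a unit innovation in shock j at t = 1, with all
% later shocks and signal noises set to zero.
Kg = (A*Sigma*A' + W)*C' / (C*(A*Sigma*A' + W)*C' + V);  % Kalman gain (26)
T = 40;  j = 1;  nx = size(A, 1);
xt   = zeros(nx, T+1);               % xt(:, t+1) stores xtilde_t
xhat = zeros(nx, T+1);               % xhat(:, t+1) stores E[xtilde_t | s^t]
ut   = zeros(size(F, 1), T+1);
xt(:, 2) = L(:, j);                  % xtilde_1 = L*e_j
for t = 2:T+1
    xhat(:, t) = (eye(nx) - Kg*C)*(A - B*F)*xhat(:, t-1) ...
               + Kg*(C*xt(:, t));                    % equation (24), v_t = 0
    ut(:, t)   = -F*xhat(:, t);                      % equation (12)
    if t < T+1
        xt(:, t+1) = A*xt(:, t) - B*F*xhat(:, t);    % equation (25)
    end
end
```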

So far we have described how to compute a linear approximate solution. In economics, one is often interested in a log-linear approximate solution for positive variables. Such a solution can be easily found by noting that

(27) \begin{align} \log x_{t}-\log \overline{x}\approx \frac{\widetilde{x}_{t}}{\overline{x}}, \end{align}

for any positive variable $x_{t}$ up to a first-order approximation. We can then divide the linear approximate solution for any variable of interest by its steady state value to obtain its log-linear approximate solution. Alternatively, one can make the change of variables $x_{t}^{l}=\log x_{t}$ and $u_{t}^{l}=\log u_{t}$ and linearize with respect to $x_{t}^{l}$ and $u_{t}^{l}$ . We also replace the signal equation (16) by:

\begin{align*} s_{t}=C_{t}\widetilde{x}_{t}^{l}+v_{t} \end{align*}

and replace the mutual information $\mathcal{I}\!\left ( \widetilde{x}_{t};\,s_{t}|s^{t-1}\right )$ by $\mathcal{I}\!\left ( \widetilde{x}_{t}^{l};\,s_{t}|s^{t-1}\right ) .$ We can then apply the previous toolbox.
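Under the first method, the conversion is a single elementwise division; a sketch, where xt holds the linear IRFs from the previous step and xbar is the (positive) vector of steady state levels, both hypothetical names:

```matlab
% Convert level deviations to log deviations using equation (27):
% (x_t - xbar)./xbar approximates log(x_t) - log(xbar).
irf_log = xt ./ xbar;     % implicit expansion over the time dimension
```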

5. Examples

In this section, we present five examples to illustrate our toolbox. The first example illustrates some pitfalls of the ad hoc LQ approximation method. The next three examples are about social planner problems as in the RBC literature. We do not consider decentralized market equilibrium because our toolbox applies only to decision problems. The final example studies a consumption/saving problem and shows that one can suitably define the state and control variables to transform a complicated decision problem into our framework. For all examples, we focus on the solution methods instead of quantitative economic implications, and thus parameter values are chosen for illustration, but not for matching data closely.

5.1. Pitfalls

Judd (1998) uses a simple deterministic growth model in continuous time to show that the ad hoc LQ approximation method can generate inaccurate solutions. In this subsection, we present a stochastic growth model in discrete time to highlight additional issues that may arise under full information and under limited information with RI.

Formally, consider a planner’s choice of consumption and capital processes $\left \{ C_{t}\right \}$ and $\left \{ K_{t+1}\right \}$ under full information:

\begin{align*} \max \mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}\log C_{t}\right ] \end{align*}

subject to

\begin{align*} C_{t}+K_{t+1}=\exp \!\left ( a_{1t}+a_{2t}\right ) K_{t}^{\alpha },\text{ }\left ( \Lambda _{3t}\right ) \end{align*}

where $a_{1t}$ and $a_{2t}$ follow independent AR(1) processes:

(28) \begin{align} a_{i,t+1}=\rho _{i}a_{it}+\epsilon _{i,t+1},\text{ }\left ( \Lambda _{it}\right ),\text{ }i=1,2. \end{align}

Here, $\epsilon _{i,t},i=1,2,$ are Gaussian white noises with variances $\sigma _{i}^{2}.$ Let $\Lambda _{it},$ $i=1,2,3,$ be the Lagrange multipliers associated with the above three equations.

The above control problem admits a closed-form solution:

\begin{align*} K_{t+1}=\alpha \beta \exp \!\left ( a_{1t}+a_{2t}\right ) K_{t}^{\alpha },\text{ }C_{t}=\left ( 1-\alpha \beta \right ) \exp \!\left ( a_{1t}+a_{2t}\right ) K_{t}^{\alpha }. \end{align*}

The deterministic steady state is given by:

\begin{align*} \overline{K}=\left ( \alpha \beta \right ) ^{1/\left ( 1-\alpha \right ) },\text{ }\overline{C}=\left ( 1-\alpha \beta \right ) \overline{K}^{\alpha }, \end{align*}
\begin{align*} \overline{\Lambda }_{3}=\frac{1}{\overline{C}},\text{ }\overline{\Lambda }_{1}=\frac{\beta \overline{\Lambda }_{3}\overline{K}^{\alpha }}{1-\beta \rho _{1}},\text{ }\overline{\Lambda }_{2}=\frac{\beta \overline{\Lambda }_{3}\overline{K}^{\alpha }}{1-\beta \rho _{2}}. \end{align*}

The true linearized solution around the steady state is given by:

(29) \begin{align} \widetilde{K}_{t+1} & =\alpha \widetilde{K}_{t}+\alpha \beta \overline{K}^{\alpha }a_{1t}+\alpha \beta \overline{K}^{\alpha }a_{2t},\nonumber\\[4pt]\widetilde{C}_{t} & =\left ( 1-\alpha \beta \right ) \alpha \overline{K}^{\alpha -1}\widetilde{K}_{t}+\left ( 1-\alpha \beta \right ) \overline{K}^{\alpha }a_{1t}+\left ( 1-\alpha \beta \right ) \overline{K}^{\alpha }a_{2t}. \end{align}

Then both optimal consumption and capital are stationary processes.
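As a check, the coefficients in equation (29) can be evaluated directly; a minimal sketch using the parameter values adopted below ($\alpha =0.33$ , $\beta =0.99$ ):

```matlab
% Evaluate the true linearized coefficients in equation (29).
alph = 0.33;  bet = 0.99;
Kbar = (alph*bet)^(1/(1-alph));          % steady state capital
cK   = (1-alph*bet)*alph*Kbar^(alph-1);  % consumption loading on K_t
ca   = (1-alph*bet)*Kbar^alph;           % consumption loading on a_{1t}, a_{2t}
kK   = alph;                             % capital loading on K_t
ka   = alph*bet*Kbar^alph;               % capital loading on a_{1t}, a_{2t}
fprintf('C: %.4f K + %.4f (a1+a2);  K'': %.4f K + %.4f (a1+a2)\n', cK, ca, kK, ka);
```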

If one adopts the ad hoc LQ approximation method, one solves the following approximated problem:

\begin{align*} \max _{\left \{ \widetilde{C}_{t},\widetilde{K}_{t+1}\right \} }\mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}\left ( \log \overline{C}+\frac{1}{\overline{C}}\widetilde{C}_{t}-\frac{1}{2\overline{C}^{2}}\widetilde{C}_{t}^{2}\right ) \right ] \end{align*}

subject to the linearized constraints:

\begin{align*} \widetilde{C}_{t}+\widetilde{K}_{t+1}=\overline{K}^{\alpha }\left ( a_{1t}+a_{2t}\right ) +\alpha \overline{K}^{\alpha -1}\widetilde{K}_{t}. \end{align*}

This problem becomes a permanent income model of consumption in the LQ framework. The Euler equation is given by:

\begin{align*} \widetilde{C}_{t}=\mathbb{E}_{t}\widetilde{C}_{t+1}, \end{align*}

which implies that optimal consumption is nonstationary and follows a random walk. It follows that the optimal consumption level is proportional to the permanent income:

\begin{align*} \widetilde{C}_{t}=\left ( \alpha \overline{K}^{\alpha -1}-1\right ) \widetilde{K}_{t}+\frac{\left ( \alpha \overline{K}^{\alpha -1}-1\right ) \overline{K}^{\alpha }}{\alpha \overline{K}^{\alpha -1}-\rho _{1}}a_{1t}+\frac{\left ( \alpha \overline{K}^{\alpha -1}-1\right ) \overline{K}^{\alpha }}{\alpha \overline{K}^{\alpha -1}-\rho _{2}}a_{2t}. \end{align*}

Clearly, this solution is inaccurate compared to equation (29).

Now we turn to the case under limited information. Suppose that the planner does not fully observe any state of the model and can acquire information about the states subject to discounted information costs. Since our method nests the full information case, it delivers the same linearized solution as in equation (29). To show quantitative implications, we set the parameter values $\alpha =0.33,$ $\beta =0.99,$ $\rho _{1}=0.9,$ $\sigma _{1}=0.01,$ $\rho _{2}=0.5,$ $\sigma _{2}=0.05,$ and $\lambda =0.005.$ Figure 1 plots the IRFs of consumption to a one-standard-deviation innovation in the two total factor productivity (TFP) components. The left two panels display the results derived from our method and toolbox. One can verify that the numerical solution under full information is consistent with the analytical solution in equation (29). Under RI, the initial consumption response is weaker, and the later response is delayed. It takes a longer time for consumption to return to its steady state level.

Figure 1. Impulse responses of $C_{t}$ to a one-standard-deviation innovation in the two TFP components under full information and under RI. The left and right two panels display results using our method and the ad hoc LQ approximation method, respectively. All vertical axes are measured in percentage changes from the steady state.

The right two panels display the results derived from the ad hoc LQ approximation method. Optimal consumption under full information (solid lines) jumps to a higher steady state level immediately and stays at that level forever. But the initial consumption response under RI is weaker. Consumption gradually rises to a higher steady state level. This result is consistent with that in Luo (2008) and MWY (2022) for permanent income LQ models of consumption but is drastically different from the true linearized solution shown earlier.

The above example demonstrates that the ad hoc LQ approximation method may generate not only an inaccurate solution but also qualitatively different economic behavior, both under full information and under limited information with RI. The ad hoc LQ approximation method works only when the constraints (1) are linear [see Judd (1998)]. In this case, the Hessian of the term $\overline{\Lambda }^{\prime }g\!\left ( x_{t},u_{t},\epsilon _{t+1}\right )$ in equation (13) is equal to zero. This case can happen for pure tracking problems when exogenous states follow AR(1) processes like equation (28).

It merits emphasis that our LQ approach implies that the certainty equivalence principle holds for the approximated LQ control problem: uncertainty does not matter for the linear decision rule. Because this principle does not apply to the general nonlinear control problem, our LQ approach is not well suited to handle questions such as welfare comparisons across alternative stochastic environments. For example, Kim and Kim (2003) show that in a simple two-agent economy, a welfare comparison based on an evaluation of the utility function using a linear approximation to the policy function may yield the spurious result that welfare is higher under autarky than under full risk sharing. Our LQ approach is also not suitable for studying questions related to the risk premium, in which uncertainty plays an important role. To study these questions, one has to take at least a second-order approximation to the decision rule. In this case, our solution method under RI does not apply.

5.2. Planner’s problem I: preference shock

We next consider a more complicated social planner’s problem. Under full information, the planner’s objective is to maximize the representative household’s utility over consumption $\left \{ C_{t}\right \}$ and labor $\left \{ N_{t}\right \}$ :

\begin{align*} \mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}U\!\left ( C_{t},N_{t},z_{t}\right ) \right ],\text{ }U\!\left ( C,N,z\right ) =\exp \!\left ( z\right ) \log \left ( C\right ) -\chi \frac{N^{1+\nu }}{1+\nu }, \end{align*}

subject to

(30) \begin{equation} C_{t}+I_{t} =\exp \!\left ( a_{t}\right ) \left ( e_{t}K_{t}\right ) ^{\alpha }N_{t}^{1-\alpha }, \end{equation}
(31) \begin{equation} K_{t+1} =\left ( 1-\delta \left ( e_{t}\right ) \right ) K_{t}+I_{t}, \end{equation}
(32) \begin{equation} z_{t+1} =\rho _{z}z_{t}+\epsilon _{z,t+1}, (\Lambda _{1t}) \end{equation}
(33) \begin{equation} a_{t+1} =\rho _{a}a_{t}+\epsilon _{a,t+1},\text{ }\left ( \Lambda _{2t}\right )\end{equation}

for $t\geq 0,$ where $K_{t},$ $I_{t},$ $e_{t},$ $a_{t},$ and $z_{t}$ represent capital, investment, capital utilization rate, TFP shock, and preference shock, respectively. Equations (30) and (31) are the resource constraint and the law of motion for capital, respectively. Equations (32) and (33) give AR(1) process specifications, where $\epsilon _{a,t+1}$ and $\epsilon _{z,t+1}$ are independent Gaussian white noises with variances $\sigma _{a}^{2}$ and $\sigma _{z}^{2}$ .

More intensively utilized capital raises capital efficiency but also makes capital depreciate faster. Let the depreciation rate satisfy

\begin{align*} \delta \left ( e_{t}\right ) =\delta +\phi _{e}\frac{\left ( e_{t}\right ) ^{\gamma }-1}{\gamma },\text{ }\gamma \gt 1,\text{ }\phi _{e}\gt 0. \end{align*}

Suppose that the planner does not fully observe any state of the model and can acquire information about the states subject to discounted information costs. We choose the state vector as $x_{t}=\left ( z_{t},a_{t},K_{t}\right ) ^{\prime }$ and the control vector as $u_{t}=\left ( C_{t},N_{t},e_{t}\right ) ^{\prime }.$ Assume that the initial values of the states $K_{t},a_{t},$ and $z_{t}$ are drawn from independent Gaussian distributions. After using equation (30) to substitute $I_{t}$ in equation (31), we obtain the state transition equation:

(34) \begin{align} K_{t+1}=\left ( 1-\delta \right ) K_{t}+\exp \!\left ( a_{t}\right ) \left ( e_{t}K_{t}\right ) ^{\alpha }N_{t}^{1-\alpha }-C_{t}.\text{ }\left ( \Lambda _{3t}\right ) \end{align}

Then the planner’s problem is transformed into our framework.

Let $\Lambda _{it},$ $i=1,2,3,$ denote the (undiscounted) Lagrange multipliers associated with equations (32), (33), and (34). Their steady state values are given by:

\begin{align*} \overline{\Lambda }_{1}=\frac{\beta \log \left ( \overline{C}\right ) }{1-\beta \rho _{z}},\text{ }\overline{\Lambda }_{2}=\frac{\beta U_{c}\left ( \overline{C},\overline{N}\right ) \overline{K}^{\alpha }\overline{N}^{1-\alpha }}{1-\beta \rho _{a}}\text{, }\overline{\Lambda }_{3}=U_{c}\!\left ( \overline{C},\overline{N}\right ). \end{align*}

To solve this problem numerically, we set the parameter values $\alpha =0.33,$ $\delta =0.025,$ $\nu =1,$ $\beta =0.99,$ $\gamma =1.2,$ $\rho _{a}=0.95,$ $\rho _{z}=0.8,$ $\sigma _{a}=\sigma _{z}=0.01,$ and $\lambda =0.002.$ We also choose $\chi =7.8827$ such that the steady state labor is $1/3$ and $\phi _{e}=0.0351$ such that the steady state capital utilization rate is $1$ . The linearized solution gives the policy function $\widetilde{u}_{t}=-F\mathbb{E}\left [ \widetilde{x}_{t}|s^{t}\right ],$ where

\begin{align*} F=\left [ \begin{array}{c@{\quad}c@{\quad}c} -0.6978 & -0.3520 & -0.0213 \\ -0.0286 & -0.2855 & 0.0061 \\ -0.0662 & -1.8091 & 0.0955\end{array}\right ]. \end{align*}

Notice that the Kydland–Prescott LQ approximation method does not apply to the above formulation because equation (34) is nonlinear.

One can easily check that the linearized state transition matrix $A$ is invertible, but the covariance matrix $W$ for the state transition noise is not invertible. Assumption 2 is satisfied and one can apply our toolbox to solve the RI problem. The optimal steady state posterior covariance matrix is given by:

\begin{align*} \Sigma =\left [ \begin{array}{c@{\quad}c@{\quad}c} 0.0276 & -0.0028 & -0.0286 \\ -0.0028 & 0.0348 & 0.0382 \\ -0.0286 & 0.0382 & 1.9343\end{array}\right ] \times 10^{-2}. \end{align*}

The posterior variances of $z_{t}$ and $a_{t}$ are smaller than their unconditional values $5.2632\times 10^{-4}$ and $0.001$ . Even though $z_{t}$ and $a_{t}$ are ex ante independent, they are ex post negatively correlated. This is due to the following optimal signal form computed from our toolbox:

(35) \begin{align} s_{t}=0.3015z_{t}+0.9507a_{t}+0.0725\widetilde{K}_{t}+v_{t}, \end{align}

where $v_{t}$ is a Gaussian white noise with variance 0.0023. Observing the same realization of the signal $s_{t}$ and holding $\widetilde{K}_{t}$ and $v_{t}$ constant, the planner may attribute a positive preference shock $z_{t}$ to a negative TFP shock $a_{t}$ . Conversely, observing a positive signal, the planner may attribute it to either a positive TFP shock or a positive preference shock, again holding $\widetilde{K}_{t}$ and $v_{t}$ constant.

We find both methods discussed in Section 4 generate the same log-linearized optimal policy functions and almost the same IRFs for the log-linearized solution. Here, we present the IRFs using the first method. As is well known, there is no comovement in response to a preference shock under full information. In this case, a positive preference shock causes consumption and labor to rise, but investment to fall, as shown in Figure 2. By contrast, under RI, the optimal signal form in equation (35) implies that the inattentive planner confuses a preference shock with a TFP shock given the same realization of the signal. Observing a positive signal, the planner may interpret a positive preference shock as a positive TFP shock. Given the parameterization, consumption, labor, investment, and hence output all rise on impact in response to a positive preference shock.

Figure 2. Impulse responses of $I_{t}$ , $N_{t}$ , and $C_{t}$ to a one-percent innovation in the TFP and preference shocks under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

Notice that capital utilization plays an important role in this comovement result. By results not reported here but available upon request, we find that there is no comovement even under RI if there is no endogenous capital utilization. The intuition is as follows: a positive preference shock raises consumption and labor, but the rise in labor, and hence in output, is too small, so investment falls by the resource constraint. With endogenous capital utilization, a rise in labor also causes capital to be utilized more intensively and thus raises output further. The large increase in output allows investment to rise.

The bottom panels of Figure 2 show that RI generates damped and delayed responses to TFP shocks. Moreover, the responses are hump-shaped, even though there is no adjustment cost in this model.

As shown in MWY (2022), the signal dimension weakly increases as the information cost parameter $\lambda$ decreases. Intuitively, as the information cost becomes smaller, the planner acquires more information. For example, when $\lambda =0.0005,$ the optimal signal takes the form:

\begin{align*} s_{t}=\left [ \begin{array}{c@{\quad}c@{\quad}c} -0.3704 & -0.9285 & -0.0249 \\ 0.2356 & -0.1199 & 0.9644\end{array}\right ] \left [ \begin{array}{c} z_{t} \\ a_{t} \\ \widetilde{K}_{t}\end{array}\right ] +v_{t}, \end{align*}

where $v_{t}$ is a two-dimensional vector of independent Gaussian noises with variances 0.0004 and 0.1347. As $\lambda$ becomes smaller, acquiring information is less costly and hence the solution under RI is closer to that under full information.

As is well known in control theory, the optimal solution under full information does not depend on a particular choice of states and controls. For example, one can choose the control vector as $u_{t}=\left ( I_{t},N_{t},e_{t}\right ) ^{\prime }$ instead of $\left ( C_{t},N_{t},e_{t}\right ) ^{\prime }.$ Then one can eliminate $C_{t}$ using equation (30) and replace the state transition equation (34) for capital with equation (31). This procedure does not affect the solution under full information but matters under RI. The intuition is that control variables must be known to the DM; that is, they must be adapted to the DM’s information set $\left \{ s^{t}\right \}$ . The model constraints may then force some other variables to be non-adapted to the information set. For example, if one chooses $\left ( I_{t},N_{t},e_{t}\right ) ^{\prime }$ as the control vector, then consumption $C_{t}$ may not be adapted to $s^{t}$ because the resource constraint (30) must be satisfied. If both $C_{t}$ and $I_{t}$ were adapted to $s^{t},$ then output $\exp \!\left ( a_{t}\right ) \left ( e_{t}K_{t}\right ) ^{\alpha }N_{t}^{1-\alpha }$ would be adapted too. But this is generally impossible because both $a_{t}$ and $K_{t}$ are unobserved states.

5.3. Planner’s problem II: investment shock

In this subsection, we remove the preference shock in the previous example but introduce an investment shock. Under full information, the planner’s problem is to maximize the following objective:

\begin{align*} \mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}U\!\left ( C_{t},N_{t}\right ) \right ],\text{ }U\!\left ( C,N\right ) =\log \left ( C\right ) -\chi \frac{N^{1+\nu }}{1+\nu }, \end{align*}

subject to equation (30) and

(36) \begin{align} K_{t+1} &=\left ( 1-\delta \!\left ( e_{t}\right ) \right ) K_{t}+\exp \!\left (z_{t}\right ) I_{t},\text{ }(\Lambda _{3t}), \\ a_{t+1} &=\rho _{a}a_{t}+\epsilon _{a,t+1},\text{ }(\Lambda _{2t}), \nonumber \\ z_{t+1} &=\rho _{z}z_{t}+\epsilon _{z,t+1},\text{ }(\Lambda _{1t}), \nonumber \end{align}

where $z_{t}$ represents an investment shock and $\Lambda _{it},$ $i=1,2,3,$ are the Lagrange multipliers associated with the state transition equations for $z_{t}$ , $a_{t},$ and $K_{t}.$ Here, $\epsilon _{a,t}$ and $\epsilon _{z,t}$ are independent Gaussian white noises with variances $\sigma _{a}^{2}$ and $\sigma _{z}^{2}.$

We choose $x_{t}=\left ( z_{t},a_{t},K_{t}\right ) ^{\prime }$ as the state vector and $u_{t}=\left ( C_{t},N_{t},e_{t}\right ) ^{\prime }$ as the control vector. Use equation (30) to substitute for $I_{t}$ in equation (36). The steady state values of $\Lambda _{it},$ $i=1,2,3,$ satisfy

\begin{align*} \overline{\Lambda }_{1}=\frac{\beta \overline{I}}{\left ( 1-\beta \rho _{z}\right ) \overline{C}},\text{ }\overline{\Lambda }_{2}=\frac{\beta U_{c}\left ( \overline{C},\overline{N}\right ) \overline{K}^{\alpha }\overline{N}^{1-\alpha }}{1-\beta \rho _{a}}\text{, }\overline{\Lambda }_{3}=U_{c}\!\left ( \overline{C},\overline{N}\right ), \end{align*}

given the steady state capital utilization rate $\overline{e}=1.$

We set the parameter values as $\alpha =0.33,$ $\delta =0.025,$ $\nu =1,$ $\beta =0.99,$ $\gamma =1.2,$ $\rho _{a}=0.95,$ $\rho _{z}=0.7,$ $\sigma _{a}=0.01$ , $\sigma _{z}=0.01,$ and $\lambda =0.002.$ We choose $\chi =7.8827$ and $\phi _{e}=0.0351$ such that the steady state labor and capital utilization rate are equal to $1/3$ and 1, respectively. The coefficient matrix in the optimal linearized policy function $\widetilde{u}_{t}=-F\mathbb{E}\left [ \widetilde{x}_{t}|s^{t}\right ]$ is given by:

\begin{align*} F=\left [ \begin{array}{c@{\quad}c@{\quad}c} 0.6716 & -0.3520 & -0.0213 \\ -0.3881 & -0.2855 & 0.0061 \\ -2.0462 & -1.8091 & 0.0955\end{array}\right ]. \end{align*}

The matrix $W$ is singular, but $A$ is invertible and thus $AA^{\prime }+W$ is invertible. As Assumption 2 is satisfied, our toolbox can be applied to compute the optimal information structure under RI. We find the optimal posterior covariance matrix for the state is given by:

\begin{align*} \Sigma =\left [ \begin{array}{c@{\quad}c@{\quad}c} 0.0195 & -0.0018 & -0.0028 \\ -0.0018 & 0.0335 & 0.0335 \\ -0.0028 & 0.0335 & 1.7259\end{array}\right ] \times 10^{-2}. \end{align*}

The optimal signal takes the following form:

(37) \begin{align} s_{t}=0.2594z_{t}+0.9632a_{t}+0.0706\widetilde{K}_{t}+v_{t}, \end{align}

where $v_{t}$ is a Gaussian white noise with variance 0.0022. The signal is one-dimensional and indicates that the planner may attribute a positive investment shock to a positive TFP shock given a positive realization of the signal.

Figure 3 presents the IRFs to the two shocks. As is well known, the investment shock cannot generate comovement between consumption and investment under full information. In this case, a positive investment shock causes investment to rise, but consumption to fall. By contrast, under RI, the planner confuses an investment shock with a TFP shock given the optimally chosen signal of the form (37). Thus, the investment responses to either a positive TFP shock or a positive investment shock are damped and delayed. Moreover, consumption rises on impact following a positive investment shock, generating comovement with investment.

Figure 3. Impulse responses of $I_{t}$ , $N_{t}$ , and $C_{t}$ to a one-percent innovation in the TFP and investment shocks under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

Again, endogenous capital utilization plays an important role. With exogenous capital utilization ($e_{t}=1$ for all $t$ ), the model under RI cannot generate comovement. In response to a positive investment shock, both investment and capital utilization rise, causing output to rise by more than the increase in labor alone would deliver. This allows consumption to rise.

5.4. Planner’s problem III: news shock

We now remove the investment shock in the previous example and consider a news shock. Under full information, the planner’s problem is to maximize the following objective:

\begin{align*} \mathbb{E}\left [ \sum _{t=0}^{\infty }\beta ^{t}U\!\left ( C_{t},N_{t}\right ) \right ],\text{ }U\!\left ( C,N\right ) =\log \left ( C\right ) -\chi \frac{N^{1+\nu }}{1+\nu }, \end{align*}

subject to equations (30), (31), and

\begin{align*} a_{t+1}=\rho _{a}a_{t}+\epsilon _{a,t+1}+\epsilon _{n,t-h}, \end{align*}

where $\epsilon _{n,t-h}$ is a Gaussian white noise with variance $\sigma _{n}^{2}.$ The noise $\epsilon _{n,t}$ represents a news shock to future TFP that is announced at date $t$ but realized $h+1$ periods later.

This model does not directly fit into our general framework. We use a simple example with $h=2$ to illustrate how this model can be transformed into our framework. We define the state vector as $x_{t}=\left ( a_{t},K_{t},y_{1t},y_{2t},y_{3t}\right ) ^{\prime }$ and the control vector as $u_{t}=\left ( C_{t},N_{t},e_{t}\right ) ^{\prime }.$ The state transition equations become

\begin{align*} a_{t+1} & =\rho _{a}a_{t}+\epsilon _{a,t+1}+y_{1t},\text{ }\left ( \Lambda _{1t}\right )\\[3pt]K_{t+1} & =\left ( 1-\delta \!\left ( e_{t}\right ) \right ) K_{t}+\exp \!\left ( a_{t}\right ) \left ( e_{t}K_{t}\right ) ^{\alpha }N_{t}^{1-\alpha }-C_{t}\text{, }(\Lambda _{2t})\\[3pt]y_{1,t+1} &= y_{2t},\text{ }\left ( \Lambda _{3t}\right ) \\[3pt] y_{2,t+1} &= y_{3t},\text{ }\left ( \Lambda _{4t}\right ) \\[3pt] y_{3,t+1} &= \epsilon _{n,t+1},\text{ }\left ( \Lambda _{5t}\right ) \end{align*}

where $\Lambda _{it},$ $i=1,2,\ldots,5,$ represent the Lagrange multipliers associated with these equations. Only the steady state value of $\Lambda _{2t}$ matters for the LQ approximation, as the others are associated with linear constraints.
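For general horizon $h$ , the announced-but-unrealized innovations stack into a companion block of the state transition; a hypothetical sketch of the linear $(a_{t},y_{1t},\ldots,y_{h+1,t})$ block (capital enters only through the nonlinear part of $g$ and is omitted here):

```matlab
% Companion structure for the news block:
% a' = rhoa*a + y_1 + sigma_a*eps_a,  y_i' = y_{i+1},  y_{h+1}' = sigma_n*eps_n.
h = 2;  rhoa = 0.95;  sigma_a = 0.01;  sigma_n = 0.01;
n = h + 2;                         % dimension of (a, y_1, ..., y_{h+1})
Alin = zeros(n);
Alin(1, 1) = rhoa;                 % a' = rhoa*a ...
Alin(1, 2) = 1;                    % ... + y_1
Alin(2:h+1, 3:h+2) = eye(h);       % y_i' = y_{i+1}, i = 1, ..., h
Llin = zeros(n, 2);
Llin(1, 1) = sigma_a;              % eps_a hits a directly
Llin(n, 2) = sigma_n;              % eps_n enters at the tail of the queue
```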

We choose the same parameter values as in Section 5.2 except for $\lambda =0.005$ . In addition, we set $\sigma _{n}=1\%.$ Using our toolbox, we derive the coefficient matrix in the linearized policy function:

\begin{align*} F=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -0.3520 & -0.0213 & -0.3293 & -0.3184 & -0.3078 \\ -0.2855 & 0.0061 & 0.1327 & 0.1283 & 0.1240 \\ -1.8091 & 0.0955 & 0.3066 & 0.2964 & 0.2865\end{array}\right ]. \end{align*}

The optimal signal takes the form:

(38) \begin{align} s_{t}=0.6619a_{t}+0.0490K_{t}+0.5340\epsilon _{n,t-2}+0.4184\epsilon _{n,t-1}+0.3151\epsilon _{n,t}+v_{t}, \end{align}

where $v_{t}$ is a Gaussian white noise with variance 0.0026. This signal form is similar to that in Maćkowiak et al. (2018). Their model does not have an endogenous capital state. Maćkowiak and Wiederholt (2020) include capital in an equilibrium model and show that the household's or firm's decision problem can be approximated by an LQ tracking problem. These two papers show that RI can help explain the comovement puzzle. Unlike these papers, here we analyze a social planner's control problem in which $K_{t}$ is a hidden state and is included in the signal.

As is well known, a news shock cannot generate comovement in a standard RBC model under full information [Beaudry and Portier (2004)]. As shown in Figure 4, a positive news shock about future TFP raises consumption but reduces labor and investment. Introducing variable capital utilization alone does not help, because capital utilization declines, making output decline even more. Thus, a positive news shock cannot cause both consumption and investment to rise. To generate comovement in the RBC framework, Jaimovich and Rebelo (2009) argue that one has to introduce two additional elements: preferences that allow the modeler to parameterize the strength of wealth effects, and investment adjustment costs.

Figure 4. Impulse responses of $I_{t}$ , $N_{t}$ , and $C_{t}$ to a one-percent innovation in the TFP and news shocks under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

By contrast, Figure 4 shows that RI combined with variable capital utilization can generate comovement. The intuition comes from the optimal signal form in equation (38), which implies that the planner cannot distinguish a positive news shock about the future TFP from a positive shock to the current TFP. Thus, the planner raises labor supply, investment, and capital utilization in response to a positive news shock. The model also generates damped, delayed, and hump-shaped responses to the current TFP shock, even though there is no investment adjustment cost.

5.5. Durable and nondurable consumption

In this subsection, we study an agent's consumption/saving problem with both durable and nondurable goods. We modify the models of Bernanke (1985) and Luo et al. (2015) by introducing a general power utility function and adjustment costs.

Under full information, the agent chooses nondurable consumption $\left \{ c_{t}\right \},$ durable good investment $\left \{ I_{t}\right \},$ and asset holdings $\left \{ b_{t+1}\right \}$ to maximize discounted utility:

\begin{align*} \mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ U\!\left ( c_{t},k_{t}\right ) -\frac{\phi _{b}}{2}\left ( b_{t}-\overline{b}\right ) ^{2}\right ],\text{ }U\!\left ( c,k\right ) =\frac{\left ( c^{\theta }k^{1-\theta }\right ) ^{1-\gamma }}{1-\gamma }, \end{align*}

subject to

(39a) \begin{align} c_{t}+b_{t+1}+I_{t}+\frac{\phi _{k}k_{t}}{2}\left ( \frac{I_{t}}{k_{t}}-\delta \right ) ^{2} & =Rb_{t}+\overline{y}\exp \!\left ( y_{1t}+y_{2t}+\epsilon _{zt}\right ),\text{ }\left ( \Lambda _{5t}\right )\\ k_{t+1} & =\left ( 1-\delta \right ) k_{t}+I_{t},\text{ }\left ( \Lambda _{4t}\right )\nonumber\\ y_{1,t+1} & =\rho _{1}y_{1t}+\epsilon _{1,t+1},\text{ }\left ( \Lambda _{1t}\right )\nonumber\\ y_{2,t+1} & =\rho _{2}y_{2t}+\epsilon _{2,t+1},\text{ }\left ( \Lambda _{2t}\right )\nonumber \end{align}

where $\beta =1/R,$ $k_{t}$ represents durable consumption, and $\epsilon _{1t},$ $\epsilon _{2t},$ and $\epsilon _{zt}$ are independent Gaussian white noises with variances $\sigma _{1}^{2},$ $\sigma _{2}^{2},$ and $\sigma _{z}^{2},$ respectively. Here, $y_{1t}$ and $y_{2t}$ are two persistent components of labor income, and $\epsilon _{zt}$ is a purely temporary component.

Assume that durable investment incurs quadratic adjustment costs with parameter $\phi _{k}\gt 0$. Following Schmitt-Grohé and Uribe (2003), we also introduce portfolio adjustment costs with parameter $\phi _{b}\gt 0$ to ensure that the model has a nonstochastic steady state given $\beta =1/R.$ One can show that the steady state asset holdings are $\overline{b}$ and the steady state nondurable and durable consumption levels are given by:

\begin{align*} \overline{c}=\frac{(R-1)\bar{b}+\bar{y}}{1+\frac{\delta \beta (1-\theta )}{\theta \lbrack 1-\beta (1-\delta )]}}\gt 0,\text{ }\overline{k}=\frac{\beta \!\left ( 1-\theta \right ) }{\theta \left [ 1-\beta \!\left ( 1-\delta \right ) \right ] }\overline{c}. \end{align*}
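These expressions follow from the steady state versions of the durable-good Euler equation and the budget constraint; we sketch the standard derivation for completeness. With $\beta R=1$ and both adjustment costs vanishing at the steady state, the Euler equation for durables implies

\begin{align*} U_{c}=\beta \left [ U_{k}+\left ( 1-\delta \right ) U_{c}\right ] \quad \Longrightarrow \quad \frac{U_{k}}{U_{c}}=\frac{\left ( 1-\theta \right ) \overline{c}}{\theta \overline{k}}=\frac{1-\beta \!\left ( 1-\delta \right ) }{\beta }, \end{align*}

which pins down the ratio $\overline{k}/\overline{c}$. Substituting $\overline{I}=\delta \overline{k}$ and $b_{t+1}=b_{t}=\overline{b}$ into the budget constraint gives $\overline{c}+\delta \overline{k}=\left ( R-1\right ) \overline{b}+\overline{y}$, which yields the expression for $\overline{c}$.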

This is a version of the permanent income hypothesis in that both durable and nondurable consumption levels are proportional to the annuity value of the agent's human and nonhuman wealth, $\left ( R-1\right ) \overline{b}+\overline{y}$, in the steady state.

When there is no adjustment cost ($\phi _{k}=\phi _{b}=0$) and utility is quadratic, for example, $U\!\left ( c,k\right ) =-\left ( c-c_{\max }\right ) ^{2}-\theta \!\left ( k-k_{\max }\right ) ^{2}$, Luo et al. (2015) show that one can choose the expected lifetime resources (analogous to permanent income) as a single state variable so that optimal durable and nondurable consumption is linear in this state variable. As both consumption processes follow a random walk, there is no nonstochastic steady state. Unfortunately, their approach does not apply to models beyond the LQ framework, like ours.

We now apply our LQ approximation approach. We choose the state vector as $x_{t}=\left ( y_{1t},y_{2t},y_{zt},k_{t},b_{t}\right ) ^{\prime }$ and the control vector as $u_{t}=\left ( c_{t},I_{t}\right ) ^{\prime }.$ The state variable $y_{zt}$ represents the temporary shock $\epsilon _{zt}$ , and its transition equation is given by:

\begin{align*} y_{z,t+1}=\epsilon _{z,t+1}.\text{ }\left ( \Lambda _{3t}\right ) \end{align*}

Replace $\epsilon _{zt}$ with $y_{zt}$ in (39a). Then the model fits into our general framework. Let $\Lambda _{it},$ $i=1,\ldots,5,$ denote the Lagrange multipliers associated with the state transition equations for $x_{t}$ . One can derive their steady state values as:

\begin{align*} \overline{\Lambda }_{1} & =\frac{\beta \overline{y}}{1-\beta \rho _{1}}U_{c},\quad \overline{\Lambda }_{2}=\frac{\beta \overline{y}}{1-\beta \rho _{2}}U_{c},\quad \overline{\Lambda }_{3}=\beta \overline{y}U_{c},\quad \overline{\Lambda }_{4}=\overline{\Lambda }_{5}=U_{c},\\ U_{c} & =\left ( \overline{c}^{\theta }\overline{k}^{1-\theta }\right ) ^{-\gamma }\theta \overline{c}^{\theta -1}\overline{k}^{1-\theta }. \end{align*}

Suppose that the agent does not fully observe the states and acquires information to learn about them subject to discounted information costs modeled by discounted mutual information. To solve the model numerically, we set the following parameter values: $R=1.01,$ $\beta =1/R,$ $\gamma =2,$ $\theta =0.8,$ $\overline{y}=1,$ $\overline{b}=0.2,$ $\delta =0.02,$ $\phi _{k}=0.5,$ $\phi _{b}=0.001,$ $\rho _{1}=0.97,$ $\rho _{2}=0.8,$ $\sigma _{1}^{2}=0.0001,$ $\sigma _{2}^{2}=0.003,$ $\sigma _{z}^{2}=0.01,$ and $\lambda =0.005.$
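As a quick consistency check, the closed-form steady state above can be evaluated directly at these parameter values. The following minimal Python sketch (ours, not part of the authors' Matlab toolbox) computes $\overline{c}$, $\overline{k}$, and the steady state multipliers, giving $\overline{c}\approx 0.859$ and $\overline{k}\approx 7.157$:

```python
import numpy as np

# Numerical check of the steady state formulas at the stated parameter values
# (a minimal sketch; the variable names are ours, not the toolbox's).
R, gamma, theta = 1.01, 2.0, 0.8
beta = 1.0 / R
y_bar, b_bar, delta = 1.0, 0.2, 0.02
rho1, rho2 = 0.97, 0.8

ratio = beta * (1 - theta) / (theta * (1 - beta * (1 - delta)))  # k_bar/c_bar
c_bar = ((R - 1) * b_bar + y_bar) / (1 + delta * ratio)
k_bar = ratio * c_bar

# Marginal utility of nondurables at the steady state
U_c = ((c_bar**theta * k_bar**(1 - theta))**(-gamma)
       * theta * c_bar**(theta - 1) * k_bar**(1 - theta))

Lam1 = beta * y_bar / (1 - beta * rho1) * U_c
Lam2 = beta * y_bar / (1 - beta * rho2) * U_c
Lam3 = beta * y_bar * U_c
Lam4 = Lam5 = U_c

print(f"c_bar = {c_bar:.4f}, k_bar = {k_bar:.4f}")
print(f"U_c = {U_c:.4f}, Lam1 = {Lam1:.4f}, Lam2 = {Lam2:.4f}")
```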

Our toolbox delivers the coefficient matrix in the linearized policy function:

\begin{align*} F=\left [ \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} -0.3980 & -0.1037 & -0.0234 & -0.0071 & -0.0236 \\ -0.6889 & -0.2660 & -0.0851 & 0.1489 & -0.0860\end{array}\right ]. \end{align*}

The optimal signal is given by:

(40) \begin{align} s_{t}=0.9593\widetilde{y}_{1t}+0.2678\widetilde{y}_{2t}+0.0604\widetilde{y}_{zt}-0.0240\widetilde{k}_{t}+0.0610\widetilde{b}_{t}+v_{t}, \end{align}

where $v_{t}$ is a Gaussian white noise with variance 0.0129.

Figure 5 shows the IRFs to a one-percent positive innovation shock to $y_{1t}.$ Under full information, both nondurable consumption $c_{t}$ and durable good expenditure $I_{t}$ rise on impact and decline monotonically to the steady state. By contrast, both responses under RI are damped, delayed, and hump-shaped. Figures 6 and 7 present the IRFs to a one-percent positive innovation shock to the less persistent income component $y_{2t}$ and the purely temporary component $y_{zt}.$ The response patterns are similar across the three shocks, though their magnitudes differ. The intuition comes from the optimal signal form in (40): given this signal, the agent cannot differentiate the three sources of income shocks.
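Mechanically, the damped and delayed responses arise because the agent acts on beliefs filtered from the noisy signal rather than on the true states. The following minimal sketch illustrates this filtering logic with a steady-state Kalman filter. All numbers (the two-state dynamics, signal loading, policy coefficients, and variances) are made up for illustration rather than taken from the paper's calibration, the signal is fixed in the spirit of equation (40) rather than optimally chosen, and the control's feedback on the states is abstracted from.

```python
import numpy as np

# Why RI responses are damped and delayed: the DM acts on filtered beliefs.
# Illustrative two-state example only (not the paper's calibration).
A = np.array([[0.97, 0.00],
              [0.05, 0.98]])               # hidden-state transition
W = np.diag([1e-4, 1e-6])                  # state innovation covariance
G = np.array([[0.96, 0.27]])               # signal loading, cf. eq. (40)
V = np.array([[0.0129]])                   # signal noise variance
F = np.array([[-0.40, -0.10]])             # linear policy on beliefs

# Steady-state predictive covariance P from the Riccati recursion
P = W.copy()
for _ in range(1000):
    S = G @ P @ G.T + V
    K = A @ P @ G.T @ np.linalg.inv(S)     # predictive Kalman gain
    P = A @ P @ A.T + W - K @ S @ K.T

Kf = P @ G.T @ np.linalg.inv(G @ P @ G.T + V)   # filtering (update) gain

# Impulse response to a unit shock in state 1 at t = 0 (signal noise at zero)
x, xhat = np.array([1.0, 0.0]), np.zeros(2)
for t in range(8):
    s = G @ x                              # realized (mean) signal path
    xpost = xhat + Kf @ (s - G @ xhat)     # update beliefs with signal s_t
    print(f"t={t}: full info u = {(F @ x)[0]:.4f}, RI u = {(F @ xpost)[0]:.4f}")
    x = A @ x                              # true state evolves
    xhat = A @ xpost                       # predict next-period beliefs
```

Running the sketch shows the full-information response jumping on impact while the filtered (RI) response starts small and builds only gradually as signals accumulate, mirroring the damped, hump-shaped IRFs in Figures 5–7.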

Figure 5. Impulse responses to a one-percent innovation shock to $y_{1t}$ under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

Figure 6. Impulse responses to a one-percent innovation shock to $y_{2t}$ under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

Figure 7. Impulse responses to a one-percent innovation shock to $\epsilon _{zt}$ under full information and under RI. All vertical axes are measured in percentage changes from the steady state.

6. Conclusion

In this paper, we have proposed an LQ approximation approach to dynamic nonlinear rationally inattentive control problems with multiple states and multiple controls. We have also provided a toolbox to implement this approach efficiently. Applying our toolbox to five economic examples, we show that RI, possibly combined with variable capital utilization, can help explain the comovement puzzle in the DSGE literature.

While our approach is quite general and can be implemented as easily as standard DSGE methods, some qualifications and weaknesses should be stressed. First, our approach is local in nature and requires the problem to be smooth and the solution to be interior. Thus, it does not apply to problems with occasionally binding constraints or nonsmooth objective functions. Second, our approach applies only to decision problems under Gaussian uncertainty. Solving market equilibrium problems requires an outer loop to find equilibrium prices. Third, our approach delivers a correct linearized solution around a deterministic steady state. As Kim and Kim (2003) point out, using such a linearized solution to evaluate welfare can lead to spurious results. Finally, because the certainty equivalence principle holds in our approach, the equilibrium distribution from the linearized solution does not take into account the feedback of risk on the DMs' behavior in the nonlinear control problems.

Appendix A: Proofs

Proof of Lemma 1: Define the function $H$ as in (13). Using (1), we derive

\begin{align*} \mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}f\!\left ( x_{t},u_{t}\right ) =\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ f\!\left ( x_{t},u_{t}\right ) -\overline{\Lambda }^{\prime }\left ( x_{t+1}-g\!\left ( x_{t},u_{t},\epsilon _{t+1}\right ) \right ) \right ]. \end{align*}

Take a second-order Taylor expansion around the nonstochastic steady state. The constant term is

\begin{align*} \frac{f\!\left ( \overline{x},\bar{u}\right ) }{1-\beta }. \end{align*}

The first-order terms are equal to

\begin{eqnarray*} &&\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ f_{x}\widetilde{x}_{t}+f_{u}\widetilde{u}_{t}-\overline{\Lambda }^{\prime }\left ( \widetilde{x}_{t+1}-g_{x}\widetilde{x}_{t}-g_{u}\widetilde{u}_{t}-g_{\epsilon }\widetilde{\epsilon }_{t+1}\right ) \right ] \\ && \quad =\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left ( f_{u}+\overline{\Lambda }^{\prime }g_{u}\right ) \widetilde{u}_{t}+\mathbb{E}\!\left ( f_{x}+\overline{\Lambda }^{\prime }g_{x}\right ) \widetilde{x}_{0}+\mathbb{E}\left [ \beta \!\left ( f_{x}+\overline{\Lambda }^{\prime }g_{x}\right ) \widetilde{x}_{1}-\overline{\Lambda }^{\prime }\widetilde{x}_{1}\right ] \\ &&\qquad+\beta \mathbb{E}\left [ \beta \!\left ( f_{x}+\overline{\Lambda }^{\prime }g_{x}\right ) \widetilde{x}_{2}-\overline{\Lambda }^{\prime }\widetilde{x}_{2}\right ] +\ldots -\lim _{T\rightarrow \infty }\beta ^{T}\overline{\Lambda }^{\prime }\mathbb{E}\left [ \widetilde{x}_{T+1}\right ]. \end{eqnarray*}

Clearly, $\mathbb{E}\widetilde{x}_{0}=0$ as $x_{0}$ is exogenously given and $\mathbb{E}\widetilde{\epsilon }_{t+1}=0$ as $\widetilde{\epsilon }_{t+1}=\epsilon _{t+1}.$ It follows from (7), (8), and $\lim _{T\rightarrow \infty }\mathbb{E}\left [ \widetilde{x}_{T+1}\right ] =0$ that the above first-order terms are equal to zero.

The second-order terms are equal to

\begin{eqnarray*} &&\frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t} \\ \widetilde{\epsilon }_{t+1}\end{array}\right ] ^{\prime }\left [ \begin{array}{l@{\quad}l@{\quad}l} H_{xx} & H_{xu} & H_{x\epsilon } \\ H_{ux} & H_{uu} & H_{u\epsilon } \\ H_{\epsilon x} & H_{\epsilon u} & H_{\epsilon \epsilon }\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t} \\ \widetilde{\epsilon }_{t+1}\end{array}\right ] \\[6pt] &&\quad=\frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \widetilde{x}_{t}^{\prime },\widetilde{u}_{t}^{\prime }\right ] \left [ \begin{array}{c@{\quad}c} H_{xx} & H_{xu} \\ H_{ux} & H_{uu}\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t}\end{array}\right ] +\frac{\text{Tr}\!\left ( H_{\epsilon \epsilon }\right ) }{2\left ( 1-\beta \right ) }, \end{eqnarray*}

because $\mathbb{E}\epsilon _{t+1}=0$, $\epsilon _{t+1}$ is independent of $\left ( \widetilde{x}_{t},\widetilde{u}_{t}\right )$, and

\begin{align*} \mathbb{E}\left [ \widetilde{\epsilon }_{t+1}^{\prime }H_{\epsilon \epsilon }\widetilde{\epsilon }_{t+1}\right ] =\text{Tr}\!\left ( H_{\epsilon \epsilon }\mathbb{E}\left [ \widetilde{\epsilon }_{t+1}\widetilde{\epsilon }_{t+1}^{\prime }\right ] \right ) =\text{Tr}\!\left ( H_{\epsilon \epsilon }\right ). \end{align*}

Combining the terms above yields the desired result. Q.E.D.

Proof of Lemma 2: Define the Lagrangian as:

\begin{align*} \frac{1}{2}\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\left [ \widetilde{x}_{t}^{\prime },\widetilde{u}_{t}^{\prime }\right ] \left [ \begin{array}{c@{\quad}c} H_{xx} & H_{xu} \\ H_{ux} & H_{uu}\end{array}\right ] \left [ \begin{array}{c} \widetilde{x}_{t} \\ \widetilde{u}_{t}\end{array}\right ] -\mathbb{E}\sum _{t=0}^{\infty }\beta ^{t}\Phi _{t}^{\prime }\left [ \widetilde{x}_{t+1}-g_{x}\widetilde{x}_{t}-g_{u}\widetilde{u}_{t}-g_{\epsilon }\epsilon _{t+1}\right ], \end{align*}

where $\Phi _{t}$ denotes the Lagrange multiplier adapted to $s^{t}$ . The first-order conditions are given by:

\begin{eqnarray*} \widetilde{u}_{t} &:&\text{ }H_{uu}\widetilde{u}_{t}+H_{ux}\mathbb{E}_{t}\left [ \widetilde{x}_{t}\right ] +g_{u}^{\prime }\Phi _{t}=0, \\[5pt] \widetilde{x}_{t+1} &:&\text{ }\Phi _{t}=\beta \mathbb{E}_{t}\left [ H_{xx}\widetilde{x}_{t+1}+H_{xu}\widetilde{u}_{t+1}+g_{x}^{\prime }\Phi _{t+1}\right ]. \end{eqnarray*}

By the definition of $H$ in equation (13), we deduce that the above conditions are the same as equations (9) and (10) by setting $\Phi _{t}=\widetilde{\Lambda }_{t}.$ Q.E.D.

Footnotes

We are grateful to two anonymous referees and an associate editor for helpful comments. Bo Zhang gratefully acknowledges support from the Guanghua Talent Project of Southwestern University of Finance and Economics.

2 There is also a literature that studies rationally inattentive discrete choice problems [e.g. Woodford (2009), Matějka and McKay (2015), Steiner et al. (2017), Caplin et al. (2019), and Miao and Xing (in press)]. Sims (2006), Mondria (2010), Kacperczyk et al. (2016), Kőszegi and Matějka (2020), Hébert and La’O (2021), Fulton (2022), and Miao and Su (2023) study static RI models with continuous choices. By contrast, our paper focuses on dynamic control problems with continuous choices.

3 See, for example, Zorn (2018) and Maćkowiak and Wiederholt (2015) for applications of this approach.

4 It is defined as:

\begin{align*} H\!\left ( X|Y\right ) =-\int \int p\!\left ( x,y\right ) \log p(x|y)dxdy, \end{align*}

where $p\!\left ( x,y\right )$ and $p\!\left ( x|y\right )$ are the joint pdf of $\left ( X,Y\right )$ and the conditional pdf of $X$ given $Y.$ See Cover and Thomas (2006).
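For the Gaussian case used throughout this paper, these quantities have standard closed forms [Cover and Thomas (2006)]: if $X$ is $n$-dimensional and conditionally Gaussian given $Y$ with conditional covariance $\Sigma _{X|Y}$, then

\begin{align*} H\!\left ( X|Y\right ) =\frac{1}{2}\log \left [ \left ( 2\pi e\right ) ^{n}\det \Sigma _{X|Y}\right ], \end{align*}

so the mutual information $I\!\left ( X;Y\right ) =H\!\left ( X\right ) -H\!\left ( X|Y\right ) =\frac{1}{2}\log \left [ \det \Sigma _{X}/\det \Sigma _{X|Y}\right ]$ depends only on the prior and posterior covariance matrices.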

5 There is also a transversality condition:

\begin{align*} \lim _{t\rightarrow \infty }\beta ^{t}\mathbb{E}\left [ \lambda _{t}x_{t+1} \right ] =0. \end{align*}

6 We use the conventional matrix inequality notations: $X\succ \!\left ( \succeq \right ) Y$ means that $X-Y$ is positive definite (semidefinite) and $X\prec \left ( \preceq \right ) Y$ means $X-Y$ is negative definite (semidefinite).

7 See Angeletos and Lian (2022) for a discussion of the importance of both partial information and variable capital utilization for generating comovement.

References

Afrouzi, H. and Yang, C. (2021) Dynamic Rational Inattention and the Phillips Curve. Working Paper, Columbia University.
Angeletos, G.-M. and Lian, C. (2017) Incomplete information in macroeconomics: Accommodating frictions in coordination. In: Taylor, J. B. and Uhlig, H. (eds.), Handbook of Macroeconomics, vol. 2, pp. 1065–1240. Amsterdam: North Holland.
Angeletos, G.-M. and Lian, C. (2022) Confidence and the propagation of demand shocks. Review of Economic Studies 89(3), 1085–1119.
Beaudry, P. and Portier, F. (2004) An exploration into Pigou's theory of cycles. Journal of Monetary Economics 51(6), 1183–1216.
Benigno, P. and Woodford, M. (2012) Linear-quadratic approximation of optimal policy problems. Journal of Economic Theory 147(1), 1–42.
Bernanke, B. (1985) Adjustment costs, durables, and aggregate consumption. Journal of Monetary Economics 15(1), 41–68.
Caplin, A., Dean, M. and Leahy, J. (2019) Rational inattention, optimal consideration sets, and stochastic choice. Review of Economic Studies 86(3), 1061–1094.
Cover, T. M. and Thomas, J. A. (2006) Elements of Information Theory, 2nd ed. New York: John Wiley & Sons, Inc.
Fulton, C. (2022) Choosing what to pay attention to. Theoretical Economics 17(1), 153–184.
Hébert, B. and La’O, J. (2021) Information Acquisition, Efficiency, and Non-fundamental Volatility. Working Paper, Columbia University and Stanford University.
Jaimovich, N. and Rebelo, S. (2009) Can news about the future drive the business cycle? American Economic Review 99(4), 1097–1118.
Judd, K. L. (1998) Numerical Methods in Economics. Cambridge, MA: The MIT Press.
Kacperczyk, M., Van Nieuwerburgh, S. and Veldkamp, L. (2016) A rational theory of mutual funds' attention allocation. Econometrica 84(2), 571–626.
Kim, J. and Kim, S. (2003) Spurious welfare reversals in international business cycle models. Journal of International Economics 60(2), 471–500.
Kydland, F. E. and Prescott, E. C. (1982) Time to build and aggregate fluctuations. Econometrica 50(6), 1345–1370.
Kőszegi, B. and Matějka, F. (2020) Choice simplification: A theory of mental budgeting and naive diversification. Quarterly Journal of Economics 135(2), 1153–1207.
Levine, P., Pearlman, J. and Pierse, R. (2008) Linear-quadratic approximation, external habit, and targeting rules. Journal of Economic Dynamics and Control 32(10), 3315–3349.
Luo, Y. (2008) Consumption dynamics under information processing constraints. Review of Economic Dynamics 11(2), 366–385.
Luo, Y., Nie, J. and Young, E. R. (2015) Slow information diffusion and the inertial behavior of durable consumption. Journal of the European Economic Association 13(5), 805–840.
Magill, M. J. P. (1977) Some new results on the local stability of the process of capital accumulation. Journal of Economic Theory 15(1), 174–210.
Matějka, F. and McKay, A. (2015) Rational inattention to discrete choices: A new foundation for the multinomial logit model. American Economic Review 105(1), 272–298.
Maćkowiak, B., Matějka, F. and Wiederholt, M. (2018) Dynamic rational inattention: Analytical results. Journal of Economic Theory 176, 650–692.
Maćkowiak, B., Matějka, F. and Wiederholt, M. (2020) Rational Inattention: A Review. Working Paper, Sciences Po, CEPR DP1540.
Maćkowiak, B. and Wiederholt, M. (2009) Optimal sticky prices under rational inattention. American Economic Review 99(3), 769–803.
Maćkowiak, B. and Wiederholt, M. (2015) Business cycle dynamics under rational inattention. Review of Economic Studies 82(4), 1502–1532.
Maćkowiak, B. and Wiederholt, M. (2020) Rational Inattention and the Business Cycle Effects of Productivity and News Shocks. Working Paper, European Central Bank and Sciences Po.
Miao, J. and Su, D. (2023) Asset market equilibrium under rational inattention. Economic Theory 75(1), 1–30.
Miao, J. and Wu, J. (2021) A Matlab Toolbox to Solve Dynamic Multivariate Rational Inattention Problems in the LQG Framework. Working Paper, Boston University.
Miao, J., Wu, J. and Young, E. R. (2022) Multivariate rational inattention. Econometrica 90(2), 907–945.
Miao, J. and Xing, H. (in press) Dynamic discrete choice under rational inattention. Economic Theory.
Mondria, J. (2010) Portfolio choice, attention allocation, and price comovement. Journal of Economic Theory 145(5), 1837–1864.
Peng, L. (2005) Learning with information capacity constraints. Journal of Financial and Quantitative Analysis 40(2), 307–329.
Peng, L. and Xiong, W. (2006) Investor attention, overconfidence and category learning. Journal of Financial Economics 80(3), 563–602.
Schmitt-Grohé, S. and Uribe, M. (2003) Closing small open economy models. Journal of International Economics 61(1), 163–185.
Sims, C. A. (1998) Stickiness. Carnegie-Rochester Conference Series on Public Policy 49, 317–356.
Sims, C. A. (2003) Implications of rational inattention. Journal of Monetary Economics 50(3), 665–690.
Sims, C. A. (2006) Rational inattention: Beyond the linear-quadratic case. American Economic Review 96(2), 158–163.
Sims, C. A. (2011) Rational inattention and monetary economics. In: B. M. Friedman and M. Woodford (eds.), Handbook of Monetary Economics, vol. 3A. Amsterdam: North-Holland.
Steiner, J., Stewart, C. and Matějka, F. (2017) Rational inattention dynamics: Inertia and delay in decision-making. Econometrica 85(2), 521–553.
Tanaka, T., Kim, K. K., Parrilo, P. A. and Mitter, S. K. (2017) Semidefinite programming approach to Gaussian sequential rate-distortion trade-offs. IEEE Transactions on Automatic Control 62(4), 1896–1910.
Woodford, M. (2009) Information-constrained state-dependent pricing. Journal of Monetary Economics 56, S100–S124.
Zorn, P. (2018) Investment Under Rational Inattention: Evidence from US Sectoral Data. Working Paper, University of Munich.