Working Paper: CEPR Discussion Paper DP5964
Authors: Pierpaolo Benigno; Michael Woodford
Abstract: We consider a general class of nonlinear optimal policy problems involving forward-looking constraints (such as the Euler equations that are typically present as structural equations in DSGE models), and show that it is possible, under regularity conditions that are straightforward to check, to derive a problem with linear constraints and a quadratic objective that approximates the exact problem. The LQ approximate problem is computationally simple to solve, even in the case of moderately large state spaces and flexibly parameterized disturbance processes, and its solution represents a local linear approximation to the optimal policy for the exact model in the case that stochastic disturbances are small enough. We derive the second-order conditions that must be satisfied in order for the LQ problem to have a solution, and show that these are stronger, in general, than those required for LQ problems without forward-looking constraints. We also show how the same linear approximations to the model structural equations and quadratic approximation to the exact welfare measure can be used to correctly rank alternative simple policy rules, again in the case of small enough shocks.
Keywords: optimization
JEL Codes: C61
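As a point of reference for the kind of problem the paper approximates, the following minimal Python sketch solves a standard discounted LQ regulator *without* forward-looking constraints by Riccati iteration. The matrices `A`, `B`, `Q`, `R`, the parameter values, and the function name are illustrative assumptions rather than the authors' model; the positive-definiteness requirement on `R + beta B'PB` is the familiar second-order condition for this backward-looking case, which the paper shows must be strengthened once forward-looking constraints are present.

```python
import numpy as np

def solve_discounted_lqr(A, B, Q, R, beta=0.99, tol=1e-10, max_iter=10_000):
    """Iterate the discounted Riccati equation for
        min E sum_t beta^t (x' Q x + u' R u),  x_{t+1} = A x + B u + shock.
    Returns (P, F): the value function is x' P x and the policy is u = -F x.
    (Illustrative sketch only; not the LQ approximation derived in the paper.)
    """
    P = np.copy(Q)
    for _ in range(max_iter):
        S = R + beta * B.T @ P @ B          # must be positive definite (second-order condition here)
        F = beta * np.linalg.solve(S, B.T @ P @ A)
        P_next = Q + beta * A.T @ P @ A - beta * A.T @ P @ B @ F
        if np.max(np.abs(P_next - P)) < tol:
            return P_next, F
        P = P_next
    raise RuntimeError("Riccati iteration did not converge")

# Hypothetical one-state, one-instrument example: persistent state, costly instrument.
A = np.array([[0.9]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[0.1]])
P, F = solve_discounted_lqr(A, B, Q, R, beta=0.99)
print("Value matrix P:", P, "feedback rule u = -F x with F:", F)
```

By certainty equivalence, the resulting feedback rule is linear in the state and independent of the shock variance, which is the sense in which an LQ solution provides a local linear approximation to optimal policy when disturbances are small.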
| Cause | Effect |
|---|---|
| LQ methods (C51) | simplified solution to complex policy problems (D78) |
| small stochastic disturbances (C69) | accurate policy guidance from LQ approximations (C54) |
| stronger second-order conditions (C62) | robustness of the LQ solution (C61) |
| naive LQ approximations (C51) | incorrect coefficients for policy rules (C51) |