Working Paper: NBER ID: w28981
Authors: Mahdi Ebrahimi Kahou; Jesús Fernández-Villaverde; Jesse Perla; Arnav Sood
Abstract: We propose a new method for solving high-dimensional dynamic programming problems and recursive competitive equilibria with a large (but finite) number of heterogeneous agents using deep learning. We avoid the curse of dimensionality thanks to three complementary techniques: (1) exploiting symmetry in the approximate law of motion and the value function; (2) constructing a concentration of measure to calculate high-dimensional expectations using a single Monte Carlo draw from the distribution of idiosyncratic shocks; and (3) designing and training deep learning architectures that exploit symmetry and concentration of measure. As an application, we find a global solution of a multi-firm version of the classic Lucas and Prescott (1971) model of investment under uncertainty. First, we compare the solution against a linear-quadratic Gaussian version for validation and benchmarking. Next, we solve the nonlinear version where no accurate or closed-form solution exists. Finally, we describe how our approach applies to a large class of models in economics.
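As a rough illustration of technique (1), the sketch below shows one way a permutation-invariant (DeepSets-style) value-function approximation can be set up. It is an illustrative example rather than the paper's architecture: the NumPy forward pass, layer sizes, tanh activations, and mean pooling are all assumptions made for the sketch. The only point it demonstrates is that the other agents' states enter the value function through a pooled embedding, so permuting them leaves the output unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Random weights for a small MLP with the given layer sizes (illustrative only)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    """Forward pass with tanh hidden layers and a linear output layer."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# phi embeds each other agent's state; rho maps (own state, pooled embedding) to a value.
phi = mlp_params([1, 16, 8], rng)
rho = mlp_params([1 + 8, 16, 1], rng)

def value(own_state, other_states):
    """Permutation-invariant V(x_i, X_{-i}) via mean pooling of per-agent embeddings."""
    pooled = mlp(phi, other_states.reshape(-1, 1)).mean(axis=0)  # invariant to agent ordering
    inputs = np.concatenate(([own_state], pooled))
    return mlp(rho, inputs[None, :])[0, 0]

X = rng.standard_normal(500)                 # states of the other agents
print(value(0.3, X))                         # same value ...
print(value(0.3, rng.permutation(X)))        # ... after permuting the other agents
```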
Keywords: Dynamic Programming; Deep Learning; Heterogeneous Agents; Symmetry
JEL Codes: C02; E00
Cause | Effect |
---|---|
new method for solving high-dimensional dynamic programming problems (C69) | significant improvements in computational efficiency (C63) |
method exploits symmetry in the model (C51) | significant improvements in computational efficiency (C63) |
method reduces the complexity of calculating conditional expectations (C51) | calculation of high-dimensional expectations using a single Monte Carlo draw (C15; see the sketch after this table) |
neural network approach (C45) | accurate results even in cases where traditional analytical solutions are infeasible (C60) |
deep learning architecture (C45) | performance of the model across various conditions (C52) |
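The single-draw expectation entry above rests on a concentration-of-measure argument: with many agents, cross-sectional averages of i.i.d. idiosyncratic shocks are nearly deterministic, so one draw of the entire shock vector already pins down the aggregate an agent needs to forecast. Below is a minimal numerical check of that intuition, not taken from the paper; the aggregate `g`, the shock distribution, and the sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000                        # number of agents (large, so the average concentrates)

def g(shocks):
    """An illustrative aggregate that depends on the whole cross-section of shocks."""
    return np.exp(shocks).mean()

# Brute-force expectation over many draws of the full shock vector ...
many_draws = np.array([g(rng.standard_normal(N)) for _ in range(1_000)])
# ... versus a single draw, which lands near the same value when N is large.
single_draw = g(rng.standard_normal(N))

print(f"mean over 1,000 draws: {many_draws.mean():.4f}")
print(f"single draw:           {single_draw:.4f}")
print(f"std across draws:      {many_draws.std():.5f}")  # shrinks roughly like 1/sqrt(N)
```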