Working Paper: NBER w10450
Authors: Patrick Bajari; C. Lanier Benkard; Jonathan Levin
Abstract: We describe a two-step algorithm for estimating dynamic games under the assumption that behavior is consistent with Markov Perfect Equilibrium. In the first step, the policy functions and the law of motion for the state variables are estimated. In the second step, the remaining structural parameters are estimated using the optimality conditions for equilibrium. The second step estimator is a simple simulated minimum distance estimator. The algorithm applies to a broad class of models, including I.O. models with both discrete and continuous controls such as the Ericson and Pakes (1995) model. We test the algorithm on a class of dynamic discrete choice models with normally distributed errors, and a class of dynamic oligopoly models similar to that of Pakes and McGuire (1994).
Keywords: dynamic models; imperfect competition; Markov perfect equilibrium; estimation algorithms
JEL Codes: L0; C5
| Cause | Effect |
|---|---|
| two-step estimation algorithm (C51) | estimation of dynamic models (C51) |
| two-step estimation algorithm (C51) | estimation of structural parameters (C51) |
| recovering agents' policy functions and beliefs (E65) | estimating parameters governing decision-making processes (D91) |
| simulated minimum distance estimator (C51) | minimization of violations of equilibrium optimality conditions (C61) |
| two-step estimation algorithm (C51) | improved efficiency of parameter estimation (C51) |
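The two-step procedure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a deliberately simple single-agent model with an exogenous AR(1) state, quadratic payoff `-(a - theta*s)**2`, and a linear policy, all chosen here for illustration. Step one recovers the policy function by regression; step two forward-simulates discounted values under the estimated policy and a few perturbed alternatives, then picks the structural parameter that minimizes squared violations of the equilibrium optimality inequalities (a simulated minimum distance criterion).

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, rho, beta = 0.8, 0.9, 0.95  # true parameter, state persistence, discount factor

# Observed data: actions follow the (unknown) equilibrium policy a = theta0 * s.
n = 2000
s_obs = rng.normal(size=n)
a_obs = theta0 * s_obs + 0.05 * rng.normal(size=n)

# Step 1: estimate the policy function by regressing actions on states.
c_hat = np.polyfit(s_obs, a_obs, 1)[0]  # slope of the linear policy

# Step 2: simulated minimum distance over the structural parameter theta.
def sim_value(c, theta, shocks, s0, T=50):
    """Mean discounted payoff of the linear policy a = c*s when the
    per-period payoff is -(a - theta*s)**2 and s is exogenous AR(1)."""
    s, v = s0.copy(), np.zeros_like(s0)
    for t in range(T):
        a = c * s
        v += beta**t * -(a - theta * s) ** 2
        s = rho * s + shocks[t]
    return v.mean()

T, npaths = 50, 200
shocks = rng.normal(size=(T, npaths))  # common draws reused across evaluations
s0 = rng.normal(size=npaths)
deltas = [-0.2, -0.1, 0.1, 0.2]        # perturbations defining alternative policies

def objective(theta):
    # Equilibrium requires V(estimated policy) >= V(any alternative policy);
    # penalize the squared magnitude of each violated inequality.
    v_eq = sim_value(c_hat, theta, shocks, s0)
    return sum(min(0.0, v_eq - sim_value(c_hat + d, theta, shocks, s0)) ** 2
               for d in deltas)

grid = np.linspace(0.0, 1.6, 81)
theta_hat = grid[np.argmin([objective(t) for t in grid])]
```

With finitely many perturbations the criterion is typically zero on an interval of parameter values, so `theta_hat` is only set-identified up to the grid and perturbation choices; here it lands near the true value 0.8.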