Working Paper: CEPR ID: DP7323
Authors: Chaim Fershtman; Ariel Pakes
Abstract: With applied work in mind, we define an equilibrium notion for dynamic games with asymmetric information that does not require a specification of players' beliefs about their opponents' types. This enables us to define equilibrium conditions which, at least in principle, are testable and can be computed using a simple reinforcement learning algorithm. We conclude with an example that endogenizes the maintenance decisions for electricity generators in a dynamic game among electric utilities in which the cost states of the generators are private information.
Keywords: Applied Markov Equilibrium; Dynamic Games; Dynamic Oligopoly
JEL Codes: C63; C73; L13
| Cause | Effect |
|---|---|
| Equilibrium notion for dynamic games with asymmetric information (C73) | Computation of equilibria using a reinforcement learning algorithm (C73) |
| Players' actions (Z22) | Their own and others' profits (D33) |
| Players' actions (Z22) | Future payoffs (G19) |
| Evolution of state variables over time (C32) | Future payoffs (G19) |
| Reinforcement learning algorithm (C73) | Approximation of players' behavior in a dynamic setting (C73) |
| Players' past experiences in the game (Z22) | Players' learning (Z22) |
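
The abstract notes that the equilibrium conditions can be computed with a simple reinforcement learning algorithm, applied to a maintenance game among electric utilities whose generator cost states are private information. The sketch below is a minimal, illustrative Q-learning-style simulation in that spirit; it is not the authors' procedure or model. The payoff structure, the cost-state transition, and all parameter values (`PRICE_MONO`, `MAINT_COST`, `ALPHA`, etc.) are assumptions for illustration. Each firm conditions its learned values only on its own, privately observed cost state and treats the rival as part of the environment, echoing the idea that no explicit beliefs about the opponent's type need to be specified.

```python
import random
from collections import defaultdict

# Illustrative sketch (all numbers assumed): two symmetric utilities, each with
# one generator whose cost state c in {0, ..., C_MAX} is private information.
# Running may degrade the cost state; maintenance resets it to 0 but forfeits
# the period's output. Each firm updates a Q-table indexed only by its own
# privately observed cost state.

C_MAX = 5          # highest cost state
PRICE_MONO = 10.0  # price when the rival's generator is down for maintenance
PRICE_DUO = 6.0    # price when both generators run
MAINT_COST = 2.0   # cost of a maintenance period
ALPHA = 0.1        # step size for the value update
BETA = 0.95        # discount factor
EPS = 0.1          # exploration probability
ACTIONS = ("run", "maintain")

def profit(own_action, own_cost, rival_action):
    """Per-period profit; marginal cost grows with the cost state."""
    if own_action == "maintain":
        return -MAINT_COST
    price = PRICE_MONO if rival_action == "maintain" else PRICE_DUO
    return price - own_cost  # output normalized to one unit

def next_cost(cost, action, rng):
    """Cost-state transition: maintenance resets it, running may degrade it."""
    if action == "maintain":
        return 0
    return min(C_MAX, cost + (1 if rng.random() < 0.5 else 0))

def choose(q, cost, rng):
    """Epsilon-greedy choice based only on the firm's own cost state."""
    if rng.random() < EPS:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(cost, a)])

def simulate(periods=100_000, seed=0):
    rng = random.Random(seed)
    q = [defaultdict(float), defaultdict(float)]  # one Q-table per firm
    costs = [0, 0]
    for _ in range(periods):
        acts = [choose(q[i], costs[i], rng) for i in range(2)]
        for i in range(2):
            r = profit(acts[i], costs[i], acts[1 - i])
            c_next = next_cost(costs[i], acts[i], rng)
            best_next = max(q[i][(c_next, a)] for a in ACTIONS)
            key = (costs[i], acts[i])
            q[i][key] += ALPHA * (r + BETA * best_next - q[i][key])
            costs[i] = c_next
    return q

if __name__ == "__main__":
    q0, _ = simulate()
    for c in range(C_MAX + 1):
        greedy = max(ACTIONS, key=lambda a: q0[(c, a)])
        print(f"cost state {c}: learned action = {greedy}")
```

With these illustrative parameters, the learned policy typically runs the generator at low cost states and switches to maintenance once the cost state is high enough; the asynchronous, experience-based update mirrors the table's entry linking players' past experiences to their learning.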