Rationalizable Learning

Working Paper: NBER w30873

Authors: Andrew Caplin; Daniel J. Martin; Philip Marx

Abstract: The central question we address in this paper is: what can an analyst infer from choice data about what a decision maker has learned? The key constraint we impose, which is shared across models of Bayesian learning, is that any learning must be rationalizable. To implement this constraint, we introduce two conditions: one refines the mean-preserving spread of Blackwell (1953) to take account of optimality, and the other generalizes the NIAC condition (Caplin and Dean 2015) and the NIAS condition (Caplin and Martin 2015) to allow for arbitrary learning. We apply our framework to show how identification of what was learned can be strengthened with additional assumptions on the form of Bayesian learning.
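To make the abstract's references concrete, the NIAS condition from the cited prior work (Caplin and Martin 2015) can be stated as follows; this is a sketch of the earlier condition the paper generalizes, not of the generalization itself, and the notation (finite state space, prior, state-dependent choice probabilities) is a standard choice rather than one taken from this paper:

```latex
% NIAS (No Improving Action Switches), Caplin and Martin (2015):
% given a finite state space $\Omega$, prior $\mu$, utility $u(a,\omega)$,
% and state-dependent choice data $P(a \mid \omega)$, every chosen action
% must be optimal under the posterior it reveals:
\[
\sum_{\omega \in \Omega} \mu(\omega)\, P(a \mid \omega)\,
\bigl[\, u(a,\omega) - u(b,\omega) \,\bigr] \;\ge\; 0
\qquad \text{for all chosen } a \text{ and all alternatives } b.
\]
```

Intuitively, the analyst cannot point to any wholesale switch from a chosen action to another action that would raise expected utility given the choice data.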


JEL Codes: D83; D91


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; the rest are in light blue.


Causal Claims

Cause → Effect

- structure of learning (Y80) → observable choice data (C81)
- set of information structures is rationalizable under costly learning (D80) → characteristics of learning across different contexts inform understanding of decision-making behavior (D91)
- additional assumptions on the form of Bayesian learning (C11) → identification of what was learned (A21)
- rationalizable learning (D80) → observed choice data (C25)
