Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?

Working Paper: NBER ID: w31122

Authors: John J. Horton

Abstract: Newly developed large language models (LLMs), because of how they are trained and designed, are implicit computational models of humans: a "homo silicus." LLMs can be used the way economists use homo economicus: they can be given endowments, information, preferences, and so on, and their behavior can then be explored in simulated scenarios. Experiments using this approach, derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986), and Samuelson and Zeckhauser (1988), show results qualitatively similar to the originals, and it is easy to try variations for fresh insights. LLMs could allow researchers to pilot studies via simulation first, searching for novel social science insights to test in the real world.
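The approach described in the abstract, endowing an LLM agent with resources and preferences, then observing its choices in a scenario, can be illustrated with a minimal sketch. The function name, prompt wording, and option structure below are illustrative assumptions for a Charness and Rabin (2002)-style dictator game, not the paper's actual code; a real experiment would send the resulting prompt to an LLM API and parse the chosen option label.

```python
# Illustrative sketch (assumed, not the paper's code): framing a dictator-game
# scenario as a prompt for an LLM "homo silicus" agent. The persona argument
# lets the researcher vary preferences (e.g. equity vs. efficiency) or
# political leanings across simulated subjects.

def build_dictator_prompt(endowment, options, persona=None):
    """Build a dictator-game prompt with an endowment and a menu of splits."""
    lines = []
    if persona:
        # Endow the agent with preferences or traits via the prompt.
        lines.append(f"You are {persona}.")
    lines.append(f"You have been given ${endowment} to divide between "
                 "yourself and an anonymous stranger.")
    lines.append("Choose exactly one option:")
    for label, (you, them) in options.items():
        lines.append(f"  {label}: You get ${you}, the stranger gets ${them}.")
    lines.append("Answer with the option label only.")
    return "\n".join(lines)

# Example scenario: an equitable split vs. a more efficient but unequal one.
prompt = build_dictator_prompt(
    endowment=800,
    options={"A": (400, 400), "B": (750, 375)},
    persona="a person who values equity over efficiency",
)
```

Varying `endowment`, the option menu, or `persona` across runs is what makes it cheap to explore counterfactual framings before fielding a real-world study.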

Keywords: Large Language Models; Behavioral Economics; Simulation; Economic Agents

JEL Codes: D0


Causal Claims Network Graph

Edges that are evidenced by causal inference methods are in orange, and the rest are in light blue.


Causal Claims

Cause → Effect
type of endowment (G22) → AI's choices (D79)
equity preference (G12) → AI's choices (D79)
efficiency preference (D61) → AI's choices (D79)
endowment (I22) → AI behavior in dictator games (C72)
framing of scenarios (E17) → AI responses (C45)
political leanings (D72) → AI responses (C45)
no endowment (D29) → AI outcomes (C45)