The Dozen Things Experimental Economists Should Do (More of)

NBER Working Paper No. w25451

Authors: Eszter Czibor; David Jiménez-Gómez; John A. List

Abstract: What was once broadly viewed as an impossibility – learning from experimental data in economics – has now become commonplace. Governmental bodies, think tanks, and corporations around the world employ teams of experimental researchers to answer their most pressing questions. For their part, in the past two decades academics have begun to more actively partner with organizations to generate data via field experimentation. While this revolution in evidence-based approaches has served to deepen the economic science, recently a credibility crisis has caused even the most ardent experimental proponents to pause. This study takes a step back from the burgeoning experimental literature and introduces 12 actions that might help to alleviate this credibility crisis and raise experimental economics to an even higher level. In this way, we view our “12 action wish list” as discussion points to enrich the field.


JEL Codes: C9; C90; C91; C92; C93; D03


Causal Claims Network Graph

Edges that are evidenced by causal inference methods are in orange, and the rest are in light blue.


Causal Claims

Cause → Effect
randomization in experiments (C90) → identify causal effects (C22)
controlled experiments in economics (C90) → enhance understanding of causal relationships (C90)
properly constructed experiments (C90) → go beyond mere measurement (C99)
natural field experiments (NFEs) (C93) → avoid biases associated with participant selection (C90)
natural field experiments (NFEs) (C93) → greater external validity (C90)
considering statistical power (C90) → avoid false positives (C52)
methodological rigor and thoughtful experimental design (C90) → enhance causal inference (C32)
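The claim above that considering statistical power helps avoid false positives (and missed true effects) can be illustrated with a small simulation. The sketch below is purely illustrative and not from the paper: the effect size (0.3 standard deviations), sample sizes, and the function name `simulated_power` are assumptions chosen for the example. It simulates many two-arm experiments and counts how often a simple z-test detects the true effect.

```python
# Hypothetical illustration of the "statistical power" claim:
# simulate a two-arm randomized experiment many times and estimate
# how often a z-test on the difference in means detects a true
# effect at the 5% significance level. All numbers are illustrative
# assumptions, not taken from the paper.
import random
import statistics

def simulated_power(n_per_arm, effect_sd, sims=2000, z_crit=1.96, seed=42):
    """Fraction of simulated experiments in which the effect is detected."""
    rng = random.Random(seed)
    detections = 0
    for _ in range(sims):
        # Outcomes: standard-normal noise, treatment shifts the mean.
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect_sd, 1.0) for _ in range(n_per_arm)]
        diff = statistics.fmean(treated) - statistics.fmean(control)
        se = (statistics.variance(control) / n_per_arm
              + statistics.variance(treated) / n_per_arm) ** 0.5
        if abs(diff / se) > z_crit:
            detections += 1
    return detections / sims

# An underpowered design (n=30 per arm) versus a better-powered one
# (n=350 per arm) for a modest 0.3-standard-deviation effect.
small = simulated_power(30, 0.3)
large = simulated_power(350, 0.3)
```

With these assumptions the small design detects the effect only a minority of the time, while the larger design detects it almost always, which is the sense in which underpowered experiments invite unreliable conclusions.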
