So You Want to Run an Experiment? Now What? Some Simple Rules of Thumb for Optimal Experimental Design

Working Paper: NBER ID: w15701

Authors: John A. List; Sally Sadoff; Mathis Wagner

Abstract: Experimental economics represents a strong growth industry. In the past several decades the method has expanded beyond intellectual curiosity, now meriting consideration alongside the other more traditional empirical approaches used in economics. Accompanying this growth is an influx of new experimenters who are in need of straightforward direction to make their designs more powerful. This study provides several simple rules of thumb that researchers can apply to improve the efficiency of their experimental designs. We buttress these points by including empirical examples from the literature.

Keywords: experimental design; randomization; treatment effects

JEL Codes: C9; C91; C92; C93


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; the remaining edges are in light blue.


Causal Claims

Cause → Effect
Optimal sample size arrangements (C90) → Power of the experiment (C90)
Proportional sample sizes to standard deviations (C46) → Optimal treatment effect estimation (C22)
Equal allocation across treatment and control groups (C90) → Optimal treatment effect estimation (under equal variances) (C21)
Randomization (C90) → Unbiased estimates of average treatment effect (C51)
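
The allocation and power claims in the table can be illustrated with a short numerical check. The sketch below is a minimal illustration, not code from the paper: the function names, the total sample of 400, the standard deviations of 2.0 and 1.0, and the minimum detectable effect of 0.5 are assumptions chosen for the example. It splits a fixed total sample in proportion to the outcome standard deviations (which reduces to a 50/50 split under equal variances) and reports the approximate power of a two-sided z-test for a difference in means.

    # Minimal sketch of the rules of thumb above (illustrative values, not the
    # paper's data): sample sizes proportional to standard deviations, and the
    # implied power of a two-sided z-test for a difference in means.
    from scipy.stats import norm

    def neyman_allocation(n_total, sd_treatment, sd_control):
        """Split n_total so group sizes are proportional to outcome standard deviations."""
        share_t = sd_treatment / (sd_treatment + sd_control)
        n_t = round(n_total * share_t)
        return n_t, n_total - n_t

    def power_two_sample(effect, sd_treatment, sd_control, n_t, n_c, alpha=0.05):
        """Approximate power to detect `effect` with a two-sided z-test at level alpha."""
        se = (sd_treatment**2 / n_t + sd_control**2 / n_c) ** 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(effect / se - z_crit)

    # With a 2:1 ratio of standard deviations, the optimal split is 2:1 as well;
    # under equal variances the same rule gives the familiar 50/50 allocation.
    n_t, n_c = neyman_allocation(n_total=400, sd_treatment=2.0, sd_control=1.0)
    print(n_t, n_c)                                              # 267 133
    print(round(power_two_sample(0.5, 2.0, 1.0, n_t, n_c), 3))   # roughly 0.9 for these inputs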
