Working Paper: NBER ID: w23957
Authors: Karthik Muralidharan; Paul Niehaus
Abstract: This paper makes the case for greater use of randomized experiments “at scale.” We review various critiques of experimental program evaluation in developing countries, and discuss how experimenting at scale along three specific dimensions – the size of the sampling frame, the number of units treated, and the size of the unit of randomization – can help alleviate them. We find that program evaluation randomized controlled trials published in top journals over the last 15 years have typically been “small” in these senses, but also identify a number of examples – including from our own work – demonstrating that experimentation at much larger scales is both feasible and valuable.
Keywords: Randomized Controlled Trials; Development Economics; Policy Evaluation
JEL Codes: C93; H4; H50; O20
Cause | Effect |
---|---|
Larger-scale randomized experiments (C90) | Improve external validity of findings (C90) |
Larger samples (C55) | More representative estimates of treatment effects (C22) |
Larger-scale RCTs (C90) | Mitigate bias from non-representative sampling (C83) |
Randomizing at larger units (C90) | Better assessment of spillover effects (C21) |
Scale of experimentation (C90) | Accuracy of policy impact estimates (C13) |
Larger experiments (C90) | Significant impacts on leakage and payment delays (G33) |
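
To make the "size of the unit of randomization" point concrete, the following is a minimal illustrative Monte Carlo sketch. It is not from the paper, and all sample sizes and effect sizes are hypothetical; it only shows why randomizing at larger units (e.g., villages rather than households) can recover spillover effects that household-level randomization misses, as claimed in the table above.

```python
# Illustrative sketch (not from the paper): compare household-level vs
# village-level (cluster) randomization when treatment spills over to
# untreated neighbors in the same village. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

N_VILLAGES, HH_PER_VILLAGE = 200, 20
DIRECT_EFFECT, SPILLOVER = 1.0, 0.5  # hypothetical direct and spillover effects


def outcomes(treated_flat):
    """Outcome = noise + direct effect for treated households
    + a spillover proportional to the share of treated households in the village."""
    t = treated_flat.reshape(N_VILLAGES, HH_PER_VILLAGE)
    village_share = t.mean(axis=1, keepdims=True)
    y = rng.normal(size=t.shape) + DIRECT_EFFECT * t + SPILLOVER * village_share
    return y.ravel()


def diff_in_means(y, t):
    """Simple treatment-control difference in means."""
    return y[t == 1].mean() - y[t == 0].mean()


# Household-level randomization: control households in partly treated villages
# also receive spillovers, so the contrast misses the spillover component.
t_ind = rng.integers(0, 2, N_VILLAGES * HH_PER_VILLAGE)
y_ind = outcomes(t_ind)

# Village-level (cluster) randomization: whole villages are treated or not,
# so the contrast captures the direct plus spillover (policy-relevant) effect.
t_cl = np.repeat(rng.integers(0, 2, N_VILLAGES), HH_PER_VILLAGE)
y_cl = outcomes(t_cl)

print(f"True total effect (direct + spillover): {DIRECT_EFFECT + SPILLOVER:.2f}")
print(f"Household-level randomization estimate: {diff_in_means(y_ind, t_ind):.2f}")
print(f"Village-level randomization estimate:   {diff_in_means(y_cl, t_cl):.2f}")
```

In this stylized setup the household-level design recovers only the direct effect (about 1.0), while the village-level design recovers the total effect (about 1.5), which is the logic behind the "Randomizing at larger units → Better assessment of spillover effects" edge above.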