Working Paper: CEPR DP6059
Authors: Esther Duflo; Rachel Glennerster; Michael Kremer
Abstract: This paper is a practical guide (a toolkit) for researchers, students and practitioners wishing to introduce randomization as part of a research design in the field. It first covers the rationale for the use of randomization, as a solution to selection bias and a partial solution to publication bias. Second, it discusses various ways in which randomization can be practically introduced in field settings. Third, it discusses design issues such as sample size requirements, stratification, the level of randomization and data collection methods. Fourth, it discusses how to analyze data from randomized evaluations when there are departures from the basic framework, in particular how to handle imperfect compliance and externalities. Finally, it discusses some of the issues involved in drawing general conclusions from randomized evaluations, including the necessary use of theory as a guide when designing evaluations and interpreting results.
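As a minimal illustration of the basic design the abstract describes, the sketch below simulates stratified random assignment and estimates the treatment effect by a simple difference in means. The data, the number of strata, the assumed constant effect of 3, and the normal noise are all hypothetical choices made for this sketch (assuming NumPy is available); it is not code from the paper.

```python
# Illustrative sketch (not from the paper): stratified random assignment and a
# difference-in-means estimate of the average treatment effect on simulated data.
import numpy as np

rng = np.random.default_rng(0)

n = 2000
stratum = rng.integers(0, 4, size=n)                 # four hypothetical strata (e.g. regions)
baseline = 50 + 2.0 * stratum + rng.normal(0, 10, size=n)

# Randomize within each stratum so treatment shares are balanced by stratum.
treat = np.zeros(n, dtype=int)
for s in np.unique(stratum):
    idx = np.flatnonzero(stratum == s)
    rng.shuffle(idx)
    treat[idx[: len(idx) // 2]] = 1

true_effect = 3.0                                    # assumed constant effect for the simulation
outcome = baseline + true_effect * treat + rng.normal(0, 10, size=n)

# With random assignment, the simple difference in means is an unbiased
# estimate of the average treatment effect.
ate_hat = outcome[treat == 1].mean() - outcome[treat == 0].mean()
se_hat = np.sqrt(outcome[treat == 1].var(ddof=1) / (treat == 1).sum()
                 + outcome[treat == 0].var(ddof=1) / (treat == 0).sum())
print(f"estimated ATE = {ate_hat:.2f} (SE {se_hat:.2f}), true effect = {true_effect}")
```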
Keywords: development; experiments; program evaluation
JEL Codes: C93
The table below lists cause-effect relationships extracted from the paper.
Cause | Effect |
---|---|
randomization (C90) | elimination of selection bias (C52) |
random assignment to treatment and control groups (C90) | unbiased estimates of treatment effects (C90) |
textbook provision (Y20) | test scores (C52) |
randomization (C90) | control for potential confounders (C90) |
average treatment effect (C22) | difference in means between treatment and control groups (C90) |
sufficiently large sample sizes (C55) | convergence to true treatment effect (C22) |
randomization (C90) | mitigate selection bias (C52) |
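A short simulation can illustrate the last two evidenced edges in the table: under random assignment, the difference in means between treatment and control groups is centered on the average treatment effect, and its spread shrinks as the sample size grows. Everything below is a hypothetical sketch on simulated data (constant true effect, normal noise, assuming NumPy), not an analysis from the paper.

```python
# Illustrative simulation (not from the paper): the difference-in-means estimator
# concentrates around the true treatment effect as the sample size grows.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 3.0

def diff_in_means(n):
    """One simulated experiment of size n: randomize half to treatment, compare group means."""
    treat = rng.permutation(np.repeat([0, 1], n // 2))
    outcome = 50 + true_effect * treat + rng.normal(0, 10, size=treat.size)
    return outcome[treat == 1].mean() - outcome[treat == 0].mean()

for n in (50, 500, 5000, 50000):
    estimates = np.array([diff_in_means(n) for _ in range(200)])
    print(f"n={n:>6}: mean estimate {estimates.mean():5.2f}, "
          f"SD across replications {estimates.std(ddof=1):4.2f}")
```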