Lottery-Based Evaluations of Early Education Programs: Opportunities and Challenges for Building the Next Generation of Evidence

Working Paper: NBER ID: w30970

Authors: Christina Weiland; Rebecca Unterman; Susan Dynarski; Rachel Abenavoli; Howard Bloom; Breno Braga; Annmarie Faria; Erica H. Greenberg; Brian Jacob; Jane Arnold Lincove; Karen Manship; Meghan McCormick; Luke Miratrix; Tomás E. Monárrez; Pamela Morris-Perez; Anna Shapiro; Jon Valant; Lindsay Weixler

Abstract: Lottery-based identification strategies offer potential for generating the next generation of evidence on U.S. early education programs. Our collaborative network, comprising five research teams applying this design in early education and a group of methods experts, has identified six challenges that need to be carefully considered in this context: 1) available baseline covariates may not be very rich; 2) data on the counterfactual condition may be limited; 3) outcome data may be limited and inconsistent; 4) internal validity may be weakened by attrition; 5) external validity may be constrained by who competes for oversubscribed programs; and 6) site-level questions are difficult to answer with child-level randomization. We offer potential solutions to these six challenges and concrete recommendations for the design of future lottery-based early education studies.
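The design named in the abstract, comparing applicants who win and lose an oversubscribed program's admissions lottery, can be made concrete with a small simulation. The sketch below is not from the paper: all data are simulated and all variable names (lottery_offer, baseline_score, outcome) are hypothetical. It estimates an intent-to-treat effect of a lottery offer on a child outcome, with and without adjustment for a single baseline covariate, to illustrate challenge 1: when few baseline covariates are available, covariate adjustment buys less precision.

```python
# Minimal illustrative sketch, assuming simulated data and hypothetical
# variable names; not the paper's analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "lottery_offer": rng.integers(0, 2, n),    # randomized offer from the lottery
    "baseline_score": rng.normal(0, 1, n),     # pre-lottery covariate (often sparse in practice)
})
# Simulated outcome: a 0.2 SD offer effect plus covariate signal and noise
df["outcome"] = (0.2 * df["lottery_offer"]
                 + 0.5 * df["baseline_score"]
                 + rng.normal(0, 1, n))

# Intent-to-treat estimates: regress the outcome on the randomized offer.
# Adding baseline covariates does not change identification but tightens
# standard errors -- which is why thin baseline data (challenge 1) matters.
unadjusted = smf.ols("outcome ~ lottery_offer", data=df).fit(cov_type="HC2")
adjusted = smf.ols("outcome ~ lottery_offer + baseline_score", data=df).fit(cov_type="HC2")
for label, fit in [("unadjusted", unadjusted), ("covariate-adjusted", adjusted)]:
    print(f"{label}: ITT = {fit.params['lottery_offer']:.3f} "
          f"(SE = {fit.bse['lottery_offer']:.3f})")
```

Running the sketch shows the adjusted estimate with a smaller standard error than the unadjusted one; with only weak baseline covariates, that precision gain shrinks.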

Keywords: lottery-based evaluations; early education programs; causal inference; policy recommendations

JEL Codes: I20; I21


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; all other edges are shown in light blue.


Causal Claims

Cause → Effect
attrition (J63) → internal validity (C90)
characteristics of applicants (I23) → internal validity (C90)
baseline covariates (C29) → reliability of causal claims (C90)
outcome data quality (C52) → reliability of causal claims (C90)
lottery assignment (H27) → educational outcomes (I26)
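
The network graph described above can be rebuilt directly from these cause → effect pairs. The sketch below (not part of the paper) uses networkx to do so; because the table does not say which edges are evidenced by causal inference methods, the "evidenced" set used for coloring is a placeholder assumption, not source data.

```python
# Illustrative sketch: reconstruct the causal claims network from the
# cause -> effect pairs listed in the table above.
import networkx as nx
import matplotlib.pyplot as plt

edges = [
    ("attrition (J63)", "internal validity (C90)"),
    ("characteristics of applicants (I23)", "internal validity (C90)"),
    ("baseline covariates (C29)", "reliability of causal claims (C90)"),
    ("outcome data quality (C52)", "reliability of causal claims (C90)"),
    ("lottery assignment (H27)", "educational outcomes (I26)"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Placeholder assumption: only the lottery-assignment edge is treated as
# evidenced by causal inference methods and drawn in orange.
evidenced = {("lottery assignment (H27)", "educational outcomes (I26)")}
edge_colors = ["orange" if e in evidenced else "lightblue" for e in G.edges()]

pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, edge_color=edge_colors, node_color="lightgray",
                 font_size=8, arrows=True)
plt.axis("off")
plt.tight_layout()
plt.savefig("causal_claims_network.png", dpi=150)
```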
