Working Paper: NBER ID: w24678
Authors: Victor Chernozhukov; Mert Demirer; Esther Duflo; Iván Fernández-Val
Abstract: We propose strategies to estimate and make inference on key features of heterogeneous effects in randomized experiments. These key features include best linear predictors of the effects using machine learning proxies, average effects sorted by impact groups, and average characteristics of the most and least impacted units. The approach is valid in high-dimensional settings, where the effects are proxied (but not necessarily consistently estimated) by predictive and causal machine learning methods. We post-process these proxies into estimates of the key features. Our approach is generic: it can be used in conjunction with penalized methods, neural networks, random forests, boosted trees, and ensemble methods, both predictive and causal. Estimation and inference are based on repeated data splitting to avoid overfitting and achieve validity. We use quantile aggregation of the results across many potential splits, in particular taking medians of p-values and medians and other quantiles of confidence intervals. We show that quantile aggregation lowers estimation risk relative to a single-split procedure, and we establish its principal inferential properties. Finally, our analysis reveals ways to build provably better machine learning proxies through causal learning: the objective functions we develop to construct the best linear predictors of the effects can be used to obtain better machine learning proxies in the initial step. We illustrate the use of both the inferential tools and the causal learners with a randomized field experiment that evaluates a combination of nudges to stimulate demand for immunization in India.
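The split-and-aggregate idea in the abstract can be illustrated with a minimal sketch. The code below simulates a randomized experiment, and on each of many random data splits fits a simple linear CATE proxy on an auxiliary half, tests the effect in the proxy-identified "most affected" group on the main half, and then aggregates across splits by taking medians (with the median p-value doubled, in the spirit of the paper's quantile-aggregation adjustment). The data-generating process, the linear proxy, and the difference-in-means test are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated RCT with covariate-driven heterogeneous effects (illustrative only)
n = 2000
x = rng.normal(size=n)
d = rng.integers(0, 2, size=n)          # random assignment, propensity 0.5
tau = 1.0 + 0.5 * x                     # true CATE (unknown to the analyst)
y = x + d * tau + rng.normal(size=n)

def split_estimate(idx_aux, idx_main):
    """One split: fit a proxy CATE on the auxiliary half, then estimate
    the average effect in the group with above-median proxy on the main half."""
    xa, da, ya = x[idx_aux], d[idx_aux], y[idx_aux]
    # Proxy: difference of separate linear fits for treated and control units
    b1 = np.polyfit(xa[da == 1], ya[da == 1], 1)
    b0 = np.polyfit(xa[da == 0], ya[da == 0], 1)
    proxy = np.polyval(b1, x[idx_main]) - np.polyval(b0, x[idx_main])
    hi = proxy > np.median(proxy)       # "most affected" group per the proxy
    ym, dm = y[idx_main], d[idx_main]
    t, c = ym[hi & (dm == 1)], ym[hi & (dm == 0)]
    est = t.mean() - c.mean()           # difference-in-means in that group
    se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
    p = 2 * stats.norm.sf(abs(est / se))
    return est, p

S = 25                                  # number of random splits
results = []
for _ in range(S):
    perm = rng.permutation(n)
    results.append(split_estimate(perm[: n // 2], perm[n // 2 :]))
est, p = map(np.array, zip(*results))

# Quantile aggregation: medians across splits; the median p-value is doubled
# to retain validity, per the paper's adjustment.
print("median estimate:", np.median(est))
print("adjusted p-value:", min(1.0, 2 * np.median(p)))
```

The same aggregation logic applies unchanged when the linear proxy is swapped for a random forest, boosted trees, or a causal learner; only `split_estimate` changes.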
Keywords: Machine Learning; Heterogeneous Treatment Effects; Randomized Experiments; Immunization
JEL Codes: C18; C21; D14; G21; O16
[Causal graph figure: edges evidenced by causal inference methods are shown in orange; the rest in light blue. The edges are listed below.]
| Cause | Effect |
|---|---|
| ML methods (C52) | valid estimation and inference on key features of CATE (C51) |
| best linear predictors based on ML proxy predictors (C51) | nuanced understanding of heterogeneity in treatment effects (C21) |
| quantile aggregation of results across multiple data splits (C32) | reduced estimation risk compared to single-split approaches (C51) |
| use of causal learners (C99) | better ML proxies for CATE than traditional predictive methods (C52) |
| methodology can effectively handle complex experimental designs (C90) | insights into treatment effects across subgroups defined by baseline covariates (C32) |