Working Paper: NBER w22566
Authors: Stefano DellaVigna; Devin Pope
Abstract: Academic experts frequently recommend policies and treatments. But how well do they anticipate the impact of different treatments? And how do their predictions compare to the predictions of non-experts? We analyze how 208 experts forecast the results of 15 treatments involving monetary and non-monetary motivators in a real-effort task. We compare these forecasts to those made by PhD students and non-experts: undergraduates, MBAs, and an online sample. We document seven main results. First, the average forecast of experts predicts the experimental results quite well. Second, there is a strong wisdom-of-crowds effect: the average forecast outperforms 96 percent of individual forecasts. Third, correlates of expertise (citations, academic rank, field, and contextual experience) do not improve forecasting accuracy. Fourth, experts as a group do better than non-experts, but not if accuracy is defined as rank-ordering treatments. Fifth, measures of effort, confidence, and revealed ability are predictive of forecast accuracy to some extent, especially for non-experts. Sixth, using these measures we identify "superforecasters" among the non-experts who outperform the experts out of sample. Seventh, we document that these results on forecasting accuracy surprise the forecasters themselves. We present a simple model that organizes several of these results, and we stress the implications for the collection of forecasts of future experimental results.
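A minimal sketch of the wisdom-of-crowds comparison in the second result, using entirely hypothetical data (the effort scale, noise level, and forecast matrix are illustrative assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 208 forecasters each predict the effort result of 15 treatments.
true_effects = rng.normal(2000, 200, size=15)                  # realized results (illustrative)
forecasts = true_effects + rng.normal(0, 300, size=(208, 15))  # noisy individual forecasts

# Accuracy: mean absolute error across the 15 treatments.
individual_mae = np.abs(forecasts - true_effects).mean(axis=1)    # one MAE per forecaster
crowd_mae = np.abs(forecasts.mean(axis=0) - true_effects).mean()  # MAE of the average forecast

# Wisdom of crowds: share of individual forecasters beaten by the crowd average.
share_beaten = (individual_mae > crowd_mae).mean()
print(f"crowd MAE = {crowd_mae:.1f}; beats {share_beaten:.0%} of individuals")
```

Because independent forecast errors partly cancel when averaged, the crowd mean typically beats the large majority of individual forecasters, which is the mechanism behind the 96 percent figure reported above.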
Keywords: expert predictions; forecast accuracy; wisdom of crowds; behavioral economics
JEL Codes: C9; C91; C93; D03
Cause-and-effect relationships identified in the paper (JEL codes in parentheses):
| Cause | Effect |
|---|---|
| expert forecasting (G17) | predicting outcomes (C53) |
| aggregating forecasts (C53) | better predictions (C53) |
| expertise (D80) | forecasting accuracy (C53) |
| effort, confidence, and revealed ability (D29) | forecasting accuracy among non-experts (C53) |
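The fourth result in the abstract distinguishes level accuracy from rank-ordering accuracy. A hedged sketch of one natural rank-ordering criterion, Spearman rank correlation between forecasted and realized treatment results (the data here are hypothetical, and the paper's exact criterion may differ):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical realized effort levels for 15 treatments and one forecaster's predictions.
realized = rng.normal(2000, 200, size=15)
predicted = realized + rng.normal(0, 250, size=15)

# Rank-order accuracy: how well the forecast orders treatments from least to most
# effective, ignoring the forecast levels entirely.
rho, _ = spearmanr(predicted, realized)
print(f"Spearman rank correlation: {rho:.2f}")
```

Under a criterion like this, a forecaster who misjudges all levels by a common offset still scores perfectly, which is why expert and non-expert performance can converge when accuracy is defined by rank ordering.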