Text Selection

Working Paper: NBER ID: w26517

Authors: Bryan T. Kelly; Asaf Manela; Alan Moreira

Abstract: Text data is ultra-high dimensional, which makes machine learning techniques indispensable for textual analysis. Text is often selected—journalists, speechwriters, and others craft messages to target their audiences’ limited attention. We develop an economically motivated high dimensional selection model that improves learning from text (and from sparse counts data more generally). Our model is especially useful when the choice to include a phrase is more interesting than the choice of how frequently to repeat it. It allows for parallel estimation, making it computationally scalable. A first application revisits the partisanship of US congressional speech. We find that earlier spikes in partisanship manifested in increased repetition of different phrases, whereas the upward trend starting in the 1990s is due to entirely distinct phrase selection. Additional applications show how our model can backcast, nowcast, and forecast macroeconomic indicators using newspaper text, and that it substantially improves out-of-sample fit relative to alternative approaches.
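The abstract's core idea, that the decision to include a phrase is modeled separately from how often it is repeated, can be illustrated with a simple hurdle-style decomposition of counts. This is a minimal sketch, not the authors' HDMR estimator: the data, the `hurdle_estimates` helper, and the plug-in estimates are all hypothetical, and serve only to show why the per-phrase likelihood factorizes and can be estimated in parallel.

```python
# Minimal sketch (NOT the paper's actual model): decompose phrase counts
# into an inclusion (selection) margin and a repetition-intensity margin,
# each estimable independently, phrase by phrase.

# Hypothetical document-term counts: one dict per document.
counts = [
    {"tax cut": 3, "health care": 0},
    {"tax cut": 0, "health care": 2},
    {"tax cut": 5, "health care": 1},
]

def hurdle_estimates(docs, phrase):
    """Plug-in estimates of P(phrase included) and mean repetitions given inclusion."""
    xs = [d.get(phrase, 0) for d in docs]
    included = [x for x in xs if x > 0]
    p_include = len(included) / len(xs)               # selection margin
    mean_reps = (sum(included) / len(included)        # intensity margin
                 if included else 0.0)
    return p_include, mean_reps

# Because the decomposition separates the two margins phrase by phrase,
# each phrase's estimates can be computed independently (hence in parallel).
for phrase in ["tax cut", "health care"]:
    p, m = hurdle_estimates(counts, phrase)
    print(phrase, round(p, 2), round(m, 2))
```

Under this kind of decomposition, a rise in partisanship can show up either on the selection margin (entirely distinct phrases being chosen) or on the intensity margin (the same phrases repeated more often), which is the distinction the paper's congressional-speech application exploits.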

Keywords: text analysis; machine learning; econometrics; partisanship; macroeconomic indicators

JEL Codes: C1; C4; C5; C58; E17; G12; G17


Causal Claims Network Graph

Edges supported by causal inference methods are shown in orange; the remaining edges are shown in light blue.


Causal Claims

Cause → Effect
inclusion of specific phrases in congressional speeches (D72) → partisanship (D72)
repetition partisanship (D72) → partisanship (D72)
inclusion partisanship (D71) → partisanship (D72)
HDMR model (C59) → out-of-sample predictions of the intermediary capital ratio (ICR) (G17)
text-based information (L86) → forecasts of economic indicators (E37)
text of the Wall Street Journal (Y60) → stock market risk premia (G17)
text data (Y10) → nowcasting of macroeconomic indicators (E37)

Back to index