Out-of-Sample Forecast Tests Robust to the Choice of Window Size

Working Paper: CEPR ID: DP8542

Authors: Atsushi Inoue; Barbara Rossi

Abstract: This paper proposes new methodologies for evaluating out-of-sample forecasting performance that are robust to the choice of the estimation window size. The methodologies involve evaluating the predictive ability of forecasting models over a wide range of window sizes. We show that the tests proposed in the literature may lack the power to detect predictive ability and might be subject to data snooping across different window sizes if used repeatedly. An empirical application shows the usefulness of the methodologies for evaluating exchange rate models' forecasting ability.
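The abstract's core idea, evaluating predictive ability over a wide range of estimation window sizes rather than a single one, can be illustrated with a small sketch. This is not the paper's exact statistic; it is a hedged toy example (simulated data, a simple rolling-window loss-differential t-statistic, and a sup over an assumed grid of window sizes) showing the sup-type logic of making the test robust to the window-size choice.

```python
# Toy illustration (not the authors' exact test): compare a predictive model
# against a benchmark across a grid of rolling estimation window sizes, and
# summarize with the supremum of the |t|-statistics over all window sizes.
import numpy as np

rng = np.random.default_rng(0)
T = 400
x = rng.normal(size=T)
y = 0.3 * np.roll(x, 1) + rng.normal(size=T)  # y_t depends on x_{t-1}
y[0] = rng.normal()                           # discard the wrap-around value

def rolling_forecast_losses(y, x, R):
    """Loss differentials (benchmark squared error minus model squared error)
    from one-step-ahead forecasts with a rolling estimation window of size R."""
    d = []
    for t in range(R, len(y) - 1):
        yw, xw = y[t - R:t], x[t - R:t]
        # OLS of y_s on x_{s-1} within the window
        X = np.column_stack([np.ones(R - 1), xw[:-1]])
        beta, *_ = np.linalg.lstsq(X, yw[1:], rcond=None)
        f_model = beta[0] + beta[1] * x[t]  # model forecast of y_{t+1}
        f_bench = yw.mean()                 # no-predictor benchmark forecast
        e_m, e_b = y[t + 1] - f_model, y[t + 1] - f_bench
        d.append(e_b**2 - e_m**2)
    return np.asarray(d)

def t_stat(d):
    """Simple Diebold-Mariano-style t-statistic on the loss differentials."""
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

window_sizes = range(40, 201, 20)  # assumed grid of candidate window sizes
stats = {R: t_stat(rolling_forecast_losses(y, x, R)) for R in window_sizes}
sup_stat = max(abs(s) for s in stats.values())
print(f"sup |t| over window sizes: {sup_stat:.2f}")
```

Reporting the supremum over all window sizes, with critical values that account for the search, avoids the data-snooping problem the abstract describes: trying many window sizes and reporting only the most favorable one.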

Keywords: Estimation window; Forecast evaluation; Predictive ability testing

JEL Codes: C22; C52; C53


Causal Claims Network Graph

Edges evidenced by causal inference methods are shown in orange; the rest are shown in light blue.


Causal Claims

Cause → Effect

choice of estimation window size (C51) → empirical results of predictive ability tests (C52)
single window size (C52) → incorrect conclusions about a model's predictive ability (C52)
multiple tests (C52) → reporting results from the most favorable window size (C22)
variability in window sizes (C22) → stronger evidence of predictive ability (C52)
window size choices (D72) → forecast accuracy (C53)
data mining (C55) → over-rejection of the null hypothesis in favor of predictive ability (C52)
