Working Paper: CEPR DP3809
Authors: Atsushi Inoue; Lutz Kilian
Abstract: It is standard in applied work to select forecasting models by ranking candidate models by their prediction mean squared error (PMSE) in simulated out-of-sample (SOOS) forecasts. Alternatively, forecast models may be selected using information criteria (IC). We compare the asymptotic and finite-sample properties of these methods in terms of their ability to minimize the true out-of-sample PMSE, allowing for possible misspecification of the forecast models under consideration. We first study a covariance stationary environment. We show that under suitable conditions the IC method will be consistent for the best approximating model among the candidate models. In contrast, under standard assumptions the SOOS method will select over-parameterized models with positive probability, resulting in excessive finite-sample PMSEs. We also show that in the presence of unmodelled structural change both methods will be inadmissible in the sense that they may select a model with strictly higher PMSE than the best approximating model among the candidate models.
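To fix ideas, here is a minimal sketch (not the paper's code) of the two selection rules on simulated data. The AR(1) data-generating process, the candidate lag orders, the choice of the Schwarz criterion (SIC) as the information criterion, and the 75/25 recursive estimation/evaluation split are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a covariance stationary AR(1) series as the data-generating process.
n = 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.standard_normal()

def lagged_design(series, p):
    """Intercept-plus-p-lags regressor matrix and the aligned targets."""
    X = np.column_stack([series[p - j - 1 : len(series) - j - 1] for j in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    return X, series[p:]

def ols(X, z):
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta

candidates = [1, 2, 3, 4]  # AR(p) orders under consideration

# IC method: rank full-sample fits by SIC.
sic = {}
for p in candidates:
    X, z = lagged_design(y, p)
    resid = z - X @ ols(X, z)
    sic[p] = len(z) * np.log(np.mean(resid**2)) + (p + 1) * np.log(len(z))

# SOOS method: rank by the PMSE of recursive one-step-ahead forecasts,
# re-estimating each model at every forecast origin in the holdout period.
split = int(0.75 * n)
pmse = {}
for p in candidates:
    errors = []
    for t in range(split, n):
        X, z = lagged_design(y[:t], p)
        beta = ols(X, z)
        x_new = np.concatenate(([1.0], y[t - p : t][::-1]))  # [1, y(t-1), ..., y(t-p)]
        errors.append(y[t] - x_new @ beta)
    pmse[p] = float(np.mean(np.square(errors)))

print("SIC selects AR(%d); SOOS PMSE selects AR(%d)"
      % (min(sic, key=sic.get), min(pmse, key=pmse.get)))
```

In terms of the paper's results, the SIC rule is consistent for the best approximating lag order in this stationary setting, whereas the recursive PMSE rule can select an over-parameterized AR(p) with positive probability even as the sample grows.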
Keywords: Forecast accuracy; Information criteria; Model selection; Simulated out-of-sample method; Structural change
JEL Codes: C22; C52; C53
Figure (omitted): causal graph of the edges listed below. Edges evidenced by causal inference methods are shown in orange; the rest in light blue.
| Cause | Effect |
| --- | --- |
| IC method (F50) | best approximating model (C51) |
| SOOS method (C87) | over-parameterized models (C52) |
| SOOS method (C87) | higher PMSE (G14) |
| SOOS method (C87) | inconsistent for best approximating model (C52) |
| SOOS method (C87) | model with strictly higher PMSE (C52) |
| unmodelled structural change (L16) | higher PMSE (G14) |
| IC method (F50) | more accurate out-of-sample forecasts (C53) |
| sample size increase (C83) | more accurate out-of-sample forecasts (IC method) (C53) |