Working Paper: CEPR Discussion Paper DP10239
Authors: Patrick Minford; Yongdeng Xu; Peng Zhou
Abstract: Out-of-sample forecasting tests of DSGE models against time-series benchmarks such as an unrestricted VAR are increasingly used to check (a) the specification and (b) the forecasting capacity of these models. We carry out a Monte Carlo experiment on a widely used DSGE model to investigate the power of these tests. We find that in specification testing they have weak power relative to an in-sample indirect inference test; this implies that a DSGE model may be badly mis-specified and still improve on forecasts from an unrestricted VAR. In testing forecasting capacity they also have quite weak power, particularly in the left-hand tail. By contrast, a model that passes an indirect inference test of specification will almost certainly also improve on VAR forecasts.
Keywords: DSGE; forecast performance; indirect inference; out of sample forecasts; specification tests; VAR
JEL Codes: E10; E17
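The power experiment described in the abstract can be illustrated with a small simulation. The sketch below is not the authors' code: the data-generating process, sample sizes, and the restricted "stand-in" model are all illustrative assumptions. It simulates data from a known bivariate VAR(1), fits an unrestricted VAR and a deliberately mis-specified restricted model, and records how often the mis-specified model nonetheless achieves a lower out-of-sample RMSE than the VAR.

```python
# Minimal sketch of a Monte Carlo power experiment for out-of-sample
# forecast comparisons (illustrative only; not the paper's DSGE setup).
import numpy as np

rng = np.random.default_rng(0)

A_TRUE = np.array([[0.7, 0.2],
                   [0.1, 0.5]])        # true VAR(1) coefficients (assumed)
N_OBS, N_TEST, N_REPS = 200, 50, 500   # sample sizes (assumed)


def simulate_var1(a, n, rng):
    """Simulate a bivariate VAR(1) with unit-variance Gaussian shocks."""
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = a @ y[t - 1] + rng.standard_normal(2)
    return y


def ols_var1(y, restricted=False):
    """Estimate VAR(1) by OLS; if restricted, zero the off-diagonal terms
    (a crude stand-in for a mis-specified structural model)."""
    x, z = y[:-1], y[1:]
    a_hat = np.linalg.lstsq(x, z, rcond=None)[0].T
    if restricted:
        a_hat = np.diag(np.diag(a_hat))
    return a_hat


def oos_rmse(a_hat, y_test):
    """One-step-ahead out-of-sample root mean squared error."""
    err = y_test[1:] - y_test[:-1] @ a_hat.T
    return np.sqrt(np.mean(err ** 2))


wins = 0
for _ in range(N_REPS):
    y = simulate_var1(A_TRUE, N_OBS + N_TEST, rng)
    y_train, y_test = y[:N_OBS], y[N_OBS:]
    rmse_restricted = oos_rmse(ols_var1(y_train, restricted=True), y_test)
    rmse_var = oos_rmse(ols_var1(y_train, restricted=False), y_test)
    wins += rmse_restricted < rmse_var

print(f"mis-specified model beats the VAR in {wins / N_REPS:.1%} of replications")
```

A high win rate for the restricted model in such an exercise mirrors the paper's point: a forecast comparison against an unrestricted VAR can have weak power, so a badly mis-specified model may still out-forecast the VAR.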
Cause-effect edges extracted from the paper are listed below; in the original graph, edges evidenced by causal inference methods were shown in orange and the rest in light blue.
| Cause | Effect |
|---|---|
| degree of misspecification (C50) | forecasting performance (C53) |
| DSGE model specification (E13) | forecasting capacity (C53) |
| misspecified DSGE model (E13) | better forecasts than VAR (C53) |
| model passing indirect inference test (C52) | improve on VAR forecasts (C53) |
| weak power of out-of-sample forecasting (OSF) tests (C12) | models may not be rejected (C52) |
| misspecification crossover point (C34) | forecasting performance (C53) |
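The cause-effect pairs above can be held in a simple adjacency-list structure. The sketch below uses plain Python, copies the labels verbatim from the table, and is only an illustration of how such extracted edges might be stored.

```python
# Store the extracted cause-effect edges as a small directed graph.
from collections import defaultdict

EDGES = [
    ("degree of misspecification (C50)", "forecasting performance (C53)"),
    ("DSGE model specification (E13)", "forecasting capacity (C53)"),
    ("misspecified DSGE model (E13)", "better forecasts than VAR (C53)"),
    ("model passing indirect inference test (C52)", "improve on VAR forecasts (C53)"),
    ("weak power of OSF tests (C12)", "models may not be rejected (C52)"),
    ("misspecification crossover point (C34)", "forecasting performance (C53)"),
]

graph = defaultdict(list)
for cause, effect in EDGES:
    graph[cause].append(effect)

# Print each cause with the effects it points to.
for cause, effects in graph.items():
    print(f"{cause} -> {', '.join(effects)}")
```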