The seminar will be held in the Visitor Centre at 3:15 pm

Raffaella Giacomini (UCL, UK)

Abstract

We propose new methods for analyzing the relative performance of two competing, misspecified models in the presence of possible data instability. The main idea is to develop a measure of the relative local performance of the two models, and to investigate its stability over time by means of statistical tests. The models' performance can be evaluated using either in-sample or out-of-sample criteria. In the former case, we suggest using the local Kullback-Leibler information criterion, whereas in the latter, we consider the local out-of-sample forecast loss, for a general loss function. We propose two tests: a fluctuation test for analyzing the evolution of the models' relative performance over historical samples, and a "sequential test" that monitors the models' relative performance in real time. Compared to previous approaches to model selection and forecast comparison, which are based on measures of global performance (e.g., Vuong (1989) and West (1996)), our focus on the entire time path of the models' relative performance can reveal useful information that is lost when searching for a globally best model. Our methods can be applied to nonlinear, dynamic, multivariate models estimated by a variety of techniques. An empirical application provides insights into the time variation in the performance of the Smets and Wouters (2003) DSGE model of the European economy relative to that of VARs.
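As a rough illustration of the fluctuation-test idea described above, the sketch below computes a rolling-window standardized average of the loss differential between two out-of-sample forecast-loss series, so that large values in some window point to locally different performance. This is a simplified reading of the abstract, not the paper's exact statistic: the loss function, the window size, the variance estimator (a naive sample standard deviation rather than a HAC estimator), and the toy data are all placeholder assumptions.

```python
import numpy as np

def fluctuation_statistic(loss1, loss2, window):
    """Rolling-window statistic for the relative out-of-sample
    performance of two models.

    Simplified sketch of the fluctuation-test idea: standardized
    rolling sums of the loss differential. The paper's construction
    (HAC variance, critical values) is not reproduced here.
    """
    d = np.asarray(loss1) - np.asarray(loss2)   # loss differentials
    n = d.size
    sigma = d.std(ddof=1)                       # naive s.d.; a HAC
                                                # estimator would be
                                                # used in practice
    stats = np.array([
        d[t:t + window].sum() / (np.sqrt(window) * sigma)
        for t in range(n - window + 1)
    ])
    return stats  # large |values| in some window suggest the models'
                  # relative performance differs locally

# Toy usage: two hypothetical squared-error loss series
rng = np.random.default_rng(0)
l1 = rng.normal(1.0, 0.5, 200) ** 2
l2 = rng.normal(1.1, 0.5, 200) ** 2
print(fluctuation_statistic(l1, l2, window=40)[:5].round(2))
```

The "sequential test" mentioned in the abstract would instead update such a statistic one observation at a time as new data arrive, comparing it against a monitoring boundary rather than computing it over a fixed historical sample.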

Keywords: Model Selection Tests, Misspecification, Structural Change, Forecast Evaluation, Kullback-Leibler Information Criterion

JEL Codes: C22, C52, C53
