Non-nested Model Selection/Validation: Making Credible Postsample Inference Feasible
dc.contributor.author | Ashley, Richard A. | en |
dc.contributor.department | Economics | en |
dc.date.accessioned | 2019-07-17T15:14:59Z | en |
dc.date.available | 2019-07-17T15:14:59Z | en |
dc.date.issued | 1995-04 | en |
dc.description.abstract | Effective, credible inference with respect to the postsample forecasting performance of time series models is widely held to be infeasible. Consequently, the model selection and Granger-causality literatures have focused almost exclusively on in-sample tests, which can easily be biased by typical specification-search activity. Moreover, the postsample error series generated by competing models are typically cross-correlated, serially correlated, and not even clearly Gaussian; thus, postsample inference procedures are necessarily only asymptotically valid. As a result, a postsample period large enough to yield credible inferences is perceived to be too costly in terms of sample observations foregone. This paper describes a new, resampling-based approach to postsample inference which, by explicitly quantifying the inferential uncertainty caused by the limited length of the postsample period, makes it feasible to obtain credible postsample inferences using postsample periods of reasonable length. For a given target level of inferential precision – e.g., significance at the 5% level – this new approach also provides explicit estimates both of how strong the postsample forecasting efficiency evidence in favor of one of two models must be for a postsample period of given length, and of how long a postsample period is necessary if the evidence is of given strength. These results indicate that postsample model validation periods substantially longer than the 5 to 20 periods typically reserved in past studies are necessary in order to credibly detect 20% to 30% MSE reductions. This approach also quantifies the inferential impact of different forecasting efficiency criterion choices – e.g., MSE vs. MAE vs. asymmetric criteria, and the use of expected loss differentials (as in Diebold and Mariano (1994)) vs. ratios of expected losses. The value of this new approach to postsample inference is illustrated using postsample forecasting error data from Ashley, Granger, and Schmalensee (1980), in which evidence was presented for unidirectional Granger-causation from fluctuations in aggregate U.S. consumption expenditures to fluctuations in U.S. aggregate expenditures on advertising. | en |
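The abstract summarizes, but does not reproduce, the paper's resampling procedure. As a purely illustrative sketch of the general idea it describes – bootstrapping a postsample loss ratio from paired forecast-error series while preserving their serial correlation and cross-correlation – the Python fragment below compares two models' postsample MSEs with a paired moving-block bootstrap. The function name, block length, and replication count are hypothetical choices for illustration, not taken from the paper.

import numpy as np

def mse_ratio_bootstrap(e1, e2, block_len=4, n_boot=5000, seed=0):
    # Paired moving-block bootstrap of MSE(model 1) / MSE(model 2).
    # Illustrative sketch only; not the procedure used in the working paper.
    rng = np.random.default_rng(seed)
    e1 = np.asarray(e1, dtype=float)
    e2 = np.asarray(e2, dtype=float)
    n = len(e1)
    starts = np.arange(n - block_len + 1)      # admissible block start points
    n_blocks = int(np.ceil(n / block_len))     # enough blocks to cover n obs
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        # Resample blocks of *paired* errors so that serial correlation and
        # cross-correlation between the two error series are preserved.
        idx = np.concatenate([np.arange(s, s + block_len)
                              for s in rng.choice(starts, size=n_blocks)])[:n]
        ratios[b] = np.mean(e1[idx] ** 2) / np.mean(e2[idx] ** 2)
    observed = np.mean(e1 ** 2) / np.mean(e2 ** 2)
    return observed, ratios

# Usage with, say, 40 postsample errors per model: how often does the
# resampled ratio reach 1.0, i.e. how fragile is an apparent MSE reduction?
#   observed, boot = mse_ratio_bootstrap(err_model1, err_model2)
#   print(observed, np.mean(boot >= 1.0))

A bootstrap fraction near zero suggests the observed MSE reduction is unlikely to be an artifact of a short postsample period; the paper itself should be consulted for its actual procedure and for the criterion choices (MSE, MAE, asymmetric loss, loss differentials vs. ratios) whose inferential impact it quantifies.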
dc.description.sponsorship | NSF grant #SES-8922394 | en |
dc.format.extent | 25 pages | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.sourceurl | http://ashleymac.econ.vt.edu/working_papers/e9507.pdf | en |
dc.identifier.uri | http://hdl.handle.net/10919/91475 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.relation.ispartofseries | Economics Department Working paper #E95-07 | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.title | Non-nested Model Selection/Validation: Making Credible Postsample Inference Feasible | en |
dc.type | Working paper | en |
dc.type.dcmitype | Text | en |