Scholarly Works, Economics
Research articles, presentations, and other scholarship
Browsing Scholarly Works, Economics by Author "Ashley, Richard A."
- Beyond Optimal Forecasting
  Ashley, Richard A. (Virginia Tech, 2006-11-04)
  While the conditional mean is known to provide the minimum mean square error (MSE) forecast – and hence is optimal under a squared-error loss function – it must often in practice be replaced by a noisy estimate when model parameters are estimated over a small sample. Here two results are obtained, both of which motivate the use of forecasts biased toward zero (shrinkage forecasts) in such settings. First, the noisy forecast with minimum MSE is shown to be a shrinkage forecast. Second, a condition is derived under which a shrinkage forecast stochastically dominates the unbiased forecast over the class of loss functions monotonic in the forecast error magnitude. The appropriate amount of shrinkage from either perspective depends on a noisiness parameter which must be estimated, however, so the actual reduction in expected losses from shrinkage forecasting is an empirical issue. Simulation results over forecasts from a large variety of multiple regression models indicate that feasible shrinkage forecasts typically do provide modest improvements in forecast MSE when the noise in the estimate of the conditional mean is substantial.
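The gain from shrinkage is easy to see in a small simulation. The sketch below is illustrative only, not from the paper; the regression design, sample size, and shrinkage factors are assumptions chosen so that estimation noise is substantial relative to the signal:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_mse(shrink, n=25, sigma=1.0, beta=0.5, reps=20000):
    """Monte Carlo forecast MSE when the OLS forecast is shrunk toward
    zero by the factor `shrink` (shrink=1.0 is the unbiased forecast)."""
    sq_errs = np.empty(reps)
    for r in range(reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(scale=sigma, size=n)
        beta_hat = (x @ y) / (x @ x)          # OLS slope (no intercept)
        x_new = rng.normal()                  # out-of-sample regressor value
        y_new = beta * x_new + rng.normal(scale=sigma)
        sq_errs[r] = (y_new - shrink * beta_hat * x_new) ** 2
    return sq_errs.mean()

for s in (1.0, 0.9, 0.8):
    print(f"shrinkage factor {s:.1f}: forecast MSE = {forecast_mse(s):.4f}")
```

With these settings the mildly shrunk forecast typically edges out the unbiased one, consistent with the paper's point that the improvement is modest and depends on how noisy the estimated conditional mean is.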
- Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach
  Ashley, Richard A.; Tsang, Kwok Ping (MDPI, 2014-03-25)
  Credible Granger-causality analysis appears to require post-sample inference, as it is well-known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an “in-sample” period – purportedly utilized only for model specification/estimation – and a “post-sample” period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible with samples of modest length – e.g., T ≤ 150 – as is common in both quarterly data sets and monthly data sets where institutional arrangements vary over time, simply because in such cases there is insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which eliminates the aforementioned a priori partitioning and substantially ameliorates this power versus credibility predicament – preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model’s coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.
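A minimal sketch of the cross-sample validation idea (illustrative only; the fold scheme, lag length, and helper name csv_granger_mse are assumptions, not the authors' exact procedure): each model is estimated on part of the sample and forecasts the held-out part, so every observation is forecast by coefficients estimated without it:

```python
import numpy as np

def csv_granger_mse(y, x, p=1, folds=2):
    """Cross-sample-validated forecast MSEs for an AR(p) model of y,
    with and without p lags of the candidate causal variable x."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    T = len(y)
    Y = y[p:]
    Ylags = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    Xlags = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    err_r, err_u = [], []               # restricted / unrestricted errors
    for hold in np.array_split(np.arange(len(Y)), folds):
        train = np.setdiff1d(np.arange(len(Y)), hold)
        for design, errs in ((Ylags, err_r),
                             (np.column_stack([Ylags, Xlags]), err_u)):
            Z = np.column_stack([np.ones(len(Y)), design])
            b, *_ = np.linalg.lstsq(Z[train], Y[train], rcond=None)
            errs.extend(Y[hold] - Z[hold] @ b)
    return np.mean(np.square(err_r)), np.mean(np.square(err_u))
```

If the unrestricted MSE is materially below the restricted one across the held-out forecasts, that is evidence of Granger causality from x to y; the paper's contribution is the formal test built on this kind of comparison.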
- An Elementary Method for Detecting and Modeling Regression Parameter Variation Across Frequencies With an Application to Testing the Permanent Income Hypothesis
  Boon, Tan Hui; Ashley, Richard A. (Virginia Tech, 1997-03)
  A simple technique for directly testing the parameters of a time series regression model for instability across frequencies is presented. The method can be easily implemented in the time domain, so parameter instability across frequency bands can be conveniently detected and modeled in conjunction with other econometric features of the problem at hand, such as simultaneity, cointegration, missing observations, and cross-equation restrictions. The usefulness of the new technique is illustrated with an application to a cointegrated consumption-income regression model, yielding a straightforward test of the permanent income hypothesis.
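One simple way to construct the frequency-band regressors such a test needs (a sketch under the assumption of evenly spaced data with no missing observations; the band edges below are arbitrary, and this FFT-based split is not the paper's time-domain implementation): decompose the regressor into band-limited components, regress on the components, and test whether their coefficients are equal:

```python
import numpy as np

def band_components(x, bands):
    """Split x into components whose spectra lie in the given frequency
    bands (cycles per observation); if the bands partition [0, 0.5],
    the components sum back to x."""
    x = np.asarray(x, float)
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x))
    parts = [np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0),
                          n=len(x))
             for lo, hi in bands]
    return np.column_stack(parts)

# e.g. a low-frequency and a high-frequency component of the regressor:
# Z = band_components(x, [(0.0, 0.1), (0.1, 0.51)])
# regress y on the columns of Z; equal coefficients across columns
# corresponds to no parameter variation across frequencies.
```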
- Identification of Coefficients in a Quadratic Moving Average Process Using the Generalized Method of Moments
  Ashley, Richard A.; Patterson, Douglas M. (Virginia Tech, 2002-06-21)
  The output of a causal, stable, time-invariant nonlinear filter can be approximately represented by the linear and quadratic terms of a finite-parameter Volterra series expansion. We call this representation the “quadratic nonlinear MA model”, since it is the logical extension of the usual linear MA process. Where the actual generating mechanism for the data is fairly smooth, this quadratic MA model should provide a better approximation to the true dynamics than the two-state threshold autoregression and Markov switching models usually considered. As with linear MA processes, the nonlinear MA model coefficients can be estimated via least squares fitting, but it is essential to begin with a reasonably parsimonious model identification and non-arbitrary preliminary estimates for the parameters. In linear ARMA modeling these are derived from the sample correlogram and the sample partial correlogram, but these tools are confounded by nonlinearity in the generating mechanism. Here we obtain analytic expressions for the second and third order moments – the autocovariances and third order cumulants – of a quadratic MA process driven by i.i.d. symmetric innovations. These expressions allow us to identify the significant coefficients in the process by using GMM to obtain preliminary coefficient estimates and their concomitant estimated standard errors. The utility of the method for specifying nonlinear time series models is illustrated using artificially generated data.
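A small illustration of the setup (the specific quadratic MA specification below, with one linear and one cross-product term, is an assumption for the example; the paper's full GMM machinery is not reproduced): simulate the process and compute the sample second- and third-order moments that identify its coefficients:

```python
import numpy as np

def quadratic_ma(T, b, c, rng):
    """Simulate y_t = e_t + b*e_{t-1} + c*e_{t-1}*e_{t-2}, e ~ iid N(0,1)."""
    e = rng.normal(size=T + 2)
    return e[2:] + b * e[1:-1] + c * e[1:-1] * e[:-2]

def sample_moments(y, max_lag=2):
    """Sample autocovariances and third-order moments E[y_t * y_{t-k}^2]."""
    yc = y - y.mean()
    T = len(yc)
    acov = np.array([yc[k:] @ yc[:T - k] / T for k in range(max_lag + 1)])
    third = np.array([yc[k:] @ yc[:T - k] ** 2 / T
                      for k in range(max_lag + 1)])
    return acov, third

rng = np.random.default_rng(1)
y = quadratic_ma(100_000, b=0.5, c=0.4, rng=rng)
print(sample_moments(y))
```

Matching these sample moments to their analytic counterparts (via GMM) yields the preliminary coefficient estimates and standard errors the abstract describes.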
- International Evidence On The Oil Price-Real Output Relationship: Does Persistence Matter?
  Ashley, Richard A.; Tsang, Kwok Ping (Virginia Tech, 2013-08-28)
  The literature on the relationship between real output growth and the growth rate in the price of oil, including an allowance for asymmetry in the impact of oil prices on output, continues to evolve. Here we show that a new technique, which allows us to control for both this asymmetry and also for the persistence of oil price changes, yields results implying that such control is necessary for a statistically adequate specification of the relationship. The new technique also yields an estimated model for the relationship which is more economically interpretable. In particular, using quarterly data from 1976–2007 on each of six countries which are essentially net oil importers, we find that changes in the growth rate of oil prices which persist for more than four years have a large and statistically significant impact on future output growth, whereas less persistent changes (lasting more than one year but less than four years) have no significant impact on output growth. In contrast, ‘temporary’ fluctuations in the oil price growth rate – persisting for only a year or less – again have a large and statistically significant impact on output growth for most of these countries. The results for the single major net oil producer in our sample (Norway) are distinct in an interesting way.
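One crude way to build persistence-dependent components (a sketch only; the paper uses a different, frequency-based decomposition, and the window lengths here are assumptions for quarterly data): differences of one-sided moving averages separate fluctuations by roughly how long they persist, and the three components sum back to the original series:

```python
import numpy as np
import pandas as pd

def persistence_components(g, short=4, long=16):
    """Split a quarterly growth-rate series into fluctuations persisting
    about a year or less, one to four years, and more than four years,
    using one-sided (trailing) moving averages."""
    g = pd.Series(np.asarray(g, float))
    ma_short = g.rolling(short).mean()     # ~1-year trailing average
    ma_long = g.rolling(long).mean()       # ~4-year trailing average
    return pd.DataFrame({
        "temporary": g - ma_short,         # lasts about a year or less
        "medium": ma_short - ma_long,      # one to four years
        "persistent": ma_long,             # more than four years
    })
```

Regressing output growth on the separate components of oil price growth (rather than on the aggregate) is what lets persistence-specific effects like those in the abstract show up.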
- Non-nested Model Selection/Validation: Making Credible Postsample Inference Feasible
  Ashley, Richard A. (Virginia Tech, 1995-04)
  Effective, credible inference with respect to the postsample forecasting performance of time series models is widely held to be infeasible. Consequently, the model selection and Granger-causality literatures have focused almost exclusively on in-sample tests, which can easily be biased by typical specification-search activity. Indeed, the postsample error series generated by competing models are typically cross-correlated, serially correlated, and not even clearly Gaussian; thus, postsample inference procedures are necessarily only asymptotically valid. As a result, a postsample period large enough to yield credible inferences is perceived to be too costly in terms of sample observations foregone. This paper describes a new, re-sampling based, approach to postsample inference which, by explicitly quantifying the inferential uncertainty caused by the limited length of the postsample period, makes it feasible to obtain credible postsample inferences using postsample periods of reasonable length. For a given target level of inferential precision – e.g., significance at the 5% level – this new approach also provides explicit estimates of both how strong the postsample forecasting efficiency evidence in favor of one of two models must be (for a given length postsample period) and how long a postsample period is necessary, if the evidence is of given strength. These results indicate that postsample model validation periods substantially longer than the 5 to 20 periods typically reserved in past studies are necessary in order to credibly detect 20%–30% MSE reductions. This approach also quantifies the inferential impact of different forecasting efficiency criterion choices – e.g., MSE vs. MAE vs. asymmetric criteria and the use of expected loss differentials (as in Diebold and Mariano (1994)) vs. ratios of expected losses. The value of this new approach to postsample inference is illustrated using postsample forecasting error data from Ashley, Granger, and Schmalensee (1980), in which evidence was presented for unidirectional Granger-causation from fluctuations in aggregate U.S. consumption expenditures to fluctuations in U.S. aggregate expenditures on advertising.
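The resampling idea can be sketched as follows (illustrative; the block length, the one-sided alternative, and the squared-error criterion are assumptions, not the paper's exact procedure): center the postsample loss differentials to impose the null of equal forecasting accuracy, then block-bootstrap their mean to quantify the inferential uncertainty induced by a short postsample period:

```python
import numpy as np

def bootstrap_loss_diff_pvalue(e1, e2, reps=10_000, block=4, seed=0):
    """Block-bootstrap p-value for H0: equal expected squared-error loss,
    vs. the alternative that model 1 (errors e1) forecasts better."""
    rng = np.random.default_rng(seed)
    d = np.square(e1) - np.square(e2)   # postsample loss differentials
    n, dbar = len(d), d.mean()
    d0 = d - dbar                       # impose the null of a zero mean
    hits = 0
    for _ in range(reps):
        starts = rng.integers(0, n - block + 1, size=n // block + 1)
        idx = np.concatenate([np.arange(s, s + block) for s in starts])[:n]
        hits += d0[idx].mean() <= dbar  # bootstrap mean at least as extreme
    return hits / reps
```

Running this for varying n makes the abstract's point concrete: with only 5 to 20 postsample periods, even a 20%-30% MSE reduction rarely yields a small p-value.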
- A Reconsideration of Consistent Estimation of a Dynamic Panel Data Model in the Random Effects (Error Components) Framework
  Ashley, Richard A. (Virginia Tech, 2010-04-19)
  It is widely believed that the inclusion of lagged dependent variables in a panel data model necessarily renders the Random Effects (RE) estimators, based on OLS applied to the quasi-differenced variables, inconsistent. It is shown here that this belief is incorrect under the usual assumption made in this context — i.e., that the other regressors are strictly exogenous. This result follows from the fact that lagged values of the deviation of the quasi-differenced dependent variable from its mean can be written as a weighted sum of the past values of the quasi-differenced model error term, and these quasi-differenced errors are serially uncorrelated by construction. The RE estimators are therefore consistent. Thus, since instrumental variables methods — e.g., Arellano and Bond (1991) — clearly provide less precise estimates, the RE estimates are preferable if a Hausman test is unable to reject the null hypothesis that the parameter estimates of interest from both methods are equal.
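A minimal sketch of the RE estimator in question (assuming a balanced panel, a known quasi-demeaning weight theta, and one strictly exogenous regressor; in practice theta is itself estimated): quasi-demean each variable and apply OLS with the lagged dependent variable included:

```python
import numpy as np

def re_dynamic_panel(y, x, theta):
    """OLS on quasi-demeaned data for y_it = a + rho*y_{i,t-1} + b*x_it + u_it.
    y, x: (N, T) arrays; theta: the usual RE quasi-demeaning weight,
    1 - sigma_u / sqrt(sigma_u**2 + T * sigma_alpha**2)."""
    ycur, ylag, xcur = y[:, 1:], y[:, :-1], x[:, 1:]
    def qd(z):                      # subtract theta times the individual mean
        return (z - theta * z.mean(axis=1, keepdims=True)).ravel()
    Z = np.column_stack([np.ones(ycur.size), qd(ylag), qd(xcur)])
    b, *_ = np.linalg.lstsq(Z, qd(ycur), rcond=None)
    return b                        # [intercept, rho_hat, b_hat]
```

The paper's argument is that the quasi-demeaned errors are serially uncorrelated by construction, so the lagged quasi-demeaned dependent variable is uncorrelated with them and this OLS step is consistent.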
- Sensitivity Analysis of an OLS Multiple Regression Inference with Respect to Possible Linear Endogeneity in the Explanatory Variables, for Both Modest and for Extremely Large Samples
  Ashley, Richard A.; Parmeter, Christopher F. (MDPI, 2020-03-16)
  This work describes a versatile and readily-deployable sensitivity analysis of an ordinary least squares (OLS) inference with respect to possible endogeneity in the explanatory variables of the usual k-variate linear multiple regression model. This sensitivity analysis is based on a derivation of the sampling distribution of the OLS parameter estimator, extended to the setting where some, or all, of the explanatory variables are endogenous. In exchange for restricting attention to possible endogeneity which is solely linear in nature—the most typical case—no additional model assumptions must be made, beyond the usual ones for a model with stochastic regressors. The sensitivity analysis quantifies the sensitivity of hypothesis test rejection p-values and/or estimated confidence intervals to such endogeneity, enabling an informed judgment as to whether any selected inference is “robust” versus “fragile.” The usefulness of this sensitivity analysis—as a “screen” for potential endogeneity issues—is illustrated with an example from the empirical growth literature. This example is extended to an extremely large sample, so as to illustrate how this sensitivity analysis can be applied to parameter confidence intervals in the context of massive datasets, as in “big data”.
- Sensitivity Analysis of OLS Multiple Regression Inference with Respect to Possible Linear Endogeneity in the Explanatory Variables
  Ashley, Richard A.; Parmeter, Christopher F. (Virginia Tech, 2019-06-17)
  This work describes a versatile sensitivity analysis of OLS hypothesis test rejection p-values with respect to possible endogeneity in the explanatory variables of the usual k-variate linear multiple regression model which practitioners can readily deploy in their research. This sensitivity analysis is based on a derivation of the asymptotic distribution of the OLS parameter estimator, but extended in a particularly straightforward way to the case where some or all of the explanatory variables are endogenous to a specified degree — that is, where the population covariances of the explanatory variables with the model errors are given. In exchange for restricting attention to possible endogeneity which is solely linear in nature, no additional model assumptions must be made, beyond the usual ones for a model with stochastic regressors. In addition, we also use simulation methods to quantify the uncertainty in the sensitivity analysis results introduced by replacing the population variance-covariance matrix by its sample estimate. The usefulness of the analysis — as a “screen” for potential endogeneity issues — is illustrated with an example from the empirical growth literature.
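The core computation behind a sensitivity analysis of this kind (for both of the Ashley and Parmeter entries above) can be sketched as follows. This is illustrative only, not the authors' exact algorithm: it considers endogeneity in a single regressor, treats sample moments as the population ones, and reuses the OLS standard error for the corrected estimate. It maps each assumed correlation between a regressor and the model error into an implied OLS bias, corrects the estimate, and recomputes the p-value:

```python
import numpy as np
from scipy import stats

def endogeneity_sensitivity(X, y, j, rho_grid):
    """For each assumed correlation rho between regressor j and the model
    error, report the implied bias-corrected estimate of beta_j and the
    two-sided p-value for H0: beta_j = 0. X should include the intercept."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b
    s_u = u.std(ddof=k)                        # residual standard deviation
    Sxx = X.T @ X / n
    se_j = s_u * np.sqrt(np.linalg.inv(X.T @ X)[j, j])
    results = []
    for rho in rho_grid:
        cov_xe = np.zeros(k)
        cov_xe[j] = rho * X[:, j].std() * s_u  # implied cov(x_j, error)
        bias = np.linalg.solve(Sxx, cov_xe)    # plim(b_OLS) minus beta
        b_adj = b[j] - bias[j]
        results.append((rho, b_adj,
                        2 * stats.t.sf(abs(b_adj / se_j), df=n - k)))
    return results
```

Scanning rho_grid shows how much assumed endogeneity it takes to overturn a given rejection, which is exactly the robust-versus-fragile judgment the abstracts describe.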
- Subset-Continuous-Updating GMM Estimators for Dynamic Panel Data Models
  Ashley, Richard A.; Sun, Xiaojin (MDPI, 2016-11-30)
  The two-step GMM estimators of Arellano and Bond (1991) and Blundell and Bond (1998) for dynamic panel data models have been widely used in empirical work; however, neither of them performs well in small samples with weak instruments. The continuous-updating GMM estimator proposed by Hansen, Heaton, and Yaron (1996) is in principle able to reduce the small-sample bias, but it involves high-dimensional optimizations when the number of regressors is large. This paper proposes a computationally feasible variation on these standard two-step GMM estimators by applying the idea of continuous-updating to the autoregressive parameter only, given the fact that the absolute value of the autoregressive parameter is less than unity as a necessary requirement for the data-generating process to be stationary. We show that our subset-continuous-updating method does not alter the asymptotic distribution of the two-step GMM estimators, and it therefore retains consistency. Our simulation results indicate that the subset-continuous-updating GMM estimators outperform their standard two-step counterparts in finite samples in terms of the estimation accuracy on the autoregressive parameter and the size of the Sargan-Hansen test.
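The subset-continuous-updating idea reduces the CU-GMM search to one dimension. A sketch follows (the moments callback is a user-supplied assumption that must concentrate the remaining model parameters out at each candidate value; the bounds reflect the stationarity restriction on the autoregressive parameter):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def subset_cu_gmm(moments, bounds=(-0.99, 0.99)):
    """Continuously-updated GMM over the autoregressive parameter alone.
    `moments(rho)` must return the (T x m) matrix of per-observation moment
    contributions, with all other model parameters concentrated out."""
    def J(rho):
        g = moments(rho)
        gbar = g.mean(axis=0)
        W = np.linalg.pinv(g.T @ g / len(g))   # CU weight matrix at this rho
        return len(g) * gbar @ W @ gbar
    res = minimize_scalar(J, bounds=bounds, method="bounded")
    return res.x, res.fun   # rho estimate and the Sargan-Hansen J statistic
```

Because the weighting matrix is re-evaluated at every candidate rho, this retains the continuous-updating flavor while avoiding the high-dimensional optimization the abstract identifies as the obstacle to full CU-GMM.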