Scholarly Works, Economics
Research articles, presentations, and other scholarship
Browsing Scholarly Works, Economics by Department "Economics"
Now showing 1 - 20 of 45
- Amenities, affordability, and housing vouchers. Bieri, David S.; Dawkins, Casey J. (Blackwell Publishing Inc., 2018-06-19). Against the background of an emerging rental affordability crisis, we examine how the standard rule that households should not spend more than 30% of their income on housing expenditures leads to inefficiencies in the context of federal low-income housing policy. We quantify how the current practice of locally indexing individual rent subsidies in the Housing Choice Voucher (HCV) program regardless of quality-of-life conditions implicitly incentivizes recipients to live in high-amenity areas. We also assess a novel scenario for housing policy reform that adjusts subsidies by the amenity expenditures of low-income households, permitting national HCV program coverage to increase. © 2018 The Authors. Journal of Regional Science published by Wiley Periodicals, Inc.
- Application of stochastic choice modeling to policy analysis of public goods: a case study of air quality improvements. Loehman, E.; De, V. H. (MIT Press, 1982)
- Asset Prices under Random Risk Preferences. Tsang, Kwok Ping; Tserenjigmid, Gerelt (2016-12-05). We consider an overlapping-generations model with two types of investors: the stable investors have constant risk aversion, but the unstable investors have random levels of risk aversion across different generations. Investors are not sure about how risk averse future investors are. We show that i) a small amount of randomness in the risk aversion or ii) a small population of the unstable investors generates a large deviation from fundamental price and a high price volatility.
- Beamforming when the sound velocity is not precisely known. Hinich, Melvin J. (Acoustical Society of America, 1980-08). Beamforming is an integral part of most signal processing systems in active or passive sonars. The delays used to generate a beam are functions of the sound velocity, which depends on temperature, salinity, and pressure. There is a loss in array gain if the delays are incorrectly set. This will occur when the sound velocity in the water surrounding the hydrophones is different from the velocity that was used to set the delays. This paper makes two points: (1) fixed delay line sonars suffer a loss in gain when the true sound speed in the water is different from the velocity that is used to set the delays, and (2) there are signal processing techniques for two- or three-dimensional arrays that yield source bearings that are independent of the true sound velocity. These techniques require variable time delays, which can be realized using digital processing.
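The gain loss described in the entry above can be illustrated with a small numerical sketch (all values hypothetical): a delay-and-sum beamformer whose steering delays are computed with an assumed sound speed loses coherence when the true speed in the water differs.

```python
import numpy as np

# Hypothetical line array: 16 hydrophones, 3 m spacing, narrowband source.
f = 200.0                              # signal frequency, Hz
c_true, c_assumed = 1500.0, 1480.0     # true vs. assumed sound speed, m/s
theta = np.deg2rad(60.0)               # source bearing from the array axis
x = np.arange(16) * 3.0                # sensor positions along the axis, m

t_arrival = x * np.cos(theta) / c_true     # actual wavefront delays
t_steer = x * np.cos(theta) / c_assumed    # delays set in the beamformer

# Normalized coherent response: equals 1.0 only if the delays match exactly.
phase_error = 2 * np.pi * f * (t_arrival - t_steer)
response = abs(np.exp(1j * phase_error).sum()) / len(x)
print(response)  # slightly below 1.0: array gain is lost to the mismatch
```

For this small speed error the loss is mild; it grows with aperture, frequency, and the size of the velocity mismatch.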
- Bearing estimation using a large towed array. Hinich, Melvin J.; Rule, William (Acoustical Society of America, 1975). When a towed array of hydrophones is significantly nonlinear due to bending, ordinary linear array beamforming gives a biased estimate of the true source bearing. By processing the array as a sequence of smaller aperture subarrays and then computing the mean of the subarray bearings, the variation due to bending is reduced and a reasonably precise estimate is obtained if the average bending angle with respect to the nominal axis is small. The median and mean subarray bearings are analyzed for a theoretical statistical model and are tested using artificial data for various sinusoidal array geometries.
- Bearing estimation using a perturbed linear array. Hinich, Melvin J. (Acoustical Society of America, 1977-06). A linear hydrophone array which is towed in the ocean is subject to snakelike bending. If the array is processed as if it were truly linear, the author has shown that the bending causes a deflection of the measured bearing of a fixed source from its true bearing relative to the array. This deflection results from patterned perturbations in the true sensor positions along the nominal array axis. As the perturbation pattern changes with the flexing of the array, the source appears to move around. A probability model of the perturbations is used in order to develop a theoretical solution to the question of how the space-time information gathered by the array is best used to measure source bearing. The method used to reduce the bending-induced deflection of the bearing is to group the sensors into adjacent subarrays, process these arrays over short time slices, average the subarray bearings for each time period, and then average these averages over time. This averaging method significantly improves the bearing accuracy of the source when the array is bent according to the model.
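A minimal numerical sketch of the averaging scheme in the two towed-array entries above (all quantities hypothetical): treat each subarray's bearing estimate as the true bearing plus a bending-induced deflection, average across subarrays within each time slice, then average those averages over time.

```python
import numpy as np

rng = np.random.default_rng(1)
true_bearing = 40.0                    # degrees; hypothetical fixed source
n_subarrays, n_slices = 8, 50

# Hypothetical deflections of each subarray bearing caused by array bending,
# varying across subarrays and time slices as the array flexes.
deflection = rng.normal(0.0, 3.0, size=(n_slices, n_subarrays))
estimates = true_bearing + deflection

per_slice = estimates.mean(axis=1)     # average subarray bearings per time slice
final = per_slice.mean()               # then average these averages over time
print(final)   # close to 40.0: averaging suppresses the bending deflections
```

This only sketches the averaging step; the papers derive when it works (small average bending angle) and compare mean against median subarray bearings.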
- Beyond Optimal Forecasting. Ashley, Richard A. (Virginia Tech, 2006-11-04). While the conditional mean is known to provide the minimum mean square error (MSE) forecast – and hence is optimal under a squared-error loss function – it must often in practice be replaced by a noisy estimate when model parameters are estimated over a small sample. Here two results are obtained, both of which motivate the use of forecasts biased toward zero (shrinkage forecasts) in such settings. First, the noisy forecast with minimum MSE is shown to be a shrinkage forecast. Second, a condition is derived under which a shrinkage forecast stochastically dominates the unbiased forecast over the class of loss functions monotonic in the forecast error magnitude. The appropriate amount of shrinkage from either perspective depends on a noisiness parameter which must be estimated, however, so the actual reduction in expected losses from shrinkage forecasting is an empirical issue. Simulation results over forecasts from a large variety of multiple regression models indicate that feasible shrinkage forecasts typically do provide modest improvements in forecast MSE when the noise in the estimate of the conditional mean is substantial.
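A small Monte Carlo sketch of the first result above, with hypothetical numbers; the shrinkage factor below assumes the noise variance is known, an idealization of the setting the abstract describes (where that noisiness parameter must itself be estimated).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu = 1.0            # true conditional mean of y
sd_noise = 1.0      # std of the estimation noise in the fitted mean
sd_innov = 1.0      # std of the innovation in y

mu_hat = mu + rng.normal(0.0, sd_noise, n)   # noisy estimate of the mean
y = mu + rng.normal(0.0, sd_innov, n)        # realized outcomes

# Shrink the noisy forecast toward zero; this factor minimizes MSE
# when mu and sd_noise are known.
c = mu**2 / (mu**2 + sd_noise**2)

mse_unbiased = np.mean((y - mu_hat) ** 2)
mse_shrunk = np.mean((y - c * mu_hat) ** 2)
print(mse_shrunk < mse_unbiased)   # True: shrinkage lowers forecast MSE
```

With these values the unbiased forecast's MSE is about 2.0 while the shrunk forecast's is about 1.5, matching the textbook bias-variance trade-off.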
- Cloud-Sourcing: Using an Online Labor Force to Detect Clouds and Cloud Shadows in Landsat Images. Yu, Ling; Ball, Sheryl B.; Blinn, Christine E.; Moeltner, Klaus; Peery, Seth; Thomas, Valerie A.; Wynne, Randolph H. (MDPI, 2015-02-26). We recruit an online labor force through Amazon.com’s Mechanical Turk platform to identify clouds and cloud shadows in Landsat satellite images. We find that a large group of workers can be mobilized quickly and relatively inexpensively. Our results indicate that workers’ accuracy is insensitive to wage, but deteriorates with the complexity of images and with time-on-task. In most instances, human interpretation of cloud-impacted area using a majority rule was more accurate than an automated algorithm (Fmask) commonly used to identify clouds and cloud shadows. However, cirrus-impacted pixels were better identified by Fmask than by human interpreters. Crowd-sourced interpretation of cloud-impacted pixels appears to be a promising means by which to augment or potentially validate fully automated algorithms.
- Credible Granger-Causality Inference with Modest Sample Lengths: A Cross-Sample Validation Approach. Ashley, Richard A.; Tsang, Kwok Ping (MDPI, 2014-03-25). Credible Granger-causality analysis appears to require post-sample inference, as it is well known that in-sample fit can be a poor guide to actual forecasting effectiveness. However, post-sample model testing requires an often-consequential a priori partitioning of the data into an “in-sample” period – purportedly utilized only for model specification/estimation – and a “post-sample” period, purportedly utilized (only at the end of the analysis) for model validation/testing purposes. This partitioning is usually infeasible, however, with samples of modest length – e.g., T ≤ 150 – as is common in both quarterly data sets and/or in monthly data sets where institutional arrangements vary over time, simply because there is in such cases insufficient data available to credibly accomplish both purposes separately. A cross-sample validation (CSV) testing procedure is proposed below which both eliminates the aforementioned a priori partitioning and also substantially ameliorates this power versus credibility predicament – preserving most of the power of in-sample testing (by utilizing all of the sample data in the test), while also retaining most of the credibility of post-sample testing (by always basing model forecasts on data not utilized in estimating that particular model’s coefficients). Simulations show that the price paid, in terms of power relative to the in-sample Granger-causality F test, is manageable. An illustrative application is given, to a re-analysis of the Engel and West [1] study of the causal relationship between macroeconomic fundamentals and the exchange rate; several of their conclusions are changed by our analysis.
- Demand for electricity in Virginia. Murray, M. P.; Spann, R.; Pulley, L.; Beauvais, E. (MIT Press, 1978)
- Economic models for TMDL assessment and implementation. Bosch, Darrell J.; Ogg, Clayton; Osei, Edward; Stoecker, Arthur L. (American Society of Agricultural and Biological Engineers, 2006). The TMDL assessment and implementation process is designed to achieve designated uses for water bodies, which are set by states based on criteria including perceived costs and benefits. Setting water quality goals based on designated uses and plans to achieve these goals have important implications for public welfare. Both treatment and damage costs should be considered in simultaneously determining the desired water quality goal and allocating pollution reductions among sources to achieve that goal. Assessing and implementing TMDL plans are complicated by uncertainties about pollution damages and stakeholder responses. Economic optimization or simulation models linked to water quality models allow water quality impacts and costs of TMDL standards to be assessed. Higher water quality thresholds may be reserved for watersheds with higher estimated benefits. Costs of achieving standards can be reduced by targeting reductions at pollution sources with the lowest costs of achieving reductions. Trading programs can help achieve efficient targeting of pollution reductions while distributing costs equitably. The effectiveness of economic models to assist in setting water quality goals and in TMDL program planning and implementation can be improved by using economic models to analyze costs and benefits of water quality improvements and to assist with pollution targeting and trading programs to minimize costs of reducing pollution. Multi-media impacts of pollution should be included within economic and environmental water quality models.
Given uncertainties about benefits and costs of achieving TMDL standards, policymakers and program managers need to collect more data on stakeholder responses to TMDL programs as well as better monitoring data on pollutant levels and functioning of aquatic systems.
- An Elementary Method for Detecting and Modeling Regression Parameter Variation Across Frequencies With an Application to Testing the Permanent Income Hypothesis. Boon, Tan Hui; Ashley, Richard A. (Virginia Tech, 1997-03). A simple technique for directly testing the parameters of a time series regression model for instability across frequencies is presented. The method can be easily implemented in the time domain, so parameter instability across frequency bands can be conveniently detected and modeled in conjunction with other econometric features of the problem at hand, such as simultaneity, cointegration, missing observations, cross-equation restrictions, etc. The usefulness of the new technique is illustrated with an application to a cointegrated consumption-income regression model, yielding a straightforward test of the permanent income hypothesis.
- Estimating bearing when the source is endfire to an array. Hinich, Melvin J. (Acoustical Society of America, 1979-03). Consider the problem of estimating the direction of arrival of a plane wave using a linear array of length L with M sensors. Let θ denote the direction with respect to the array axis. Assume that we know that θ > 0; the sign of θ is ambiguous, given data from a linear array. The approximations to the rms errors of the maximum likelihood and the least-squares estimators of θ are proportional to |sin θ|⁻¹. The Cramér-Rao bound also has this property. This seems to imply that these estimators have infinite variances as θ → 0. This is not the case, as I will now show.
- Estimating signal and noise using a random array. Hinich, Melvin J. (Acoustical Society of America, 1982-01). This paper presents approximations for the rms error of the maximum likelihood estimator of the direction of a plane wave incident on a random array. The sensor locations are assumed to be realizations of independent, identically distributed random vectors. The second part of the paper presents an asymptotically unbiased estimator of the noise wavenumber spectrum from random array data.
- Estimating the consumption and investment demands for housing and their effect on housing tenure status. Ioannides, Y. M.; Rosenthal, S. S. (MIT Press, 1994-02). Theoretical work suggests that families live in owner-occupied housing if their investment demand for housing exceeds their consumption demand for housing. Using household data from the 1983 Survey of Consumer Finances, we test this theory by estimating an ordered probit model of whether families rent without owning property, rent while owning property other than their home, own their home without owning other properties, or own their home in addition to other properties. For owner-occupiers who own additional property, both the investment and consumption demands are directly observed, enabling us to separately identify these functions. Results suggest that investment demand is more sensitive to wealth and income than is consumption demand, but that consumption demand is more sensitive to demographic variables and proximity to urban suburbs. In addition, test results indicate that the principal residence of most owner-occupiers is determined by their consumption demand for housing, not their investment demand. Hence, previous empirical housing demand studies have likely identified the consumption demand for housing. Test results also suggest that, although the divergence between investment and consumption demand for housing is an important determinant of housing subtenure status, other factors also influence housing tenure decisions.
- Focal stimulation of the temporoparietal junction improves rationality in prosocial decision‑making. Li, Flora; Ball, Sheryl B.; Zhang, Xiaomeng; Smith, Alexander Charles (Nature Research, 2020). We tested the hypothesis that modulation of neurocomputational inputs to value-based decision-making affects the rationality of economic choices. The brain’s right temporoparietal junction (rTPJ) has been functionally associated with both social behavior and with domain-general information processing and attention. To identify the causal function of rTPJ in prosocial decisions, we administered focal high-definition transcranial direct current stimulation (HD-tDCS) while participants allocated money between themselves and a charity in a modified dictator game. Anodal stimulation led to improved rationality as well as increased charitable giving and egalitarianism, resulting in more consistent and efficient choices and increased sensitivity to the price of giving. These results are consistent with the theory that anodal stimulation of the rTPJ increases the precision of value computations in social decision-making. Our results demonstrate that theories of rTPJ function should account for the multifaceted role of the rTPJ in the representation of social inputs into value-based decisions.
- Frequency-wavenumber array processing. Hinich, Melvin J. (Acoustical Society of America, 1981-03). Most array signal processing systems use delay-and-sum beamforming to estimate source bearings. This paper demonstrates the close relationship between beamforming and frequency-wavenumber spectrum analysis. The latter approach has computational advantages over beamforming when the noise is spatially correlated. The wavenumber approach is used to derive the array response of a general linear or planar array to plane wave signals. The statistical properties of the maximum-likelihood estimators of source bearing and amplitude are presented for an array with many elements. Optimal array design is also discussed.
- Frustration and Anger in Games. Battigalli, Pierpaolo; Dufwenberg, Martin; Smith, Alexander Charles (Virginia Tech, 2015-08-24). Frustration, anger, and aggression have important consequences for economic and social behavior, concerning for example monopoly pricing, contracting, bargaining, traffic safety, violence, and politics. Drawing on insights from psychology, we develop a formal approach to exploring how frustration and anger, via blame and aggression, shape interaction and outcomes in economic settings.
- Identification of Coefficients in a Quadratic Moving Average Process Using the Generalized Method of Moments. Ashley, Richard A.; Patterson, Douglas M. (Virginia Tech, 2002-06-21). The output of a causal, stable, time-invariant nonlinear filter can be approximately represented by the linear and quadratic terms of a finite parameter Volterra series expansion. We call this representation the “quadratic nonlinear MA model” since it is the logical extension of the usual linear MA process. Where the actual generating mechanism for the data is fairly smooth, this quadratic MA model should provide a better approximation to the true dynamics than the two-state threshold autoregression and Markov switching models usually considered. As with linear MA processes, the nonlinear MA model coefficients can be estimated via least squares fitting, but it is essential to begin with a reasonably parsimonious model identification and non-arbitrary preliminary estimates for the parameters. In linear ARMA modeling these are derived from the sample correlogram and the sample partial correlogram, but these tools are confounded by nonlinearity in the generating mechanism. Here we obtain analytic expressions for the second and third order moments – the autocovariances and third order cumulants – of a quadratic MA process driven by i.i.d. symmetric innovations. These expressions allow us to identify the significant coefficients in the process by using GMM to obtain preliminary coefficient estimates and their concomitant estimated standard errors. The utility of the method for specifying nonlinear time series models is illustrated using artificially generated data.
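A short simulation sketch (coefficients hypothetical) of the kind of process the entry above describes: even with symmetric i.i.d. innovations, the quadratic Volterra term makes the third-order moment of the output nonzero, which is the feature the derived cumulant expressions exploit for identification.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
e = rng.normal(0.0, 1.0, n + 1)     # i.i.d. symmetric (Gaussian) innovations
a, b = 0.5, 0.4                     # hypothetical linear and quadratic MA coefficients

# Quadratic nonlinear MA(1): linear MA term plus a quadratic Volterra term.
y = e[1:] + a * e[:-1] + b * e[:-1] ** 2
y = y - y.mean()

third_moment = np.mean(y ** 3)      # nonzero despite symmetric innovations
print(third_moment > 0.0)  # True
```

For a linear MA process driven by the same symmetric innovations this third-order moment would be zero, which is why the third-order cumulants carry information about the quadratic coefficients.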
- International Evidence On The Oil Price-Real Output Relationship: Does Persistence Matter? Ashley, Richard A.; Tsang, Kwok Ping (Virginia Tech, 2013-08-28). The literature on the relationship between real output growth and the growth rate in the price of oil, including an allowance for asymmetry in the impact of oil prices on output, continues to evolve. Here we show that a new technique, which allows us to control for both this asymmetry and also for the persistence of oil price changes, yields results implying that such control is necessary for a statistically adequate specification of the relationship. The new technique also yields an estimated model for the relationship which is more economically interpretable. In particular, using quarterly data from 1976 – 2007 on each of six countries which are essentially net oil importers, we find that changes in the growth rate of oil prices which persist for more than four years have a large and statistically significant impact on future output growth, whereas less persistent changes (lasting more than one year but less than four years) have no significant impact on output growth. In contrast, ‘temporary’ fluctuations in the oil price growth rate – persisting for only a year or less – again have a large and statistically significant impact on output growth for most of these countries. The results for the single major net oil producer in our sample (Norway) are distinct in an interesting way.