Browsing by Author "Habibnia, Ali"
Now showing 1 - 6 of 6
- Deep Time: Deep Learning Extensions to Time Series Factor Analysis with Applications to Uncertainty Quantification in Economic and Financial Modeling. Miller, Dawson Jon (Virginia Tech, 2022-09-12). This thesis establishes methods to quantify and explain uncertainty through high-order moments in time series data, along with first-principal-based improvements on the standard autoencoder and variational autoencoder. While the first-principal improvements on the standard variational autoencoder provide additional means of explainability, we ultimately look to non-variational methods for quantifying uncertainty under the autoencoder framework. We utilize Shannon's differential entropy to accomplish the task of uncertainty quantification in a general nonlinear and non-Gaussian setting. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to this more general framework, where nonlinear and non-Gaussian characteristics in the data are permitted. Furthermore, we establish explicit connections between high-order moments in the data and those in the latent space, which induce a natural latent space decomposition and, by extension, an explanation of the estimated uncertainty. The proposed methods are intended to be used in economic and financial factor models in state space form, building on recent developments in the application of neural networks to factor models for financial and economic time series analysis. Finally, we demonstrate the efficacy of the proposed methods on high-frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets.
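A minimal sketch of the differential-entropy idea mentioned in the abstract above, not the thesis's actual method: PCA stands in for the autoencoder's encoder, the data are synthetic heavy-tailed autoregressive series, and SciPy's nonparametric estimator supplies per-component differential entropies. All names and numbers are illustrative.

```python
import numpy as np
from scipy.stats import differential_entropy
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic heavy-tailed AR(1) series as a stand-in for hourly FX returns.
T, n_series = 2000, 8
eps = rng.standard_t(df=5, size=(T, n_series))
x = np.zeros((T, n_series))
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + eps[t]

# "Encode" into a low-dimensional latent space (PCA as a linear stand-in
# for the autoencoder's encoder).
z = PCA(n_components=3).fit_transform(x)

# Nonparametric differential entropy of each latent coordinate.
h = differential_entropy(z, axis=0)
print("per-component differential entropy:", np.round(h, 3))
```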
- Essays on Social Capital and Peer Effects. Jiang, He (Virginia Tech, 2022-06-03). In Chapter 2, I employ the educational production function to identify the different effects of making a friend of the same gender versus the opposite gender in a school network. Unlike other gender peer-effects literature that quantifies the causal effects of the proportion of girls only at an aggregated level, such as among other students in the same class, grade, or dorm, I study the gender of the five best friends nominated by each student. I address the endogeneity of friendship composition by employing a novel set of instrumental variables for the number of same-gender and opposite-gender friends. We find that having more friends, especially in the early accumulation stage, lowers test scores, and we explore the underlying mechanisms. In Chapter 3, I investigate the role of social learning in enrollment decisions for a public pension scheme. All else equal, if a qualified rural resident moves from a community where no other co-villagers participate in the new pension scheme to a community that is fully covered by the scheme, the probability that the individual enrolls increases by 0.541 percentage points. We use robustness checks to show that the estimated peer effects are driven by social interactions rather than by common unobserved factors. In Chapter 4, we use survey data on Chinese middle school students and the instrumental variables method to explore the different effects of making friends with the same gender and the opposite gender in a school network on mental health. The empirical results show that having a larger number of same-gender friends improves mental health, while having a larger number of opposite-gender friends hurts mental health.
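The identification strategy in these chapters relies on instrumental variables. The sketch below illustrates only the generic two-stage least squares (2SLS) idea with a fabricated friendship/test-score example; the instruments, variables, and data are hypothetical and are not the dissertation's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 1000
z = rng.normal(size=(n, 1))                  # instrument (hypothetical exogenous friendship shifter)
u = rng.normal(size=n)                       # unobserved confounder
friends = (1.0 * z[:, 0] + 0.8 * u + rng.normal(size=n)).reshape(-1, 1)
scores = -0.5 * friends[:, 0] + 0.8 * u + rng.normal(size=n)   # true effect: -0.5

# Stage 1: predict the endogenous regressor (number of friends) from the instrument.
friends_hat = LinearRegression().fit(z, friends).predict(z)
# Stage 2: regress the outcome on the first-stage fitted values.
iv = LinearRegression().fit(friends_hat, scores)
ols = LinearRegression().fit(friends, scores)
print(f"OLS estimate (biased): {ols.coef_[0]:.3f}   2SLS estimate: {iv.coef_[0]:.3f}")
```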
- A geometric approach for accelerating neural networks designed for classification problems. Saffar, Mohsen; Kalhor, Ahmad; Habibnia, Ali (Nature Portfolio, 2024-07-30). This paper proposes a geometric technique for compressing convolutional neural networks to accelerate computation and improve generalization by eliminating non-informative components. The technique uses a geometric index called the separation index to evaluate the functionality of network elements such as layers and filters. Applying this index together with a center-based separation index, the authors propose a systematic algorithm that optimally compresses convolutional and fully connected layers: it excludes layers with low performance, selects the best subset of filters in the filtering layers, and tunes the parameters of fully connected layers using the center-based separation index. An illustrative example of classifying the CIFAR-10 dataset explains the algorithm step by step. The proposed method achieves strong pruning results on networks trained on the CIFAR-10 and ImageNet datasets, pruning 87.5%, 77.6%, and 78.8% of the parameters of VGG16, GoogLeNet, and DenseNet, respectively. Comparisons with state-of-the-art works demonstrate the effectiveness of the proposed method.
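As a rough illustration of the kind of geometric index described above, here is one common nearest-neighbor notion of a separation index: the fraction of samples whose nearest neighbor in a layer's feature space shares their class label. The paper's exact definitions, and its center-based variant, may differ; features and labels below are toy data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def separation_index(features: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose nearest neighbor (excluding themselves) has the same label."""
    nn = NearestNeighbors(n_neighbors=2).fit(features)
    _, idx = nn.kneighbors(features)          # idx[:, 0] is the point itself
    return float(np.mean(labels[idx[:, 1]] == labels))

# Toy usage: features from a hypothetical layer, 3 classes.
rng = np.random.default_rng(1)
feats = rng.normal(size=(300, 64))
labs = rng.integers(0, 3, size=300)
print("separation index:", separation_index(feats, labs))
```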
- Macroeconomic Forecasting: Statistically Adequate, Temporal Principal Components. Dorazio, Brian Arthur (Virginia Tech, 2023-06-05). The main goal of this dissertation is to expand the use of Principal Component Analysis (PCA) in macroeconomic forecasting, particularly in cases where traditional principal components fail to account for all of the systematic information making up common macroeconomic and financial indicators. At the outset, PCA is viewed as a statistical model derived from the reparameterization of the Multivariate Normal model in Spanos (1986). To motivate a PCA forecasting framework that prioritizes sound model assumptions, simulation experiments demonstrate that model mis-specification erodes the reliability of inferences. The Vector Autoregressive (VAR) model at the center of these simulations allows for the Markov (temporal) dependence inherent in macroeconomic data and serves as the basis for extending conventional PCA. Stemming from the relationship between PCA and the VAR model, an operational out-of-sample forecasting methodology is prescribed that incorporates statistically adequate, temporal principal components, i.e., principal components that capture not only Markov dependence but all of the other relevant information in the original series. The macroeconomic forecasts produced by applying this framework to several common macroeconomic indicators outperform standard benchmarks in terms of predictive accuracy over longer forecasting horizons.
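A stylized sketch, loosely in the spirit of the temporal principal components described above: the series are augmented with one lag before extracting components, which are then used in a one-step-ahead forecasting regression. This is not the dissertation's statistically adequate procedure, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
T, n = 400, 10
X = np.zeros((T, n))
shocks = rng.normal(size=(T, n))
for t in range(1, T):
    X[t] = 0.5 * X[t - 1] + shocks[t]            # Markov (VAR(1)-style) dependence
y = X @ rng.normal(size=n) + rng.normal(size=T)  # hypothetical target indicator

Z = np.hstack([X[1:], X[:-1]])                   # augment each observation with one lag
pcs = PCA(n_components=4).fit_transform(Z)       # "temporal" principal components

# One-step-ahead forecast of y from the components observed one period earlier.
model = LinearRegression().fit(pcs[:-1], y[2:])
print("forecast for next period:", model.predict(pcs[-1:]))
```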
- Three Essays in Experimental Economics. Bradley, Austin Edward (Virginia Tech, 2024-06-21). The experiments presented and analyzed in this dissertation concern two well-established phenomena in behavioral economics: that human decision makers hold biased beliefs about probability, and that free-form communication between economic agents promotes cooperation far in excess of what standard theory predicts. First, Chapter 2 studies subjective probability, focusing on the well-established existence of both the Hot Hand and Gambler's Fallacies, the false expectations of positive and negative autocorrelation, respectively. Both biases are prevalent across a wide variety of real-world contexts; what causes a person to favor one over the other? We conduct an experiment in which we observe fully informed subjects switching between the Hot Hand and Gambler's Fallacies when predicting future outcomes of mathematically identical sequences. Subjects exhibit the Gambler's Fallacy when predicting single outcomes but favor the Hot Hand when asked explicitly to estimate probabilities. Connecting our results to existing theory suggests that very subtle changes in framing lead decision makers to employ substantially different approaches to forming predictions. The remainder of the dissertation studies cheap-talk communication between human subjects playing incentivized trust games. In Chapter 3, we study free-form communication using a dataset of over 1,000 messages sent between participants in a laboratory Trust game. We employ Natural Language Processing to systematically generate meaningful partitions of the message space, which we then examine with established regression approaches. Our investigation reveals features correlated with trust that have not previously been considered. Most notably, highly detailed, specific promises establish trust more effectively than other messages that signal the same intended action. Additionally, the most and least trusted messages in our dataset differ starkly in quality: highly trusted messages are longer, more detailed, and contain fewer grammatical errors, whereas the least trusted messages tend to be brief and prone to errors. In Chapter 4, we examine whether this difference in message quality affects trust by acting as a signal of effort. We report the results of an experiment designed to test whether promises that require higher levels of effort result in greater trust from their recipients. We find that more costly promises lead recipients to trust more frequently; however, there is no corresponding, significant difference in the trustworthiness of their senders. Further, when asked their beliefs explicitly, recipients do not believe that higher-cost promises are more likely to be trustworthy. This presents a potential challenge to our understanding of trust between economic decision makers: if effort increases trust without altering receivers' beliefs, receivers must be concerned with factors other than their own payoff maximization. We conclude with a follow-up experiment in which varying effort cost cannot convey the sender's intentions; however, the results are inconclusive.
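Chapter 3 partitions free-form messages with Natural Language Processing. The sketch below shows one generic way to do that, TF-IDF features plus k-means clustering on toy messages; the chapter's actual pipeline and features may well differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy stand-ins for messages sent in a laboratory Trust game.
messages = [
    "I promise to return exactly half of whatever you send.",
    "Trust me, you won't regret it.",
    "Send everything and I will give back 60 percent, guaranteed.",
    "hi",
    "I will split it fairly with you, 50-50, you have my word.",
    "do it",
]

# Vectorize the messages and partition them into two clusters.
X = TfidfVectorizer().fit_transform(messages)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for msg, lab in zip(messages, labels):
    print(lab, msg)
```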
- Volatility Modeling and Risk Measurement using Statistical Models based on the Multivariate Student's t Distribution. Banasaz, Mohammad Mahdi (Virginia Tech, 2022-04-01). An effective risk management program requires reliable risk measurement. Failure to assess the risks inherent in mortgage-backed securities in the U.S. market contributed to the financial crisis of 2007–2008, which prompted government regulators to pay greater attention to controlling risk in banks, investment funds, credit unions, and other financial institutions to prevent bankruptcy and future financial crises. To calculate risk reliably, this thesis focuses on the statistical modeling of expected return and volatility. The primary aim of this study is to propose a framework, based on the probabilistic reduction approach, for reliably quantifying market risk using statistical models and historical data. Particular emphasis is placed on the validity of the probabilistic assumptions underlying risk measurement, by demonstrating how a statistically misspecified model leads the evaluation of risk astray. The concept of market risk is explained by discussing the narrow definition of risk in a financial context and its evaluation and implications for financial management. After highlighting empirical evidence and discussing the limitations of ARCH/GARCH-type volatility models using exchange rate and stock market data, we propose Student's t Autoregressive models to estimate expected return and volatility and to measure risk using Value at Risk (VaR) and Expected Shortfall (ES). The misspecification testing analysis shows that the proposed models adequately capture the chance regularities in exchange rate and stock index data and give reliable estimates of the regression and skedastic functions used in risk measurement. According to the empirical findings, the COVID-19 pandemic in the first quarter of 2020 posed an enormous risk to global financial markets; risk in financial markets returned to pre-pandemic levels in 2021, after COVID-19 vaccine distribution started in developed countries.
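To make the VaR and ES quantities concrete, here is a simplified sketch that fits a static Student's t distribution to synthetic returns and computes both measures at the 5% level. The thesis itself uses Student's t Autoregressive models with estimated regression and skedastic functions, which this static fit does not replicate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic heavy-tailed daily returns as a stand-in for exchange rate or index data.
returns = 0.0005 + 0.01 * rng.standard_t(df=4, size=2500)

nu, loc, scale = stats.t.fit(returns)        # fit a location-scale Student's t
alpha = 0.05
var_cutoff = stats.t.ppf(alpha, nu, loc=loc, scale=scale)
VaR = -var_cutoff                            # loss at the 5% quantile

# ES: average loss conditional on exceeding VaR, via Monte Carlo from the fitted distribution.
sims = stats.t.rvs(nu, loc=loc, scale=scale, size=200_000, random_state=42)
ES = -sims[sims <= var_cutoff].mean()
print(f"5% VaR: {VaR:.4f}   5% ES: {ES:.4f}")
```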