Scholarly Works, Economics

Research articles, presentations, and other scholarship

Recent Submissions

Now showing 1 - 20 of 96
  • Understanding the Effects of a Math Placement Exam on Calculus 1 Enrollment and Engineering Persistence
    Ryan, Olivia; Sajadi, Susan; Barrera, Sergio; Jaghargh, Reza Tavakoli (MDPI, 2025-01-26)
    Educational institutions are grappling with declining enrollments and low mathematics achievement. This study investigates how a math placement exam (ALEKS) influences enrollment in Calculus 1 and student persistence, taking into account academic preparation and demographic factors. It also evaluates the effects of remedial math courses for students near the placement cutoff. Using Astin’s input–environment–outcome model, the study analyzed data from 3380 students, employing a Kitagawa-Oaxaca-Blinder decomposition and a fuzzy regression discontinuity design. These methods identify unexplained differences across demographic groups and capture outcomes near the math placement cutoff. Based on the findings, a cutoff of 80% on the ALEKS exam is appropriate. The study underscores the role of math placement exams in shaping engineering enrollment and student success. These findings prompt a reevaluation of placement strategies and support mechanisms, particularly for underrepresented minority (URM), first-generation, and female students, to enhance equity and retention in engineering.
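    To make the identification strategy concrete, below is a minimal sketch of a fuzzy regression discontinuity estimate on simulated data, with scoring at or above the cutoff instrumenting for direct Calculus 1 enrollment. All variable names and parameter values are illustrative assumptions, not the authors' data or code.

        # Fuzzy RD: the cutoff shifts the probability of treatment, so the
        # local Wald ratio (jump in outcome / jump in treatment) estimates
        # the effect of treatment for compliers near the cutoff.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 3000
        score = rng.uniform(0, 100, n)              # hypothetical ALEKS scores
        z = (score >= 80.0).astype(float)           # instrument: at/above the cutoff
        # Treatment (direct Calculus 1 enrollment) jumps at the cutoff but is fuzzy.
        d = (rng.uniform(size=n) < 0.2 + 0.6 * z).astype(float)
        # Outcome (a persistence index) with a true treatment effect of 0.5.
        y = 0.5 * d + 0.01 * (score - 80.0) + rng.normal(0, 1, n)

        w = np.abs(score - 80.0) <= 10.0            # bandwidth around the cutoff
        num = y[w & (z == 1)].mean() - y[w & (z == 0)].mean()
        den = d[w & (z == 1)].mean() - d[w & (z == 0)].mean()
        # A local linear specification would also absorb the slope in the
        # running variable; this simple local Wald ratio ignores it.
        print("fuzzy RD estimate:", num / den)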
  • Merge-Proofness and Cost Solidarity in Shortest Path Games
    Bahel, Eric; Gómez-Rúa, María; Vidal-Puga, Juan (Springer, 2025-02)
    We study cost-sharing rules in network problems where agents seek to ship quantities of some good to their respective locations, and the cost on each arc is linear in the flow crossing it. In this context, Core Selection requires that each subgroup of agents pay a joint cost share that is not higher than its stand-alone cost. We prove that the demander rule, under which each agent pays the cost of her shortest path for each unit she demands, is the unique cost-sharing rule satisfying both Core Selection and Merge Proofness. The Merge Proofness axiom prevents distinct nodes from reducing their joint cost share by merging into a single node. An alternative characterization of the demander rule is obtained by combining Core Selection and Cost Solidarity. The Cost Solidarity axiom says that each agent’s cost share should be weakly increasing in the cost matrix.
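    The demander rule itself is simple to compute: each agent's share is the cost of her shortest path from the source multiplied by the number of units she demands. Below is a minimal sketch on a hypothetical three-node network (the graph, source label, and demands are illustrative assumptions):

        import heapq

        def dijkstra(adj, source):
            """Shortest-path costs from source; adj[u] is a list of (v, cost)."""
            dist = {source: 0.0}
            pq = [(0.0, source)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist.get(u, float("inf")):
                    continue
                for v, c in adj.get(u, []):
                    if d + c < dist.get(v, float("inf")):
                        dist[v] = d + c
                        heapq.heappush(pq, (d + c, v))
            return dist

        adj = {"s": [("a", 2.0), ("b", 5.0)], "a": [("b", 1.0)], "b": []}
        demand = {"a": 3.0, "b": 2.0}       # units each agent ships to her node
        dist = dijkstra(adj, "s")
        shares = {i: dist[i] * q for i, q in demand.items()}
        print(shares)  # {'a': 6.0, 'b': 6.0}; b's shortest path s->a->b costs 3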
  • A geometric approach for accelerating neural networks designed for classification problems
    Saffar, Mohsen; Kalhor, Ahmad; Habibnia, Ali (Nature Portfolio, 2024-07-30)
    This paper proposes a geometry-based technique for compressing convolutional neural networks to accelerate computation and improve generalization by eliminating non-informative components. The technique uses a geometric measure called the separation index to evaluate the functionality of network elements such as layers and filters. Applying this index along with the center-based separation index, a systematic algorithm is proposed that optimally compresses convolutional and fully connected layers. The algorithm excludes layers with low performance, selects the best subset of filters in the filtering layers, and tunes the parameters of fully connected layers using the center-based separation index. An illustrative example of classifying the CIFAR-10 dataset explains the algorithm step by step. The proposed method achieves strong pruning results on networks trained on the CIFAR-10 and ImageNet datasets, pruning 87.5%, 77.6%, and 78.8% of VGG16, GoogLeNet, and DenseNet parameters, respectively. Comparisons with state-of-the-art works demonstrate the effectiveness of the proposed method.
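    As a rough illustration of the kind of index involved, one common form of separation index is the fraction of examples whose nearest neighbor in a layer's output space shares their class label; layers or filters that do not raise this index contribute little and are candidates for pruning. The sketch below is a toy version under that assumption, not the authors' implementation:

        import numpy as np

        def separation_index(features, labels):
            """Fraction of points whose nearest neighbor has the same label."""
            x = np.asarray(features, dtype=float)
            y = np.asarray(labels)
            d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
            np.fill_diagonal(d2, np.inf)      # exclude each point itself
            return (y[d2.argmin(axis=1)] == y).mean()

        rng = np.random.default_rng(0)
        feats = np.vstack([rng.normal(0, 1, (50, 8)),    # class 0 cluster
                           rng.normal(3, 1, (50, 8))])   # class 1 cluster
        labs = np.array([0] * 50 + [1] * 50)
        print(separation_index(feats, labs))  # near 1 for well-separated classes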
  • Dynamic Unstructured Bargaining with Private Information: Theory, Experiment, and Outcome Prediction via Machine Learning
    Camerer, Colin F.; Nave, Gideon; Smith, Alec C. (INFORMS (Institute for Operations Research and the Management Sciences), 2018-05)
    We study dynamic unstructured bargaining with deadlines and one-sided private information about the amount available to share (the "pie size"). Using mechanism design theory, we show that given the players' incentives, the equilibrium incidence of bargaining failures ("strikes") should increase with the pie size, and we derive a condition under which strikes are efficient. In our setting, no equilibrium satisfies both equality and efficiency in all pie sizes. We derive two equilibria that resolve the trade-off between equality and efficiency by favoring either equality or efficiency. Using a novel experimental paradigm, we confirm that strike incidence is increasing in the pie size. Subjects reach equal splits in small pie games (in which strikes are efficient), while most payoffs are close to either the efficient or the equal equilibrium prediction when the pie is large. We employ a machine learning approach to show that bargaining process features recorded early in the game improve out-of-sample prediction of disagreements at the deadline. The process-feature predictions are as accurate as predictions from pie sizes alone, and combining process and pie data improves predictions further.
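    The prediction exercise can be pictured as comparing two forecasting models for end-of-game disagreement. Below is a minimal sketch on simulated data; the feature construction and functional form are illustrative assumptions, not the paper's dataset or learner:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 500
        pie = rng.uniform(1, 10, n)              # pie size
        early = rng.uniform(0, 1, n)             # e.g., an early concession rate
        p_strike = 1 / (1 + np.exp(-(0.4 * pie - 3 * early - 1)))
        strike = (rng.uniform(size=n) < p_strike).astype(int)

        for name, X in [("pie only", pie[:, None]),
                        ("pie + process", np.column_stack([pie, early]))]:
            auc = cross_val_score(LogisticRegression(), X, strike,
                                  cv=5, scoring="roc_auc").mean()
            print(f"{name}: mean out-of-sample AUC = {auc:.2f}")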
  • Enumerating rights: more is not always better
    Ball, Sheryl B.; Dave, Chetan; Dodds, Stefan (Springer, 2023-05-11)
    Contemporary political and policy debate increasingly employs the language of ‘rights’: how they are assigned and what entitlements individuals in a society are due. While the obvious constitutional design issues concern how rights enumeration affects the relationship between a government and its citizens, we instead analyze how rights framing shapes how citizens interact with each other. We design and implement a novel experiment to test whether social cooperation depends on the enumeration, and the positive or negative framing, of subjects' right to take a particular action. We find that when rights are framed positively, there exists an ‘entitlement effect’ that reduces social cooperation and crowds out the tendency of individuals to act pro-socially.
  • Vaccine Hesitancy and Betrayal Aversion
    Alsharawy, Abdelaziz; Dwibedi, Esha; Aimone, Jason; Ball, Sheryl B. (Springer, 2022-05-17)
    The determinants of vaccine hesitancy remain complex and context specific. Betrayal aversion occurs when an individual is hesitant to risk being betrayed in an environment involving trust. In this pre-registered vignette experiment, we show that betrayal aversion is not captured by current vaccine hesitancy measures despite representing a significant source of unwillingness to be vaccinated. Our survey instrument was administered to 888 United States residents via Amazon Mechanical Turk in March 2021. We find that over a third of participants have betrayal averse preferences, resulting in an 8–26% decline in vaccine acceptance, depending on the betrayal source. Interestingly, attributing betrayal risk to scientists or government results in the greatest declines in vaccine acceptance. We explore an exogenous message intervention and show that an otherwise effective message acts narrowly and fails to reduce betrayal aversion. Our results demonstrate the importance of betrayal aversion as a preference construct in the decision to vaccinate.
  • The Ashley and Patterson (1986) test for serial independence in daily stock returns, revisited
    Ashley, Richard A.; Najafi, Faezeh (Springer, 2024-11-22)
    We update and extend the non-parametric test proposed in Ashley and Patterson (J Financ Quant Anal 21:221–227, 1986) of the proposition that the (pre-whitened) daily stock returns for a firm are serially independent, and hence unpredictable from their own past. That paper applied the test to daily returns from 1962 to 1981 for several U.S. corporations and aggregate indices, finding mixed evidence against the null hypothesis of serial independence. The returns dataset is updated here to include thirteen firms that are more relevant today, and the sample is extended through the end of 2023. We also update the simulation methodology to properly account for the conditional heteroskedasticity in the daily returns data, so the present results should be more statistically reliable. The results are broadly in line with our earlier ones, but they do suggest further avenues of research in this area.
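    In the same spirit, a simulation-based test of serial independence can account for conditional heteroskedasticity by generating its null distribution from a GARCH-type process rather than from i.i.d. draws. The sketch below is a generic illustration with an assumed GARCH(1,1) null and a simple portmanteau-style statistic; it is not the Ashley-Patterson statistic itself:

        import numpy as np

        def dep_stat(r, lags=5):
            """Sum of squared autocorrelations of the demeaned series."""
            r = r - r.mean()
            acf = [np.corrcoef(r[:-k], r[k:])[0, 1] for k in range(1, lags + 1)]
            return float(np.sum(np.square(acf)))

        def sim_garch(n, omega=0.05, alpha=0.08, beta=0.90, rng=None):
            """Serially uncorrelated returns with GARCH(1,1) volatility clustering."""
            if rng is None:
                rng = np.random.default_rng()
            h = omega / (1 - alpha - beta)        # start at unconditional variance
            r = np.empty(n)
            for t in range(n):
                r[t] = np.sqrt(h) * rng.standard_normal()
                h = omega + alpha * r[t] ** 2 + beta * h
            return r

        rng = np.random.default_rng(0)
        observed = sim_garch(2000, rng=rng)       # stand-in for pre-whitened returns
        obs = dep_stat(observed)
        null_draws = [dep_stat(sim_garch(2000, rng=rng)) for _ in range(499)]
        p_value = (1 + sum(s >= obs for s in null_draws)) / (1 + len(null_draws))
        print("simulation p-value:", round(p_value, 3))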
  • A model of the formation of multilayer networks
    Billand, Pascal; Bravard, Christophe; Joshi, Sumit; Mahmud, Ahmed Saber; Sarangi, Sudipta (Elsevier, 2023-10)
    We study the formation of multilayer networks where payoffs are determined by the degrees of players in each network. We begin by imposing either concavity or convexity in degree on the payoff function of the players. We then explore distinct network relationships that result from inter- and intra-network spillovers, captured by the properties of supermodularity/submodularity and strategic complementarity, respectively. We show the existence of equilibria and characterize them. Additionally, we establish both necessary and sufficient conditions for an equilibrium to occur. We also highlight the connection, in equilibrium, between inter-network externalities and the identity of linked players in one network given the identity of linked players in the other network. Furthermore, we analyze efficient multilayer networks. Finally, we extend our models to contexts with more than two layers, and to scenarios where agents receive a bonus for being connected to the same individuals in both networks.
  • ChatGPT has Aced the Test of Understanding in College Economics: Now What?
    Geerling, Wayne; Mateer, G. Dirk; Damodaran, Nikhil; Wooten, Jadrian (SAGE, 2023-04-08)
    The Test of Understanding in College Economics (TUCE) is a standardized test of economics knowledge administered in the United States that primarily targets principles-level understanding. We asked ChatGPT to complete the TUCE. ChatGPT ranked in the 91st percentile for Microeconomics and the 99th percentile for Macroeconomics when compared to students who take the TUCE at the end of their principles course. These results show that ChatGPT can provide answers that exceed the mean responses of students across all institutions. The emergence of artificial intelligence presents a significant challenge to traditional assessment methods in higher education. An important implication of this finding is that educators will likely need to redesign their curricula in at least one of the following three ways: reintroduce proctored, in-person assessments; augment learning with chatbots; and/or increase the prevalence of experiential learning projects that artificial intelligence struggles to replicate well.
  • Assessing proxies of knowledge and difficulty with rubric‐based instruments
    Smith, Ben O.; Wooten, Jadrian (Wiley, 2023-09-28)
    The fields of psychometrics, economic education, and education have developed statistically valid methods of assessing knowledge and learning, including item response theory, value-added learning models, and disaggregated learning. These methods, however, focus on multiple-choice or single-response assessments. Faculty and administrators routinely assess knowledge through papers, thesis presentations, and other demonstrations of knowledge graded with rubric rows. This paper presents a statistical approach to estimating proxies for student ability and rubric row difficulty. Moreover, we have developed software so that practitioners can more easily apply this method to their own instruments. The approach can be used by researchers studying education treatment effects, by practitioners measuring learning outcomes in their own classrooms, and for estimating knowledge in administrative assessment. As an example, we apply these new methods to projects in a large Labor Economics course at a public university.
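    One simple way to picture such a proxy: treat each rubric score as a noisy signal of the gap between student ability and rubric row difficulty, and recover both by gradient descent. The sketch below is a toy model under that assumption; the paper's estimator and its companion software differ in the details:

        import numpy as np

        def sigmoid(z):
            return 1 / (1 + np.exp(-z))

        rng = np.random.default_rng(0)
        true_a = rng.normal(0, 1, 40)             # 40 students
        true_d = rng.normal(0, 1, 6)              # 6 rubric rows
        scores = sigmoid(true_a[:, None] - true_d[None, :])
        scores = (scores + rng.normal(0, 0.05, scores.shape)).clip(0, 1)

        a, d = np.zeros(40), np.zeros(6)
        for _ in range(2000):                     # gradient descent on squared error
            pred = sigmoid(a[:, None] - d[None, :])
            g = (pred - scores) * pred * (1 - pred)
            a -= 0.5 * g.mean(axis=1)
            d += 0.5 * g.mean(axis=0)
            a -= a.mean()                         # pin location: only gaps matter
        print("correlation with true ability:   ", np.corrcoef(a, true_a)[0, 1].round(2))
        print("correlation with true difficulty:", np.corrcoef(d, true_d)[0, 1].round(2))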
  • Gender, risk aversion, and the "COVID" grading option in a principles of economics course
    Trost, Steve; Wooten, Jadrian (Routledge, 2023-05-12)
    As the COVID-19 pandemic swept across the United States, colleges and universities faced the challenge of completing the academic term. Many institutions offered students the option of a "credit/no credit" grading system, which would not affect their GPA. In this study, we examine which student characteristics are correlated with the decision to choose this grading option over a traditional letter grade. Our findings show that female students, particularly those with lower course grades, were more likely than male students to opt for the "credit/no credit" option. This aligns with previous research indicating that female students tend to be more risk averse, particularly in economics courses.
  • Philosophy of Econometrics
    Spanos, Aris (Routledge, 2021-10-12)
    The preceding quotation from Einstein’s reply to Robert Thornton, a young philosopher of science who began teaching physics at the university level in 1944, encapsulates succinctly the importance of examining the methodology, history, and philosophical foundations of different scientific fields to avoid missing the forest for the trees. The field of interest in the discussion that follows is modern econometrics, whose roots can be traced back to the early 20th century. The problem of induction, in the sense of justifying an inference from particular instances to realizations yet to be observed, has been bedeviling the philosophy of science since Hume’s discourse on the problem. Modern statistical inference, as a form of induction, is based on data that exhibit inherent chance regularity patterns. Model-based statistical induction differs from other forms of induction, such as induction by enumeration, in three crucial respects.
  • Fortune Tellers: The Story of America's First Economic Forecasters [Book review]
    Spanos, Aris (Cambridge University Press, 2015-11-12)
  • Frequentist Probability
    Spanos, Aris (2017)
    The primary objective of this article is to discuss a model-based frequentist interpretation that identifies the probability of an event with the limit of its relative frequency of occurrence. What differentiates the proposed interpretation from traditional ones are several key features: (i) events and probabilities are defined in the context of a statistical model, (ii) it is anchored on the strong law of large numbers, (iii) it is justified on empirical grounds by validating the model assumptions vis-à-vis the data, (iv) the “long-run” metaphor can be rendered operational by simple simulation based on the model, and (v) the link between probability and real-world phenomena is provided by viewing the data as a “truly typical” realization of the stochastic mechanism defined by the model. This link constitutes a feature shared with the Kolmogorov complexity algorithmic perspective on probability, which provides further justification for the proposed frequentist interpretation.
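    The simulation mentioned in (iv) is easy to make concrete: generate data from the assumed model and watch the relative frequency of the event settle on the model probability. A minimal sketch for a Bernoulli model follows (the probability value is an arbitrary assumption):

        import numpy as np

        rng = np.random.default_rng(0)
        p = 0.3                                    # model probability of the event
        x = rng.uniform(size=100_000) < p          # simulated realizations
        freq = np.cumsum(x) / np.arange(1, x.size + 1)
        for n in (10, 100, 10_000, 100_000):       # relative frequency -> p (SLLN)
            print(f"n={n:>7}: relative frequency = {freq[n - 1]:.4f}")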
  • Bernoulli Regression Models: Revisiting the Specification of Statistical Models with Binary Dependent Variables
    Bergtold, Jason S.; Spanos, Aris; Onukwugha, Eberechukwu (Elsevier, 2010)
    The latent variable and generalized linear modelling approaches do not provide a systematic framework for modelling discrete choice observational data. Another alternative, the probabilistic reduction (PR) approach, provides a systematic way to specify such models that can yield reliable statistical and substantive inferences. The purpose of this paper is to re-examine the underlying probabilistic foundations of conditional statistical models with binary dependent variables using the PR approach. This leads to the development of the Bernoulli Regression Model, a family of statistical models that includes the binary logistic regression model. The paper provides an explicit presentation of the probabilistic model assumptions, guidance on model specification and estimation, and an empirical application.
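    For concreteness, the binary logistic regression model, one member of this family, specifies P(y=1|x) as the logistic function of a linear index and is straightforward to estimate by maximum likelihood. A minimal sketch on simulated data (coefficients and sample size are arbitrary assumptions):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 1000
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))    # true conditional Bernoulli probability
        y = (rng.uniform(size=n) < p).astype(int)

        res = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        print(res.params)                          # recovers roughly (0.5, 1.2)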
  • The Pre-Eminence of Theory Versus the European CVAR Perspective in Macroeconometric Modeling
    Spanos, Aris (Elsevier, 2008-01-01)
    The primary aim of the paper is to place current methodological discussions on empirical modeling, contrasting the 'theory first' versus the 'data first' perspectives, in the context of a broader methodological framework with a view to appraising them constructively. In particular, the paper focuses on Colander's argument in his paper "Economists, Incentives, Judgement and Empirical Work" relating to the two different perspectives in Europe and the US that currently dominate empirical macro-econometric modeling, and delves deeper into their methodological/philosophical foundations. It is argued that the key to establishing a constructive dialogue between them is a better understanding of the role of data in modern statistical inference, and how that relates to the centuries-old issue of the realisticness of economic theories.
  • Statistics and Economics
    Spanos, Aris (Palgrave Macmillan, 2008)
  • Testing for Structural Breaks and other forms of Non-stationarity: a Misspecification Perspective
    Heracleous, Maria S.; Koutris, Andreas; Spanos, Aris (2008)
    In the 1980s and 1990s, the issue of non-stationarity in economic time series was discussed in the context of unit roots versus mean trends in AR(p) models. More recently this perspective has been extended to include structural breaks. In this paper we take a much broader perspective by viewing the problem of changing parameters as one of misspecification testing due to the non-stationarity of the underlying process. The proposed misspecification testing procedure relies on resampling techniques to enhance the informational content of the observed data, in an attempt to capture heterogeneity ‘locally’ using rolling-window estimators of the primary moments of the stochastic process. The effectiveness of the testing procedure is assessed using extensive Monte Carlo simulations.
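    The idea of capturing heterogeneity 'locally' can be illustrated with rolling-window estimates of the first two moments: under parameter constancy the rolling estimates show no systematic drift, while a break or trend shows up as one. A minimal sketch on a simulated series with a mean break (the window length and break point are arbitrary assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0.0, 1, 300),   # mean 0 before the break
                            rng.normal(1.5, 1, 300)])  # mean 1.5 after it

        w = 60                                          # rolling window length
        starts = np.arange(x.size - w + 1)
        roll_mean = np.array([x[i:i + w].mean() for i in starts])
        roll_var = np.array([x[i:i + w].var(ddof=1) for i in starts])

        # Under constancy the rolling mean has no trend in the window index;
        # a simple check regresses it on that index.
        slope = np.polyfit(starts, roll_mean, 1)[0]
        print(f"trend in rolling mean: {slope:.5f} per window step")
        print(f"rolling mean, early vs late: {roll_mean[0]:.2f} vs {roll_mean[-1]:.2f}")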
  • Propagation of shocks in an input-output economy: Evidence from disaggregated prices
    Luo, Shaowen; Villar, Daniel (Elsevier, 2023-07-01)
    Using disaggregated industry-level data, this paper empirically evaluates predictions for the cross-sectional price change distribution made by input-output models with sticky prices. The response of prices to shocks is found to be consistent with the price sensitivities predicted by the input-output model. Moreover, moments of the sectoral price change distribution vary over time in response to the evolution of the network structure. Finally, through a quantitative analysis, demand and supply shocks are disentangled during the pandemic period. Counterfactual analyses show that sectoral supply shocks, aggregate demand shocks and the production network structure contributed significantly to the inflation surge in 2021–2022.
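    Abstracting from price stickiness, the mechanics of propagation can be seen in the flexible-price benchmark: with cost shares A (entry a_ij = share of input j in sector i's costs), log-price responses to sectoral cost shocks dc solve dp = A dp + dc, i.e., dp = (I - A)^(-1) dc. A minimal sketch with an assumed three-sector share matrix:

        import numpy as np

        A = np.array([[0.0, 0.3, 0.1],     # hypothetical input cost shares
                      [0.2, 0.0, 0.4],
                      [0.1, 0.2, 0.0]])
        dc = np.array([0.05, 0.0, 0.0])    # 5% cost shock to sector 1 only
        dp = np.linalg.solve(np.eye(3) - A, dc)   # dp = (I - A)^(-1) dc
        print(dp)    # the shock passes through the network to all three sectors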
  • The price adjustment hazard function: Evidence from high inflation periods
    Luo, Shaowen; Villar, Daniel (Elsevier, 2021-09-01)
    The price adjustment hazard function - the probability of a good's price changing as a function of its price misalignment - enables the examination of the relationship between price stickiness and monetary non-neutrality without specifying a micro-founded model, as discussed by Caballero and Engel (1993a, 2007). Using the micro data underlying the U.S. Consumer Price Index going back to the 1970s, we estimate the hazard function relying on empirical patterns from high and low inflation periods. We find that the relation between inflation and higher moments of the price change distribution is particularly informative for the shape of the hazard function. Our estimated hazard function is relatively flat with positive values at zero. It implies weak price selection and a high degree of monetary non-neutrality: about 60% of the degree implied by the Calvo model, and much higher than what menu cost models imply. In addition, our estimated function is asymmetric: price increases are considerably more likely to occur than price decreases of the same magnitude.
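    To fix ideas, a hazard function maps the price gap x (the log deviation of a price from its target) into a probability of adjustment; a relatively flat hazard with a positive value at zero, as estimated here, means even well-aligned prices sometimes change and misaligned ones are not much more likely to. The sketch below uses an assumed quadratic form and assumed numbers purely for illustration, not the paper's estimates:

        import numpy as np

        def hazard(x, floor=0.08, slope=1.5):
            """Relatively flat hazard with a positive value at zero misalignment."""
            return np.clip(floor + slope * np.square(x), 0.0, 1.0)

        rng = np.random.default_rng(0)
        gaps = rng.normal(0, 0.2, 100_000)          # cross-section of price gaps
        adjust = rng.uniform(size=gaps.size) < hazard(gaps)
        print("hazard at zero misalignment:", hazard(0.0))
        print("aggregate frequency of price change:", round(adjust.mean(), 3))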