 

Scholarly Works, Economics


Research articles, presentations, and other scholarship


Recent Submissions

Now showing 1–20 of 89
  • A model of the formation of multilayer networks
    Billand, Pascal; Bravard, Christophe; Joshi, Sumit; Mahmud, Ahmed Saber; Sarangi, Sudipta (Elsevier, 2023-10)
    We study the formation of multilayer networks where payoffs are determined by the degrees of players in each network. We begin by imposing either concavity or convexity in degree on the payoff function of the players. We then explore distinct network relationships that result from inter- and intra-network spillovers captured by the properties of supermodularity/submodularity and strategic complementarity respectively. We show the existence of equilibria and characterize them. Additionally, we establish both necessary and sufficient conditions for an equilibrium to occur. We also highlight the connection, in equilibrium, between inter-network externalities and the identity of linked players in one network given the identity of linked players in the other network. Furthermore, we analyze efficient multilayer networks. Finally, we extend our models to contexts with more than two layers, and scenarios where agents receive a bonus for being connected to the same individuals in both networks.
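A toy numerical illustration of the degree-based payoff structure the abstract describes, with a concave within-layer component and a cross-layer spillover term exhibiting strategic complementarity; the functional forms and networks below are assumptions for illustration, not the authors' specification.

```python
import numpy as np

# Two layers on four players, given as adjacency matrices (hypothetical networks).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]])
B = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]])

def payoff(deg_a, deg_b, gamma=0.1):
    # sqrt(.) is concave in own degree within each layer; the gamma term is a
    # hypothetical supermodular cross-layer spillover (strategic complementarity).
    return np.sqrt(deg_a) + np.sqrt(deg_b) + gamma * deg_a * deg_b

deg_a, deg_b = A.sum(axis=1), B.sum(axis=1)
for i, u in enumerate(payoff(deg_a, deg_b)):
    print(f"player {i}: deg_A={deg_a[i]}, deg_B={deg_b[i]}, payoff={u:.3f}")
```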
  • ChatGPT has Aced the Test of Understanding in College Economics: Now What?
    Geerling, Wayne; Mateer, G. Dirk; Damodaran, Nikhil; Wooten, Jadrian (SAGE, 2023-04-08)
    The Test of Understanding in College Economics (TUCE) is a standardized test of economics knowledge administered in the United States that primarily targets principles-level understanding. We asked ChatGPT to complete the TUCE. ChatGPT ranked in the 91st percentile for Microeconomics and the 99th percentile for Macroeconomics when compared to students who take the TUCE exam at the end of their principles course. The results show that ChatGPT is capable of providing answers that exceed the mean responses of students across all institutions. The emergence of artificial intelligence presents a significant challenge to traditional assessment methods in higher education. An important implication of this finding is that educators will likely need to redesign their curriculum in at least one of the following three ways: reintroduce proctored, in-person assessments; augment learning with chatbots; and/or increase the prevalence of experiential learning projects that artificial intelligence struggles to replicate well.
  • Assessing proxies of knowledge and difficulty with rubric-based instruments
    Smith, Ben O.; Wooten, Jadrian (Wiley, 2023-09-28)
    The fields of psychometrics, economic education, and education have developed statistically valid methods of assessing knowledge and learning. These methods include item response theory, value-added learning models, and disaggregated learning. These methods, however, focus on multiple-choice or single-response assessments. Faculty and administrators routinely assess knowledge through papers, thesis presentations, or other demonstrations of knowledge assessed with rubric rows. This paper presents a statistical approach to estimating a proxy for student ability and rubric row difficulty. Moreover, we have developed software so that practitioners can more easily apply this method to their instruments. This approach can be used to research education treatment effects, to measure learning outcomes in the classroom, or to estimate knowledge for administrative assessment. As an example, we have applied these new methods to projects in a large Labor Economics course at a public university.
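A minimal sketch of the kind of estimation the abstract describes, assuming a Rasch-style formulation in which each rubric row is a binary criterion with its own difficulty; the simulated data, learning rate, and gradient-ascent fit are illustrative stand-ins, not the authors' method or software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rubric data: rows = students, cols = rubric rows, 1 = criterion met.
true_theta = rng.normal(size=30)          # latent student ability
true_b = np.array([-1.0, 0.0, 0.5, 1.5])  # rubric-row difficulty
p = 1 / (1 + np.exp(-(true_theta[:, None] - true_b[None, :])))
Y = rng.binomial(1, p)

# Joint maximum-likelihood fit of the Rasch-style model by gradient ascent.
theta = np.zeros(30)
b = np.zeros(4)
for _ in range(2000):
    pr = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    resid = Y - pr
    theta += 0.05 * resid.sum(axis=1)     # ability gradient
    b -= 0.05 * resid.sum(axis=0)         # difficulty gradient
    theta -= theta.mean()                 # pin down location (identification)

print("estimated rubric-row difficulties:", np.round(b, 2))
```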
  • Gender, risk aversion, and the "COVID" grading option in a principles of economics course
    Trost, Steve; Wooten, Jadrian (Routledge, 2023-05-12)
    As the COVID-19 pandemic swept across the United States, colleges and universities faced the challenge of completing the academic term. Many institutions offered students the option of a "credit/no credit" grading system, which would not affect their GPA. In this study, we examine which student characteristics are correlated with the decision to choose this grading option over a traditional letter grade. Our findings show that female students, particularly those with lower course grades, were more likely to opt for the "credit/no credit" option than male students. This aligns with previous research indicating that female students tend to be more risk-averse, particularly in economics courses.
  • Philosophy of Econometrics
    Spanos, Aris (Routledge, 2021-10-12)
    A quotation from Einstein's reply to Robert Thornton, a young philosopher of science who began teaching physics at the university level in 1944, succinctly encapsulates the importance of examining the methodology, history, and philosophical foundations of different scientific fields to avoid missing the forest for the trees. The field of interest in the discussion that follows is modern econometrics, whose roots can be traced back to the early 20th century. The problem of induction, in the sense of justifying an inference from particular instances to realizations yet to be observed, has bedeviled the philosophy of science since Hume's discourse on the problem. Modern statistical inference, as a form of induction, is based on data that exhibit inherent chance regularity patterns. Model-based statistical induction differs from other forms of induction, such as induction by enumeration, in three crucial respects.
  • Fortune Tellers: The Story of America's First Economic Forecasters [Book review]
    Spanos, Aris (Cambridge University Press, 2015-11-12)
  • Frequentist Probability
    Spanos, Aris (2017)
    The primary objective of this article is to discuss a model-based frequentist interpretation that identifies the probability of an event with the limit of its relative frequency of occurrence. What differentiates the proposed interpretation from the traditional ones are several key features: (i) events and probabilities are defined in the context of a statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, (ii) it is anchored on the strong law of large numbers, (iii) it is justified on empirical grounds by validating the model assumptions vis-à-vis data $\mathbf{x}_0$, (iv) the “long-run” metaphor can be rendered operational by simple simulation based on $\mathcal{M}_{\theta}(\mathbf{x})$, and (v) the link between probability and real-world phenomena is provided by viewing data as a “truly typical” realization of the stochastic mechanism defined by $\mathcal{M}_{\theta}(\mathbf{x})$. This link constitutes a feature shared with the Kolmogorov complexity algorithmic perspective on probability, which provides a further justification for the proposed frequentist interpretation.
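A minimal simulation of point (iv), the "long-run" metaphor: under a simple Bernoulli model, the relative frequency of an event converges to its model-based probability, as the strong law of large numbers guarantees. The model and numbers below are illustrative choices, not taken from the article.

```python
import numpy as np

# Simple Bernoulli statistical model with P(event) = 0.3; the relative
# frequency of the event converges (a.s., by the SLLN) to that probability.
rng = np.random.default_rng(42)
p_true = 0.3
draws = rng.binomial(1, p_true, size=1_000_000)
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9}: relative frequency = {draws[:n].mean():.4f}")
```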
  • Bernoulli Regression Models: Revisiting the Specification of Statistical Models with Binary Dependent Variables
    Bergtold, Jason S.; Spanos, Aris; Onukwugha, Eberechukwu (Elsevier, 2010)
    The latent variable and generalized linear modelling approaches do not provide a systematic way of modelling discrete choice observational data. Another alternative, the probabilistic reduction (PR) approach, provides a systematic way to specify such models that can yield reliable statistical and substantive inferences. The purpose of this paper is to re-examine the underlying probabilistic foundations of conditional statistical models with binary dependent variables using the PR approach. This leads to the development of the Bernoulli Regression Model, a family of statistical models, which includes the binary logistic regression model. The paper provides an explicit presentation of probabilistic model assumptions, guidance on model specification and estimation, and an empirical application.
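A minimal sketch of fitting the best-known member of this family, the binary logistic regression model, by Newton-Raphson on simulated data; the data-generating process is an illustration, not the paper's empirical application.

```python
import numpy as np

# Simulate binary outcomes from a logistic model (hypothetical parameters).
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.2])
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

# Newton-Raphson maximum likelihood for the logistic regression coefficients.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)                      # weights from the Bernoulli variance
    grad = X.T @ (y - p)                 # score vector
    hess = X.T @ (X * W[:, None])        # observed information
    beta += np.linalg.solve(hess, grad)

print("estimated coefficients:", np.round(beta, 3))
```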
  • The Pre-Eminence of Theory Versus the European CVAR Perspective in Macroeconometric Modeling
    Spanos, Aris (Elsevier, 2008-01-01)
    The primary aim of the paper is to place current methodological discussions on empirical modeling, contrasting the 'theory first' versus the 'data first' perspectives, in the context of a broader methodological framework with a view to appraising them constructively. In particular, the paper focuses on Colander's argument in his paper "Economists, Incentives, Judgement and Empirical Work" relating to the two different perspectives in Europe and the US that are currently dominating empirical macro-econometric modeling, and delves deeper into their methodological/philosophical foundations. It is argued that the key to establishing a constructive dialogue between them is provided by a better understanding of the role of data in modern statistical inference, and how that relates to the centuries-old issue of the realisticness of economic theories.
  • Statistics and Economics
    Spanos, Aris (Palgrave Macmillan, 2008)
  • Testing for Structural Breaks and other forms of Non-stationarity: a Misspecification Perspective
    Heracleous, Maria S.; Koutris, Andreas; Spanos, Aris (2008)
    In the 1980s and 1990s, the issue of non-stationarity in economic time series was discussed in the context of unit roots vs. mean trends in AR(p) models. More recently this perspective has been extended to include structural breaks. In this paper we take a much broader perspective by viewing the problem of changing parameters as one of misspecification testing due to the non-stationarity of the underlying process. The proposed misspecification testing procedure relies on resampling techniques to enhance the informational content of the observed data in an attempt to capture heterogeneity ‘locally’ using rolling window estimators of the primary moments of the stochastic process. The effectiveness of the testing procedure is assessed using extensive Monte Carlo simulations.
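An illustration of the 'local' rolling-window idea on simulated data with a mean shift; the window width, series, and diagnostics below are illustrative, and the paper's actual procedure adds resampling techniques that are omitted here.

```python
import numpy as np

# Simulated series with a mean shift at t = 500 (a simple structural break).
rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(1.5, 1.0, 500)])

w = 100  # rolling window width
roll_mean = np.array([x[t:t + w].mean() for t in range(len(x) - w)])
roll_var = np.array([x[t:t + w].var(ddof=1) for t in range(len(x) - w)])

# A spread in the rolling means that is large relative to the sampling error
# sqrt(var / w) of a window mean is symptomatic of non-stationarity.
print("range of rolling means:", roll_mean.min().round(2), "to", roll_mean.max().round(2))
print("typical sampling std of a window mean:", np.sqrt(roll_var.mean() / w).round(3))
```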
  • Propagation of shocks in an input-output economy: Evidence from disaggregated prices
    Luo, Shaowen; Villar, David (Elsevier, 2023-07-01)
    Using disaggregated industry-level data, this paper empirically evaluates predictions for the cross-sectional price change distribution made by input-output models with sticky prices. The response of prices to shocks is found to be consistent with the price sensitivities predicted by the input-output model. Moreover, moments of the sectoral price change distribution vary over time in response to the evolution of the network structure. Finally, through a quantitative analysis, demand and supply shocks are disentangled during the pandemic period. Counterfactual analyses show that sectoral supply shocks, aggregate demand shocks and the production network structure contributed significantly to the inflation surge in 2021–2022.
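A stylized version of the propagation mechanism in an input-output economy: with a Leontief cost structure and flexible prices, sectoral price responses to marginal cost shocks solve dp = (I - A')^{-1} dc. The three-sector matrix and shock below are hypothetical numbers, not the paper's calibration, and sticky-price dynamics are abstracted away.

```python
import numpy as np

# Input-output matrix: A[i, j] is the share of input i in sector j's costs.
A = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
shock = np.array([0.05, 0.0, 0.0])   # 5% marginal cost shock to sector 1 only

# Full pass-through: prices equal marginal costs, p = A'p + c, so dp solves
# (I - A') dp = dc.
dp = np.linalg.solve(np.eye(3) - A.T, shock)
print("sectoral price responses:", np.round(dp, 4))
```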
  • The price adjustment hazard function: Evidence from high inflation periods
    Luo, Shaowen; Villar, Daniel (Elsevier, 2021-09-01)
    The price adjustment hazard function, the probability of a good's price changing as a function of its price misalignment, enables the examination of the relationship between price stickiness and monetary non-neutrality without specifying a micro-founded model, as discussed by Caballero and Engel (1993a, 2007). Using the micro data underlying the U.S. Consumer Price Index going back to the 1970s, we estimate the hazard function relying on empirical patterns from high and low inflation periods. We find that the relation between inflation and higher moments of the price change distribution is particularly informative for the shape of the hazard function. Our estimated hazard function is relatively flat, with positive values at zero. It implies weak price selection and a high degree of monetary non-neutrality: about 60% of the degree implied by the Calvo model, and much higher than what menu cost models imply. In addition, our estimated function is asymmetric: price increases are considerably more likely to occur than price decreases of the same magnitude.
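An illustrative nonparametric estimate of a hazard function: bin price misalignments and compute the adjustment frequency within each bin. The asymmetric, flat-near-zero hazard used to generate the data qualitatively mimics the shape the abstract reports, but the data are simulated and all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
gap = rng.normal(0, 0.1, 50_000)   # price misalignment (price minus desired price)

# Hypothetical true hazard: positive at zero, flat-ish, and asymmetric so that
# prices below target (negative gap) adjust more readily, i.e. increases are
# more likely than same-sized decreases.
hazard = np.clip(0.1 + 1.5 * np.maximum(-gap, 0) + 0.8 * np.maximum(gap, 0), 0, 1)
adjust = rng.binomial(1, hazard)

# Estimate the hazard as the adjustment frequency within misalignment bins.
bins = np.linspace(-0.3, 0.3, 13)
idx = np.digitize(gap, bins)
for k in range(1, len(bins)):
    sel = idx == k
    if sel.sum() > 100:
        print(f"gap in [{bins[k-1]:+.2f}, {bins[k]:+.2f}): "
              f"adjustment frequency = {adjust[sel].mean():.3f}")
```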
  • The Skewness of the Price Change Distribution: A New Touchstone for Sticky Price Models
    Luo, Shaowen; Villar, Daniel (Wiley, 2021-02-01)
    We present a new way of empirically evaluating various sticky price models that are used to assess the degree of monetary nonneutrality. While menu cost models uniformly predict that price change skewness and dispersion fall with inflation, in the Calvo model, both rise. However, the U.S. Consumer Price Index (CPI) data from the late 1970s onward show that skewness does not fall with inflation, while dispersion does. We present a random menu cost model that, with a menu cost distribution that has a strong Calvo flavor, can match the empirical patterns. The model exhibits much more monetary nonneutrality than existing menu cost models.
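A toy simulation of the selection-effect contrast the abstract exploits: under Calvo, a random subset of desired price changes is realized, so the skewness of observed changes tracks the desired-change distribution; under a menu cost, only the largest misalignments adjust. Parameters are illustrative, not a calibrated model.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5)
# Desired price changes with a positive drift standing in for trend inflation.
desired = rng.normal(0.02, 0.05, 100_000)

# Calvo: a random 10% of firms adjust, regardless of their misalignment.
calvo_changes = desired[rng.random(desired.size) < 0.1]
# Menu cost: the 10% of firms with the largest absolute misalignments adjust.
menu_changes = desired[np.abs(desired) > np.quantile(np.abs(desired), 0.9)]

print("Calvo: skewness =", round(skew(calvo_changes), 2),
      " dispersion =", round(calvo_changes.std(), 3))
print("Menu cost: skewness =", round(skew(menu_changes), 2),
      " dispersion =", round(menu_changes.std(), 3))
```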
  • Severity and Trustworthy Evidence: Foundational Problems versus Misuses of Frequentist Testing
    Spanos, Aris (Cambridge University Press, 2022-02-10)
    For model-based frequentist statistics, based on a parametric statistical model $\mathcal{M}_{\theta}(\mathbf{x})$, the trustworthiness of the ensuing evidence depends crucially on (i) the validity of the probabilistic assumptions comprising $\mathcal{M}_{\theta}(\mathbf{x})$, (ii) the optimality of the inference procedures employed, and (iii) the adequacy of the sample size (n) to learn from data by securing (i)–(ii). It is argued that the criticism of the post-data severity evaluation of testing results based on a small n by Rochefort-Maranda (2020) is meritless because it conflates [a] misuses of testing with [b] genuine foundational problems. Interrogating this criticism reveals several misconceptions about trustworthy evidence and estimation-based effect sizes, which are uncritically embraced by the replication crisis literature.
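A minimal sketch of the post-data severity evaluation for the textbook one-sided Normal mean test; the setup and numbers are chosen only to show the computation and are not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Test H0: mu <= mu0 vs H1: mu > mu0 with X ~ N(mu, sigma^2), sigma known.
mu0, sigma, n = 0.0, 1.0, 25
xbar = 0.4                                   # observed sample mean (hypothetical)
d_obs = np.sqrt(n) * (xbar - mu0) / sigma    # observed test statistic

# Severity of the inferential claim "mu > mu1": the probability that the test
# would have produced a result less discordant with H0 were mu = mu1 true,
# i.e. P(d(X) <= d_obs; mu = mu1).
for mu1 in (0.0, 0.2, 0.4, 0.6):
    sev = norm.cdf(d_obs - np.sqrt(n) * (mu1 - mu0) / sigma)
    print(f"SEV(mu > {mu1}) = {sev:.3f}")
```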
  • Why the Medical Diagnostic Screening Perspective Misrepresents Frequentist Testing and Misdiagnoses the Replication Crisis
    The replication crisis and the untrustworthiness of empirical evidence are often viewed through the lens of the Medical Diagnostic Screening (MDS) perspective, conceived as a surrogate for Neyman-Pearson (N-P) testing. To shed light on this crisis, the MDS Positive Predictive Value (PPV) is metamorphosed into the M-PPV by identifying the false positive/negative probabilities with the type I/II error probabilities. The diagnosis based on the M-PPV is that the untrustworthiness of empirical evidence stems from several misuses of N-P testing, including p-hacking, data-dredging, and cherry-picking. The appropriateness of the MDS perspective, as well as the ensuing diagnosis based on the M-PPV, are called into question since they invoke dubious analogies with N-P testing. It is argued that a more pertinent explanation is that the untrustworthiness of evidence stems from a much broader problem relating to the uninformed and recipe-like implementation of frequentist statistics without a proper understanding of the invoked assumptions, limitations, and warranted evidential interpretations of the frequentist inference results. This broader perspective, in conjunction with the post-data severity evidential interpretation of testing results, could potentially address the problem of untrustworthy empirical evidence.
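The PPV-style arithmetic the abstract refers to, with the false positive/negative rates identified with the type I/II error probabilities alpha and beta; the values of alpha, beta, and the prior proportion pi of true effects below are illustrative, not the paper's.

```python
# M-PPV: the MDS Positive Predictive Value with the screening error rates
# replaced by the N-P type I/II error probabilities (illustrative numbers).
def m_ppv(alpha, beta, pi):
    power = 1 - beta
    return power * pi / (power * pi + alpha * (1 - pi))

for pi in (0.1, 0.5):
    print(f"pi = {pi}: M-PPV = {m_ppv(alpha=0.05, beta=0.2, pi=pi):.3f}")
```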
  • What Foundations for Statistical Modeling and Inference?
    Spanos, Aris (Association Oeconomia, 2019-12-01)
  • A New Perspective on Impartial and Unbiased Apportionment
    Hyman, Ross; Tideman, Nicolaus (Taylor & Francis, 2023-08-17)
    How to fairly apportion congressional seats to states has been debated for centuries. We present an alternative perspective on apportionment, centered not on states but on “families” of states: sets of states whose “divisor-method” quotas have the same integer part. We develop “impartial” and “unbiased” apportionment methods. Impartial methods apportion the same number of seats to families of states containing the same total population, whether a family consists of many small-population states or a few large-population states. Unbiased methods apportion seats so that if states are drawn repeatedly from the same distribution, the expected number of seats apportioned to each family equals the expected divisor-method quota for that family.
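A minimal sketch of a divisor method and the "family" grouping the abstract builds on, using Webster (Sainte-Laguë) rounding and hypothetical populations; the authors' impartial and unbiased methods themselves are not reproduced here.

```python
import numpy as np

def webster(pops, seats):
    # Bisect on the divisor until rounded quotas sum to the house size.
    pops = np.asarray(pops, dtype=float)
    lo, hi = 1e-9, float(pops.sum())
    for _ in range(200):
        d = (lo + hi) / 2
        alloc = np.round(pops / d).astype(int)
        if alloc.sum() > seats:
            lo = d          # divisor too small: too many seats handed out
        elif alloc.sum() < seats:
            hi = d          # divisor too large: too few seats handed out
        else:
            return alloc, d
    return alloc, d

pops = [9_500_000, 4_200_000, 1_100_000, 600_000]   # hypothetical state populations
alloc, d = webster(pops, seats=30)
quotas = np.array(pops) / d
print("divisor-method quotas:", np.round(quotas, 2), "-> seats:", alloc)
# States whose quotas share the same integer part belong to the same "family".
print("family (integer part of quota):", quotas.astype(int))
```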