Scholarly Works, Economics

Research articles, presentations, and other scholarship

Recent Submissions

  • A model of the formation of multilayer networks
    Billand, Pascal; Bravard, Christophe; Joshi, Sumit; Mahmud, Ahmed Saber; Sarangi, Sudipta (Elsevier, 2023-10)
    We study the formation of multilayer networks where payoffs are determined by the degrees of players in each network. We begin by imposing either concavity or convexity in degree on the payoff function of the players. We then explore distinct network relationships that result from inter- and intra-network spillovers captured by the properties of supermodularity/submodularity and strategic complementarity respectively. We show the existence of equilibria and characterize them. Additionally, we establish both necessary and sufficient conditions for an equilibrium to occur. We also highlight the connection, in equilibrium, between inter-network externalities and the identity of linked players in one network given the identity of linked players in the other network. Furthermore, we analyze efficient multilayer networks. Finally, we extend our models to contexts with more than two layers, and scenarios where agents receive a bonus for being connected to the same individuals in both networks.
  • ChatGPT has Aced the Test of Understanding in College Economics: Now What?
    Geerling, Wayne; Mateer, G. Dirk; Damodaran, Nikhil; Wooten, Jadrian (SAGE, 2023-04-08)
    The Test of Understanding in College Economics (TUCE) is a standardized test of economics knowledge administered in the United States that primarily targets principles-level understanding. We asked ChatGPT to complete the TUCE. ChatGPT ranked in the 91st percentile for Microeconomics and the 99th percentile for Macroeconomics when compared with students who take the TUCE exam at the end of their principles course. The results show that ChatGPT is capable of providing answers that exceed the mean responses of students across all institutions. The emergence of artificial intelligence presents a significant challenge to traditional assessment methods in higher education. An important implication of this finding is that educators will likely need to redesign their curricula in at least one of the following three ways: reintroduce proctored, in-person assessments; augment learning with chatbots; and/or increase the prevalence of experiential learning projects that artificial intelligence struggles to replicate well.
  • Philosophy of Econometrics
    Spanos, Aris (Routledge, 2021-10-12)
    The preceding quotation from Einstein’s reply to Robert Thornton, a young philosopher of science who began teaching physics at the university level in 1944, encapsulates succinctly the importance of examining the methodology, history, and philosophical foundations of different scientific fields to avoid missing the forest for the trees. The field of interest in the discussion that follows is modern econometrics, whose roots can be traced back to the early 20th century. The problem of induction, in the sense of justifying an inference from particular instances to realizations yet to be observed, has been bedeviling the philosophy of science since Hume’s discourse on the problem. Modern statistical inference, as a form of induction, is based on data that exhibit inherent chance regularity patterns. Model-based statistical induction differs from other forms of induction, such as induction by enumeration, in three crucial respects.
  • Fortune Tellers: The Story of America's First Economic Forecasters [Book review]
    Spanos, Aris (Cambridge University Press, 2015-11-12)
  • Frequentist Probability
    Spanos, Aris (2017)
    The primary objective of this article is to discuss a model-based frequentist interpretation that identifies the probability of an event with the limit of its relative frequency of occurrence. What differentiates the proposed interpretation from the traditional ones are several key features: (i) events and probabilities are defined in the context of a statistical model Mθ(x), (ii) it is anchored on the strong law of large numbers, (iii) it is justified on empirical grounds by validating the model assumptions vis-à-vis data, (iv) the “long-run” metaphor can be rendered operational by simple simulation based on Mθ(x), and (v) the link between probability and real-world phenomena is provided by viewing data as a “truly typical” realization of the stochastic mechanism defined by Mθ(x). This link constitutes a feature shared with the Kolmogorov complexity algorithmic perspective on probability, which provides a further justification for the proposed frequentist interpretation.
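
The “long-run” metaphor in the preceding abstract can be made concrete with a short simulation. The sketch below (Python; the Bernoulli model, p = 0.3, and the sample sizes are illustrative assumptions, not taken from the article) shows the relative frequency of an event settling toward its model probability, as the strong law of large numbers implies for a validated model.

```python
# Hypothetical illustration: relative frequency of an event converging to its
# model-based probability p under an assumed Bernoulli model (SLLN in action).
import random

random.seed(1)
p = 0.3                                   # event probability under the assumed model
hits, n = 0, 0
for n_target in (10, 100, 1_000, 10_000, 100_000):
    while n < n_target:
        hits += random.random() < p       # one Bernoulli(p) trial
        n += 1
    print(f"n = {n:>7,d}   relative frequency = {hits / n:.4f}   (p = {p})")
```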
  • The Pre-Eminence of Theory Versus the European CVAR Perspective in Macroeconometric Modeling
    Spanos, Aris (Elsevier, 2008-01-01)
    The primary aim of the paper is to place current methodological discussions on empirical modeling contrasting the 'theory first' versus the 'data first' perspectives in the context of a broader methodological framework with a view to constructively appraise them. In particular, the paper focuses on Colander's argument in his paper 'Economists, Incentives, Judgement and Empirical Work' relating to the two different perspectives in Europe and the US that are currently dominating empirical macro-econometric modeling and delves deeper into their methodological/philosophical foundations. It is argued that the key to establishing a constructive dialogue between them is provided by a better understanding of the role of data in modern statistical inference, and how that relates to the centuries-old issue of the realisticness of economic theories.
  • Statistics and Economics
    Spanos, Aris (Palgrave Macmillan, 2008)
  • Testing for Structural Breaks and other forms of Non-stationarity: a Misspecification Perspective
    Heracleous, Maria S.; Koutris, Andreas; Spanos, Aris (2008)
    In the 1980s and 1990s, the issue of non-stationarity in economic time series was discussed in the context of unit roots vs. mean trends in AR(p) models. More recently this perspective has been extended to include structural breaks. In this paper we take a much broader perspective by viewing the problem of changing parameters as one of misspecification testing due to the non-stationarity of the underlying process. The proposed misspecification testing procedure relies on resampling techniques to enhance the informational content of the observed data in an attempt to capture heterogeneity ‘locally’ using rolling window estimators of the primary moments of the stochastic process. The effectiveness of the testing procedure is assessed using extensive Monte Carlo simulations.
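
As a rough illustration of the ‘local’ use of rolling-window moment estimators described above, the sketch below (Python; the data-generating process, break point, window length, and step are hypothetical) computes window-by-window estimates of the mean and variance of a simulated series with a mid-sample mean shift.

```python
# Hypothetical illustration: rolling-window estimates of the first two moments
# of a series with a built-in mean shift, the kind of 'local' heterogeneity a
# misspecification test for non-stationarity is meant to pick up.
import random
import statistics

random.seed(2)
x = [random.gauss(0.0, 1.0) for _ in range(200)] + \
    [random.gauss(1.5, 1.0) for _ in range(200)]   # mean shift at t = 200

window, step = 80, 40
for start in range(0, len(x) - window + 1, step):
    w = x[start:start + window]
    print(f"obs {start:>3d}-{start + window - 1:>3d}: "
          f"mean = {statistics.fmean(w):+.2f}, var = {statistics.variance(w):.2f}")
```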
  • Propagation of shocks in an input-output economy: Evidence from disaggregated prices
    Luo, Shaowen; Villar, David (Elsevier, 2023-07-01)
    Using disaggregated industry-level data, this paper empirically evaluates predictions for the cross-sectional price change distribution made by input-output models with sticky prices. The response of prices to shocks is found to be consistent with the price sensitivities predicted by the input-output model. Moreover, moments of the sectoral price change distribution vary over time in response to the evolution of the network structure. Finally, through a quantitative analysis, demand and supply shocks are disentangled during the pandemic period. Counterfactual analyses show that sectoral supply shocks, aggregate demand shocks and the production network structure contributed significantly to the inflation surge in 2021–2022.
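
To make the propagation mechanism concrete, here is a deliberately simplified, textbook-style sketch (Python; not the paper's estimated model). It assumes full pass-through of marginal costs, so sectoral prices satisfy p = A p + s and hence p = (I - A)^(-1) s, where A is a hypothetical input-share matrix and s a vector of sectoral cost shocks.

```python
# Simplified, hypothetical two-sector example of shock propagation through
# input-output linkages: p = (I - A)^(-1) s under full cost pass-through.
A = [[0.2, 0.3],        # A[i][j]: hypothetical share of sector j's input in sector i's costs
     [0.1, 0.4]]
s = [0.10, 0.00]        # 10% cost shock to sector 1 only

# Analytic inverse of the 2x2 matrix (I - A).
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
L = [[d / det, -b / det],
     [-c / det, a / det]]                 # Leontief inverse (I - A)^(-1)

p = [L[i][0] * s[0] + L[i][1] * s[1] for i in range(2)]
print("price responses:", [round(v, 4) for v in p])
# Sector 2 responds (about 2.2%) even though it was not shocked directly,
# because it buys inputs from sector 1.
```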
  • The price adjustment hazard function: Evidence from high inflation periods
    Luo, Shaowen; Villar, Daniel (Elsevier, 2021-09-01)
    The price adjustment hazard function - the probability of a good's price changing as a function of its price misalignment - enables the examination of the relationship between price stickiness and monetary non-neutrality without specifying a micro-founded model, as discussed by Caballero and Engel (1993a, 2007). Using the micro data underlying the U.S. Consumer Price Index going back to the 1970s, we estimate the hazard function relying on empirical patterns from high and low inflation periods. We find that the relation between inflation and higher moments of the price change distribution is particularly informative for the shape of the hazard function. Our estimated hazard function is relatively flat with positive values at zero. It implies weak price selection and a high degree of monetary non-neutrality: about 60% of the degree implied by the Calvo model, and much higher than what menu cost models imply. In addition, our estimated function is asymmetric: price increases are considerably more likely to occur than price decreases of the same magnitude.
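
The hazard function itself is straightforward to estimate nonparametrically once price misalignments are observed or imputed. The sketch below (Python; the data-generating process, the made-up hazard shape, and the bin width are all hypothetical, not the paper's estimates) bins simulated misalignments and computes the share of prices that adjust in each bin, mimicking a flat-ish, asymmetric hazard with a positive value at zero.

```python
# Hypothetical illustration: binned estimate of a price adjustment hazard,
# i.e., the probability of a price change as a function of misalignment x.
import random

random.seed(3)

def hazard_true(x):
    # Made-up shape: positive at zero, flat-ish, and asymmetric (price
    # increases more likely than decreases of the same magnitude).
    return min(0.10 + 0.4 * max(x, 0.0) ** 2 + 0.2 * max(-x, 0.0) ** 2, 1.0)

obs = []
for _ in range(50_000):
    x = random.uniform(-0.5, 0.5)              # misalignment (desired - current log price)
    obs.append((x, random.random() < hazard_true(x)))

for k in range(10):                            # ten bins of width 0.1
    lo, hi = -0.5 + 0.1 * k, -0.4 + 0.1 * k
    in_bin = [changed for x, changed in obs if lo <= x < hi]
    print(f"x in [{lo:+.2f}, {hi:+.2f}): adjustment frequency = {sum(in_bin) / len(in_bin):.3f}")
```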
  • The Skewness of the Price Change Distribution: A New Touchstone for Sticky Price Models
    Luo, Shaowen; Villar, Daniel (Wiley, 2021-02-01)
    We present a new way of empirically evaluating various sticky price models that are used to assess the degree of monetary nonneutrality. While menu cost models uniformly predict that price change skewness and dispersion fall with inflation, in the Calvo model, both rise. However, the U.S. Consumer Price Index (CPI) data from the late 1970s onward show that skewness does not fall with inflation, while dispersion does. We present a random menu cost model that, with a menu cost distribution that has a strong Calvo flavor, can match the empirical patterns. The model exhibits much more monetary nonneutrality than existing menu cost models.
  • What Foundations for Statistical Modeling and Inference?
    Spanos, Aris (Association Oeconomia, 2019-12-01)
  • A New Perspective on Impartial and Unbiased Apportionment
    Hyman, Ross; Tideman, Nicolaus (Taylor & Francis, 2023-08-17)
    How to fairly apportion congressional seats to states has been debated for centuries. We present an alternative perspective on apportionment, centered not on states but on “families” of states: sets of states whose “divisor-method” quotas share the same integer part. We develop “impartial” and “unbiased” apportionment methods. Impartial methods apportion the same number of seats to families of states containing the same total population, whether a family consists of many small-population states or a few large-population states. Unbiased methods apportion seats so that if states are drawn repeatedly from the same distribution, the expected number of seats apportioned to each family equals the expected divisor-method quota for that family.
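
For readers unfamiliar with divisor methods, the sketch below (Python) implements a standard highest-averages divisor method (Webster/Sainte-Laguë, divisors 1, 3, 5, ...) for a hypothetical four-state example; it is meant only to make the notion of a “divisor-method” quota concrete and is not the family-based impartial or unbiased method the paper develops.

```python
# Hypothetical illustration: Webster/Sainte-Lague highest-averages apportionment.
import heapq

populations = {"A": 5_310_000, "B": 2_170_000, "C": 940_000, "D": 580_000}
house_size = 10

seats = {state: 0 for state in populations}
# Max-heap keyed on each state's next 'average': population / (2*seats + 1).
heap = [(-pop / 1, state) for state, pop in populations.items()]
heapq.heapify(heap)
for _ in range(house_size):
    _, state = heapq.heappop(heap)
    seats[state] += 1
    heapq.heappush(heap, (-populations[state] / (2 * seats[state] + 1), state))

print(seats)   # e.g., {'A': 6, 'B': 2, 'C': 1, 'D': 1} for these made-up populations
```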
  • Post-Corona Balanced-Budget Super-Stimulus: The Case for Shifting Taxes onto Land
    Kumhof, Michael; Tideman, Nicolaus; Hudson, Michael; Goodhart, Charles (2021-10-20)
  • Choosing Among the Variety of Proposed Voting Reforms
    Tideman, Nicolaus (Springer, 2023-12-05)
    A wide variety of voting reforms are offered for consideration in this special issue. This paper draws connections among them and identifies the beliefs that make particular proposals more attractive than others.
  • Revisiting the Large n (Sample Size) Problem: How to Avert Spurious Significance Results
    Spanos, Aris (MDPI, 2023-12-05)
    Although large data sets are generally viewed as advantageous for their ability to provide more precise and reliable evidence, it is often overlooked that these benefits are contingent upon certain conditions being met. The primary condition is the approximate validity (statistical adequacy) of the probabilistic assumptions comprising the statistical model Mθ(x) applied to the data. In the case of a statistically adequate Mθ(x) and a given significance level α, as n increases, the power of a test increases, and the p-value decreases due to the inherent trade-off between type I and type II error probabilities in frequentist testing. This trade-off raises concerns about the reliability of declaring ‘statistical significance’ based on conventional significance levels when n is exceptionally large. To address this issue, the author proposes that a principled approach, in the form of post-data severity (SEV) evaluation, be employed. The SEV evaluation represents a post-data error probability that converts unduly data-specific ‘accept/reject H0 results’ into evidence either supporting or contradicting inferential claims regarding the parameters of interest. This approach offers a more nuanced and robust perspective in navigating the challenges posed by the large n problem.
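
The mechanical link between n and the p-value that motivates the paper is easy to see numerically. In the sketch below (Python; simple Normal model with known σ, all numbers hypothetical), the same substantively tiny sample mean produces a one-sided p-value that shrinks from clearly ‘insignificant’ to overwhelmingly ‘significant’ purely because n grows.

```python
# Hypothetical illustration of the large-n problem: a fixed, tiny sample mean
# becomes 'highly significant' in a one-sided z-test as n grows.
from math import erf, sqrt

def p_value_one_sided(xbar, mu0, sigma, n):
    z = sqrt(n) * (xbar - mu0) / sigma            # test statistic d(x0)
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))       # P(Z >= z) under H0: mu = mu0

mu0, sigma, xbar = 0.0, 1.0, 0.02                 # substantively tiny discrepancy
for n in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {n:>9,d}   p-value = {p_value_one_sided(xbar, mu0, sigma, n):.4f}")
```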
  • How the Post-Data Severity Converts Testing Results into Evidence for or against Pertinent Inferential Claims
    Spanos, Aris (MDPI, 2024-01-22)
    The paper makes a case that the current discussions on replicability and the abuse of significance testing have overlooked a more general contributor to the untrustworthiness of published empirical evidence, which is the uninformed and recipe-like implementation of statistical modeling and inference. It is argued that this contributes to the untrustworthiness problem in several different ways, including [a] statistical misspecification, [b] unwarranted evidential interpretations of frequentist inference results, and [c] questionable modeling strategies that rely on curve-fitting. What is more, the alternative proposals to replace or modify frequentist testing, including [i] replacing p-values with observed confidence intervals and effect sizes, and [ii] redefining statistical significance, will not address the untrustworthiness of evidence problem since they are equally vulnerable to [a]–[c]. The paper calls for distinguishing unduly data-dependent ‘statistical results’, such as a point estimate, a p-value, and accept/reject H0, from ‘evidence for or against inferential claims’. The post-data severity (SEV) evaluation of the accept/reject H0 results converts them into evidence for or against germane inferential claims. These claims can be used to address/elucidate several foundational issues, including (i) statistical vs. substantive significance, (ii) the large n problem, and (iii) the replicability of evidence. Also, the SEV perspective sheds light on the impertinence of the proposed alternatives [i]–[ii], and oppugns the alleged arbitrariness of framing H0 and H1, which is often exploited to undermine the credibility of frequentist testing.
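
As a rough numerical companion to the abstract above, the sketch below (Python) evaluates the post-data severity of one-sided claims μ > μ1 after a rejection of H0: μ ≤ μ0 in a simple Normal model with known σ, using SEV(μ > μ1) = P(d(X) ≤ d(x0); μ = μ1) in the Mayo-Spanos sense; the numbers (n, the sample mean, and the μ1 values) are hypothetical.

```python
# Hypothetical illustration: post-data severity (SEV) of the claim mu > mu1
# after rejecting H0: mu <= mu0 in a simple Normal model with known sigma.
from math import erf, sqrt

def Phi(z):
    return 0.5 * (1 + erf(z / sqrt(2)))           # standard Normal CDF

mu0, sigma, n, xbar = 0.0, 1.0, 10_000, 0.02      # a 'large n' rejection of H0
d_x0 = sqrt(n) * (xbar - mu0) / sigma             # observed test statistic (= 2.0)

for mu1 in (0.0, 0.01, 0.02, 0.03):
    sev = Phi(d_x0 - sqrt(n) * (mu1 - mu0) / sigma)
    print(f"SEV(mu > {mu1:.2f}) = {sev:.3f}")
# High severity for the claim mu > 0, but not for mu > 0.02 or beyond: the
# rejection warrants only a small discrepancy from the null.
```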
  • Women’s labour market participation and intimate partner violence in Ghana: A multilevel analysis
    Owusu-Brown, Bernice (Virginia Tech, 2023-09-14)
    In recent decades, the capability approach has emerged as the most pertinent theoretical framework for elucidating development, well-being, and justice. By emphasizing the multifaceted nature of human well-being, the capability approach advocates a broader perspective on development beyond mere economic growth. It underscores the necessity of considering various dimensions that contribute to the enhancement of human lives by assigning importance to freedom. One prevalent form of freedom violation is intimate partner violence, which stems from historically unequal power dynamics between men and women, resulting in the subjugation and discrimination of women by men and hindering the full realization of their potential. This profound restriction of freedom not only violates their fundamental human rights but also jeopardizes their health and, consequently, obstructs their active engagement in national economic and social development. The capability approach prescribes women's empowerment as a remedy for curbing violence, as reflected in both conventional economic and non-economic models. These models forecast that women's engagement in the labor market enhances their bargaining power, leading to a decrease in intimate partner violence. In conflict with these, however, are more pessimistic models suggesting that women who earn more than their partners through their labor market participation are at risk of experiencing increased partner violence. Conscious of this bi-causal relationship and accounting for the potential endogeneity, I set out to empirically investigate the direction of this relationship within the Ghanaian context. My key finding indicates that a woman's work status significantly increases her likelihood of becoming a victim of partner violence. I conclude that while there is a growing focus on creating job opportunities for women to foster gender equality and development, it is essential to consider and address the implications this may have on their safety and well-being.