Why the Decision Theoretic Perspective Misrepresents Frequentist Inference: 'Nuts and Bolts' vs. Learning from Data
Abstract
The primary objective of this paper is to revisit and make a case for the merits of R.A. Fisher's objections to the decision-theoretic framing of frequentist inference. It is argued that this framing is congruent with Bayesian inference but incongruent with frequentist inference. It provides the Bayesian approach with a theory of optimal inference, but it misrepresents the theory of optimal frequentist inference by framing inferences solely in terms of the universal quantifier 'for all values of theta in the parameter space'. This framing is at odds with the primary objective of model-based frequentist inference, which is to learn from data about the true value of theta (the unknown parameter(s)), i.e., the value that gave rise to the particular data. The frequentist approach relies on factual reasoning (estimation, prediction) as well as hypothetical reasoning (testing), whose primary aim is to learn from data about the true theta. The paper calls into question the appropriateness of admissibility and reassesses Stein's paradox as it relates to the capacity of frequentist estimators to pinpoint the true theta. The paper also compares and contrasts loss-based errors with traditional frequentist error probabilities, such as coverage and type I and type II errors; the former are attached to theta, whereas the latter are attached to the inference procedure itself.
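As a rough, hypothetical illustration of the Stein's-paradox issue the abstract raises (not taken from the paper itself; the parameter values, sample sizes, and seed below are arbitrary), the following sketch simulates the standard setting in which the James-Stein shrinkage estimator dominates the sample mean (MLE) in aggregate squared-error loss, even though both estimators are ultimately judged against the single true theta that generated the data.

```python
import numpy as np

# Hypothetical setup: p >= 3 independent normal means, X_i ~ N(theta_i, 1).
rng = np.random.default_rng(0)

p = 10                                  # number of means (Stein's paradox needs p >= 3)
theta = rng.normal(0.0, 2.0, size=p)    # the (hypothetical) true parameter vector
n_rep = 100_000                         # repeated samples drawn under the same true theta

x = rng.normal(theta, 1.0, size=(n_rep, p))  # one observation per coordinate, per replication

# MLE / sample mean: theta_hat = x; its aggregate risk is approximately p.
risk_mle = np.mean(np.sum((x - theta) ** 2, axis=1))

# James-Stein estimator shrinking toward 0: (1 - (p - 2) / ||x||^2) * x.
shrink = 1.0 - (p - 2) / np.sum(x ** 2, axis=1)
js = shrink[:, None] * x
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))

print(f"aggregate risk, MLE:         {risk_mle:.3f}")  # close to p = 10
print(f"aggregate risk, James-Stein: {risk_js:.3f}")   # strictly smaller in aggregate loss
```

The simulation only demonstrates dominance in total (summed) loss over the whole parameter vector; whether that loss-based ranking says anything about an estimator's capacity to pinpoint the particular true theta behind the observed data is precisely the question the paper takes up.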