A comparison of the classical and inverse methods of calibration in regression

Date
1969
Publisher
Virginia Polytechnic Institute
Abstract

The linear calibration problem, frequently referred to as inverse regression or the discrimination problem, can be stated briefly as the problem of estimating the independent variable x in a regression situation from a measured value of the dependent variable y. The literature on this problem deals primarily with the Classical method, in which the Classical estimator is obtained by expressing the linear model as

y_i = α + βx_i + ε_i,

obtaining the least squares estimator for y for a given value of x and inverting the relationship. A second estimator for calibration, the Inverse estimator, is obtained by expressing the linear model as

x_i = γ + δy_i + ε′_i

and using the resulting least squares estimator to estimate x. The experimental design problem for the Inverse estimator is explored first in this dissertation, using the criterion of minimizing the average, or integrated, mean squared error; the resulting optimal and near-optimal designs are then compared with those for the Classical estimator which were recently derived by Ott and Myers.
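To make the contrast between the two estimators concrete, the following minimal sketch (an assumed example, not code from the dissertation) fits both regressions by least squares and recovers x for a new measured response y0; the simulated data, parameter values, noise level, and y0 are illustrative assumptions.

import numpy as np

# Minimal sketch (not code from the dissertation): the Classical and Inverse
# calibration estimators of x for a new measured response y0. The simulated
# data, true parameter values, noise level, and y0 are illustrative assumptions.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)                      # calibration design points
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, x.size)    # simulated responses

# Classical estimator: fit y_i = alpha + beta*x_i by least squares,
# then invert the fitted line at the observed y0.
beta_hat, alpha_hat = np.polyfit(x, y, 1)
y0 = 5.0                                            # new measured response (assumed)
x_hat_classical = (y0 - alpha_hat) / beta_hat

# Inverse estimator: fit x_i = gamma + delta*y_i by least squares
# and predict x directly at y0.
delta_hat, gamma_hat = np.polyfit(y, x, 1)
x_hat_inverse = gamma_hat + delta_hat * y0

print(f"Classical estimate of x: {x_hat_classical:.3f}")
print(f"Inverse estimate of x:   {x_hat_inverse:.3f}")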

Optimal designs are developed for a linear approximation when the true model is linear and when it is quadratic. In both cases, the optimal designs depend on unknown model parameters and are therefore not realistically usable. However, designs are shown to exist which are near optimal and do not depend on the unknown model parameters. For the linear approximation to the quadratic model, these near-optimal designs depend on N, the number of observations used to estimate the model parameters, and specific designs are developed and set forth in tables for N = 5(1)20(2)30(5)50 (that is, N from 5 to 20 in steps of 1, from 20 to 30 in steps of 2, and from 30 to 50 in steps of 5).

The cost of misclassifying a quadratic model as linear is discussed from a design point of view, as is the cost of protecting against a possible quadratic effect. The costs are expressed in terms of the percent deviation from the average mean squared error that would be obtained if the model were classified correctly.
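Written out under one plausible reading (the AMSE notation is assumed here rather than taken from the dissertation), this cost measure is

cost (%) = 100 · (AMSE under the misclassified model − AMSE under the correctly classified model) / (AMSE under the correctly classified model).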

The derived designs for the Inverse estimator are compared with the recently derived designs for the Classical estimator using as a measure of comparison the ratio of minimum average mean squared errors obtained by using the optimal design for both estimators. Further comparisons are also made between optimal designs for the Classical estimator and the derived near optimal designs for the Inverse estimator using the ratio of the corresponding average mean squared errors as a measure of comparison.
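In symbols, one way to write the first comparison measure (the direction of the ratio is assumed here, not quoted from the dissertation) is

R = min_D AMSE_Inverse(D) / min_D AMSE_Classical(D),

where each minimum is taken over experimental designs D, so that each estimator is evaluated at its own optimal design.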

Parallels are drawn between forward regression (estimating the dependent variable for a given value of the independent variable) and inverse regression using both the Classical and Inverse methods.
