Calibration Efficacy of Three Logistic Models to the Degrees of Reading Power Test Using Residual Analysis

Files

ch1.pdf (120.4 KB)
ch2.pdf (521.06 KB)
ch3.pdf (44.05 KB)

Date

1997-11-18

Publisher

Virginia Tech

Abstract

The publisher of the Degrees of Reading Power (DRP) test of reading comprehension calibrates the test using an item response model known as the Rasch, or one-parameter logistic, model. The relationship between the use of the Rasch model in calibrating the DRP and the use of the DRP as a component of the Virginia Literacy Passport Testing Program (LPT) is addressed. Analyses concentrate on sixth-grade students who were administered the DRP in 1991. The question that arises is whether the Rasch model is the appropriate model for calibrating the DRP in this high-stakes setting. The majority of the research reported by the publisher of the DRP to assess the adequacy of the Rasch model has not included direct checks on model assumptions, model features, or model predictions. Instead, it has relied almost exclusively on statistical tests to assess model fit. This study assesses the adequacy of fitting DRP test data to the Rasch model through direct examination of the assumptions, features, and predictions of the IRT model. This is accomplished by comparing the Rasch model to the less restrictive two- and three-parameter logistic models, using robust IRT-based goodness-of-fit techniques. When the DRP is used in a high-stakes setting, guessing is likely among examinees in jeopardy of failing. Under these circumstances, we must attend to the possibility that guessing is a factor and therefore calibrate the DRP with the three-parameter model, as this model takes guessing into account.
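The three logistic models compared in the abstract are nested: the three-parameter logistic (3PL) model gives the probability of a correct response as a function of examinee ability, item discrimination, item difficulty, and a pseudo-guessing lower asymptote; fixing the guessing parameter at zero yields the 2PL, and additionally fixing discrimination yields the Rasch/1PL model. A minimal sketch of the item response function (parameter values chosen purely for illustration, not drawn from the DRP data):

```python
import math

def logistic_3pl(theta, a=1.0, b=0.0, c=0.0):
    """Probability of a correct response under the 3PL model.

    theta : examinee ability
    a     : item discrimination (fixed for the Rasch/1PL model)
    b     : item difficulty
    c     : pseudo-guessing lower asymptote (0 for the 1PL and 2PL models)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Rasch/1PL: no guessing, common discrimination; probability is 0.5 when
# ability equals item difficulty.
p_rasch = logistic_3pl(theta=0.0, a=1.0, b=0.0, c=0.0)

# 3PL: the nonzero lower asymptote c models guessing, so even a very
# low-ability examinee answers correctly with probability near c.
p_3pl_low = logistic_3pl(theta=-5.0, a=1.2, b=0.0, c=0.25)
```

Under the 3PL, low-ability examinees retain a success probability near c, which is why the abstract argues for that model when guessing is plausible in a high-stakes administration.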

Keywords

Degrees of Reading Power (DRP), Goodness of Fit, Item Response Theory, Residual Analysis, Virginia Literacy Passport Test (LPT)
