An Empirical Analysis of Rating Effectiveness for a State Quality Award

Virginia Tech

This research clarified inconsistencies in the self-assessment literature and added to the body of knowledge on rating effectiveness of organizational assessments by defining, through an extensive literature review, relationships among rating effectiveness criteria (ratee, rater, rating scale, rating process) and measures (interrater reliability, halo error, leniency and severity, range restriction). A research framework developed from this review was employed to compute rating effectiveness measures at the individual level (i.e., by examiner or by each of the eight rating-scale dimensions) and at the sector level (e.g., Private Manufacturing Sector, Private Service Sector, Public Local Sector, Public State & Federal Sector) for a State Quality Award (SQA), using data from the 1998 applications.

Interrater reliability (measured by intraclass correlations for each rating scale dimension) was low to moderate and differed by dimension. Halo error (measured by the determinant of the dimension intercorrelation matrix for each examiner) was present for all examiners. Leniency and severity (measured by the presence of a statistically significant rater main effect for each dimension) were present in 11 of 32 cases and differed by dimension. Range restriction (measured by variance analysis for each dimension) was present in 22 of 32 cases and differed by dimension. A post-hoc principal component analysis indicated poor internal reliability for the rating scale. To improve rating effectiveness, the SQA should replace the existing rating scale and provide in-depth training on all elements of the rating process. The importance of the SQA using boxplots, histograms, and rating effectiveness measures to make fully informed decisions was discussed.
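To illustrate the halo-error measure named above, the sketch below computes the determinant of a dimension intercorrelation matrix for simulated examiner scores. The data, the number of applicants, and the noise level are illustrative assumptions, not the thesis's actual SQA data; the thesis's measure is the determinant itself, where values near zero indicate highly intercorrelated dimensions (halo), and values near one indicate nearly independent dimensions.

```python
import numpy as np

# Hypothetical data: rows = 20 applicants scored by one examiner,
# columns = the eight rating-scale dimensions (counts are illustrative).
rng = np.random.default_rng(0)
overall_impression = rng.normal(size=(20, 1))

# Halo pattern: every dimension score is dominated by the examiner's
# single overall impression, plus small dimension-specific noise.
halo_scores = overall_impression + 0.1 * rng.normal(size=(20, 8))

# Contrast case: dimension scores drawn independently (no halo).
indep_scores = rng.normal(size=(20, 8))

def halo_determinant(scores):
    """Determinant of the dimension intercorrelation matrix.

    Near 0  -> dimensions highly intercorrelated (consistent with halo).
    Near 1  -> dimensions nearly independent (little halo).
    """
    r = np.corrcoef(scores, rowvar=False)  # 8 x 8 correlation matrix
    return np.linalg.det(r)

print(halo_determinant(halo_scores))   # near 0: halo present
print(halo_determinant(indep_scores))  # larger: little halo
```

With all pairwise correlations near 0.99, the determinant collapses toward zero, which is why the thesis can flag halo error per examiner from a single scalar.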

Halo Error, Organizational Assessment, Quality Award, Rating Effectiveness, Interrater Reliability, Leniency and Severity, Range Restriction