O*NET or NOT? Adequacy of the O*NET system's rater and format choices

Date
2001-10-11
Publisher
Virginia Tech
Abstract

The O*NET was built to replace the Dictionary of Occupational Titles (DOT) and to form a highly accessible, online (through the World Wide Web), common-language occupational information center (Dye & Silver, 1999). This study tested the adequacy of the self-rating choice and the unconventional BARS format used by the O*NET system for occupational ratings. In addition, a new rating-scale format, NBADS, was tested for improved ratings. Fifty-three incumbent raters in two occupations (graduate teaching assistants and secretaries) and 87 laypeople raters who had never worked in these occupations rated 21 item pairs (Importance- and Level-type questions) picked randomly from the 52 items on the original O*NET Ability questionnaire. Participants rated each of the 21 item pairs three times, with the Level question presented in the O*NET BARS, Likert GRS, and NBADS formats; the Importance-type question was always rated on a 1-5 Likert scale. Hypothesis 1a was supported, showing a significant leniency bias across formats for self-ratings. Hypothesis 1b was mostly supported: incumbent ratings failed to show significant improvement over laypeople ratings in leniency, elevation error, or interrater agreement; only the overall-error measure showed a significant improvement for incumbent raters. Hypothesis 2 was not supported: the GRS format showed no improvement in leniency, accuracy, or interrater agreement over the O*NET BARS format. Hypothesis 3a was supported, showing significantly reduced leniency, reduced accuracy error, and higher interrater agreement with the NBADS format than with the GRS format. Similarly, hypothesis 3b was partially supported, showing a reduced leniency effect and higher agreement with the NBADS format than with the O*NET BARS format.
Finally, hypothesis 4 was mostly supported, showing hardly any significant differences in ratings of the Importance-type question across the three format sessions, supporting the conclusion that the between-format differences were not caused by other interfering variables. Implications of the results are discussed.

Keywords
format, GRS, O*NET, self appraisal, accuracy, NBADS, rating bias, rating scale