The use of summated ratings in faculty evaluation

Date
1976-01-05
Publisher
Virginia Tech
Abstract

Student evaluation of instruction was investigated using summated ratings obtained from three different types of evaluation instruments: (1) a standardized form developed by Kansas State University, (2) a single-item form on which the student indicated an overall rating of the instructor, and (3) a form on which students constructed their own items and rated the instructor.
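A summated rating is simply the sum of a student's responses across the items of one instrument. A minimal sketch, assuming a 1-5 Likert scale and invented item scores (the function name and data are illustrative, not from the study):

```python
# Hypothetical sketch: a summated rating is the sum of one student's
# item responses on a single evaluation instrument.

def summated_rating(item_scores):
    """Return the sum of a student's item responses (e.g., 1-5 Likert)."""
    return sum(item_scores)

# Invented responses of one student on a five-item form
student_responses = [4, 5, 3, 4, 4]
rating = summated_rating(student_responses)  # → 20
```

In the study, means of such summated scores per instructor served as the primary unit of analysis.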

The study attempted to answer the following questions: (1) To what extent can summated ratings distinguish among instructors? (2) How strongly do summated ratings based on student-constructed items correlate with ratings from other evaluation instruments? (3) What item topics identified by students are common to all or nearly all instructors? (4) Do profiles of instructors based on summated ratings provide a basis for distinguishing among instructors on the basis of certain personal characteristics?

The study utilized students and faculty from three different types of educational institutions: a community college, a senior college, and a state university. Summated scores from individual students were used as the primary basis for analysis. Means of summated scores from the three types of instruments, and from subsets of items identified on the student-constructed-items form, were analyzed for each instructor using a one-way analysis of variance. Duncan's New Multiple Range Test was used to isolate groups without significant mean differences for each measure of evaluation. Mean ranks for each instructor were used to obtain correlation coefficients between each pair of measures of evaluation. Kendall's coefficient of concordance was used to determine the degree of agreement among the measures of evaluation. Rankings were used in a pattern analysis (Johnson's MAX procedure) to determine whether rank profiles could be related to personal and professional characteristics of the instructor.
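Kendall's coefficient of concordance (W) measures how closely several measures agree when ranking the same set of instructors, from 0 (no agreement) to 1 (perfect agreement). A minimal sketch of the textbook formula, assuming untied ranks; the rank data are invented for illustration and are not from the study:

```python
# Hypothetical illustration of Kendall's coefficient of concordance (W)
# for agreement among m evaluation measures ranking n instructors.
# W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
# of each instructor's rank sum from the mean rank sum (no-ties formula).

def kendalls_w(rankings):
    """rankings: list of m rank lists, each ranking the same n objects 1..n."""
    m = len(rankings)           # number of measures (judges)
    n = len(rankings[0])        # number of instructors (objects)
    # Sum of ranks each instructor received across the m measures
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three measures ranking five instructors (1 = best); invented data
measures = [
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
]
w = kendalls_w(measures)  # ≈ 0.84 for this invented data
```

A W near 1, as the study reports, indicates that the instruments largely agree on the ordering of instructors.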

Findings from the study indicated that, for the vast majority of faculty members, practical differences could not be determined on the basis of summated ratings alone. An alternative method based on ranks of sub-groups identified by the New Multiple Range Test provided a more practical approach for distinguishing among faculty members.

It was concluded, from the moderate to high correlations between the rankings for each measure of evaluation, that an instructor would receive essentially the same ranking regardless of which instrument was used for evaluation. High values of the Kendall coefficient of concordance indicated a high degree of association among the measures of evaluation.
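The abstract does not name the correlation coefficient computed from mean ranks; a rank-based statistic such as Spearman's rho is a natural choice (this is an assumption, not stated in the source). A minimal no-ties sketch with invented rankings:

```python
# Hypothetical sketch: Spearman rank correlation between the instructor
# rankings produced by two evaluation measures. Choice of Spearman's rho
# and all data below are assumptions for illustration only.

def spearman_rho(ranks_a, ranks_b):
    """Spearman correlation from two untied rank lists over n objects:
    rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))."""
    n = len(ranks_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

standardized = [1, 2, 3, 4, 5]   # invented ranks from the standardized form
single_item = [2, 1, 3, 5, 4]    # invented ranks from the single-item form
rho = spearman_rho(standardized, single_item)  # → 0.8
```

A rho in this range would correspond to the "moderate to high" correlations the study reports between pairs of instruments.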

Five item topics were identified from the student-constructed items as being common to all or nearly all instructors. These topics defined three areas of student concern for instruction: (1) the instructor knows the subject, (2) the subject is presented well, and (3) the instructor relates positively to the student personally and professionally.

Instructors of courses in which the immediate application of course work is evident received higher ratings than instructors of courses, such as English and history, in which content application is difficult to discern. The teaching specialty of the instructor, therefore, does make a difference in the kind of evaluation the instructor receives.

Keywords
education subject areas