The application of decision times and reaction times in the construction of latency weighted test scores

Virginia Polytechnic Institute and State University

The use of response latencies to determine item weights in the construction of multiple-choice test scores was investigated. The computer system that administered the two tests recorded three times on each test item: the reading time the student used to read the stem of the question; the decision time the student used to read the alternatives, to eliminate the incorrect choices he could identify, and to indicate the correct choice; and the choice reaction time the student required to identify the number of the correct alternative.

Each of the three response times was analyzed as a function of two independent variables: the number of alternatives eliminated and the correctness of the answer selected. Reading time was not significantly related to either independent variable. Decision time was shorter when correct answers were selected than when incorrect answers were selected. Furthermore, decision time was shorter when zero or three alternatives were eliminated than when one or two alternatives were eliminated. Reaction time was also shorter when correct answers were selected than when incorrect answers were selected. In contrast to the two independent main effects found for decision time, the reaction time analysis indicated an interaction between the correctness of the student's choice and the number of alternatives eliminated: when the decision was correct, reaction time became shorter as the number of eliminations increased; when the decision was incorrect, reaction time became longer as the number of eliminations increased. Both the decision time and the reaction time results were consistent with those of laboratory studies.

Item weights were constructed as differences between item response latencies within each student, so that between-student differences in absolute response time were eliminated. Following a confidence construct popular in decision time and reaction time research, the latency item weights were formulated to give maximum weight to items answered with relative certainty and minimum weight to items answered with relative uncertainty. The test scores used in evaluating the latency weighted scores included raw scores (the number correct), Coombs mode scores (the number of alternatives correctly eliminated minus three times the number of alternatives incorrectly identified, since all items had four choices), and several personality trait scores. The reaction time scores had higher validity estimates than either the Coombs mode or the raw scores from the same test, but did not correlate with the corresponding raw scores as highly as the Coombs mode scores did. In contrast, the decision time scores had validity estimates higher than the raw scores and comparable to the Coombs mode scores, but were very highly correlated with the corresponding raw scores. In addition, the decision time scores correlated more highly with the exam raw scores than did any other measure. Finally, the effect of personality traits on each of the test scores was investigated. Both the reaction time and the decision time scores were less correlated with a measure of test-taking anxiety than either the Coombs mode or the raw scores.
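The Coombs mode scoring rule stated above (for four-choice items, one point per correctly eliminated alternative, minus three points for eliminating the keyed alternative) can be sketched as follows; the function name and the data layout are illustrative assumptions, not taken from the thesis:

```python
def coombs_mode_score(responses):
    """Coombs mode score for four-choice items.

    responses: list of (eliminated, key) pairs, where `eliminated` is the
    set of alternatives the student crossed out and `key` is the correct
    alternative. Each correctly eliminated distractor adds 1 point;
    eliminating the keyed alternative subtracts 3 points.
    """
    score = 0
    for eliminated, key in responses:
        for alternative in eliminated:
            if alternative == key:
                score -= 3  # the correct answer was incorrectly identified as wrong
            else:
                score += 1  # a distractor was correctly eliminated
    return score
```

Under this rule, a student who eliminates all three distractors on an item earns the same 3 points the rule's penalty removes for striking out the key, so random full elimination has an expected value near zero.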