Investigating the Effectiveness of Applying the Critical Incident Technique to Remote Usability Evaluation

Date
1999-12-01
Publisher
Virginia Tech
Abstract

Remote usability evaluation is a usability evaluation method (UEM) in which the experimenter, performing observation and analysis, is separated in space and/or time from the user. There are several approaches by which to implement remote evaluation, limited only by the availability of supporting technology. One such implementation is RECITE (the REmote Critical Incident TEchnique), an adaptation of the user-reported critical incident technique developed by Castillo (1997). This technique requires that trained users, working in their normal work environment, identify and report critical incidents. Critical incidents are interactions with a system feature that prove to be particularly easy or particularly difficult, leading to extremely good or extremely poor performance. Critical incident reports are submitted through an on-line reporting tool to the experimenter, who is responsible for compiling them into a list of usability problems. Support for this approach to remote evaluation has been reported (Hartson, Castillo, Kelso, & Neale, 1996; Castillo, 1997).

The purpose of this study was to quantitatively assess the effectiveness of RECITE relative to traditional, laboratory-based applications of the critical incident technique. A 3x2x5 mixed-factor experimental design was used to compare the frequency and severity ratings of critical incidents reported by remote versus laboratory-based users. Frequency was measured as the number of critical incident reports submitted, and severity was rated along four dimensions: task frequency, impact on task performance, impact on satisfaction, and error severity. This study also compared critical incident data reported by trained users with data reported by usability experts observing end-users. Finally, changes in the critical incident data reported over time were evaluated.

In total, 365 critical incident reports were submitted, containing 117 unique usability problems and 50 usability success descriptions. Critical incidents were classified using the Usability Problem Inspector (UPI). A higher number of web-based critical incidents occurred during Planning than expected. The distribution of voice-based critical incidents differed among participant groups: users reported a greater than expected number of Planning incidents, while experts reported fewer than expected Assessment incidents. Performance across the usability experts was not correlated, requiring that separate analyses be conducted for each expert's data set.

Support for the effectiveness of applying the critical incident technique to remote usability evaluation was demonstrated, with all research hypotheses at least partially supported. Usability experts gave significantly different ratings of impact on task performance than did user reporters. Comparing remote users with laboratory-based users revealed a difference in only one measure: laboratory-based users reported more positive critical incidents for the voice interface than did remote users. In general, the number of negative critical incidents decreased over time; a similar result did not hold for the number of positive critical incidents.

It was concluded that RECITE is an effective means of capturing problem-oriented data over time. Recommendations are made for its use as a formative evaluation method applied during the latter stages of product development (i.e., when a high-fidelity prototype is available). Opportunities for future research are identified.

Keywords
Remote Usability, Usability Evaluation Methods, Critical Incident Technique, Voice Email