An Exploration of End-User Critical Incident Classification

Date

2001-10-18

Publisher

Virginia Tech

Abstract

Laboratory usability tests can be a rich source of usability information for software design, but they are expensive to run and involve time-consuming data analysis. Expert review of software is cheaper, but its quality depends heavily on the experience of the expert. Techniques are needed that maintain user involvement while reducing both its cost and the time required to analyze data. The User Action Framework (UAF) is a classification scheme for usability problems that facilitates data analysis and the reuse of information learned from one project to another, but it still relies on expert interpretation of usability data, and classification can be difficult when user-supplied problem descriptions are incomplete.

This study explored end-user classification of self-reported critical incidents (usability issues) using the UAF, a technique intended to reduce reliance on expert interpretation of usability problems. It also explored end-user critical incident reporting from a recording of the usability session, rather than reporting incidents as soon as they occur, a technique that could be used in future studies to compare the effectiveness of usability methods. Results indicate that users are not good at diagnosing their own critical incidents because of the level of detail required for proper classification, although observations suggest that users were able to provide usability information that would not have been captured by an expert observer. The recording technique was successful and is recommended for future studies to further explore differences in the kinds of information that can be gathered from end-users and from experts during usability studies.

Keywords

user and expert differences, User Action Framework, critical incident, usability problem classification
