Browsing by Author "Capra, Miranda Galadriel"
Now showing 1 - 2 of 2
- An Exploration of End-User Critical Incident Classification. Capra, Miranda Galadriel (Virginia Tech, 2001-10-18). Laboratory usability tests can be a rich source of usability information for software design, but they are expensive to run and involve time-consuming data analysis. Expert review of software is cheaper, but highly dependent on the experience of the expert. Techniques are needed that maintain user involvement while reducing both the cost of that involvement and the time required to analyze data. The User Action Framework (UAF) is a classification scheme for usability problems that facilitates data analysis and the reuse of information learned from one project to another, but it also relies on expert interpretation of usability data, and classification can be difficult when user-supplied problem descriptions are incomplete. This study explored end-user classification of self-reported critical incidents (usability issues) using the UAF, a technique intended to reduce the need for expert interpretation of usability problems. It also explored end-user critical incident reporting from a usability session recording, rather than reporting incidents as soon as they occur, a technique that could be used in future studies to compare the effectiveness of usability methods. Results indicate that users are not good at diagnosing their own critical incidents because of the level of detail required for proper classification, although observations suggest that users were able to provide usability information that would not have been captured by an expert observer. The recording technique was successful and is recommended for future studies to further explore differences in the kinds of information that can be gathered from end-users and from experts during usability studies.
- Usability Problem Description and the Evaluator Effect in Usability Testing. Capra, Miranda Galadriel (Virginia Tech, 2006-03-13). Previous usability evaluation method (UEM) comparison studies have noted an evaluator effect on problem detection in heuristic evaluation, with evaluators differing in the problems they find and in their problem severity judgments. There have been few studies of the evaluator effect in usability testing (UT), task-based testing with end-users. UEM comparison studies focus on counting usability problems detected, but we also need to assess the content of usability problem descriptions (UPDs) to measure evaluation effectiveness more fully. The goals of this research were to develop UPD guidelines, explore the evaluator effect in UT, and evaluate the usefulness of the guidelines for grading UPD content. Ten guidelines for writing UPDs were developed by consulting usability practitioners through two questionnaires and a card sort. These guidelines are (briefly): be clear and avoid jargon, describe problem severity, provide backing data, describe problem causes, describe user actions, provide a solution, consider politics and diplomacy, be professional and scientific, describe your methodology, and help the reader sympathize with the user. A fourth study compared usability reports collected from 44 evaluators, both practitioners and graduate students, who watched the same 10-minute UT session recording. Three judges measured problem detection for each evaluator and graded the reports on how well they followed 6 of the UPD guidelines. There was support for the existence of an evaluator effect, even when watching pre-recorded sessions, with low to moderate individual thoroughness of problem detection across all/severe problems (22%/34%), reliability of problem detection (37%/50%), and reliability of severity judgments (57% for severe ratings). Practitioners received higher grades averaged across the 6 guidelines than students did, suggesting that the guidelines may be useful for grading reports. The grades for the guidelines were not correlated with thoroughness, suggesting that the guideline grades complement measures of problem detection. A simulation of evaluators working in groups found a 34% increase in severe problems found when a second evaluator was added. The simulation also found that the thoroughness of individual evaluators would have been overestimated if the study had included only a small number of evaluators. The final recommendations are to use multiple evaluators in UT and to assess both problem detection and description when measuring evaluation effectiveness. (The thoroughness, reliability, and group-simulation measures are illustrated in the sketch after this list.)
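The thoroughness, reliability, and group-simulation figures in the second abstract correspond to standard usability-evaluation measures: per-evaluator thoroughness (the share of known problems an evaluator detects), any-two agreement between evaluators, and Monte Carlo sampling of evaluator groups. The Python sketch below illustrates how such measures can be computed; it is not the dissertation's analysis code, and the evaluator findings, problem inventory, and group size are made-up assumptions used only for demonstration.

```python
# Illustrative sketch of evaluator-effect measures (assumed data, not the study's code).
import itertools
import random
import statistics

# Hypothetical data: problem IDs detected by each evaluator.
evaluator_findings = {
    "E1": {1, 2, 5, 7},
    "E2": {2, 3, 5},
    "E3": {1, 5, 6, 7, 8},
    "E4": {2, 7},
}
all_known_problems = set(range(1, 9))  # assumed master list of real problems


def thoroughness(found, known):
    """Share of known problems that one evaluator detected."""
    return len(found & known) / len(known)


def any_two_agreement(findings):
    """Mean pairwise overlap (Jaccard) between evaluators' problem sets."""
    pairs = itertools.combinations(findings.values(), 2)
    return statistics.mean(len(a & b) / len(a | b) for a, b in pairs)


def group_thoroughness(findings, known, group_size, trials=1000, seed=0):
    """Monte Carlo estimate of thoroughness for a randomly drawn evaluator group."""
    rng = random.Random(seed)
    evaluators = list(findings)
    totals = []
    for _ in range(trials):
        group = rng.sample(evaluators, group_size)
        pooled = set().union(*(findings[e] for e in group))
        totals.append(len(pooled & known) / len(known))
    return statistics.mean(totals)


if __name__ == "__main__":
    individual = [thoroughness(f, all_known_problems)
                  for f in evaluator_findings.values()]
    print(f"mean individual thoroughness: {statistics.mean(individual):.2f}")
    print(f"any-two agreement (reliability): {any_two_agreement(evaluator_findings):.2f}")
    print(f"two-evaluator group thoroughness: "
          f"{group_thoroughness(evaluator_findings, all_known_problems, 2):.2f}")
```

Raising group_size in group_thoroughness shows how the pooled set of detected problems grows as evaluators are added, which is the kind of effect behind the abstract's reported 34% gain from a second evaluator.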