Browsing by Author "Castillo, Jose C."
Now showing 1 - 4 of 4
- Remote Usability Evaluation at a Glance
  Castillo, Jose C.; Hartson, H. Rex; Hix, Deborah (Department of Computer Science, Virginia Polytechnic Institute & State University, 1997-07-01)
  Much traditional user interface evaluation is conducted in usability laboratories, where a small number of selected users are directly observed by trained evaluators. However, as the network itself and the remote work setting have become intrinsic parts of usage patterns, evaluators often have limited access to representative users for usability evaluation in the laboratory, and the users' work context is difficult or impossible to reproduce in a laboratory setting. These barriers led to extending the concept of usability evaluation beyond the laboratory, typically using the network itself as a bridge to take interface evaluation to a broad range of users in their natural work settings.
- Remote Usability Testing Methods a la Carte
  Castillo, Jose C.; Hartson, H. Rex (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  Although existing lab-based formative usability testing is frequently and effectively applied to improving the usability of software user interfaces, it has limitations that have led developers to turn to remote usability evaluation methods (RUEMs) to collect formative usability data from daily usage by real users in their own real-world task environments. The enormous increase in Web usage, where users can be isolated and the network and remote work setting become intrinsic parts of usage patterns, is strong motivation for supplementing lab-based testing with remote usability evaluation methods. Another significant impetus for remote evaluation is that the iterative development cycle for any software, Web application or not, does not end with initial deployment. We review and informally compare several approaches to remote usability evaluation with respect to the quantity and quality of data collected and the effort required to collect the data.
- Trusting Remote Users… Can They Identify Problems Without Involving Usability Experts?
  Castillo, Jose C.; Hartson, H. Rex (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  Based on our belief that critical incident data, observed during usage and closely associated with specific task performance, are the most useful kind of formative evaluation data for finding and fixing usability problems, we developed a Remote Usability Evaluation Method (RUEM) in which real users self-report critical incidents encountered during real tasks performed in their normal working environments, without the intervention of evaluators. In our exploratory study we observed that users were able to identify, report, and rate the severity level of their own critical incidents with only brief training.
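  Since this abstract describes users identifying, reporting, and rating the severity of their own critical incidents, a minimal sketch may help make the idea concrete. The Python record below is purely illustrative: the field names, the severity scale, and the validation are assumptions, as the abstracts do not specify what a self-report actually contains.

  ```python
  from dataclasses import dataclass, field
  from datetime import datetime, timezone

  # Severity scale the user picks from; an assumption, since the abstract
  # does not specify the scale used in the study.
  SEVERITY_LEVELS = ("low", "medium", "high", "critical")

  @dataclass
  class CriticalIncidentReport:
      """One self-reported critical incident, captured remotely during real
      task performance without evaluator intervention (hypothetical schema)."""
      task_description: str      # the real task the user was performing
      incident_description: str  # what went wrong, in the user's own words
      severity: str              # severity level assigned by the user
      reported_at: datetime = field(
          default_factory=lambda: datetime.now(timezone.utc)
      )

      def __post_init__(self) -> None:
          # Reject ratings outside the assumed scale.
          if self.severity not in SEVERITY_LEVELS:
              raise ValueError(f"severity must be one of {SEVERITY_LEVELS}")

  # Example: a user files a report from their own working environment.
  report = CriticalIncidentReport(
      task_description="export monthly figures to a spreadsheet",
      incident_description="the Export button stayed disabled with no explanation",
      severity="high",
  )
  print(report)
  ```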
- The User-Reported Critical Incident Method at a Glance
  Castillo, Jose C.; Hartson, H. Rex; Hix, Deborah (Department of Computer Science, Virginia Polytechnic Institute & State University, 1997-07-01)
  The over-arching goal of this work is to discuss the user-reported critical incident method, a cost-effective remote usability evaluation method for real-world applications involving real users doing real tasks in real work environments. Several methods have been developed for conducting usability evaluation without direct observation of a user by an evaluator. However, unlike the user-reported critical incident method, none of the existing remote evaluation methods (nor even traditional laboratory-based evaluation) meets all of the following criteria:
    - data are centered on critical incidents that occur during task performance;
    - tasks are performed by real users;
    - users are located in their normal working environments;
    - users self-report their own critical incidents;
    - data are captured in day-to-day task situations;
    - no direct interaction is needed between user and evaluator during an evaluation session;
    - data capture is cost-effective; and
    - data are high quality and therefore relatively easy to convert into usability problems, as in the sketch after this list.
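  The last criterion above, that reports be relatively easy to convert into usability problems, implies a post-processing step. The sketch below shows one hypothetical way to do it, grouping self-reports by the task they were filed against so that clusters point at a shared underlying problem; the grouping key and severity handling are assumptions for illustration, not the authors' procedure.

  ```python
  from collections import defaultdict

  # Severity scale ordered from least to most severe (an assumption; the
  # abstracts do not specify the scale used in the study).
  SEVERITY_LEVELS = ("low", "medium", "high", "critical")

  def group_into_candidate_problems(reports: list[dict]) -> dict[str, list[dict]]:
      """Group self-reported incidents by the task they occurred in; several
      reports against one task suggest a single underlying usability problem."""
      groups: dict[str, list[dict]] = defaultdict(list)
      for r in reports:
          groups[r["task"]].append(r)
      return groups

  # Illustrative reports, reduced to the two fields the grouping needs.
  reports = [
      {"task": "export report", "severity": "high"},
      {"task": "export report", "severity": "medium"},
      {"task": "change password", "severity": "low"},
  ]

  for task, incidents in group_into_candidate_problems(reports).items():
      worst = max(incidents, key=lambda r: SEVERITY_LEVELS.index(r["severity"]))
      print(f"{task}: {len(incidents)} report(s), worst severity {worst['severity']}")
  ```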