Human Factors Design and Evaluation of Augmented Reality Visualization Techniques for Avoidance of Out-of-View Objects

dc.contributor.author: Quinn, Kelsey Anne
dc.contributor.committeechair: Gabbard, Joe L.
dc.contributor.committeemember: Swan, J. Edward
dc.contributor.committeemember: Dickerson, Deborah Elspeth
dc.contributor.committeemember: Patrick, Rafael
dc.contributor.department: Industrial and Systems Engineering
dc.date.accessioned: 2025-05-30T08:04:24Z
dc.date.available: 2025-05-30T08:04:24Z
dc.date.issued: 2025-05-29
dc.description.abstract: Recent advancements in augmented reality (AR) hardware, software, and application capabilities have introduced exciting benefits and advantages, especially in industrial and occupational fields. The use of head-worn AR in real work settings affords the benefit of a hands-free tool, allowing workers to perform usual tasks and move around freely in their environment with easier access to critical information. Integrating novel technologies like AR into the real world requires an understanding of how to use the technology as a catalyst for improved workplace efficiency rather than as a further hindrance to performance. The ongoing, dynamic nature of industrial and occupational settings can benefit from AR assistive interfaces, such as visualizations that cue users to task- or goal-related objects in their environment. Although the benefits of such cueing visualization techniques (VTs) are acknowledged, field-of-view (FOV) limitations and perceptual and cognitive challenges need to be further examined for effective visualization technique design. Perceptual issues, such as occlusion of real-world information, along with cognitive issues, such as heightened mental workload, can also occur with the use of AR, sometimes negatively impacting performance. Researchers and designers must consider the strengths and weaknesses of the human visual system, the allocation of limited attentional resources, interface information density, and the information-encoding tactics used to cue users to objects. Many AR visualization techniques already exist to aid users in locating and identifying objects outside their FOV, but no technique universally solves the perceptual and cognitive issues associated with these interfaces. Previous work on visualization techniques for attention guidance explored cues that guide users' attention toward an object of interest, for navigation, object collection, or user task support.
Our work, however, examined how AR visualization techniques, specifically Whiskers, Wedge, and Compass, can be used to guide users away from objects for safety or hazard avoidance. We explore the use of these cues in fast-paced outdoor and indoor environments and apply our work to the domain of Search and Rescue, a time-critical and often dangerous mission. This work aimed to (1) explore existing attention guidance visualization techniques and the design elements and preattentive properties used to encode out-of-view object information, (2) examine the cue-conveyance effectiveness of visualization techniques for out-of-view object avoidance and the related cognitive impacts on user safety and performance, and (3) investigate the integration of visualization techniques, and the properties used to encode crucial information, into Search and Rescue through expertise-driven examination with subject matter experts. Through quantitative and qualitative analysis, we derived visualization technique design guidelines for cueing to out-of-view objects, which may be applicable to many other industrial and occupational settings. Rather than identifying the best visualization technique, we identify the information encoded, the preattentive properties, and the design elements ideal for cue conveyance, user safety, and dual-task support.
dc.description.abstractgeneral: Augmented reality (AR) has become increasingly popular for entertainment and gaming and can also be a helpful tool in workplaces. Augmented reality displays graphics through a headset while still allowing the user to see the real-world environment. The graphics overlaid on the environment can be images or text conveying relevant information to assist workers with training, instructions, or task completion across many industries. To take advantage of AR as an innovative, hands-free tool, it is important to understand the technological limitations of the AR headset, as well as the cognitive and physical capabilities and limitations of the user wearing it. In this work, we researched how to display information on the screen of the AR headset so that the information is understandable and easy to use, ensuring that AR benefits rather than impedes users' work. Our work specifically explored how to point users, through "cues," to objects nearby yet outside the user's central field of view. We investigated how to tell users about the direction, distance, and type of an object through the use of color, shapes, symbols, and numbers. We also explored how to lay out this information on the screen so that it neither blocks the users' view nor distracts them from their tasks. For example, cues on the screen can help a first responder locate a hazard in the environment by using the rotation of a shape to indicate the direction of the hazard, color to indicate how close or far away the hazard is, and a symbol to indicate the type of hazard being cued. The typical use of such cues on an AR headset is to guide users toward an object, such as navigating users to the next item to pick up in order to assemble a product correctly. In our work, however, we research how these cues can help a user avoid an object.
We specifically look at the use of cues in Search and Rescue to help first responders safely navigate around a nearby hazard while simultaneously searching for victims in the disaster environment. We explore what their work already entails, both mentally and physically, so we can best design cues with colors, shapes, and symbols that let them quickly and clearly understand information about the hazard they need to avoid. In our work, we survey the cues that have already been designed, conduct experiments to see how well participants perform with different cues, and interview Search and Rescue experts to gain insight into what they need from the cues in order for AR to be a helpful tool. Our work provides insight into what types of cues help keep users safe while aiding multitasking, and it delivers design guidelines for creating cues that improve work efficiency in Search and Rescue operations and beyond.
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:43813
dc.identifier.uri: https://hdl.handle.net/10919/134305
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: augmented reality
dc.subject: interface design
dc.subject: visualization techniques
dc.subject: human-centered design
dc.subject: visual attention
dc.subject: cueing techniques
dc.title: Human Factors Design and Evaluation of Augmented Reality Visualization Techniques for Avoidance of Out-of-View Objects
dc.type: Dissertation
thesis.degree.discipline: Industrial and Systems Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle (2 files)
- Quinn_KA_D_2025.pdf (65.39 MB, Adobe Portable Document Format)
- Quinn_KA_D_2025_support_1.pdf (2.8 MB, Adobe Portable Document Format): Supporting documents