Browsing by Author "Smith, Martha Irene"
- Developing an Augmented Reality Visual Clutter Score Through Establishing the Applicability of Image Analysis Measures of Clutter and the Analysis of Augmented Reality User Interface Properties
Flittner, Jonathan Garth (Virginia Tech, 2023-09-05)
Augmented reality (AR) is expanding rapidly into several domains due to the proliferation of more accessible and powerful hardware. While augmented reality user interfaces (AR UIs) allow information to be presented atop the real world, this extra visual data potentially comes at the cost of increased visual clutter in the user's field of view, which can increase visual search time and error rates and degrade overall performance. Visual clutter has been studied for existing display technologies, but there are no established measures of visual clutter for AR UIs, which precludes studying the effects of clutter on performance in AR UIs. The first objective of this research is to determine the applicability of the extant image analysis measures of feature congestion, edge density, and sub-band entropy for measuring visual clutter in head-worn optical see-through AR, and to establish a relationship between image analysis measures of clutter and visual search time. These measures were chosen specifically because they can be applied to the complex, naturalistic scenes commonly encountered while using an optical see-through AR UI. The second objective is to examine the effects of AR UIs comprising multiple apparent depths on user performance, measured by visual search time. The third objective is to determine the effects of other AR UI properties, such as target clutter, target eccentricity, target apparent depth, and target total distance, on performance as measured by visual search time. These results are then used to develop a visual clutter score, which rates different AR UIs against each other.
The feature congestion, edge density, and sub-band entropy measures of clutter correlated with visual search time both when computed for the overall AR UI and when computed for the target object a participant was searching for. For an AR UI comprising both projected and AR parts, the image analysis measures did not correlate with visual search time for the constituent parts (projected or AR) individually, but still correlated with the clutter of the overall AR UI. Target eccentricity also affected visual search time, while target apparent depth and target total distance from center did not. Target type and AR object percentage also affected visual search time. These results were synthesized via multiple regression into a general model, the "AR UI Visual Clutter Score Algorithm." This model can be used to compare different AR UIs to each other in order to identify the AR UI projected to have lower target visual search times.
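The approach described above can be sketched in code. The following is a minimal illustration, not the dissertation's actual pipeline: a simplified edge-density measure (the fraction of high-gradient pixels, standing in for a full edge detector such as Canny) and an ordinary least-squares multiple regression relating clutter predictors to visual search time. All data, thresholds, and predictor choices below are illustrative assumptions.

```python
import numpy as np

def edge_density(gray, threshold=0.1):
    """Fraction of pixels whose gradient magnitude exceeds `threshold`
    times the maximum magnitude. A simplified stand-in for the
    edge-density clutter measure; a real pipeline would use a proper
    edge detector (e.g. Canny)."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    peak = mag.max()
    return float((mag > threshold * peak).mean()) if peak > 0 else 0.0

def fit_clutter_score_model(X, y):
    """Ordinary least-squares multiple regression: predict visual search
    time from clutter predictors (columns of X). Returns coefficients
    with the intercept first."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def clutter_score(features, beta):
    """Predicted search time serves as the comparative clutter score:
    a lower score means a UI projected to yield faster search."""
    return float(beta[0] + features @ beta[1:])

# Hypothetical data: rows = AR UIs, columns = [overall clutter,
# target clutter, target eccentricity] (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 3))
y = 0.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 30)
beta = fit_clutter_score_model(X, y)
```

Two candidate UIs could then be compared by calling `clutter_score` on each one's feature vector and preferring the lower score.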
- Effects of Augmented Reality Head-up Display Graphics’ Perceptual Form on Driver Spatial Knowledge Acquisition
De Oliveira Faria, Nayara (Virginia Tech, 2019-12-16)
In this study, we investigated whether modifying the perceptual form of augmented reality head-up display (AR HUD) graphics influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender, using a fixed-base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two navigation cue systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. We captured empirical data on changes in driving behaviors, glance behaviors, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface. Results showed that screen-relative and world-relative AR head-up display interfaces had similar impacts on the level of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Although our initial assumption was correct that the conformal AR HUD interface would draw drivers’ attention to a specific part of the display, this type of interface did not help increase spatial knowledge acquisition. This finding contrasts with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics; we suggest that simple, screen-fixed designs may indeed be effective in certain contexts.
Finally, eye-tracking analyses showed fundamental differences in how participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers. In terms of visual attention allocation, the world-relative condition was typically associated with fewer glances in total, but glances of longer duration.
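The glance metrics contrasted above (glance count and glance duration) can be derived from a labeled gaze stream. This is a minimal sketch under assumed conditions, a fixed sampling rate and samples already labeled by region; real driver glance analyses add blink/saccade filtering and minimum-duration rules (e.g. as standardized in ISO 15007):

```python
from itertools import groupby

def glance_stats(samples, aoi, dt=1/60):
    """Given a per-frame sequence of region labels sampled every `dt`
    seconds, return (number of glances to `aoi`, mean glance duration
    in seconds). A glance is counted as a maximal run of consecutive
    samples on that region."""
    runs = [sum(1 for _ in group) for key, group in groupby(samples) if key == aoi]
    if not runs:
        return 0, 0.0
    return len(runs), sum(runs) * dt / len(runs)
```

For example, a stream alternating between "road" and "hud" yields one (count, mean duration) pair per region, so the two conditions above could be compared by counting glances to the HUD and averaging their lengths.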
- Informing Design of In-Vehicle Augmented Reality Head-Up Displays and Methods for Assessment
Smith, Martha Irene (Virginia Tech, 2018-08-23)
Drivers require a steady stream of relevant but focused visual input to make decisions. Most driving information comes from the surrounding environment, so keeping drivers' eyes on the road is paramount. However, important information still comes from in-vehicle displays. With this in mind, there has been renewed recent interest in delivering driving information via head-up display. A head-up display (HUD) can present an image directly onto the windshield of a vehicle, providing a relatively seamless transition between the display image and the road ahead. Most importantly, HUD use keeps drivers' eyes focused in the direction of the road ahead. The transparent display, coupled with a new location, makes it likely that HUDs provide a fundamentally different driving experience and may change the way people drive, in both good and bad ways. Therefore, the objectives of this work were to 1) understand changes in drivers' glance behaviors when using different types of displays, 2) investigate the impact of HUD position on glance behaviors, and 3) examine the impact of HUD graphic type on drivers' behaviors. Specifically, we captured empirical data regarding changes in driving behaviors, glance behaviors, reported workload, and preferences while driving and performing a secondary task using in-vehicle displays. We found that participants exhibited different glance behaviors with different display types, with participants allocating more and longer glances toward a HUD as compared to a traditional head-down display. However, driving behaviors were not largely affected, and participants reported lower workload when using the HUD. HUD location did not cause large changes in glance behaviors, but some driving behaviors were affected.
When examining the impact of graphic types on participants, we employed a novel technique for analyzing glance behaviors by dividing the display into three different areas of interest relative to the HUD graphic. This method allowed us to differentiate between graphic types and to better understand differences found in driving behaviors and participant preferences than could be determined with frequently used glance analysis methods. Graphics that were fixed in place rather than animated generally resulted in less time allocated to looking at the graphics, likely because the fixed graphics were simple and easy to understand. Ultimately, glance and driving behaviors were affected at some level by display type, display location, and graphic type, as well as by individual differences such as gender and age.
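The graphic-relative area-of-interest technique described above can be illustrated with a simple classifier. The geometry here, a bounding box plus a fixed pixel margin, and the three labels are a hypothetical scheme chosen for illustration; the thesis's actual AOI definitions may differ:

```python
def classify_glance(point, graphic_box, margin=50):
    """Assign a gaze point to one of three areas of interest defined
    relative to the HUD graphic's bounding box (hypothetical scheme):
      'on_graphic'   - inside the graphic's bounding box
      'near_graphic' - within `margin` pixels of the box
      'elsewhere'    - anywhere else on the display
    `graphic_box` is (x_min, y_min, x_max, y_max) in display pixels."""
    x, y = point
    x0, y0, x1, y1 = graphic_box
    if x0 <= x <= x1 and y0 <= y <= y1:
        return "on_graphic"
    if x0 - margin <= x <= x1 + margin and y0 - margin <= y <= y1 + margin:
        return "near_graphic"
    return "elsewhere"
```

Binning each gaze sample this way lets glance time be tallied per region, which is what makes it possible to distinguish attention to the graphic itself from attention to its surroundings.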