Center for Human-Computer Interaction
The Center for Human-Computer Interaction is a transdisciplinary community of scholars. Our mission is to advance HCI research and education through intellectual and creative leadership and to advocate for a human-centered approach to technology, both at Virginia Tech and globally.
Browsing Center for Human-Computer Interaction by Subject "augmented reality"
Now showing 1 - 2 of 2
- The Effects of Incorrect Occlusion Cues on the Understanding of Barehanded Referencing in Collaborative Augmented Reality
  Li, Yuan; Hu, Donghan; Wang, Boyuan; Bowman, Douglas A.; Lee, Sang Won (Frontiers, 2021-07-01)
  In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance will be impacted if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer’s hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative barehanded referencing task. We found that participants’ performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also identified mixed results of the effect of spatial relationships between users.
- Relative Effects of Real-World and Virtual-World Latency on an Augmented Reality Training Task: An AR Simulation Experiment
  Nabiyouni, Mahdi; Scerbo, Siroberto; Bowman, Douglas A.; Höllerer, Tobias (Frontiers Media, 2017-01-30)
  In augmented reality (AR), virtual objects and information are overlaid onto the user’s view of the physical world and can appear to become part of the real world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real-world imagery and its influence on task performance in an AR training task. We utilize an AR simulation approach, in which an outdoor AR training task is simulated in a high-fidelity virtual reality (VR) system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual object latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to a reduction in performance, even when overall latency levels were lower compared to the matched case. The relative results hold up with increased overall latency.