Browsing by Author "David-John, Brendan Matthew"
- Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze
  Corbett, Matthew (Virginia Tech, 2024-04-22)
  Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Head-mounted AR devices in particular can accurately sense and understand their environment through an increasingly powerful array of sensors, including cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities for improving existing security and privacy techniques in this domain. Eye gaze tracking, specifically, is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems that address their unique security and privacy issues: there are no viable solutions for local pairing mechanisms usable while immersed in AR environments, for bystander privacy protections, or for the increased vulnerability to shoulder surfing while wearing AR devices. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. This research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
- Investigating Asymmetric Collaboration and Interaction in Immersive Environments
  Enriquez, Daniel (Virginia Tech, 2024-01-23)
  With the commercialization of virtual/augmented reality (VR/AR) devices, there is increasing interest in combining immersive and non-immersive devices (e.g., desktop computers, mobile devices) for asymmetric collaboration. While such asymmetric settings have been examined in social platforms, questions about collaborative view dimensionality in data-driven decision-making, and about interaction from non-immersive devices, remain under-explored. A crucial question arises: although presenting a consistent 3D virtual world on both immersive and non-immersive platforms is common practice in social applications, does the same guideline apply to laying out data, or should data placement instead be optimized locally according to each device's display capacity? To this end, a user study was conducted to provide empirical insights into the user experience of asymmetric collaboration in data-driven decision-making. The study tested practical dimensionality combinations between PC and VR, resulting in three conditions: PC2D+VR2D, PC2D+VR3D, and PC3D+VR3D. The results revealed a preference for PC2D+VR3D, while PC2D+VR2D led to the quickest task completion. Similarly, mobile devices have become an inclusive alternative to head-worn displays in virtual reality (VR) environments, enhancing accessibility and enabling cross-device collaboration. However, object manipulation techniques in mobile Augmented Reality (AR) have typically been evaluated at table-top scale, and we lack an understanding of how these techniques perform in room-scale environments. Two studies, each with 30 participants, were conducted to analyze object translation tasks and investigate how different techniques affect usability and performance for room-scale mobile VR object translation. Results indicated that the Joystick technique, which allowed translation relative to the user's perspective, was the fastest and most preferred, with no difference in precision. These findings provide insight for designing collaborative, asymmetric VR environments.