Browsing by Author "Corbett, Matthew"
- BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems. Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Hu, Y. Charlie; Ji, Bo (ACM, 2023-06-18). Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. While the powerful suite of sensors on modern AR devices is necessary for enabling such an immersive experience, they can create unease in bystanders (i.e., those surrounding the device during its use) due to potential bystander data leaks, which is called the bystander privacy problem. In this paper, we propose BystandAR, the first practical system that can effectively protect bystander visual (camera and depth) data in real time with only on-device processing. BystandAR builds on a key insight that the device user's eye gaze and voice are highly effective indicators for subject/bystander detection in interpersonal interaction, and leverages novel AR capabilities such as eye gaze tracking, a wearer-focused microphone, and spatial awareness to achieve a usable frame rate without offloading sensitive information. Through a 16-participant user study, we show that BystandAR correctly identifies and protects 98.14% of bystanders while allowing access to 96.27% of subjects. We accomplish this with an average frame rate of 52.6 frames per second without the need to offload unprotected bystander data to another device.
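The abstract's key insight — that the wearer's eye gaze and voice signal who is a conversation subject — can be sketched as a simple per-frame classifier. All names, thresholds, and data structures below are illustrative assumptions, not the published implementation:

```python
# Hypothetical sketch of the gaze/voice subject-vs-bystander heuristic
# described in the BystandAR abstract. Thresholds and field names are
# assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class DetectedFace:
    face_id: int
    gaze_dwell_frames: int = 0          # frames the wearer's gaze rested on this face
    wearer_spoke_nearby: bool = False   # wearer voice activity while facing this person
    is_subject: bool = False

GAZE_DWELL_THRESHOLD = 30  # assumed dwell (~0.5 s at 60 Hz gaze sampling)

def classify(face: DetectedFace) -> bool:
    """Mark a face as a conversation subject once either indicator fires."""
    if face.gaze_dwell_frames >= GAZE_DWELL_THRESHOLD or face.wearer_spoke_nearby:
        face.is_subject = True
    return face.is_subject

def protect_frame(faces: list[DetectedFace]) -> list[int]:
    """Return IDs of faces whose pixels should be obscured (bystanders)."""
    return [f.face_id for f in faces if not classify(f)]
```

Because both indicators are computed from on-device sensors, a check like this can run per frame without sending raw imagery off the headset, which is the property the abstract emphasizes.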
- Enhancing Security and Privacy in Head-Mounted Augmented Reality Systems Using Eye Gaze. Corbett, Matthew (Virginia Tech, 2024-04-22). Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. Specifically, head-mounted AR devices can accurately sense and understand their environment through an increasingly powerful array of sensors such as cameras, depth sensors, eye gaze trackers, microphones, and inertial sensors. The ability of these devices to collect this information presents both challenges and opportunities to improve existing security and privacy techniques in this domain. In particular, eye gaze tracking is a ready-made capability for analyzing user intent, emotions, and vulnerability, and for serving as an input mechanism. However, modern AR devices lack systems to address their unique security and privacy issues. Problems such as the absence of local pairing mechanisms usable while immersed in AR environments, weak bystander privacy protections, and increased vulnerability to shoulder surfing while wearing AR devices all lack viable solutions. In this dissertation, I explore how readily available eye gaze sensor data can be used to improve existing methods for assuring information security and protecting the privacy of those near the device. My research presents three new systems, BystandAR, ShouldAR, and GazePair, each of which leverages user eye gaze to improve security and privacy expectations in or with Augmented Reality. As these devices grow in power and number, such solutions are necessary to prevent the perception failures that hindered earlier devices. The work in this dissertation is presented in the hope that these solutions can improve and expedite the adoption of these powerful and useful devices.
- Poster: BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems. Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Hu, Y. Charlie; Ji, Bo (ACM, 2023-06-18). Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. While the powerful suite of sensors on modern AR devices is necessary for enabling such an immersive experience, they can create unease in bystanders (i.e., those surrounding the device during its use) due to potential bystander data leaks, which is called the bystander privacy problem. In this poster, we propose BystandAR, the first practical system that can effectively protect bystander visual (camera and depth) data in real time with only on-device processing. BystandAR builds on a key insight that the device user's eye gaze and voice are highly effective indicators for subject/bystander detection in interpersonal interaction, and leverages novel AR capabilities such as eye gaze tracking, a wearer-focused microphone, and spatial awareness to achieve a usable frame rate without offloading sensitive information. Through a 16-participant user study, we show that BystandAR correctly identifies and protects 98.14% of bystanders while allowing access to 96.27% of subjects. We accomplish this with an average frame rate of 52.6 frames per second without the need to offload unprotected bystander data to another device.
- ShouldAR: Detecting Shoulder Surfing Attacks Using Multimodal Eye Tracking and Augmented Reality. Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Ji, Bo (ACM, 2024-09-09). Shoulder surfing attacks (SSAs) are a type of observation attack designed to illicitly gather sensitive data from "over the shoulder" of victims. This attack can be directed at mobile devices, desktop screens, Personal Identification Number (PIN) pads at an Automated Teller Machine (ATM), or written text. Existing solutions are generally focused on authentication techniques (e.g., logins) and are limited to specific attack scenarios (e.g., mobile devices or PIN pads). We present ShouldAR, a mobile and usable system to detect SSAs using multimodal eye gaze information (i.e., from both the potential attacker and victim). ShouldAR uses an augmented reality headset as a platform to incorporate user eye gaze tracking, rear-facing image collection and eye gaze analysis, and user notification of potential attacks. In a 24-participant study, we show that the prototype detects 87.28% of SSAs against both physical and digital targets, a two-fold improvement over the baseline of a rear-facing mirror, a widely used defense against SSAs. The ShouldAR approach provides an AR-based, active SSA defense that applies to both digital and physical information entry in sensitive environments.
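The multimodal detection described in the abstract — fusing the victim's own gaze with gaze estimated for faces seen by a rear-facing camera — can be sketched as a simple alert rule. The names and thresholds below are illustrative assumptions, not the published implementation:

```python
# Hypothetical sketch of an SSA alert rule in the spirit of the ShouldAR
# abstract: alert only when the victim is attending a sensitive target AND a
# rear-view observer's estimated gaze has dwelt on that target long enough.
# Field names and thresholds are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class RearFace:
    gaze_on_target: bool      # rear camera estimates this person looks at the target
    consecutive_frames: int   # how many frames that gaze has persisted

ALERT_FRAMES = 15  # assumed persistence required before alerting (~0.5 s)

def shoulder_surf_alert(victim_on_sensitive_target: bool,
                        rear_faces: list[RearFace]) -> bool:
    """True when a likely shoulder surfer is watching the victim's target."""
    if not victim_on_sensitive_target:
        return False  # nothing sensitive is exposed, so no alert
    return any(f.gaze_on_target and f.consecutive_frames >= ALERT_FRAMES
               for f in rear_faces)
```

Gating the alert on both parties' gaze is what distinguishes this multimodal design from a plain rear-facing mirror: a passerby who glances past the screen, or an observer present while nothing sensitive is shown, does not trigger a notification.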