Browsing by Author "David-John, Brendan"
Now showing 1 - 5 of 5
- BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems
  Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Hu, Y. Charlie; Ji, Bo (ACM, 2023-06-18)
  Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. While the powerful suite of sensors on modern AR devices is necessary for enabling such an immersive experience, they can create unease in bystanders (i.e., those surrounding the device during its use) due to potential bystander data leaks, a concern known as the bystander privacy problem. In this paper, we propose BystandAR, the first practical system that can effectively protect bystander visual (camera and depth) data in real time with only on-device processing. BystandAR builds on a key insight that the device user's eye gaze and voice are highly effective indicators for subject/bystander detection in interpersonal interaction, and leverages novel AR capabilities such as eye gaze tracking, a wearer-focused microphone, and spatial awareness to achieve a usable frame rate without offloading sensitive information. Through a 16-participant user study, we show that BystandAR correctly identifies and protects 98.14% of bystanders while allowing access to 96.27% of subjects. We accomplish this with an average frame rate of 52.6 frames per second without the need to offload unprotected bystander data to another device.
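  The per-frame logic this abstract describes, treating every detected face as a bystander by default and leaving only gaze-and-voice-confirmed subjects unobscured, can be pictured with a short sketch. The Python below is a minimal illustration of that idea, not the authors' implementation; the face bounding boxes and the wearer's projected gaze point are assumed to come from upstream detection and eye tracking.

  ```python
  # Hypothetical sketch of BystandAR-style on-device filtering: faces are
  # bystanders by default and are obscured before any frame leaves the device.
  import cv2


  def gaze_hits(gaze_xy, bbox):
      """True if the wearer's gaze point (projected into the frame) is on the face."""
      gx, gy = gaze_xy
      x, y, w, h = bbox
      return x <= gx <= x + w and y <= gy <= y + h


  def protect_frame(frame, face_bboxes, gaze_xy, wearer_speaking):
      """Blur every detected face unless gaze plus speech suggest a subject.

      frame: HxWx3 uint8 BGR image; face_bboxes: list of (x, y, w, h) from an
      upstream detector; gaze_xy: wearer's 2D point of regard in frame coords.
      """
      out = frame.copy()
      for bbox in face_bboxes:
          # The paper's key insight: wearer gaze and voice signal engagement.
          # Here both signals are collapsed into a single conservative check.
          if gaze_hits(gaze_xy, bbox) and wearer_speaking:
              continue  # likely a subject of the interaction: leave visible
          x, y, w, h = bbox
          roi = out[y:y + h, x:x + w]
          out[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
      return out
  ```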
- GazeIntent: Adapting Dwell-time Selection in VR Interaction with Real-time Intent Modeling
  Narkar, Anish; Michalak, Jan; Peacock, Candace; David-John, Brendan (ACM, 2024-05-28)
  The use of ML models to predict a user's cognitive state from behavioral data has been studied for various applications, including predicting the intent to perform selections in VR. We developed a novel technique that uses gaze-based intent models to adapt dwell-time thresholds to aid gaze-only selection. A dataset of users performing selections in arithmetic tasks was used to develop intent prediction models (F1 = 0.94). We developed GazeIntent to adapt selection dwell times based on intent model outputs and conducted an end-user study with returning and new users performing additional tasks with varied selection frequencies. Personalized models for returning users effectively accounted for prior experience and were preferred by 63% of users. Our work provides the field with methods to adapt dwell-based selection to users, account for experience over time, and consider tasks that vary by selection frequency.
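  The threshold adaptation itself reduces to mapping the intent model's output probability onto a dwell time. Below is a minimal sketch of one such mapping, assuming a calibrated probability in [0, 1]; the bounds and linear interpolation are illustrative placeholders, not the values used in GazeIntent.

  ```python
  # Hypothetical intent-adapted dwell selection; constants are illustrative.
  MIN_DWELL_MS = 150.0   # floor when the model is confident a selection is intended
  MAX_DWELL_MS = 1000.0  # ceiling when intent is unlikely (guards against Midas touch)


  def adapted_dwell_threshold(intent_prob: float) -> float:
      """Interpolate the dwell threshold from the intent model's probability."""
      p = min(max(intent_prob, 0.0), 1.0)
      # Higher predicted intent -> shorter dwell required, and vice versa.
      return MAX_DWELL_MS - p * (MAX_DWELL_MS - MIN_DWELL_MS)


  def is_selected(dwell_ms: float, intent_prob: float) -> bool:
      """Fire a selection once fixation time exceeds the adapted threshold."""
      return dwell_ms >= adapted_dwell_threshold(intent_prob)
  ```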
- Poster: BystandAR: Protecting Bystander Visual Data in Augmented Reality Systems
  Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Hu, Y. Charlie; Ji, Bo (ACM, 2023-06-18)
  Augmented Reality (AR) devices are set apart from other mobile devices by the immersive experience they offer. While the powerful suite of sensors on modern AR devices is necessary for enabling such an immersive experience, they can create unease in bystanders (i.e., those surrounding the device during its use) due to potential bystander data leaks, a concern known as the bystander privacy problem. In this poster, we propose BystandAR, the first practical system that can effectively protect bystander visual (camera and depth) data in real time with only on-device processing. BystandAR builds on a key insight that the device user's eye gaze and voice are highly effective indicators for subject/bystander detection in interpersonal interaction, and leverages novel AR capabilities such as eye gaze tracking, a wearer-focused microphone, and spatial awareness to achieve a usable frame rate without offloading sensitive information. Through a 16-participant user study, we show that BystandAR correctly identifies and protects 98.14% of bystanders while allowing access to 96.27% of subjects. We accomplish this with an average frame rate of 52.6 frames per second without the need to offload unprotected bystander data to another device.
- ShouldAR: Detecting Shoulder Surfing Attacks Using Multimodal Eye Tracking and Augmented Reality
  Corbett, Matthew; David-John, Brendan; Shang, Jiacheng; Ji, Bo (ACM, 2024-09-09)
  Shoulder surfing attacks (SSAs) are a type of observation attack designed to illicitly gather sensitive data from "over the shoulder" of victims. This attack can be directed at mobile devices, desktop screens, Personal Identification Number (PIN) pads at an Automated Teller Machine (ATM), or written text. Existing solutions are generally focused on authentication techniques (e.g., logins) and are limited to specific attack scenarios (e.g., mobile devices or PIN pads). We present ShouldAR, a mobile and usable system that detects SSAs using multimodal eye gaze information (i.e., from both the potential attacker and victim). ShouldAR uses an augmented reality headset as a platform to incorporate user eye gaze tracking, rear-facing image collection and eye gaze analysis, and user notification of potential attacks. In a 24-participant study, we show that the prototype detects 87.28% of SSAs against both physical and digital targets, a two-fold improvement over the baseline of a rear-facing mirror, a widely used countermeasure to the SSA problem. The ShouldAR approach provides an AR-based, active SSA defense that applies to both digital and physical information entry in sensitive environments.
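  One plausible core check behind such a system is geometric: flag an observer whose estimated gaze ray converges on whatever the wearer is currently attending to. The sketch below assumes observer positions and gaze directions are already estimated from the rear-facing camera imagery; the cone angle is an illustrative placeholder, and none of this is the authors' pipeline.

  ```python
  # Hypothetical ShouldAR-style convergence test between an observer's gaze
  # and the wearer's current focus target. The 15-degree cone is illustrative.
  import numpy as np

  ALERT_ANGLE_DEG = 15.0


  def gaze_converges(observer_pos, observer_gaze_dir, target_pos,
                     threshold_deg=ALERT_ANGLE_DEG):
      """True if the observer's gaze ray points at the wearer's focus target."""
      to_target = np.asarray(target_pos, float) - np.asarray(observer_pos, float)
      to_target /= np.linalg.norm(to_target)
      gaze = np.asarray(observer_gaze_dir, float)
      gaze /= np.linalg.norm(gaze)
      cos_angle = np.clip(np.dot(gaze, to_target), -1.0, 1.0)
      return np.degrees(np.arccos(cos_angle)) <= threshold_deg


  def detect_ssa(observers, wearer_target):
      """observers: iterable of (position, gaze_direction) pairs in world coords.

      Returns the indices of observers whose gaze converges on the target.
      """
      return [i for i, (pos, gaze) in enumerate(observers)
              if gaze_converges(pos, gaze, wearer_target)]
  ```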
- Swap It Like Its Hot: Segmentation-based spoof attacks on eye-tracking images
  Narkar, Anish S.; David-John, Brendan (ACM, 2024-06-04)
  Video-based eye trackers capture the iris biometric and enable authentication to secure user identity. However, biometric authentication is susceptible to spoofing of another user's identity through physical or digital manipulation. The current standard for identifying physical spoofing attacks on eye-tracking sensors is liveness detection, which classifies gaze data as real or fake and is sufficient to detect physical presentation attacks. However, such defenses cannot detect a spoofing attack when real eye image inputs are digitally manipulated to swap in the iris pattern of another person. We propose IrisSwap as a novel attack on gaze-based liveness detection. IrisSwap allows attackers to segment and digitally swap in a victim's iris pattern to fool iris authentication. Both offline and online attacks produce gaze data that deceives current state-of-the-art defense models at rates of up to 58%, motivating the need for more advanced authentication methods for eye trackers.
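  The attack's core operation, segmenting an iris and compositing a victim's pattern into an otherwise genuine eye image, can be sketched with off-the-shelf tooling. The toy version below locates the iris with classical Hough circle detection and blends the patch with seamless cloning; IrisSwap itself relies on learned segmentation, so this only mirrors the shape of the attack.

  ```python
  # Toy segmentation-and-swap using classical CV primitives. IrisSwap uses
  # learned segmentation; this sketch only illustrates the attack's structure.
  import cv2
  import numpy as np


  def find_iris(gray):
      """Return (x, y, r) of the most confident circular iris candidate."""
      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                 param1=100, param2=30, minRadius=15, maxRadius=80)
      if circles is None:
          raise ValueError("no iris-like circle found")
      x, y, r = circles[0][0]
      return int(x), int(y), int(r)


  def swap_iris(attacker_eye_bgr, victim_eye_bgr):
      """Composite the victim's iris onto the attacker's eye image."""
      ax, ay, ar = find_iris(cv2.cvtColor(attacker_eye_bgr, cv2.COLOR_BGR2GRAY))
      vx, vy, vr = find_iris(cv2.cvtColor(victim_eye_bgr, cv2.COLOR_BGR2GRAY))

      # Crop the victim's iris and rescale it to the attacker's iris radius.
      patch = victim_eye_bgr[vy - vr:vy + vr, vx - vr:vx + vr]
      patch = cv2.resize(patch, (2 * ar, 2 * ar))

      mask = np.zeros(patch.shape[:2], np.uint8)
      cv2.circle(mask, (ar, ar), ar, 255, thickness=-1)
      # Poisson blending hides the seam so the composite reads as one eye.
      return cv2.seamlessClone(patch, attacker_eye_bgr, mask, (ax, ay),
                               cv2.NORMAL_CLONE)
  ```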