Biometric Leakage from Generative Models and Adversarial Iris Swapping for Spoofing Eye-based Authentication
dc.contributor.author | Michalak, Jan Jakub | en |
dc.contributor.committeechair | David-John, Brendan Matthew | en |
dc.contributor.committeemember | Ji, Bo | en |
dc.contributor.committeemember | Viswanath, Bimal | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-06-11T08:01:14Z | en |
dc.date.available | 2025-06-11T08:01:14Z | en |
dc.date.issued | 2025-06-10 | en |
dc.description.abstract | This thesis investigates the vulnerability of generative models trained on biometric data and explores digital spoofing attacks on iris-based authentication systems representative of AR/VR environments. We first examine how diffusion models trained on biometric data can memorize and leak iris images. Next, we evaluate the effectiveness of Cross-Attention GANs for iris-swapping attacks, demonstrating their ability to enable presentation attacks that spoof iris-recognition systems. Our experiments across several standard iris and VR datasets achieve an attack success rate of 100% within similar domains and generalize across domains with success rates as high as 70%. Our findings highlight the need to account for vulnerabilities in biometric systems and to strengthen defenses against digital presentation attacks produced by generative models. | en |
dc.description.abstractgeneral | Most people are familiar with Face ID, which uses facial features to unlock phones or laptops. In virtual and augmented reality (AR/VR) headsets, where faces are not always visible, devices instead rely on the iris to recognize users. In this thesis, we show that AI models trained on iris images can sometimes memorize and leak information about the people they were trained on. We also design a new type of attack in which a fake eye is created by swapping one person's iris onto another's eye. These generated irises can trick iris-recognition systems into believing the attacker is the target. Our results show near-perfect success when attacking within the same dataset and strong success when crossing between different VR datasets. These findings suggest that while iris recognition holds promise for secure login on AR/VR devices, it also exposes a new risk that AI-powered attacks could exploit. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:44229 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135460 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Iris Authentication | en |
dc.subject | Digital Presentation Attacks | en |
dc.subject | Generative Adversarial Networks (GANs) | en |
dc.title | Biometric Leakage from Generative Models and Adversarial Iris Swapping for Spoofing Eye-based Authentication | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |