Biometric Leakage from Generative Models and Adversarial Iris Swapping for Spoofing Eye-based Authentication

dc.contributor.author: Michalak, Jan Jakub
dc.contributor.committeechair: David-John, Brendan Matthew
dc.contributor.committeemember: Ji, Bo
dc.contributor.committeemember: Viswanath, Bimal
dc.contributor.department: Computer Science & Applications
dc.date.accessioned: 2025-06-11T08:01:14Z
dc.date.available: 2025-06-11T08:01:14Z
dc.date.issued: 2025-06-10
dc.description.abstract: This thesis investigates the vulnerability of generative models trained on biometric data and explores digital spoofing attacks on iris-based authentication systems representative of AR/VR environments. We first explore how diffusion models trained on biometric data can memorize and leak iris images. Next, we evaluate the effectiveness of Cross-Attention GANs for iris-swapping attacks, demonstrating their ability to enable presentation attacks that spoof iris-recognition systems. Our experiments across several standard iris and VR datasets achieve an attack success rate of 100% within similar domains and generalize across domains with success rates as high as 70%. Our findings highlight the need to consider vulnerabilities in biometric systems and to strengthen defenses against digital presentation attacks produced by generative models.
dc.description.abstractgeneral: Most people are familiar with Face ID, which uses facial features to unlock phones or laptops. But in virtual and augmented reality (AR/VR) headsets, where faces are not always visible, devices rely on the iris to recognize users. In this thesis, we show that AI models trained on iris images can sometimes memorize and leak information about the people they were trained on. We also designed a new type of attack in which a fake eye is created by swapping one person's iris onto another's eye. These generated irises can trick iris-recognition systems into thinking the attacker is the target. Our results show near-perfect success when attacking within the same dataset, and strong success when crossing between different VR datasets. These findings suggest that while iris recognition holds promise for secure login on AR/VR devices, it also introduces a new risk that AI-powered attacks could exploit.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:44229
dc.identifier.uri: https://hdl.handle.net/10919/135460
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Iris Authentication
dc.subject: Digital Presentation Attacks
dc.subject: Generative Adversarial Networks (GANs)
dc.title: Biometric Leakage from Generative Models and Adversarial Iris Swapping for Spoofing Eye-based Authentication
dc.type: Thesis
thesis.degree.discipline: Computer Science & Applications
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science

Files

Original bundle
Name: Michalak_JJ_T_2025.pdf
Size: 5.79 MB
Format: Adobe Portable Document Format