
Adversarial RFML: Evading Deep Learning Enabled Signal Classification

dc.contributor.author: Flowers, Bryse Austin
dc.contributor.committeechair: Buehrer, R. Michael
dc.contributor.committeechair: Headley, William C.
dc.contributor.committeemember: Gerdes, Ryan M.
dc.contributor.committeemember: Yu, Guoqiang
dc.contributor.department: Electrical and Computer Engineering
dc.date.accessioned: 2019-07-25T08:00:42Z
dc.date.available: 2019-07-25T08:00:42Z
dc.date.issued: 2019-07-24
dc.description.abstract: Deep learning has become a ubiquitous part of research in all fields, including wireless communications. Researchers have shown the ability to leverage deep neural networks (DNNs) that operate on raw in-phase and quadrature samples, termed Radio Frequency Machine Learning (RFML), to synthesize new waveforms, control radio resources, and detect and classify signals. While there are numerous advantages to RFML, this thesis answers the question "is it secure?" DNNs have been shown, in other applications such as Computer Vision (CV), to be vulnerable to what are known as adversarial evasion attacks, which consist of corrupting an underlying example with a small, intelligently crafted perturbation that causes a DNN to misclassify the example. This thesis develops the first threat model that encompasses the unique adversarial goals and capabilities that are present in RFML. Attacks that occur with direct digital access to the RFML classifier are differentiated from physical attacks that must propagate over-the-air (OTA) and are thus subject to impairments due to the wireless channel or inaccuracies in the signal detection stage. This thesis first finds that RFML systems are vulnerable to current adversarial evasion attacks using the well-known Fast Gradient Sign Method originally developed for CV applications. However, these current adversarial evasion attacks do not account for the underlying communications, and therefore the adversarial advantage is limited because the signal quickly becomes unintelligible. In order to envision new threats, this thesis goes on to develop a new adversarial evasion attack that takes into account the underlying communications and wireless channel models in order to create adversarial evasion attacks with more intelligible underlying communications that generalize to OTA attacks.
dc.description.abstractgeneral: Deep learning is beginning to permeate many commercial products and is being included in prototypes for next-generation wireless communications devices. This technology can provide huge breakthroughs in autonomy; however, it is not sufficient to study the effectiveness of deep learning in an idealized laboratory environment, because the real world is often harsh and/or adversarial. Therefore, it is important to know how, and when, these deep learning enabled devices will fail in the presence of bad actors before they are deployed in high-risk environments, such as battlefields or connected autonomous vehicle communications. This thesis studies a small subset of the security vulnerabilities of deep learning enabled wireless communications devices by attempting to evade deep learning enabled signal classification by an eavesdropper while maintaining effective wireless communications with a cooperative receiver. The primary goal of this thesis is to define the threats to, and identify the current vulnerabilities of, deep learning enabled signal classification systems, because a system can only be secured once its vulnerabilities are known.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:21663
dc.identifier.uri: http://hdl.handle.net/10919/91987
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Adversarial Signal Processing
dc.subject: Cognitive Radio Security
dc.subject: Machine learning
dc.subject: Modulation Identification
dc.subject: Radio Frequency Machine learning
dc.title: Adversarial RFML: Evading Deep Learning Enabled Signal Classification
dc.type: Thesis
thesis.degree.discipline: Computer Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
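The abstract above refers to the Fast Gradient Sign Method (FGSM) as the baseline adversarial evasion attack. As a rough illustration of that technique only, the sketch below applies FGSM to a toy logistic-regression "classifier" on a random feature vector standing in for raw I/Q samples. The model, names, and parameters here are illustrative assumptions for exposition; they are not the thesis's implementation, which attacks DNN modulation classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier (assumption, not the thesis's DNN):
# p(class = 1 | x) = sigmoid(w @ x + b)
w = rng.standard_normal(64)
b = 0.1
x = rng.standard_normal(64)   # stand-in for a clean I/Q feature vector
y = 1                         # true label

# Gradient of the binary cross-entropy loss with respect to the input x:
# dL/dx = (sigmoid(w @ x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: step each input sample by epsilon in the sign of the loss gradient.
epsilon = 0.05
x_adv = x + epsilon * np.sign(grad_x)

# The perturbation is bounded in the L-infinity norm by epsilon, and the
# classifier's confidence in the true class drops.
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
```

The key property FGSM illustrates is that the per-sample perturbation is tiny (bounded by epsilon) yet moves every sample in the direction that maximally increases the loss, which is what makes the attack both subtle and effective; the thesis's contribution is constraining such perturbations so the underlying communication remains intelligible over the air.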

Files

Original bundle
Name: Flowers_BA_T_2019.pdf
Size: 2.22 MB
Format: Adobe Portable Document Format
