Show simple item record

dc.contributor.author: Flowers, Bryse Austin (en_US)
dc.date.accessioned: 2019-07-25T08:00:42Z
dc.date.available: 2019-07-25T08:00:42Z
dc.date.issued: 2019-07-24
dc.identifier.other: vt_gsexam:21663 (en_US)
dc.identifier.uri: http://hdl.handle.net/10919/91987
dc.description.abstract: Deep learning has become a ubiquitous part of research in all fields, including wireless communications. Researchers have shown the ability to leverage deep neural networks (DNNs) that operate on raw in-phase and quadrature samples, termed Radio Frequency Machine Learning (RFML), to synthesize new waveforms, control radio resources, and detect and classify signals. While there are numerous advantages to RFML, this thesis answers the question "is it secure?" DNNs have been shown, in other applications such as Computer Vision (CV), to be vulnerable to adversarial evasion attacks, which corrupt an underlying example with a small, intelligently crafted perturbation that causes a DNN to misclassify the example. This thesis develops the first threat model that encompasses the unique adversarial goals and capabilities present in RFML. Attacks that occur with direct digital access to the RFML classifier are differentiated from physical attacks that must propagate over-the-air (OTA) and are thus subject to impairments from the wireless channel or inaccuracies in the signal detection stage. This thesis first finds that RFML systems are vulnerable to current adversarial evasion attacks using the well-known Fast Gradient Sign Method (FGSM) originally developed for CV applications. However, these attacks do not account for the underlying communications, so the adversarial advantage is limited because the signal quickly becomes unintelligible. To envision new threats, this thesis then develops a new adversarial evasion attack that accounts for the underlying communications and wireless channel models, creating attacks with more intelligible underlying communications that generalize to OTA attacks. (en_US)
dc.format.medium: ETD (en_US)
dc.publisher: Virginia Tech (en_US)
dc.rights: This item is protected by copyright and/or related rights. Some uses of this item may be deemed fair and permitted by law even without permission from the rights holder(s), or the rights holder(s) may have licensed the work for use under certain conditions. For other uses you need to obtain permission from the rights holder(s). (en_US)
dc.subject: Adversarial Signal Processing (en_US)
dc.subject: Cognitive Radio Security (en_US)
dc.subject: Machine Learning (en_US)
dc.subject: Modulation Identification (en_US)
dc.subject: Radio Frequency Machine Learning (en_US)
dc.title: Adversarial RFML: Evading Deep Learning Enabled Signal Classification (en_US)
dc.type: Thesis (en_US)
dc.contributor.department: Electrical and Computer Engineering (en_US)
dc.description.degree: Master of Science (en_US)
thesis.degree.name: Master of Science (en_US)
thesis.degree.level: masters (en_US)
thesis.degree.grantor: Virginia Polytechnic Institute and State University (en_US)
thesis.degree.discipline: Computer Engineering (en_US)
dc.contributor.committeechair: Buehrer, Richard M. (en_US)
dc.contributor.committeechair: Headley, William C. (en_US)
dc.contributor.committeemember: Gerdes, Ryan M. (en_US)
dc.contributor.committeemember: Yu, Guoqiang (en_US)
dc.description.abstractgeneral: Deep learning is beginning to permeate many commercial products and is being included in prototypes for next-generation wireless communications devices. This technology can provide huge breakthroughs in autonomy; however, it is not sufficient to study the effectiveness of deep learning in an idealized laboratory environment, because the real world is often harsh and/or adversarial. Therefore, it is important to know how, and when, these deep learning enabled devices will fail in the presence of bad actors before they are deployed in high-risk environments, such as battlefields or connected autonomous vehicle communications. This thesis studies a small subset of the security vulnerabilities of deep learning enabled wireless communications devices by attempting to evade deep learning enabled signal classification by an eavesdropper while maintaining effective wireless communications with a cooperative receiver. The primary goal of this thesis is to define the threats to, and identify the current vulnerabilities of, deep learning enabled signal classification systems, because a system can only be secured once its vulnerabilities are known. (en)
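The abstract names the Fast Gradient Sign Method (FGSM) as the baseline adversarial evasion attack evaluated against RFML classifiers. As a minimal sketch of that technique only (not the thesis's implementation), the following uses a toy logistic-regression "classifier" with a hand-derived gradient so it stays self-contained; the function name `fgsm_perturb` and the model are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Fast Gradient Sign Method against a toy logistic-regression classifier.

    x: input feature vector (e.g. flattened I/Q samples), shape (d,)
    w, b: model weights and bias; y_true: true label in {0, 1}
    epsilon: L-infinity perturbation budget
    """
    # Forward pass: predicted probability p = sigmoid(w . x + b)
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # Gradient of the binary cross-entropy loss with respect to the input x:
    # dL/dx = (p - y) * w for this linear model
    grad_x = (p - y_true) * w
    # FGSM step: shift every element by +/- epsilon in the loss-increasing direction
    return x + epsilon * np.sign(grad_x)

# Toy demo: perturb a 4-element input against a fixed random linear model
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w = rng.standard_normal(4)
x_adv = fgsm_perturb(x, w, b=0.0, y_true=1, epsilon=0.1)
# The perturbation never exceeds the epsilon budget in any element
assert np.all(np.abs(x_adv - x) <= 0.1 + 1e-12)
```

As the abstract notes, applying this CV-style attack directly to raw I/Q samples ignores the underlying communications: the sign-clipped perturbation degrades the transmitted signal's intelligibility, which is the limitation the thesis's channel-aware attack addresses.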

