Secure and reliable deep learning in signal processing

dc.contributor.author: Liu, Jinshan
dc.contributor.committeechair: Park, Jung-Min
dc.contributor.committeemember: Dietrich, Carl B.
dc.contributor.committeemember: Hou, Yiwei Thomas
dc.contributor.committeemember: Zeng, Haibo
dc.contributor.committeemember: Huang, Bert
dc.contributor.department: Electrical Engineering
dc.date.accessioned: 2021-06-10T08:00:45Z
dc.date.available: 2021-06-10T08:00:45Z
dc.date.issued: 2021-06-09
dc.description.abstract: In conventional signal processing approaches, researchers must manually extract features from raw data that describe the underlying problem, a process that requires strong domain knowledge of the problem at hand. In contrast, deep learning-based signal processing algorithms can discover features and patterns that would not be apparent to humans, provided they are fed a sufficient amount of training data. Over the past decade, deep learning has proved efficient and effective at delivering high-quality results, demonstrating great advantages in image processing and text mining. One of the most promising applications of deep learning-based signal processing is autonomous driving: many companies are developing and testing autonomous vehicles today, and highly autonomous vehicles are expected to be commercialized in the near future. Deep learning has also shown great potential in wireless communications, where researchers have addressed some of the most challenging problems, such as transmitter classification and modulation recognition. Despite these advantages, a wide range of security and reliability issues arise when deep learning models are applied to real-world applications. First, deep learning models cannot generate reliable results for testing data if the training data size is insufficient. Since generating training data is time-consuming and resource-intensive, it is important to understand the relationship between model reliability and the size of the training data. Second, deep learning models can generate highly unreliable results if the testing data differ significantly from the training data, which we refer to as "out-of-distribution (OOD)" data. Failing to detect OOD testing data may expose serious security risks. Third, deep learning algorithms can be easily fooled when the input data are falsified; such vulnerabilities may cause severe risks in safety-critical applications such as autonomous driving. In this dissertation, we address the security and reliability of deep learning models in the following three aspects. (1) We systematically study how model performance changes as more training data are provided in wireless communications applications. (2) We discuss how OOD data can degrade the performance of deep learning-based classification models in wireless communications applications, and we propose FOOD (Feature representation for OOD detection), a unified model that detects OOD testing data effectively while simultaneously classifying regular testing data. (3) We examine the security issues of applying deep learning algorithms to autonomous driving, discuss the impact of Perception Error Attacks (PEAs) on LIDAR and camera sensors, and propose a countermeasure called LIFE (LIDAR and Image data Fusion for detecting perception Errors).
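This record describes FOOD only at a high level, and its architecture is not given here. Purely as an illustration of the underlying idea (classify a regular test sample, but reject a sample that lies far from everything seen in training), the following Python sketch uses a hypothetical nearest-centroid rule with an assumed distance threshold; the function names, threshold, and toy data are this example's assumptions, not the dissertation's method:

```python
import numpy as np

def fit_centroids(features, labels):
    """One centroid per class in the learned feature space."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify_or_reject(x, centroids, threshold):
    """Predict a class for x, or return None when x looks out-of-distribution.

    A sample far from every class centroid is flagged as OOD instead of
    being forced into one of the known classes.
    """
    dists = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
    best = min(dists, key=dists.get)
    return None if dists[best] > threshold else best

# Toy demo: two in-distribution classes plus one far-away OOD sample.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(-2.0, 0.5, (50, 4)), rng.normal(+2.0, 0.5, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
cents = fit_centroids(feats, labels)
print(classify_or_reject(np.full(4, -2.0), cents, threshold=3.0))  # -> 0
print(classify_or_reject(np.full(4, 40.0), cents, threshold=3.0))  # -> None (flagged OOD)
```

The point of the sketch is the unified behavior the abstract describes: one model call either returns a label or abstains, rather than requiring a separate OOD detector bolted onto the classifier.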
dc.description.abstractgeneral: Deep learning has given computers and mobile devices extraordinary power to solve challenging signal processing problems. For example, current deep learning technologies can significantly improve the quality of machine translation, recognize speech as accurately as human beings, and even outperform humans at face recognition. Although deep learning has demonstrated great advantages in signal processing, it can be insecure and unreliable if the model is not trained properly or is tested under adversarial conditions. In this dissertation, we study the following three security and reliability issues in deep learning-based signal processing methods. First, we provide insights into how deep learning model reliability changes as the size of the training data increases. Since generating training data requires a tremendous amount of labor and financial resources, our work can help researchers and product developers balance the tradeoff between model performance and training data size. Second, we propose a novel model to detect abnormal testing data that differ significantly from the training data; deep learning offers no performance guarantee on such data, and failing to detect it may cause severe security risks. Finally, we design a system to detect sensor attacks targeting autonomous vehicles. Deep learning can be easily fooled when the input sensor data are falsified, so security and safety can be enhanced significantly if autonomous driving systems can identify falsified sensor data before making driving decisions.
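The dissertation's reliability-versus-training-size experiments use wireless communications data that are not part of this record. As a self-contained toy illustration of the question being studied, the sketch below substitutes synthetic Gaussian "signal classes" and a simple nearest-centroid classifier (both stand-ins chosen for this example, not the dissertation's models or data) and measures held-out accuracy as the training set grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_per_class):
    """Two synthetic signal classes as Gaussian blobs (stand-ins for real RF features)."""
    X = np.vstack([rng.normal(-1.0, 1.0, (n_per_class, 8)),
                   rng.normal(+1.0, 1.0, (n_per_class, 8))])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Train a nearest-centroid classifier and report its test accuracy."""
    mu0 = X_tr[y_tr == 0].mean(axis=0)
    mu1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - mu1, axis=1)
            < np.linalg.norm(X_te - mu0, axis=1)).astype(int)
    return (pred == y_te).mean()

X_te, y_te = make_data(2000)          # fixed held-out test set
for n in (5, 20, 100, 500, 2000):     # growing training budget
    X_tr, y_tr = make_data(n)
    print(f"train size per class: {n:5d}  "
          f"test accuracy: {nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):.3f}")
```

Accuracy typically climbs quickly and then saturates; that diminishing-returns curve is the kind of tradeoff the first contribution quantifies, since each additional training sample costs labor and resources to produce.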
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:31121
dc.identifier.uri: http://hdl.handle.net/10919/103740
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Deep learning
dc.subject: signal processing
dc.subject: security
dc.subject: reliability
dc.title: Secure and reliable deep learning in signal processing
dc.type: Dissertation
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy

Files

Original bundle
Name: Liu_J_D_2021.pdf
Size: 6.69 MB
Format: Adobe Portable Document Format