Secure and reliable deep learning in signal processing

Date
2021-06-09
Publisher
Virginia Tech
Abstract

In conventional signal processing approaches, researchers must manually extract features from raw data that describe the underlying problem well. Such a process requires strong domain knowledge of the given problems. In contrast, deep learning-based signal processing algorithms can discover features and patterns that would not be apparent to humans, given a sufficient amount of training data. Over the past decade, deep learning has proven efficient and effective at delivering high-quality results.
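To make the contrast concrete, here is a minimal sketch (in Python, using NumPy and PyTorch) comparing hypothetical hand-crafted spectral features against a small 1-D convolutional network that learns features directly from raw samples. It is a toy illustration under assumed feature choices and architecture, not a model from this dissertation.

    import numpy as np
    import torch
    import torch.nn as nn

    # Conventional approach: hand-crafted features chosen with domain
    # knowledge (the specific features here are hypothetical examples).
    def manual_features(signal: np.ndarray) -> np.ndarray:
        spectrum = np.abs(np.fft.rfft(signal))
        return np.array([
            signal.mean(),                      # DC level
            signal.std(),                       # amplitude spread
            spectrum.argmax() / len(spectrum),  # dominant normalized frequency
            (spectrum ** 2).sum(),              # total spectral energy
        ])

    # Deep learning approach: a 1-D CNN that learns its own features
    # from the raw signal, with no manual feature engineering.
    class RawSignalCNN(nn.Module):
        def __init__(self, n_classes: int = 4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).squeeze(-1))

    signal = np.random.randn(1024).astype(np.float32)
    print("hand-crafted features:", manual_features(signal))
    logits = RawSignalCNN()(torch.from_numpy(signal).view(1, 1, -1))
    print("learned-model logits:", logits.detach().numpy())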

Deep learning has demonstrated great advantages in image processing and text mining. One of the most promising applications of deep learning-based signal processing techniques is autonomous driving. Today, many companies are developing and testing autonomous vehicles, and highly autonomous vehicles are expected to be commercialized in the near future. In addition, deep learning has shown great potential in wireless communications, where researchers have used it to address some of the most challenging problems, such as transmitter classification and modulation recognition.

Despite these advantages, a wide range of security and reliability issues arise when applying deep learning models to real-world applications. First, deep learning models may not generate reliable results for testing data if the training data are insufficient. Since generating training data is time consuming and resource intensive, it is important to understand the relationship between model reliability and the size of the training data. Second, deep learning models can generate highly unreliable results if the testing data differ significantly from the training data; we refer to such data as "out-of-distribution" (OOD) data. Failing to detect OOD testing data may expose serious security risks. Third, deep learning algorithms can be easily fooled when the input data are falsified. Such vulnerabilities may pose severe risks in safety-critical applications such as autonomous driving.
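To illustrate the OOD issue concretely, the sketch below implements the maximum softmax probability (MSP) heuristic, a widely used OOD-detection baseline: inputs on which the classifier is least confident are flagged as possibly out-of-distribution. This is a generic baseline, not the FOOD model proposed in this dissertation, and the threshold value is a hypothetical placeholder.

    import torch
    import torch.nn.functional as F

    def msp_ood_score(logits: torch.Tensor) -> torch.Tensor:
        # Maximum softmax probability: low confidence suggests the
        # input may be out-of-distribution.
        return F.softmax(logits, dim=-1).max(dim=-1).values

    logits = torch.tensor([[4.0, 0.1, 0.2],   # confident -> likely in-distribution
                           [0.9, 1.0, 1.1]])  # diffuse   -> possibly OOD
    scores = msp_ood_score(logits)
    threshold = 0.5  # hypothetical; tuned on held-out in-distribution data in practice
    print(scores, scores < threshold)  # True flags a suspected OOD input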

In this dissertation, we focus on the security and reliability issues of deep learning models in the following three aspects. (1) We systematically study how model performance changes as more training data are provided in wireless communications applications. (2) We discuss how OOD data can impact the performance of deep learning-based classification models in wireless communications applications, and we propose FOOD (Feature representation for OOD detection), a unified model that effectively detects OOD testing data while simultaneously performing classification on regular testing data. (3) We focus on the security issues of applying deep learning algorithms to autonomous driving. We discuss the impact of Perception Error Attacks (PEAs) on LIDAR and camera data and propose a countermeasure called LIFE (LIDAR and Image data Fusion for detecting perception Errors).
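As a hedged illustration of the study in (1), measuring how performance changes with training set size amounts to tracing a learning curve. The sketch below does this with a toy scikit-learn classifier on synthetic data; the dissertation's experiments use deep models on wireless communications data, so the dataset, model, and sizes here are hypothetical stand-ins.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real signal classification dataset.
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Train on progressively larger subsets and record test accuracy.
    for n in [50, 100, 500, 1000, len(X_train)]:
        clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        print(f"train size {n:5d} -> test accuracy {clf.score(X_test, y_test):.3f}")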

Keywords
Deep learning, signal processing, security, reliability