Fortifying Your Defenses: Techniques to Thwart Adversarial Attacks and Boost Performance of Machine Learning-Based Intrusion Detection Systems
Abstract
Machine learning has seen significant advances in recent years and has proven highly effective in a wide range of applications, including intrusion detection systems (IDS). However, when deployed in adversarial environments, machine learning-based systems are known to be vulnerable to a range of attacks. In this talk, we will discuss techniques for strengthening machine learning-based IDS. On the one hand, we explore techniques for enhancing the performance and robustness of IDS in adversarial environments, where we propose a contrastive learning-based approach that builds a highly discriminative IDS. On the other hand, we develop efficient security mechanisms to thwart common attacks, including an adversarial example (AE) detector that filters out suspicious inputs at test time, and a robust model evaluation method that leverages latent space representations to make model aggregation resilient against model poisoning attacks in federated learning. This talk reports our results along this line of research.
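To make the poisoning-resilient aggregation idea concrete, the sketch below shows one common way such a defense can work: score each client's model update against a robust central estimate and average only the most consistent updates. This is an illustrative sketch, not the authors' method (which relies on latent space representations); the function name `robust_aggregate`, the `keep_fraction` parameter, and the cosine-similarity scoring are all assumptions for illustration.

```python
import numpy as np

def robust_aggregate(client_updates, keep_fraction=0.8):
    """Aggregate client model updates in federated learning while
    down-weighting outliers that may stem from model poisoning.

    Illustrative sketch: each update is scored by its cosine similarity
    to the coordinate-wise median update, and only the top
    `keep_fraction` of clients are averaged into the global update.
    """
    updates = np.stack(client_updates)        # shape: (n_clients, n_params)
    reference = np.median(updates, axis=0)    # robust central estimate

    # Cosine similarity of each client's update to the reference.
    norms = np.linalg.norm(updates, axis=1) * np.linalg.norm(reference)
    scores = updates @ reference / np.maximum(norms, 1e-12)

    # Keep the most consistent updates and average them.
    n_keep = max(1, int(len(client_updates) * keep_fraction))
    kept = np.argsort(scores)[-n_keep:]
    return updates[kept].mean(axis=0)
```

With nine benign clients submitting similar updates and one client submitting a strongly deviating (poisoned) update, the poisoned update scores lowest and is excluded, so the aggregate stays close to the benign consensus.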