Game-Theoretic and Machine Learning-based Defensive Deception for Dependable and Secure Cyber-Physical Systems
Abstract
Cyber-physical systems (CPSs) and Human-Machine Teaming Systems (HMTSs) face growing risks from sophisticated cyber threats, particularly Advanced Persistent Threats (APTs), which conventional security measures struggle to counter effectively. These threats can subvert controls or launch multi-stage attacks, compromising critical infrastructure. This dissertation develops defensive deception (DD) techniques that manipulate attackers' beliefs to mislead their decision-making, inducing suboptimal actions that lead to attack failure. By integrating game theory and machine learning, this research creates strategic, autonomous defense frameworks tailored to CPS and HMTS environments, aiming to design dependable and secure systems capable of intelligent interactions, autonomous learning, and seamless human-machine collaboration. This research addresses three key tasks.
For the Strategic Defensive Deception (SDD) task, we developed Foureye using hypergame theory to model attack-defense interactions in IoT environments under uncertainty. We extended this framework to handle multiple APT attackers across all cyber kill chain stages with bundle-based defenses. Analysis demonstrated that DD is most effective under imperfect information, with machine learning significantly enhancing defense strategy selection through more accurate opponent prediction.
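The core hypergame idea here — each player best-responds within their own, possibly mistaken, view of the game — can be sketched in a few lines. The payoff matrices, action labels, and uniform opponent belief below are illustrative assumptions, not values from the dissertation:

```python
import numpy as np

# Hypothetical 2x2 subjective games: in a hypergame, attacker and defender
# each optimize over their OWN perceived payoff matrix, which deception
# can distort. All numbers below are illustrative only.

def best_response(payoff, opp_strategy):
    """Index of the row action maximizing expected payoff against opp_strategy."""
    return int(np.argmax(payoff @ opp_strategy))

# Defender's subjective game: rows = defender actions (deploy decoy, patch);
# columns = attacker actions (scan, exploit).
defender_view = np.array([[3.0, 1.0],
                          [1.0, 2.0]])

# Attacker's subjective game is misperceived: deception hides the decoy,
# so the attacker believes exploiting is always profitable.
attacker_view = np.array([[2.0, 0.0],
                          [4.0, 1.0]])

# Crude belief: each side assumes the opponent mixes uniformly.
uniform = np.array([0.5, 0.5])
d_action = best_response(defender_view, uniform)   # defender deploys the decoy
a_action = best_response(attacker_view, uniform)   # attacker walks into it
print(d_action, a_action)
```

Because the two matrices disagree, the attacker's "optimal" exploit is suboptimal in the true game — the misperception that defensive deception is designed to induce.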
In the Autonomous Defensive Deception (ADD) task, we designed a UAV surveillance system with "Honey Drones" to defend against DoS attacks through dynamic signal strength adjustment. Our hypergame theory-guided deep reinforcement learning (HT-DRL) approach enabled autonomous decision-making with faster convergence. Experiments showed significant improvements in mission completion (32%), energy efficiency (20%), and attack mitigation (62%) compared to conventional approaches.
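The honey-drone mechanism — learning a signal-strength level that lures the DoS attacker away from real drones while limiting energy cost — can be illustrated with a toy stateless Q-learning loop. The reward shape, power levels, and learning constants are assumptions for illustration; the dissertation's HT-DRL approach uses hypergame-guided deep RL, not this tabular form:

```python
import random

# Toy bandit-style Q-learning sketch of the honey-drone trade-off:
# higher honey-drone signal strength lures the DoS attacker more
# effectively but costs more energy. Dynamics are illustrative only.

random.seed(0)
LEVELS = 3                       # discrete signal-strength levels
q = [0.0] * LEVELS               # one Q-value per level (stateless)
alpha, epsilon = 0.1, 0.2        # learning rate, exploration rate

def reward(level):
    lure = [0.2, 0.6, 0.9][level]      # attacker-luring benefit
    energy_cost = 0.1 * level          # transmit-power penalty
    return lure - energy_cost + random.gauss(0, 0.05)

for _ in range(2000):
    if random.random() < epsilon:
        a = random.randrange(LEVELS)                       # explore
    else:
        a = max(range(LEVELS), key=q.__getitem__)          # exploit
    q[a] += alpha * (reward(a) - q[a])

best = max(range(LEVELS), key=q.__getitem__)
print("preferred signal level:", best)
```

With these made-up rewards the loop settles on the highest power level, whose luring benefit outweighs its energy penalty; the learned policy is the "dynamic signal strength adjustment" in miniature.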
For the Human-Machine Teaming Defensive Deception (HMT-DD) task, we developed DASH (Deception-Augmented Shared mental model for Human-machine teaming) to enhance both performance and security in UGV-human collaborative environments. DASH integrates strategic information sharing with component-specific deception techniques such as "bait tasks" to detect compromised team members. Evaluations showed that DASH maintained a 60% mission success rate under extreme attack frequencies while dramatically reducing compromise rates.
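A "bait task" check can be sketched simply: the defender issues a task whose correct answer it already knows, and any teammate whose report deviates from the planted answer is flagged. The task IDs, report format, and flagging logic below are hypothetical illustrations, not DASH's actual protocol:

```python
# Hypothetical bait-task sketch: a compromised team member reveals itself
# by answering a planted task incorrectly. Names are illustrative only.

KNOWN_BAITS = {"bait-7": "sector-C-clear"}   # bait task id -> expected report

def evaluate_report(task_id, report, flagged):
    """Flag the reporting agent if a bait task comes back with the wrong answer."""
    expected = KNOWN_BAITS.get(task_id)
    if expected is not None and report["answer"] != expected:
        flagged.add(report["agent"])
    return flagged

flagged = set()
# An honest UGV echoes the planted answer; a compromised one deviates.
evaluate_report("bait-7", {"agent": "ugv-1", "answer": "sector-C-clear"}, flagged)
evaluate_report("bait-7", {"agent": "ugv-2", "answer": "hostiles-present"}, flagged)
print(flagged)
```

Because only the defender knows which tasks are baits, honest agents pay no extra cost, while a compromised agent acting on the deceptive task content exposes itself.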
This dissertation advances cybersecurity by delivering comprehensive deception-based frameworks for CPSs and HMTSs facing advanced threats. Through rigorous evaluations measuring system resilience, attack mitigation, and mission performance, we demonstrate the effectiveness of combining game theory and machine learning to create adaptive, intelligent security mechanisms for IoT networks, UAV missions, and human-machine collaborations.