Game-Theoretic and Machine Learning-based Defensive Deception for Dependable and Secure Cyber-Physical Systems
dc.contributor.author | Wan, Zelin | en |
dc.contributor.committeechair | Cho, Jin-Hee | en |
dc.contributor.committeemember | Kamhoua, Charles | en |
dc.contributor.committeemember | Lu, Chang Tien | en |
dc.contributor.committeemember | Ji, Bo | en |
dc.contributor.committeemember | Moore, Terrence J. | en |
dc.contributor.department | Computer Science & Applications | en |
dc.date.accessioned | 2025-06-19T08:00:36Z | en |
dc.date.available | 2025-06-19T08:00:36Z | en |
dc.date.issued | 2025-06-18 | en |
dc.description.abstract | Cyber-physical systems (CPSs) and Human-Machine Teaming Systems (HMTSs) face growing risks from sophisticated cyber threats, particularly Advanced Persistent Threats (APTs), which conventional security measures struggle to counter effectively. These threats can subvert controls or launch multi-stage attacks, compromising critical infrastructure. This dissertation develops defensive deception (DD) techniques that manipulate attackers' beliefs to mislead their decision-making, inducing suboptimal actions that lead to attack failure. By integrating game theory and machine learning, this research creates strategic, autonomous defense frameworks tailored to CPS and HMTS environments, aiming to design dependable and secure systems capable of intelligent interactions, autonomous learning, and seamless human-machine collaboration. This research addresses three key tasks. For the Strategic Defensive Deception (SDD) task, we developed Foureye using hypergame theory to model attack-defense interactions in IoT environments under uncertainty. We extended this framework to handle multiple APT attackers across all cyber kill chain stages with bundle-based defenses. Analysis demonstrated that DD is most effective under imperfect information, with machine learning significantly enhancing defense strategy selection through more accurate opponent prediction. In the Autonomous Defensive Deception (ADD) task, we designed a UAV surveillance system with "Honey Drones" to defend against DoS attacks through dynamic signal strength adjustment. Our hypergame theory-guided deep reinforcement learning (HT-DRL) approach enabled autonomous decision-making with faster convergence. Experiments showed significant improvements in mission completion (32%), energy efficiency (20%), and attack mitigation (62%) compared to conventional approaches. For the Human-Machine Teaming Defensive Deception (HMT-DD) task, we developed DASH (Deception-Augmented Shared mental model for Human-machine teaming) to enhance both performance and security in UGV-human collaborative environments. DASH integrates strategic information sharing with component-specific deception techniques like "bait tasks" to detect compromised team members. Evaluations showed that DASH maintained 60% mission success under extreme attack frequencies while dramatically reducing compromise rates. This dissertation advances cybersecurity by delivering comprehensive deception-based frameworks for CPSs and HMTSs facing advanced threats. Through rigorous evaluations measuring system resilience, attack mitigation, and mission performance, we demonstrate the effectiveness of combining game theory and machine learning to create adaptive, intelligent security mechanisms for IoT networks, UAV missions, and human-machine collaborations. | en |
dc.description.abstractgeneral | In today's world, cyber-physical systems (CPSs) and human-machine teaming systems (HMTSs) are everywhere, managing critical services like power grids, self-driving cars, and healthcare devices. These systems blend the physical and digital worlds or pair humans with smart machines to accomplish tasks. But as they become more essential, they also become bigger targets for cyber attacks, especially sophisticated ones called Advanced Persistent Threats (APTs), which traditional security tools struggle to stop. This dissertation offers a creative approach to protecting these systems using a strategy called defensive deception, which tricks attackers with fake information to throw them off course. The research leverages two core technologies, game theory and machine learning, to build intelligent defense systems. Game theory helps model the strategic interactions between attackers and defenders, while machine learning enables the defenses to learn and adapt autonomously. The work is divided into three main parts, each addressing a different challenge in safeguarding these systems. First, the Strategic Defensive Deception (SDD) component provides a playbook for outsmarting attackers using hypergame theory, which models scenarios where both sides lack complete information. This is particularly effective against stealthy, long-term APTs. The framework also supports multiple attackers, helping defenders choose the best mix of tactics. Second, the Autonomous Defensive Deception (ADD) component makes the system self-reliant. It uses deep reinforcement learning and "Honey Drones" to defend against cyber attacks with adaptive drone signals, deceiving adversaries while staying mission-focused. Results showed improved mission success, energy efficiency, and reduced attack damage. Third, the Human-Machine Teaming Defensive Deception (HMT-DD) component secures teams of humans and machines. It introduces a technique called DASH, a system that builds shared plans and injects fake tasks to detect insider threats, ensuring smooth and secure collaboration even under heavy attack. This dissertation advances cybersecurity by integrating game theory and machine learning into practical, adaptive defenses. It proposes novel methods for protecting vital systems from sophisticated threats, whether fully autonomous or involving human-machine collaboration. By making these systems smarter and more resilient, the research helps ensure the safety and reliability of the technology we depend on every day. | en |
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:42701 | en |
dc.identifier.uri | https://hdl.handle.net/10919/135538 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Human-machine teaming | en |
dc.subject | deep reinforcement learning | en |
dc.subject | hypergame theory | en |
dc.subject | hyper Nash equilibrium | en |
dc.subject | shared mental model | en |
dc.subject | cyber deception | en |
dc.subject | uncertainty | en |
dc.subject | trust | en |
dc.subject | attacker | en |
dc.subject | defender | en |
dc.subject | advanced persistent threat | en |
dc.subject | honey-X | en |
dc.subject | UAV/UGV | en |
dc.subject | mission effectiveness | en |
dc.title | Game-Theoretic and Machine Learning-based Defensive Deception for Dependable and Secure Cyber-Physical Systems | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Science & Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |