Cyber-Physical-Social Systems for Autonomous Defense: Enabling Mission-Centric, Adaptive, and Anytime Intelligence
| dc.contributor.author | Yoon, Han Jun | en |
| dc.contributor.committeechair | Cho, Jin-Hee | en |
| dc.contributor.committeechair | Lu, Chang Tien | en |
| dc.contributor.committeemember | Kim, Dan | en |
| dc.contributor.committeemember | Ji, Bo | en |
| dc.contributor.committeemember | Moore, Terrence J. | en |
| dc.contributor.department | Computer Science & Applications | en |
| dc.date.accessioned | 2026-01-15T09:00:35Z | en |
| dc.date.available | 2026-01-15T09:00:35Z | en |
| dc.date.issued | 2026-01-14 | en |
| dc.description.abstract | Autonomous Cyber-Physical-Social Systems (CPSSs) are rapidly transforming mission-critical domains such as transportation, defense, emergency response, and smart infrastructure by enabling real-time sensing, decentralized control, and autonomous decision-making. Yet the increasing complexity, interconnectivity, and adversarial exposure of CPSSs make them highly susceptible to evolving cyber threats. Existing security approaches frequently operate in isolation, overlooking the deep interdependencies among cyber, physical, and social components as well as the asymmetric information and strategic dynamics that shape attacker–defender interactions. These gaps highlight unresolved challenges in mission-level risk awareness, adaptive defense under uncertainty, and resource-bounded reasoning. This dissertation pursues a unified research vision: to develop a mission-centric, uncertainty-aware, and resource-adaptive autonomy framework for resilient CPSS operations. This vision is realized through three tightly interrelated research thrusts, Mission Impact Assessment (MIA), Intrusion Response Systems (IRS), and Anytime Inference (AIF), which collectively advance trustworthy decision-making under adversarial, uncertain, and resource-constrained conditions. Rather than functioning as isolated contributions, these tasks form a seamless progression: MIA quantifies mission risk and identifies critical assets, IRS defends those assets adaptively under adversarial uncertainty, and AIF enables both MIA and IRS to operate reliably when computation, information, or time is limited. Task MIA develops an interdependent Mission Impact Assessment framework, termed iMIA. By integrating Subjective Bayesian Networks (SBNs) with Subjective Logic-based Hypergame Theory (SL-HGT), iMIA models both epistemic uncertainty and divergent attacker–defender perceptions while capturing interdependencies among assets, services, and tasks. 
This enables robust mission-outcome inference under missing, noisy, or conflicting information. A key contribution is the identification of highly critical nodes whose degradation disproportionately impacts mission success, allowing targeted reinforcement strategies to enhance resilience and effectiveness in dynamic threat environments. Task IRS designs an uncertainty-aware, Deep Reinforcement Learning-based Intrusion Response System for resilient operation in in-vehicle networks. The proposed IRS leverages structured sub-action spaces tailored to attack types and employs entropy regularization to promote robust decision policies under uncertainty. Extensive experiments demonstrate significant reductions in Attack Success Ratio (ASR) and improvements in mission-performance metrics such as route completion and safety compliance. A human-in-the-loop extension further incorporates expert feedback into reward shaping, enabling interpretable, adaptive, and trustworthy defense strategies in safety-critical vehicular environments. Task AIF introduces an Anytime Inference (AIF) algorithm for SBNs that supports incremental, resource-aware reasoning. The framework employs simulation-based inference, bijective mappings between subjective opinions and probability distributions, and dynamic resource allocation strategies based on entropy and Bayes factor heuristics. The resulting algorithm delivers fast, interruptible, and high-fidelity inference even under tight computational budgets, thus directly enabling responsive mission assessment and adaptive defense in the MIA and IRS tasks. Empirical results show accelerated convergence with increasing sample budgets and the superior scalability of Gibbs sampling in larger networks. 
Overall, this dissertation yields three key findings: (i) explicitly modeling epistemic and perceptual uncertainty is essential for achieving high-fidelity mission reasoning and adaptive autonomy; (ii) hypergame-theoretic and uncertainty-aware learning approaches dramatically improve defensive effectiveness in adversarial CPSSs; and (iii) resource-aware anytime inference is critical for timely, trustworthy decision-making in dynamic and constrained environments. Future research may extend this unified framework toward multi-agent CPSS settings, integrate richer forms of human collaboration, and advance scalable, uncertainty-aware learning algorithms for increasingly complex autonomous systems. | en |
| dc.description.abstractgeneral | Autonomous systems are increasingly used in critical areas such as transportation, national defense, emergency response, and smart cities. These systems combine software, physical machines, and human interactions to sense their environment, make decisions, and act with little or no human intervention. While this capability brings major benefits, it also introduces serious risks. As these systems become more connected and complex, they are more vulnerable to cyber attacks that can disrupt operations, compromise safety, or cause mission failure. Most existing security solutions focus on individual components in isolation, for example protecting software without fully considering how physical devices and human behavior are affected. They also often assume perfect information, even though real-world attackers and defenders operate with uncertainty, incomplete data, and limited time or computing power. As a result, today's approaches struggle to assess mission-level risk, adapt defenses in real time, and make reliable decisions under constraints. This dissertation addresses these challenges by developing a unified framework for building resilient autonomous systems. The core goal is to enable systems to understand mission risk, defend themselves intelligently, and continue operating even when information, time, or computing resources are limited. This vision is realized through three closely connected research areas: Mission Impact Assessment (MIA), Intrusion Response Systems (IRS), and Anytime Inference (AIF). Together, they form a complete pipeline: first understanding what matters most to the mission, then protecting it against attacks, and finally ensuring decisions remain reliable under tight constraints. The first part, Mission Impact Assessment, introduces a framework called iMIA that evaluates how cyber attacks affect overall mission success. 
Rather than focusing on individual failures, iMIA identifies which components are most critical and how their degradation could ripple through the system. By explicitly modeling uncertainty and differing attacker and defender perspectives, the framework can still provide meaningful risk assessments even when information is noisy, missing, or conflicting. This allows decision-makers to focus defensive efforts where they matter most. The second part, Intrusion Response Systems, develops an intelligent defense mechanism for vehicle networks, such as those used in autonomous or connected cars. Using deep reinforcement learning, the system learns how to respond to attacks in real time while accounting for uncertainty in attack detection and system behavior. The design reduces the success rate of attacks and improves safety-related outcomes like route completion and compliance with driving rules. A human-in-the-loop extension allows expert feedback to guide learning, making the system's behavior more interpretable and trustworthy in safety-critical settings. The third part, Anytime Inference, focuses on decision-making under limited resources. In real-world systems, there is often not enough time or computing power to perform perfect analysis. The proposed Anytime Inference approach allows the system to produce progressively better answers as more resources become available and to stop early when needed while still providing useful results. This makes mission assessment and defense practical in fast-moving or constrained environments. Overall, this dissertation shows that three elements are essential for resilient autonomous systems: explicitly accounting for uncertainty, anticipating strategic behavior by attackers and defenders, and adapting reasoning to available resources. By combining these ideas into a unified framework, the work advances the reliability, safety, and trustworthiness of autonomous systems operating in complex and adversarial environments. 
Future work can extend this approach to multi-agent systems, deeper human collaboration, and even larger and more complex real-world applications. | en |
| dc.description.degree | Doctor of Philosophy | en |
| dc.format.medium | ETD | en |
| dc.identifier.other | vt_gsexam:45400 | en |
| dc.identifier.uri | https://hdl.handle.net/10919/140811 | en |
| dc.language.iso | en | en |
| dc.publisher | Virginia Tech | en |
| dc.rights | In Copyright | en |
| dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
| dc.subject | Mission impact assessment | en |
| dc.subject | mission effectiveness | en |
| dc.subject | mission performance | en |
| dc.subject | hypergame theory | en |
| dc.subject | subjective logic | en |
| dc.subject | deep reinforcement learning | en |
| dc.subject | human-in-the-loop | en |
| dc.subject | uncertainty | en |
| dc.subject | anytime inference | en |
| dc.subject | sampling | en |
| dc.title | Cyber-Physical-Social Systems for Autonomous Defense: Enabling Mission-Centric, Adaptive, and Anytime Intelligence | en |
| dc.type | Dissertation | en |
| thesis.degree.discipline | Computer Science & Applications | en |
| thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
| thesis.degree.level | doctoral | en |
| thesis.degree.name | Doctor of Philosophy | en |
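The abstract's Anytime Inference task mentions "bijective mappings between subjective opinions and probability distributions." For readers unfamiliar with subjective logic, the sketch below illustrates the standard mapping between a binomial opinion (belief, disbelief, uncertainty, base rate) and a Beta distribution, following Jøsang's subjective logic. This is a minimal illustration of the general technique, not the dissertation's implementation; the function and variable names are my own.

```python
# Standard bijection between a binomial subjective opinion
# (b, d, u, a) with b + d + u = 1 and a Beta(alpha, beta)
# distribution, using a non-informative prior weight W = 2.

W = 2.0  # prior weight (Josang's convention for binomial opinions)

def opinion_to_beta(b, d, u, a):
    """Map an opinion (b + d + u = 1, u > 0) to Beta parameters."""
    r = W * b / u  # positive evidence implied by the opinion
    s = W * d / u  # negative evidence implied by the opinion
    return r + W * a, s + W * (1.0 - a)  # (alpha, beta)

def beta_to_opinion(alpha, beta, a):
    """Inverse mapping: recover (b, d, u) from Beta parameters."""
    r = alpha - W * a
    s = beta - W * (1.0 - a)
    total = r + s + W
    return r / total, s / total, W / total

# Round trip: the opinion (0.6, 0.2, 0.2) with base rate 0.5
# maps to Beta(7, 3) and back without loss.
alpha, beta = opinion_to_beta(0.6, 0.2, 0.2, 0.5)
b, d, u = beta_to_opinion(alpha, beta, 0.5)

# The Beta mean alpha / (alpha + beta) equals the opinion's
# projected probability b + a * u (here 0.7).
```

Because the mapping is bijective, a sampler can draw from the Beta side while interruptible, anytime-style queries report results on the opinion side at any budget.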