Cyber-Physical-Social Systems for Autonomous Defense: Enabling Mission-Centric, Adaptive, and Anytime Intelligence
Abstract
Autonomous Cyber-Physical-Social Systems (CPSSs) are rapidly transforming mission-critical domains such as transportation, defense, emergency response, and smart infrastructure by enabling real-time sensing, decentralized control, and autonomous decision-making. Yet the increasing complexity, interconnectivity, and adversarial exposure of CPSSs make them highly susceptible to evolving cyber threats. Existing security approaches frequently operate in isolation, overlooking the deep interdependencies among cyber, physical, and social components as well as the asymmetric information and strategic dynamics that shape attacker–defender interactions. These gaps highlight unresolved challenges in mission-level risk awareness, adaptive defense under uncertainty, and resource-bounded reasoning.
This dissertation pursues a unified research vision: to develop a mission-centric, uncertainty-aware, and resource-adaptive autonomy framework for resilient CPSS operations. This vision is realized through three tightly interrelated research thrusts: Mission Impact Assessment (MIA), Intrusion Response Systems (IRS), and Anytime Inference (AIF). Together, these thrusts advance trustworthy decision-making under adversarial, uncertain, and resource-constrained conditions. Rather than functioning as isolated contributions, they form a seamless progression: MIA quantifies mission risk and identifies critical assets, IRS defends those assets adaptively under adversarial uncertainty, and AIF enables both MIA and IRS to operate reliably when computation, information, or time is limited.
Task MIA develops an interdependent Mission Impact Assessment framework, termed iMIA. By integrating Subjective Bayesian Networks (SBNs) with Subjective Logic-based Hypergame Theory (SL-HGT), iMIA models both epistemic uncertainty and divergent attacker–defender perceptions while capturing interdependencies among assets, services, and tasks. This enables robust mission-outcome inference under missing, noisy, or conflicting information. A key contribution is the identification of highly critical nodes whose degradation disproportionately impacts mission success, allowing targeted reinforcement strategies to enhance resilience and effectiveness in dynamic threat environments.
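The dissertation's full iMIA formulation is not reproduced here, but the subjective-logic machinery it builds on can be sketched. The example below is an illustrative implementation (not the author's code) of a standard subjective-logic binomial opinion with Jøsang's projected probability and cumulative belief fusion, which is how an SBN can combine evidence from two sources while tracking epistemic uncertainty explicitly; the class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial subjective opinion: belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def projected_probability(self) -> float:
        # Josang's projected probability: P(x) = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fuse(a: Opinion, b: Opinion) -> Opinion:
    """Cumulative belief fusion of two independent opinions.

    Assumes the sources are not both dogmatic, i.e. the denominator
    u_a + u_b - u_a * u_b is nonzero.
    """
    denom = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    return Opinion(
        belief=(a.belief * b.uncertainty + b.belief * a.uncertainty) / denom,
        disbelief=(a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / denom,
        uncertainty=(a.uncertainty * b.uncertainty) / denom,
        base_rate=a.base_rate,
    )
```

Fusing two moderately uncertain opinions yields a valid opinion whose uncertainty is lower than either input's, which is the property that lets an SBN grow more confident as evidence accumulates.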
Task IRS designs an uncertainty-aware, Deep Reinforcement Learning-based Intrusion Response System for resilient operation in in-vehicle networks. The proposed IRS leverages structured sub-action spaces tailored to attack types and employs entropy regularization to promote robust decision policies under uncertainty. Extensive experiments demonstrate significant reductions in Attack Success Ratio (ASR) and improvements in mission-performance metrics such as route completion and safety compliance. A human-in-the-loop extension further incorporates expert feedback into reward shaping, enabling interpretable, adaptive, and trustworthy defense strategies in safety-critical vehicular environments.
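The entropy-regularization idea underlying the IRS policy can be illustrated with a minimal sketch (not the dissertation's implementation, and the function names are hypothetical): a policy-gradient loss over a discrete sub-action space plus an entropy bonus, where the coefficient `beta` keeps the response policy stochastic so it does not commit prematurely under uncertain attack observations.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over the sub-action logits
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy_regularized_loss(logits: np.ndarray, action: int,
                             advantage: float, beta: float = 0.01) -> float:
    """Negative policy-gradient objective with an entropy bonus.

    Minimizing this loss increases the log-probability of advantageous
    responses while beta * H(pi) discourages near-deterministic policies.
    """
    probs = softmax(logits)
    log_prob = np.log(probs[action])
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return -(advantage * log_prob + beta * entropy)
```

A larger `beta` trades immediate exploitation for robustness: the same logits and advantage produce a strictly lower loss contribution from the entropy term, which in training pressures the policy to stay exploratory against adaptive attackers.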
Task AIF introduces an Anytime Inference (AIF) algorithm for SBNs that supports incremental, resource-aware reasoning. The framework employs simulation-based inference, bijective mappings between subjective opinions and probability distributions, and dynamic resource allocation strategies based on entropy and Bayes factor heuristics. The resulting algorithm delivers fast, interruptible, and high-fidelity inference even under tight computational budgets, thus directly enabling responsive mission assessment and adaptive defense in the MIA and IRS tasks. Empirical results show accelerated convergence with increasing sample budgets and the superior scalability of Gibbs sampling in larger networks.
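The anytime property described above, an interruptible estimate that improves with the sample budget and can stop early via an entropy heuristic, can be sketched as follows. This is an illustrative simplification, not the dissertation's algorithm: it estimates a single binary marginal by Monte Carlo, where `sampler()` stands in for one sweep of an SBN sampler (e.g., Gibbs), and all names are hypothetical.

```python
import math
import random

def anytime_marginal(sampler, budget: int, chunk: int = 100,
                     entropy_stop: float = 0.1):
    """Interruptible estimate of P(X = 1) from 0/1 samples.

    Yields (samples_used, estimate) after every chunk, so the caller can
    consume the latest estimate whenever time runs out; stops early once
    the binary entropy of the estimate drops below entropy_stop.
    """
    counts = [1, 1]  # Laplace smoothing avoids log(0) in the entropy check
    used = 0
    while used < budget:
        step = min(chunk, budget - used)
        for _ in range(step):
            counts[sampler()] += 1
        used += step
        p = counts[1] / (counts[0] + counts[1])
        yield used, p  # anytime: a valid answer is available right here
        h = -(p * math.log(p) + (1 - p) * math.log(1 - p))
        if h < entropy_stop:
            return  # estimate is decisive; spend no more of the budget
```

The generator structure is what makes the inference "anytime": each `yield` is a usable answer, and either budget exhaustion or the entropy heuristic ends the loop, mirroring the dynamic resource-allocation behavior described for AIF.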
Overall, this dissertation yields three key findings: (i) explicitly modeling epistemic and perceptual uncertainty is essential for achieving high-fidelity mission reasoning and adaptive autonomy; (ii) hypergame-theoretic and uncertainty-aware learning approaches dramatically improve defensive effectiveness in adversarial CPSSs; and (iii) resource-aware anytime inference is critical for timely, trustworthy decision-making in dynamic and constrained environments. Future research may extend this unified framework toward multi-agent CPSS settings, integrate richer forms of human collaboration, and advance scalable, uncertainty-aware learning algorithms for increasingly complex autonomous systems.