Browsing by Author "Moore, Terrence J."
Now showing 1 - 3 of 3
- Autonomous Cyber Defense for Resilient Cyber-Physical Systems
  Zhang, Qisheng (Virginia Tech, 2024-01-09)
  In this dissertation research, we design and analyze resilient cyber-physical systems (CPSs) under high network dynamics, adversarial attacks, and various uncertainties. We focus on three key system attributes for building resilient CPSs and develop a suite of autonomous cyber defense mechanisms around them. First, we consider network adaptability, the network's ability to maintain its security and connectivity level when faced with incoming attacks. We address this through network topology adaptation, which quickly identifies and updates the network topology to confuse attackers by changing attack paths, and we leverage deep reinforcement learning (DRL) to drive this adaptation (a toy sketch of the idea appears after this listing). Second, we consider fault tolerance as another attribute that ensures system resilience. We aim to build a resilient CPS under severe resource constraints, adversarial attacks, and various uncertainties; we choose a solar sensor-based smart farm as an example CPS application and develop a resource-aware monitoring system for smart farms. We leverage DRL and uncertainty quantification based on a belief model called Subjective Logic to optimize critical tradeoffs between system performance and security in contested CPS environments. Lastly, we study system resilience in terms of recoverability, the system's ability to recover from performance degradation or failure. Here we focus on developing an automated intrusion response system (IRS) for CPSs, aiming at effective and efficient responses by reducing the false alarm rate and the defense cost, respectively. Specifically, we build a lightweight IRS for an in-vehicle controller area network (CAN) bus system operating with DRL-based autonomous driving.
- DIVERGENCE: Deep Reinforcement Learning-Based Adaptive Traffic Inspection and Moving Target Defense Countermeasure Framework
  Kim, Sunghwan; Yoon, Seunghyun; Cho, Jin-Hee; Kim, Dong Seong; Moore, Terrence J.; Free-Nelson, Frederica; Lim, Hyuk (IEEE, 2022-12)
  Reinforcement learning (RL) is a promising approach for intelligent agents to protect a given system in highly hostile environments. RL allows the agent to adaptively make sequential defense decisions based on the perceived current state of system security, aiming to achieve the maximum defense performance in terms of fast, efficient, and automated detection, threat analysis, and response. In this paper, we propose a deep reinforcement learning (DRL)-based adaptive traffic inspection and moving target defense countermeasure framework, called 'DIVERGENCE,' for building a secure networked system. DIVERGENCE provides two main security services: (1) a DRL-based network traffic inspection mechanism that achieves scalable and intensive network traffic visibility for rapid threat detection; and (2) an address shuffling-based moving target defense (MTD) technique that defends against threats as a proactive intrusion prevention mechanism (a toy sketch of address shuffling appears after this listing). Through extensive simulations and experiments, we demonstrate that DIVERGENCE successfully catches malicious traffic flows while significantly reducing the vulnerability of the network through MTD.
- PRADA-TF: Privacy-Diversity-Aware Online Team Formation
  Mahajan, Yash (Virginia Tech, 2021-06-14)
  In this work, we propose a PRivAcy-Diversity-Aware Team Formation framework, namely PRADA-TF, that can be deployed based on the trust relationships between users in online social networks (OSNs). PRADA-TF is mainly designed to reflect team members' domain expertise and privacy-preserving preferences when a task requires a wide range of diverse domain expertise for its successful completion. It aims to form a team that maximizes productivity based on members' characteristics in terms of diversity, privacy preservation, and information sharing. We leverage mechanism design, a branch of game theory, so that a mechanism designer acting as the team leader can select team members who maximize the team's social welfare, i.e., the sum of all team members' utilities accounting for team productivity, members' privacy preservation, and the potential privacy loss caused by information sharing (a toy sketch of this selection appears after this listing). To screen candidate teams in the OSN, we build an expert social network from a real co-authorship dataset (i.e., Netscience) with 1,590 scientists, use semi-synthetic datasets to construct a trust network based on a belief model called Subjective Logic, and identify trustworthy users as candidate team members. Through extensive simulation experiments, we compare seven different TF schemes, including our proposed and existing TF algorithms, and analyze the key factors that significantly impact the expected and actual social welfare, expected and actual potential privacy leakage, and team diversity of a selected team.
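The first abstract above describes DRL-driven network topology adaptation. The following is a minimal, self-contained sketch of that idea, not the dissertation's implementation: the environment, its reward shaping, the node roles, and the random placeholder policy are all illustrative assumptions, and a real DRL agent (e.g., a DQN) and threat model would replace them.

```python
# Toy environment sketch for DRL-based topology adaptation (illustrative only).
# State = flattened adjacency matrix; action = toggle one candidate link;
# reward trades off network connectivity against the attacker's path length
# to a critical target node, so adaptation "confuses" the attack path.
import itertools
import random
import numpy as np


class TopologyAdaptEnv:
    def __init__(self, n_nodes=6, attacker=0, target=5):
        self.n = n_nodes
        self.attacker, self.target = attacker, target
        self.candidate_links = list(itertools.combinations(range(n_nodes), 2))
        self.reset()

    def reset(self):
        # Start from a ring topology so the network begins connected.
        self.adj = np.zeros((self.n, self.n), dtype=int)
        for i in range(self.n):
            self.adj[i, (i + 1) % self.n] = self.adj[(i + 1) % self.n, i] = 1
        return self.adj.flatten()

    def _bfs(self, src):
        # Breadth-first search distances from src over the current topology.
        frontier, dist = [src], {src: 0}
        while frontier:
            u = frontier.pop(0)
            for v in range(self.n):
                if self.adj[u, v] and v not in dist:
                    dist[v] = dist[u] + 1
                    frontier.append(v)
        return dist

    def step(self, action):
        # The action toggles one candidate link, mimicking topology adaptation.
        i, j = self.candidate_links[action]
        self.adj[i, j] = self.adj[j, i] = 1 - self.adj[i, j]
        attack_len = self._bfs(self.attacker).get(self.target, self.n)
        connected = len(self._bfs(1)) == self.n   # crude availability proxy
        # Reward: stay connected while keeping the attack path long.
        reward = (1.0 if connected else -1.0) + 0.1 * attack_len
        return self.adj.flatten(), reward, False, {}


env = TopologyAdaptEnv()
state = env.reset()
for t in range(20):  # a DRL agent would replace this random placeholder policy
    action = random.randrange(len(env.candidate_links))
    state, reward, done, _ = env.step(action)
```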
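The DIVERGENCE abstract mentions an address shuffling-based MTD as a proactive defense. The toy sketch below shows only the basic mechanic of periodically remapping hosts to fresh virtual addresses so that previously scanned addresses go stale; the /24 virtual address pool, host names, and shuffling loop are assumptions for illustration, not the paper's design.

```python
# Illustrative address-shuffling MTD sketch (not DIVERGENCE's actual code).
import random
import ipaddress

POOL = [str(ip) for ip in ipaddress.ip_network("10.0.0.0/24").hosts()]
HOSTS = ["web-server", "db-server", "sensor-gw"]  # hypothetical host names


def shuffle_addresses(hosts, pool):
    """Return a fresh host -> virtual-IP mapping for the next MTD period."""
    return dict(zip(hosts, random.sample(pool, len(hosts))))


mapping = shuffle_addresses(HOSTS, POOL)
for period in range(3):                               # one remap per interval
    scanned = random.choice(list(mapping.values()))   # address the attacker learned
    mapping = shuffle_addresses(HOSTS, POOL)          # proactive remapping
    still_valid = scanned in mapping.values()
    print(f"period {period}: scanned address still valid? {still_valid}")
```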
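The PRADA-TF abstract combines Subjective Logic trust with a social-welfare objective over candidate team members. The sketch below is a loose, illustrative stand-in rather than the paper's mechanism: it forms a binomial Subjective Logic opinion from assumed positive/negative evidence counts, then exhaustively picks the two-member team maximizing a toy welfare of expertise diversity plus trust minus privacy loss. The candidate data, weights, and team size are invented for illustration.

```python
# Toy sketch: Subjective Logic trust + welfare-maximizing team selection.
from itertools import combinations


def sl_opinion(r, s, W=2.0, base_rate=0.5):
    """Binomial Subjective Logic opinion from r positive / s negative evidence."""
    b = r / (r + s + W)                  # belief
    d = s / (r + s + W)                  # disbelief
    u = W / (r + s + W)                  # uncertainty
    return b, d, u, b + base_rate * u    # last value = expected trust


# Hypothetical candidates: (name, expertise domains, evidence (r, s), privacy loss).
candidates = [
    ("alice", {"ml", "security"},  (8, 1), 0.2),
    ("bob",   {"networks"},        (5, 2), 0.1),
    ("carol", {"security", "iot"}, (3, 0), 0.4),
    ("dave",  {"ml"},              (1, 4), 0.1),
]


def social_welfare(team):
    diversity = len(set().union(*(c[1] for c in team)))     # distinct domains covered
    trust = sum(sl_opinion(*c[2])[3] for c in team)         # summed expected trust
    privacy_loss = sum(c[3] for c in team)                  # cost of information sharing
    return diversity + trust - privacy_loss


best = max(combinations(candidates, 2), key=social_welfare)  # teams of size 2 assumed
print([c[0] for c in best], round(social_welfare(best), 3))
```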