Scholarly Works, Virginia Tech National Security Institute


Recent Submissions

  • Slowly but surely: Exposure of communities and infrastructure to subsidence on the US east coast
    Ohenhen, Leonard; Shirzaei, Manoochehr; Barnard, Patrick L. (Oxford University Press, 2024-01-02)
    Coastal communities are vulnerable to multihazards, which are exacerbated by land subsidence. On the US east coast, the high density of population and assets amplifies the region's exposure to coastal hazards. We utilized measurements of vertical land motion rates obtained from analysis of radar datasets to evaluate the subsidence-hazard exposure of population, assets, and infrastructure systems/facilities along the US east coast. Here, we show that 2,000 to 74,000 km² of land area, 1.2 to 14 million people, 476,000 to 6.3 million properties, and >50% of infrastructure in major cities such as New York, Baltimore, and Norfolk are exposed to subsidence rates between 1 and 2 mm per year. Additionally, our analysis indicates a notable trend: as subsidence rates increase, the extent of the area exposed correspondingly decreases. Our analysis has far-reaching implications for community and infrastructure resilience planning, emphasizing the need for a targeted approach in transitioning from reactive to proactive hazard mitigation strategies in the era of climate change.
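The exposure tabulation described in the abstract can be illustrated with a small sketch: given a gridded vertical-land-motion raster and a co-registered population raster, count the area and population whose subsidence rate falls in a band. The grid, cell size, and thresholds below are illustrative assumptions, not the study's InSAR-derived inputs.

```python
import numpy as np

def exposure_in_band(vlm_mm_yr, population, lo=1.0, hi=2.0, cell_km2=1.0):
    """Area (km^2) and population where the subsidence rate is in [lo, hi] mm/yr.

    Negative vertical land motion is taken as subsidence, so -vlm is tested.
    """
    subsidence = -vlm_mm_yr                      # positive values = sinking land
    mask = (subsidence >= lo) & (subsidence <= hi)
    area_km2 = mask.sum() * cell_km2
    exposed_pop = population[mask].sum()
    return area_km2, exposed_pop

# Toy 3x3 grid: VLM in mm/yr (negative = subsiding), uniform population.
vlm = np.array([[-1.5, -0.5, -2.5],
                [-1.0, -2.0,  0.3],
                [-1.2, -3.0, -1.8]])
pop = np.full((3, 3), 1000.0)
area, people = exposure_in_band(vlm, pop)   # 5 cells fall in the 1-2 mm/yr band
```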
  • A statistical framework for domain shape estimation in Stokes flows
    Borggaard, Jeffrey T.; Glatt-Holtz, Nathan E.; Krometis, Justin (IOP Publishing, 2023-08-01)
    We develop and implement a Bayesian approach for the estimation of the shape of a two-dimensional annular domain enclosing a Stokes flow from sparse and noisy observations of the enclosed fluid. Our setup includes the case of direct observations of the flow field as well as the measurement of concentrations of a solute passively advected by and diffusing within the flow. Adopting a statistical approach provides estimates of uncertainty in the shape due both to the non-invertibility of the forward map and to error in the measurements. When the shape represents a design problem of attempting to match desired target outcomes, this ‘uncertainty’ can be interpreted as identifying remaining degrees of freedom available to the designer. We demonstrate the viability of our framework on three concrete test problems. These problems illustrate the promise of our framework for applications while providing a collection of test cases for recently developed Markov chain Monte Carlo algorithms designed to resolve infinite-dimensional statistical quantities.
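The Bayesian inversion idea above can be sketched with a minimal random-walk Metropolis sampler: infer a shape parameter (here a single annulus radius) from noisy observations through a forward map. The forward map, prior, and noise level are illustrative stand-ins, not the paper's Stokes solver or its infinite-dimensional MCMC algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(r):
    # Hypothetical cheap surrogate for the Stokes forward map.
    return np.array([r**2, np.sin(r)])

def log_post(r, y, sigma=0.1):
    if r <= 0:                                   # prior support: positive radius
        return -np.inf
    resid = y - forward(r)
    return -0.5 * np.sum(resid**2) / sigma**2 - 0.5 * r**2   # Gaussian prior

y_obs = forward(1.3) + 0.05 * rng.standard_normal(2)   # synthetic noisy data

r = 1.0
lp = log_post(r, y_obs)
samples = []
for _ in range(5000):
    prop = r + 0.1 * rng.standard_normal()       # random-walk proposal
    lp_prop = log_post(prop, y_obs)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject
        r, lp = prop, lp_prop
    samples.append(r)
post_mean = np.mean(samples[1000:])              # discard burn-in
```

The spread of the retained samples plays the role of the shape uncertainty discussed in the abstract.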
  • Deep-Learning-Based Digitization of Protein-Self-Assembly to Print Biodegradable Physically Unclonable Labels for Device Security
    Pradhan, Sayantan; Rajagopala, Abhi D.; Meno, Emma; Adams, Stephen; Elks, Carl R.; Beling, Peter A.; Yadavalli, Vamsi K. (MDPI, 2023-08-28)
    The increasingly pervasive problem of counterfeiting affects both individuals and industry. In particular, public health and medical fields face threats to device authenticity and patient privacy, especially in the post-pandemic era. Physical unclonable functions (PUFs) present a modern solution using counterfeit-proof security labels to securely authenticate and identify physical objects. PUFs harness innately entropic information generators to create a unique fingerprint for an authentication protocol. This paper proposes a facile protein self-assembly process as an entropy generator for a unique biological PUF. The posited image digitization process applies a deep learning model to extract a feature vector from the self-assembly image. This is then binarized and debiased to produce a cryptographic key. The NIST SP 800-22 Statistical Test Suite was used to evaluate the randomness of the generated keys, which proved sufficiently stochastic. To facilitate deployment on physical objects, the PUF images were printed on flexible silk-fibroin-based biodegradable labels using functional protein bioinks. Images from the labels were captured using a cellphone camera and referenced against the source image for error rate comparison. The deep-learning-based biological PUF has potential as a low-cost, scalable, highly randomized strategy for anti-counterfeiting technology.
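The digitization pipeline described above (feature vector, then binarization and debiasing into key material) can be sketched as follows. The deep-learning feature extractor is replaced here by a random vector, the threshold is the median, and the debiasing step is classic von Neumann extraction; all of these are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def binarize(features):
    # Threshold each feature at the vector's median to get a raw bit string.
    return (features > np.median(features)).astype(int)

def von_neumann_debias(bits):
    # Map non-overlapping pairs: 01 -> 0, 10 -> 1; discard 00 and 11.
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(int(a))
    return out

rng = np.random.default_rng(42)
features = rng.normal(size=256)        # stand-in for the model's feature vector
raw_bits = binarize(features)
key_bits = von_neumann_debias(raw_bits)
```

In the paper's setting, the resulting bits would then be fed to the NIST SP 800-22 tests to check randomness.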
  • How Can the Adversary Effectively Identify Cellular IoT Devices Using LSTM Networks?
    Luo, Zhengping Jay; Pitera, Will; Zhao, Shangqing; Lu, Zhuo; Sagduyu, Yalin (ACM, 2023-06-01)
    The Internet of Things (IoT) has become a key enabler for connecting edge devices with each other and the internet. Massive IoT services provided by cellular networks offer various applications such as smart metering and smart cities. Security of the massive IoT devices working alongside traditional devices such as smartphones and laptops has become a major concern. Protecting these IoT devices from being identified by malicious attackers is often the first line of defense. In this paper, we present an effective attack method for identifying cellular IoT devices within cellular networks. Inspired by the characteristics of Long Short-Term Memory (LSTM) networks, we have developed a method that can not only capture context information but also adapt to the dynamic changes of the environment over time. Experimental validation shows a high detection rate with fewer than 10 epochs of training on public datasets.
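The gating mechanism that lets an LSTM "capture context information" can be illustrated with a single-cell forward pass over a traffic-feature sequence. Weights here are random and the shapes, names, and read-out are assumptions; a real identifier would train on labeled cellular traffic.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, h0, c0):
    """xs: (T, d_in); W: (4H, d_in); U: (4H, H); b: (4H,). Returns final h."""
    h, c = h0, c0
    H = h0.shape[0]
    for x in xs:
        z = W @ x + U @ h + b
        i = sigmoid(z[:H])          # input gate
        f = sigmoid(z[H:2 * H])     # forget gate: carries context forward
        o = sigmoid(z[2 * H:3 * H])  # output gate
        g = np.tanh(z[3 * H:])      # candidate cell update
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(0)
T, d_in, H = 12, 8, 16
xs = rng.normal(size=(T, d_in))                   # toy traffic-feature sequence
h = lstm_forward(xs,
                 rng.normal(size=(4 * H, d_in)) * 0.1,
                 rng.normal(size=(4 * H, H)) * 0.1,
                 np.zeros(4 * H), np.zeros(H), np.zeros(H))
# A linear read-out on h would score "cellular IoT device" vs. "other device".
prob_iot = sigmoid(rng.normal(size=H) @ h)
```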
  • Disruptive Role of Vertical Land Motion in Future Assessments of Climate Change-Driven Sea-Level Rise and Coastal Flooding Hazards in the Chesapeake Bay
    Sherpa, Sonam Futi; Shirzaei, Manoochehr; Ojha, Chandrakanta (American Geophysical Union, 2023-04)
    Future projections of sea-level rise (SLR) used to assess coastal flooding hazards and exposure throughout the 21st century and devise risk mitigation efforts often lack an accurate estimate of the coastal vertical land motion (VLM) rate, which is driven by anthropogenic and non-climate factors in addition to climatic factors. The Chesapeake Bay (CB) region of the United States is experiencing one of the fastest rates of relative sea-level rise on the Atlantic coast of the United States. This study uses a combination of space-borne Interferometric Synthetic Aperture Radar (InSAR), Global Navigation Satellite System (GNSS), and Light Detection and Ranging (LiDAR) data sets, available National Oceanic and Atmospheric Administration (NOAA) long-term tide gauge data, and SLR projections from the Intergovernmental Panel on Climate Change (IPCC) AR6 WG1 to quantify the regional rate of relative SLR and future flooding hazards for the years 2030, 2050, and 2100. By the year 2100, the total inundated area from SLR and subsidence is projected to be 454 (316–549) to 600 (535–690) km² for Shared Socioeconomic Pathways (SSPs) 1-1.9 to 5-8.5, respectively, and 342 (132–552) to 627 (526–735) km² from SLR alone. The effect of storm surges based on Hurricane Isabel can increase the inundated area to 849 (832–867) to 1,117 (1,054–1,205) km² under different VLM and SLR scenarios. We suggest that accurate estimates of the VLM rate, such as those obtained here, are essential to revise IPCC projections and obtain accurate maps of coastal flooding and inundation hazards. The results provided here inform policymakers assessing hazards associated with global climate change and local factors in CB, as required for developing risk management and disaster resilience plans.
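The way SLR and VLM combine into an inundated area can be shown with a simple "bathtub" sketch: a cell is flagged inundated where projected sea level exceeds its future elevation (today's elevation plus VLM accumulated over the horizon). The numbers below are toy values, not the study's InSAR/GNSS/LiDAR-derived inputs.

```python
import numpy as np

def inundated_area_km2(elev_m, vlm_m_per_yr, slr_m, years, cell_km2=1.0):
    # Subsidence (negative VLM) lowers future land elevation.
    future_elev = elev_m + vlm_m_per_yr * years
    return np.sum(future_elev < slr_m) * cell_km2

elev = np.array([[0.5, 1.2, 2.0],
                 [0.3, 0.8, 1.5],
                 [0.1, 0.4, 3.0]])                # present-day elevation (m)
vlm = np.full((3, 3), -0.003)                     # 3 mm/yr subsidence everywhere
area_with_vlm = inundated_area_km2(elev, vlm, slr_m=0.7, years=77)
area_slr_only = inundated_area_km2(elev, np.zeros((3, 3)), slr_m=0.7, years=77)
```

Comparing the two calls mirrors the abstract's contrast between "SLR and subsidence" and "SLR alone": including VLM flags additional area as inundated.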
  • How to Attack and Defend NextG Radio Access Network Slicing With Reinforcement Learning
    Shi, Yi; Sagduyu, Yalin E.; Erpek, Tugba; Gursoy, M. Cenk (IEEE, 2023)
    In this paper, reinforcement learning (RL) for network slicing is considered in next generation (NextG) radio access networks, where the base station (gNodeB) allocates resource blocks (RBs) to the requests of user equipments and aims to maximize the total reward of accepted requests over time. Based on adversarial machine learning, a novel over-the-air attack is introduced to manipulate the RL algorithm and disrupt NextG network slicing. The adversary observes the spectrum and builds its own RL-based surrogate model that selects which RBs to jam, subject to an energy budget, with the objective of maximizing the number of failed requests due to jammed RBs. By jamming the RBs, the adversary reduces the RL algorithm's reward. As this reward is used as the input to update the RL algorithm, the performance does not recover even after the adversary stops jamming. This attack is evaluated in terms of both the recovery time and the (maximum and total) reward loss, and it is shown to be much more effective than benchmark (random and myopic) jamming attacks. Several reactive and proactive defense schemes are introduced: suspending the RL algorithm's update once an attack is detected, introducing randomness into the RL decision process to mislead the adversary's learning, and manipulating the feedback (NACK) mechanism so that the adversary cannot obtain reliable information. These defenses show that it is viable to protect NextG network slicing against this attack, in terms of improving the RL algorithm's reward.
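The surrogate jammer's learning loop can be sketched as a toy single-state Q-learning agent: each step it picks one RB to jam and earns reward 1 if that RB carried a request (which then fails). The traffic pattern, budget handling, and reward shaping below are simplified assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rbs = 8
# Stationary toy traffic: probability that each RB carries an accepted request.
p_busy = np.linspace(0.1, 0.8, n_rbs)

Q = np.zeros(n_rbs)                  # one Q-value per RB (single-state problem)
alpha, eps = 0.1, 0.1                # learning rate, exploration rate
for step in range(5000):
    # Epsilon-greedy choice of which RB to jam this step.
    a = int(rng.integers(n_rbs)) if rng.random() < eps else int(np.argmax(Q))
    reward = float(rng.random() < p_busy[a])     # 1 if the jammed RB was active
    Q[a] += alpha * (reward - Q[a])              # Q-learning update

best_rb = int(np.argmax(Q))          # the RB the jammer learns to target
```

Over time the agent concentrates jamming on the most heavily used RBs, which is exactly what makes the attack more damaging than random or myopic jamming.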
  • Collaborative Multi-Robot Multi-Human Teams in Search and Rescue
    Williams, Ryan K.; Abaid, Nicole; McClure, James; Lau, Nathan; Heintzman, Larkin; Hashimoto, Amanda; Wang, Tianzi; Patnayak, Chinmaya; Kumar, Akshay (2022-04-30)
    Robots such as unmanned aerial vehicles (UAVs) deployed for search and rescue (SAR) can explore areas where human searchers cannot easily go and gather information on scales that can transform SAR strategy. Multi-UAV teams therefore have the potential to transform SAR by augmenting the capabilities of human teams and providing information that would otherwise be inaccessible. Our research aims to develop new theory and technologies for field deploying autonomous UAVs and managing multi-UAV teams working in concert with multi-human teams for SAR. Specifically, in this paper we summarize our work in progress towards these goals, including: (1) a multi-UAV search path planner that adapts to human behavior; (2) an in-field distributed computing prototype that supports multi-UAV computation and communication; (3) behavioral modeling that yields spatially localized predictions of lost person location; and (4) an interface between human searchers and UAVs that facilitates human-UAV interaction over a wide range of autonomy.
  • Adversarial Machine Learning for NextG Covert Communications Using Multiple Antennas
    Kim, Brian; Sagduyu, Yalin; Davaslioglu, Kemal; Erpek, Tugba; Ulukus, Sennur (MDPI, 2022-07-29)
    This paper studies the privacy of wireless communications from an eavesdropper that employs a deep learning (DL) classifier to detect transmissions of interest. A single transmitter communicates with its receiver in the presence of an eavesdropper. In the meantime, a cooperative jammer (CJ) with multiple antennas transmits carefully crafted adversarial perturbations over the air to fool the eavesdropper into classifying the received superposition of signals as noise. While generating the adversarial perturbation at the CJ, multiple antennas are utilized to improve the attack performance in terms of fooling the eavesdropper. Two main points are considered while exploiting the multiple antennas at the adversary, namely the power allocation among antennas and the utilization of channel diversity. To limit the impact on the bit error rate (BER) at the receiver, the CJ puts an upper bound on the strength of the perturbation signal. Performance results show that this adversarial perturbation causes the eavesdropper to misclassify the received signals as noise with a high probability while increasing the BER at the legitimate receiver only slightly. Furthermore, the adversarial perturbation is shown to become more effective when multiple antennas are utilized.
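The two knobs discussed above for a multi-antenna cooperative jammer can be sketched numerically: (1) split the perturbation power budget across antennas in proportion to channel quality toward the eavesdropper, and (2) scale the perturbation to an upper bound so the legitimate receiver's BER is only mildly affected. The channels, budget, and allocation rule below are synthetic assumptions, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_samp = 4, 64
# Complex channel from each CJ antenna to the eavesdropper.
h_eve = rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant)
# Unconstrained complex perturbation waveform per antenna.
delta = rng.normal(size=(n_ant, n_samp)) + 1j * rng.normal(size=(n_ant, n_samp))

# Power allocation proportional to |h|^2: stronger channels get more power.
gains = np.abs(h_eve) ** 2
weights = np.sqrt(gains / gains.sum())
delta = weights[:, None] * delta

# Scale onto the total power budget P_max (the upper bound on strength).
P_max = 1.0
power = np.mean(np.abs(delta) ** 2) * n_ant
delta *= np.sqrt(P_max / power)
total_power = np.mean(np.abs(delta) ** 2) * n_ant   # now exactly P_max
```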
  • Improving Deep Learning for Maritime Remote Sensing through Data Augmentation and Latent Space
    Sobien, Daniel; Higgins, Erik; Krometis, Justin; Kauffman, Justin; Freeman, Laura J. (MDPI, 2022-07-07)
    Training deep learning models requires having the right data for the problem and understanding both your data and the models’ performance on that data. Training deep learning models is difficult when data are limited, so in this paper, we seek to answer the following question: how can we train a deep learning model to increase its performance on a targeted area with limited data? We do this by applying rotation data augmentations to a simulated synthetic aperture radar (SAR) image dataset. We use the Uniform Manifold Approximation and Projection (UMAP) dimensionality reduction technique to understand the effects of augmentations on the data in latent space. Using this latent space representation, we can understand the data and choose specific training samples aimed at boosting model performance in targeted under-performing regions without the need to increase training set sizes. Results show that using latent space to choose training data significantly improves model performance in some cases; however, there are other cases where no improvements are made. We show that linking patterns in latent space is a possible predictor of model performance, but results require some experimentation and domain knowledge to determine the best options.
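The "choose training samples via latent space" idea can be sketched without the UMAP dependency: embed the data in a low-dimensional latent space (PCA here, standing in for UMAP), locate the centroid of an under-performing target group, and pick the pool samples closest to it. Group sizes, dimensions, and the selection rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
pool = rng.normal(size=(200, 32))               # candidate training samples
target = rng.normal(loc=2.0, size=(20, 32))     # under-performing target group
pool[:10] += 2.0                                # some pool samples resemble it

# PCA via SVD: a cheap stand-in for the UMAP embedding used in the paper.
X = np.vstack([pool, target])
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
latent = Xc @ Vt[:2].T                          # 2-D latent embedding

pool_z, target_z = latent[:200], latent[200:]
centroid = target_z.mean(axis=0)
dists = np.linalg.norm(pool_z - centroid, axis=1)
chosen = np.argsort(dists)[:10]                 # 10 pool samples nearest target
```

In this toy setup the selection recovers mostly the pool samples that were shifted toward the target region, i.e., the ones most likely to boost performance there without growing the training set.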
  • SpaceDrones 2.0 — Hardware-in-the-Loop Simulation and Validation for Orbital and Deep Space Computer Vision and Machine Learning Tasking Using Free-Flying Drone Platforms
    Peterson, Marco; Du, Minzhen; Springle, Bryant; Black, Jonathan (MDPI, 2022-05-06)
    The proliferation of reusable space vehicles has fundamentally changed how assets are injected into low earth orbit and beyond, increasing both the reliability and frequency of launches. Consequently, it has led to the rapid development and adoption of new technologies in the aerospace sector, including computer vision (CV), machine learning (ML)/artificial intelligence (AI), and distributed networking. All these technologies are necessary to enable truly autonomous “human-out-of-the-loop” mission tasking for spaceborne applications as spacecraft travel further into the solar system and our missions become more ambitious. This paper proposes a novel approach for space-based computer vision sensing and machine learning simulation and validation, using synthetically trained models to generate the large amounts of space-based imagery needed to train computer vision models. We also introduce a method of image data augmentation known as domain randomization to enhance machine learning performance in the dynamic domain of spaceborne computer vision and tackle unique space-based challenges such as orientation and lighting variations. These synthetically trained computer vision models are then applied to hardware-in-the-loop testing and evaluation via free-flying robotic platforms, enabling sensor-based orbital vehicle control, onboard decision making, and mobile manipulation similar to air-bearing table methods. Given the current energy constraints of space vehicles using solar-based power plants, cameras provide an energy-efficient means of situational awareness compared to active sensing instruments. When coupled with computationally efficient machine learning algorithms and methods, they can enable space systems proficient in classifying, tracking, capturing, and ultimately manipulating objects for orbital/planetary assembly and maintenance (tasks commonly referred to as In-Space Assembly and On-Orbit Servicing).
Given the inherent dangers of crewed spaceflight/extravehicular activities (EVAs) currently employed to perform spacecraft maintenance and the current limitations of long-duration human spaceflight beyond low earth orbit, space robotics armed with generalized sensing, control, and machine learning architectures have unique automation potential. However, the tools and methodologies required for hardware-in-the-loop simulation, testing, and validation at large scale and an affordable price point are still in development. By leveraging a drone's free-flight maneuvering capability, theater projection technology, synthetically generated orbital and celestial environments, and machine learning, this work strives to build a robust hardware-in-the-loop testing suite. While the computer vision models in this paper focus on visual sensing problems in orbit, the approach extends to any problem set that requires robust onboard computer vision, robotic manipulation, and free-flight capabilities.
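The domain-randomization augmentation described above can be sketched minimally: each synthetic training frame receives a random orientation (a 90-degree rotation here, to stay dependency-free) and a random lighting change (gain and offset), so the model never sees the same rendering conditions twice. The ranges and the rotation granularity are illustrative assumptions.

```python
import numpy as np

def randomize(img, rng):
    """Apply a random orientation and lighting perturbation to one image."""
    k = int(rng.integers(4))                    # random 90-degree orientation
    out = np.rot90(img, k)
    gain = rng.uniform(0.6, 1.4)                # random lighting gain
    bias = rng.uniform(-0.1, 0.1)               # random lighting offset
    return np.clip(out * gain + bias, 0.0, 1.0)

rng = np.random.default_rng(7)
img = rng.random((64, 64))                      # stand-in for a rendered frame
batch = np.stack([randomize(img, rng) for _ in range(8)])
```

A full pipeline would also randomize backgrounds, star fields, and camera pose, but the principle is the same: vary everything the model should be invariant to.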