Browsing by Author "Gerdes, Ryan M."
- Adversarial RFML: Evading Deep Learning Enabled Signal Classification. Flowers, Bryse Austin (Virginia Tech, 2019-07-24). Deep learning has become a ubiquitous part of research in all fields, including wireless communications. Researchers have shown the ability to leverage deep neural networks (DNNs) that operate on raw in-phase and quadrature samples, termed Radio Frequency Machine Learning (RFML), to synthesize new waveforms, control radio resources, and detect and classify signals. While there are numerous advantages to RFML, this thesis answers the question "is it secure?" DNNs have been shown, in other applications such as Computer Vision (CV), to be vulnerable to what are known as adversarial evasion attacks, which consist of corrupting an underlying example with a small, intelligently crafted perturbation that causes a DNN to misclassify the example. This thesis develops the first threat model that encompasses the unique adversarial goals and capabilities present in RFML. Attacks that occur with direct digital access to the RFML classifier are differentiated from physical attacks that must propagate over-the-air (OTA) and are thus subject to impairments from the wireless channel or inaccuracies in the signal detection stage. This thesis first finds that RFML systems are vulnerable to current adversarial evasion attacks using the well-known Fast Gradient Sign Method originally developed for CV applications. However, these current attacks do not account for the underlying communications, so the adversarial advantage is limited because the signal quickly becomes unintelligible. To envision new threats, this thesis goes on to develop a new adversarial evasion attack that takes into account the underlying communications and wireless channel models in order to create attacks with more intelligible underlying communications that generalize to OTA attacks.
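The Fast Gradient Sign Method mentioned above fits in a few lines. This sketch uses a hand-rolled logistic "classifier" on random stand-in I/Q samples; the weights, input, and step size are illustrative assumptions, not the thesis's models.

```python
import numpy as np

# Toy differentiable "classifier": logistic regression on a flattened I/Q
# vector. Stand-in for a trained RFML DNN (weights and input are random).
rng = np.random.default_rng(0)
w = rng.standard_normal(8)   # hypothetical model weights
x = rng.standard_normal(8)   # one flattened I/Q example
y = 1.0                      # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, x, y):
    # Gradient of the binary cross-entropy loss w.r.t. the *input* x
    p = sigmoid(w @ x)
    return (p - y) * w

# FGSM: one signed-gradient step of size eps in the loss-increasing direction
eps = 0.1
x_adv = x + eps * np.sign(loss_grad(w, x, y))

# The perturbation is bounded by eps in the infinity norm
print(np.max(np.abs(x_adv - x)))
```

The point of the sign operation is exactly the property the abstract exploits: the perturbation is small and bounded per-sample, yet reliably pushes the classifier's output toward error.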
- Analysis of Firmware Security in Embedded ARM Environments. Brown, Dane Andrew (Virginia Tech, 2019-09-30). Modern enterprise-grade systems with virtually unlimited resources have many options when it comes to implementing state-of-the-art intrusion prevention and detection solutions. These solutions are costly in terms of energy, execution time, circuit board area, and capital. Sustainable Internet of Things devices and power-constrained embedded systems are thus forced to make suboptimal security trade-offs. One such trade-off is the design of architectures which prevent execution of injected shell code, yet have allowed Return Oriented Programming (ROP) to emerge as a more reliable way to execute malicious code following attacks. ROP is a method used to take over the execution of a program by causing the return address of a function to be modified through an exploit vector, then returning to small segments of otherwise innocuous code located in executable memory one after the other to carry out the attacker's aims. We show that the Tiva TM4C123GH6PM microcontroller, which utilizes an ARM Cortex-M4F processor, can be fully controlled with this technique. Firmware code is pre-loaded into a ROM on Tiva microcontrollers and can be subverted to erase and rewrite the flash memory where the program resides. That same firmware is searched for a Turing-complete gadget set which allows for arbitrary execution. We then design and evaluate a method for verifying the integrity of firmware on embedded systems, in this case Solid State Drives (SSDs). Some manufacturers make firmware updates available, but their proprietary protections leave end users unable to verify the authenticity of the firmware post-installation. This means that attackers who manage to install a malicious firmware version on a victim SSD can operate with impunity, as the owner has no tools for detection.
We have devised a method for performing side-channel analysis of the current drawn by an SSD, which can compare its behavior while running genuine firmware against its behavior when running modified firmware. We train a binary classifier with samples of both versions and are able to consistently discriminate between genuine and modified firmware, even under changes in external factors such as temperature and supplied power.
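The binary-classification idea above can be sketched with synthetic data. This toy uses made-up current traces separated by a small shift in mean draw and a nearest-centroid rule; the feature choice, trace parameters, and classifier are all illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

# Synthetic current traces: "modified" firmware assumed to draw slightly
# more current on average. 200 traces per class, 500 samples per trace.
rng = np.random.default_rng(1)
genuine  = rng.normal(1.00, 0.05, size=(200, 500))
modified = rng.normal(1.05, 0.05, size=(200, 500))

def features(traces):
    # Simple per-trace features: mean and standard deviation of current draw
    return np.column_stack([traces.mean(axis=1), traces.std(axis=1)])

train_g, test_g = features(genuine[:150]), features(genuine[150:])
train_m, test_m = features(modified[:150]), features(modified[150:])
cg, cm = train_g.mean(axis=0), train_m.mean(axis=0)

def predict(f):
    # 0 = genuine, 1 = modified: label of the nearer class centroid
    return (np.linalg.norm(f - cm, axis=1) < np.linalg.norm(f - cg, axis=1)).astype(int)

acc = (np.mean(predict(test_g) == 0) + np.mean(predict(test_m) == 1)) / 2
print(f"held-out accuracy: {acc:.2f}")
```

Averaging 500 samples per trace shrinks the noise on the mean by a factor of sqrt(500), which is why even a small per-sample shift becomes easily separable; this mirrors why trace-level side-channel classification can tolerate noisy individual measurements.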
- Analysis of Lightweight Cryptographic Primitives. George, Kiernan Brent (Virginia Tech, 2021-05-05). Internet-of-Things (IoT) devices have become increasingly popular in the last 10 years, yet hardware constraints have bred an acceptance of weak or absent security. The range of sophistication in IoT devices varies substantially depending on the functionality required, so security options need to be flexible. Manufacturers typically either use no security or lean towards the Advanced Encryption Standard (AES) with a 128-bit key. AES-128 is suitable for the higher end of the IoT device range, but is costly enough in terms of memory, time, and energy consumption that some devices opt for no security at all. Short development cycles and a strong drive to market also contribute to the lack of security. Recent work in lightweight cryptography has analyzed the suitability of custom protocols using AES as a comparative baseline. AES outperforms most custom protocols on security, but those analyses fail to take into account block size and future capabilities such as quantum computers. This thesis analyzes lightweight cryptographic primitives that would be suitable for use in IoT devices, helping fill a gap for "good enough" security within the size, weight, and power (SWaP) constraints common to IoT devices. The primitives have not undergone comprehensive cryptanalysis, and this thesis attempts to provide a preliminary analysis of confidentiality. The first is a single-stage residue number system (RNS) pseudorandom number generator (PRNG) that was shown in previous publications to produce strong outputs when analyzed with statistical tests like the NIST RNG test suite and DIEHARD. However, through analysis, an intelligent multi-stage conditional probability attack based on the pigeonhole principle was devised to reverse engineer the initial state (key) of a single-stage RNS PRNG.
The reverse engineering algorithm is presented and used against an IoT-caliber device to showcase the ability of an attacker to retrieve the initial state. Following this, defenses based on intentional noise, time hopping, and code hopping are proposed. Further computation and memory analysis show the proposed defenses are simple to implement but increase complexity for an attacker to the point where reverse engineering the PRNG is likely no longer viable. The next primitive proposed is a block cipher combination technique based on Galois Extension Field multiplication. Using any PRNG to produce the pseudorandom stream, the block cipher combination technique generates a variable-sized key matrix to encrypt plaintext. Electronic Codebook (ECB) and Cipher Feedback (CFB) modes of operation are discussed. Both system modes are implemented in MATLAB as well as on a Texas Instruments (TI) MSP430FR5994 microcontroller for hardware validation. A series of statistical tests are then run against the simulation results to analyze overall randomness, including NIST and the Law of the Iterated Logarithm; the system passes both. The hardware implementation is compared against a stream cipher variation and AES-128. The proposed block cipher outperforms AES-128 in terms of computation time and energy consumption for small block sizes. While not as secure, the cryptosystem is more scalable to the block sizes used in IoT devices.
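The Galois-field multiplication that the cipher-combination technique builds on can be shown in a short sketch. The reduction polynomial 0x11B (AES's choice for GF(2^8)) is an assumption here; the thesis's exact field parameters and key-matrix construction are not reproduced.

```python
# Minimal GF(2^8) arithmetic: carry-less multiply with modular reduction.
def gf_mul(a, b, poly=0x11B):
    # Multiply two bytes in GF(2^8), reducing by the field polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    # Brute-force multiplicative inverse (fine at this toy scale)
    return next(x for x in range(1, 256) if gf_mul(a, x) == 1)

# "Encrypt" one plaintext byte with a nonzero keystream byte, then invert.
# Multiplication by any nonzero field element is invertible, which is what
# makes a GF-based combining step decryptable.
key, pt = 0x57, 0xA3
ct = gf_mul(pt, key)
print(hex(ct), hex(gf_mul(ct, gf_inv(key))))
```

A key-matrix scheme extends this bytewise operation to blocks; the invertibility shown here is the property that makes decryption possible in both ECB- and CFB-style modes.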
- Application of Deep Learning in Intelligent Transportation Systems. Dabiri, Sina (Virginia Tech, 2019-02-01). The rapid growth of population and the permanent increase in the number of vehicles engender several issues in transportation systems, which in turn call for an intelligent and cost-effective approach to resolve the problems in an efficient manner. A cost-effective approach for improving and optimizing transportation-related problems is to unlock hidden knowledge in ever-increasing spatiotemporal and crowdsourced information collected from various sources such as mobile phone sensors (e.g., GPS sensors) and social media networks (e.g., Twitter). Data mining and machine learning techniques are the major tools for analyzing the collected data and extracting useful knowledge on traffic conditions and mobility behaviors. Deep learning is an advanced branch of machine learning that has enjoyed a lot of success in computer vision and natural language processing in recent years. However, deep learning techniques have been applied to only a small number of transportation applications, such as traffic flow and speed prediction. Accordingly, my main objective in this dissertation is to develop state-of-the-art deep learning architectures for transportation-related applications that have not yet been treated by deep learning in much detail, including (1) travel mode detection, (2) vehicle classification, and (3) traffic information systems. To this end, an efficient representation for spatiotemporal and crowdsourced data (e.g., GPS trajectories) must also be designed so that it is not only compatible with deep learning architectures but also carries sufficient information for the task at hand.
Furthermore, since the good performance of a deep learning algorithm is primarily contingent on access to a large volume of training samples, efficient data collection and labeling strategies are developed for different data types and applications. Finally, the performance of the proposed representations and models is evaluated by comparison with several state-of-the-art techniques in the literature. The experimental results clearly and consistently demonstrate the superiority of the proposed deep-learning-based framework for each application.
- Architecting IoT-Enabled Smart Building Testbed. Amanzadeh, Leila (Virginia Tech, 2018-10-29). Smart buildings' benefits range from improved occupant comfort, increased productivity, reduced energy consumption and operating costs, and lower CO2 emissions to an improved life cycle of utilities and more efficient operation of building systems [65]. Hence, modern building owners are turning towards smart buildings. However, most current smart buildings are not capable of achieving the objectives they were designed for and leave substantial room for improvement [22]. Therefore, a newer technology, the Internet of Things (IoT), is being combined with smart buildings to improve their performance [23]. IoT is the inter-networking of things embedded with electronics, software, sensors, actuators, and network connectivity to collect and exchange data; a "thing" in this definition is anything and everything around us, even ourselves. Using this technology, a door, for example, can be a thing that senses how many people have passed its sensor to enter a space, letting the lighting system prepare an appropriate amount of light or the HVAC (Heating, Ventilation, Air Conditioning) system provide a desirable temperature. IoT provides a great deal of useful information that was previously inaccessible, e.g., the condition of water pipes in winter, which helps avoid damage like frozen or broken pipes. However, despite all the benefits, IoT is vulnerable to cyber attacks; examples are provided in Chapter 1. In this project, among the building systems, the HVAC system is chosen to be automated with a control method called MPC (Model Predictive Control). According to the results of this project, this method is fast and very energy efficient, and it regulates the space temperature to any temperature the occupants desire with an error rate below 0.001.
Furthermore, a PID (Proportional-Integral-Derivative) controller has been designed for the HVAC system, and in the exact same cases MPC shows much better performance. To design controllers for the HVAC system and set the temperature to the desired value, a way to automatically balance the heat flow must be found. This requires a thermal model of the building, from which the amount of heat flowing into and out of a space, regardless of the external weather, can be estimated. To automate the HVAC system using programming languages like MATLAB, the thermal model of the building must be converted into a mathematical model. This mathematical model is unique to each building, depending on how many floors it has, how wide it is, and what materials were used in its construction. The conversion requires a great deal of effort and time, even for buildings with 2 floors and 2 rooms per floor, and the engineer may still make errors. This project presents software that automatically converts the thermal model of a building of any size into its mathematical model, which helps improve the HVAC controllers that set the temperature to the value occupants desire and avoids the errors and time otherwise spent on calculations and troubleshooting. In addition, a test environment has been designed and constructed as a cyber-physical system that allows us to test IoT-enabled control systems before implementing them on real buildings, observe their performance, and decide whether the system is satisfactory. All cyber threats can also be explored on it, and solutions to those attacks evaluated. Even systems already deployed can be assessed on this testbed; if any cyber-security vulnerability is found, solutions can be evaluated to help the existing systems improve.
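The thermal-to-mathematical-model conversion described above can be illustrated at its simplest: a 1R1C zone model (one thermal resistance to outdoors, one lumped heat capacitance) driven by a proportional heater. All parameter values here are illustrative assumptions, not taken from the project.

```python
# 1R1C zone model: C * dT/dt = (T_out - T)/R + Q, simulated by Euler steps.
R = 0.005          # K/W, envelope thermal resistance (assumed)
C = 2.0e6          # J/K, lumped zone heat capacitance (assumed)
dt = 60.0          # s, simulation time step
T_out, T_set = 278.0, 294.0   # outdoor and setpoint temperatures (K)

T = 285.0          # initial zone temperature (K)
for _ in range(24 * 60):                 # simulate 24 hours
    Q = max(0.0, 5000.0 * (T_set - T))   # simple proportional heater (W)
    Q = min(Q, 20000.0)                  # actuator saturation
    T += dt / C * ((T_out - T) / R + Q)  # Euler step of the ODE above

print(f"final zone temperature: {T:.2f} K")
```

Even this minimal model shows why the conversion matters: the steady-state temperature falls slightly below the setpoint because envelope losses fight the proportional heater, the kind of behavior a controller designer needs the mathematical model to predict.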
- Automated Tracking of Mouse Embryogenesis from Large-scale Fluorescence Microscopy Data. Wang, Congchao (Virginia Tech, 2021-06-03). Recent breakthroughs in microscopy techniques and fluorescence probes enable the recording of mouse embryogenesis at the cellular level for days, easily generating terabyte-scale 3D time-lapse data. Since millions of cells are involved, this information-rich data brings a natural demand for an automated tool for comprehensive analysis. Such a tool should automatically (1) detect and segment cells at each time point and (2) track cell migration across time. Most existing cell tracking methods cannot scale to data of such size and complexity, and those purpose-built for embryo data analysis heavily sacrifice accuracy. Here, we present a new computational framework for mouse embryo data analysis with high accuracy and efficiency. Our framework detects and segments cells with a fully probability-principled method, which not only has high statistical power but also helps determine the desired cell territories and increases segmentation accuracy. With the cells detected at each time point, our framework reconstructs cell traces with a new minimum-cost circulation-based paradigm, CINDA (CIrculation Network-based Data Association). Compared with the widely used minimum-cost flow-based methods, CINDA guarantees the globally optimal solution with the best-known theoretical worst-case complexity and hundreds to thousands of times better practical efficiency. Since the information extracted from a single time point is limited, our framework iteratively refines cell detection and segmentation results based on the cell traces, which contain more information from other time points. Results show that this dramatically improves the accuracy of cell detection, segmentation, and tracking.
To make our work easy to use, we designed standalone software, MIVAQ (Microscopic Image Visualization, Annotation, and Quantification), with our framework as the backbone and a user-friendly interface. With MIVAQ, users can easily analyze their data and visually check the results.
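The data-association problem CINDA solves can be illustrated in miniature: pick the one-to-one matching between cell detections in consecutive frames that minimizes total migration distance. CINDA solves the full multi-frame version as a minimum-cost circulation; this brute-force two-frame matching (with made-up centroids) only illustrates the cost model, not the algorithm.

```python
import itertools
import numpy as np

# Hypothetical cell centroids at time t and t+1 (x, y coordinates)
frame_t  = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
frame_t1 = np.array([[5.2, 4.8], [0.4, 0.3], [8.7, 1.5]])

# Pairwise migration costs: Euclidean distance between every pair
cost = np.linalg.norm(frame_t[:, None, :] - frame_t1[None, :, :], axis=2)

# Exhaustively choose the assignment with the smallest total cost
best = min(itertools.permutations(range(3)),
           key=lambda p: sum(cost[i, p[i]] for i in range(3)))
print(best)   # best[i] = index in frame t+1 linked to cell i in frame t
```

Brute force is exponential in the number of cells, which is exactly why a polynomial-time network-flow formulation (and CINDA's circulation refinement of it) is needed at the scale of millions of cells.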
- Better Side-Channel Attacks Through Measurements. Singh, Alok K.; Gerdes, Ryan M. (ACM, 2023-11-30). In recent years, there has been a growing focus on improving the efficiency of power side-channel analysis (SCA) attacks by using machine learning or artificial intelligence methods; however, these methods can only be as good as the data they are trained on. Previous work has not given much attention to improving the accuracy of measurements by optimizing the measurement setup and parameters, and most new researchers rely on heuristics to make measurements. This paper proposes an effective methodology for launching power SCA and increasing the efficiency of the attack by improving the measurements. We examine the heuristics related to measurement parameters, investigate ways to optimize the parameters, determine their effects empirically, and provide a theoretical analysis to support the findings. To demonstrate the shortcomings of commercial measurement devices, we present a low-cost measurement board design and its hardware realization. In doing so, we are able to improve the power measurements by optimizing the measurement setup, which in turn improves the efficiency of the attack.
- Defending Real-Time Systems through Timing-Aware Designs. Mishra, Tanmaya (Virginia Tech, 2022-05-04). Real-time computing systems are those designed to achieve computing goals by certain deadlines. They are present in everything from cars to airplanes, pacemakers to industrial control systems, and other pieces of critical infrastructure. With the increasing interconnectivity of these systems, the security issues and constant threat of manipulation by malicious external attackers that have plagued general computing systems now threaten the integrity and safety of real-time systems. This dissertation discusses three defense techniques that focus on the role real-time scheduling theory can play in reducing runtime cost and guaranteeing correctness when applying these defense strategies to real-time systems. The first introduces a novel timing-aware defense strategy for the CAN bus that utilizes TrustZone on state-of-the-art ARMv8-M microcontrollers. The second reduces the runtime cost of control-flow integrity (CFI), a popular system security defense technique, by correctly modeling when a real-time system performs I/O and exploiting the model to schedule CFI procedures efficiently. Finally, the third studies and provides a lightweight mitigation strategy for a recently discovered vulnerability in mixed-criticality real-time systems.
- Designing Security Defenses for Cyber-Physical Systems. Foruhandeh, Mahsa (Virginia Tech, 2022-05-04). Legacy cyber-physical systems (CPSs) were designed without considering cybersecurity as a primary design tenet, especially when considering their evolving operating environment. There are many examples of legacy systems, including automotive control, navigation, transportation, and industrial control systems (ICSs), to name a few. To make matters worse, the cost of designing and deploying defenses in existing legacy infrastructure can be overwhelming, as millions or even billions of legacy CPS systems are already in use. This economic angle prevents the use of defenses that are not backward compatible. Moreover, any protection has to operate efficiently in resource-constrained environments that are dynamic in nature. Hence, existing approaches that require expensive additional hardware, propose a new protocol from scratch, or rely on complex numerical operations such as strong cryptographic solutions are less likely to be deployed in practice. In this dissertation, we explore a variety of lightweight solutions for securing different existing CPSs without requiring any modifications to the original system design at the hardware or protocol level. In particular, we use fingerprinting, crowdsourcing, and deterministic models as alternative backwards-compatible defenses for securing vehicles, global positioning system (GPS) receivers, and a class of ICSs called supervisory control and data acquisition (SCADA) systems, respectively. We use fingerprinting to address the deficiencies in automobile cybersecurity from the angle of controller area network (CAN) security. The CAN protocol is the de facto bus standard commonly used in the automotive industry for connecting electronic control units (ECUs) within a vehicle.
The broadcast nature of this protocol, along with the lack of authentication or integrity guarantees, creates a foothold for adversaries to perform arbitrary data injection or modification and impersonation attacks on the ECUs. We propose SIMPLE, a single-frame-based physical-layer identification for intrusion detection and prevention on such networks. Physical-layer identification, or fingerprinting, is a method that takes advantage of the manufacturing inconsistencies in the hardware components that generate the analog signal for the CPS of interest. It translates the manifestation of these inconsistencies, which appear in the analog signals, into unique features called fingerprints, which can later be used for authentication purposes. Our solution is resilient to ambient temperature, supply voltage variations, and aging. Next, we use fingerprinting and crowdsourcing in two separate protection approaches, leveraging two different perspectives, for securing GPS receivers against spoofing attacks. GPS is the most predominant non-authenticated navigation system. The security issues inherent in civilian GPS are exacerbated by the fact that its design and implementation are public knowledge. To address this problem, we first introduce Spotr, GPS spoofing detection via device fingerprinting, which is able to determine the authenticity of signals based on their physical-layer similarity to signals known to have originated from GPS satellites. More specifically, we are able to detect spoofing activities and track genuine signals over different times, locations, and propagation effects related to environmental conditions. In a different approach, at a higher level, we put forth Crowdsourcing GPS, a total solution for GPS spoofing detection, recovery, and attacker localization.
Crowdsourcing is a method where multiple entities share their observations of the environment and come together as a whole to make a more accurate or reliable decision on the status of the system. Crowdsourcing has the advantage of low deployment complexity and distributed cost; however, its functionality depends on the adoption rate among users. Here, we have two methods for implementing Crowdsourcing GPS. In the first method, the users in the crowd are aware of their approximate distance from other users via Bluetooth. They cross-validate this approximate distance with the GPS-derived distance and, in case of any discrepancy, report ongoing spoofing activities. This method is a strong candidate when the users in the crowd have a sparse distribution. It is also very effective when tackling multiple coordinated adversaries. For method II, we exploit the angular dispersion of the users with respect to the direction from which the adversarial signal is transmitted. As a result, users who are not facing the attacker will be safe, because the human body, which is mostly water, absorbs the weak adversarial GPS signal. The safe users help the spoofed users discover the ongoing attack and recover from it. Additionally, the angular information is used to localize the adversary. This method is slightly more complex and shows the best performance in dense areas. It is also designed under the assumption that the spoofing attack is terrestrial. Finally, we propose a tandem IDS to secure SCADA systems. SCADA systems play a critical role in most safety-critical ICS infrastructures. The evolution of communications technology has rendered modern SCADA systems and their connected actuators and sensors vulnerable to malicious attacks on both the physical and application layers. Conventional IDSs built to secure SCADA systems focus on a single layer of the system.
With the tandem IDS we break this habit and propose a strong multi-layer solution able to expose a wide range of attacks. To be more specific, the tandem IDS comprises two parts: a traditional network IDS and a shadow replica. We design the shadow replica as a deterministic IDS. It performs a workflow analysis and makes sure the logical flow of events in the SCADA controller and its connected devices maintains the expected states; any deviation indicates either malicious activity or a reliability issue. To model the application-level events, we leverage finite state machines (FSMs) to compute the anticipated states of all of the devices. This is feasible because in many existing ICSs the flow of traffic and the resulting states and actions in the connected devices are deterministic in nature. Consequently, it leads to a reliable solution free of uncertainty. Aside from detecting traditional network attacks, our approach bypasses the attacker if it succeeds in taking over the devices, and it maintains continuous service if the SCADA controller is compromised.
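The shadow replica's FSM idea can be sketched in a few lines: replay observed events against a table of expected transitions and flag anything outside the model. The states and events here are hypothetical examples, not the dissertation's actual SCADA workflow.

```python
# Expected workflow of a hypothetical tank-filling process:
# (current_state, event) -> next_state. Anything absent is an anomaly.
TRANSITIONS = {
    ("idle",    "open_valve"):  "filling",
    ("filling", "level_high"):  "full",
    ("full",    "close_valve"): "idle",
}

def check_workflow(events, state="idle"):
    # Replay events against the FSM; an unmodeled transition is reported
    # as an anomaly (attack or reliability issue), with its location.
    for ev in events:
        nxt = TRANSITIONS.get((state, ev))
        if nxt is None:
            return False, state, ev
        state = nxt
    return True, state, None

ok, _, _ = check_workflow(["open_valve", "level_high", "close_valve"])
bad, at_state, at_ev = check_workflow(["open_valve", "close_valve"])
print(ok, bad, at_state, at_ev)
```

Because the legitimate workflow is deterministic, the model has no false-positive uncertainty: any transition outside the table is, by construction, not something the genuine process can do.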
- An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks. Bijinemula, Sandeep Kumar (Virginia Tech, 2019-02-01). Engine-triggered tasks are real-time tasks released when the crankshaft arrives at certain positions in its path of rotation. This makes the release rate of these jobs a function of the crankshaft's angular speed and acceleration. In addition, several properties of engine-triggered tasks, such as execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique.
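As a point of reference for the transformation above, this is the standard 0/1 knapsack dynamic program that such knapsack-variant formulations build on. The items and capacity are arbitrary textbook examples, not AVR task parameters.

```python
def knapsack(values, weights, capacity):
    # dp[c] = best total value achievable with total weight <= c
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities in reverse so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Classic example instance: optimal choice is the 100- and 120-value items
print(knapsack([60, 100, 120], [10, 20, 30], 50))
```

In the AVR setting the "items" are roughly job releases at candidate speeds and the "capacity" a time window, so demand maximization inherits knapsack's structure; the thesis's contribution is exploiting that structure, plus search-space pruning, to make the computation tractable.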
- Electromagnetic Interference Attacks on Cyber-Physical Systems: Theory, Demonstration, and Defense. Dayanikli, Gokcen Yilmaz (Virginia Tech, 2021-08-27). A cyber-physical system (CPS) is a complex integration of hardware and software components performing well-defined tasks. To date, many software-based attacks targeting the network and computation layers have been reported by researchers. However, physical-layer attacks that utilize natural phenomena (e.g., electromagnetic waves) to manipulate safety-critical signals such as analog sensor outputs, digital data, and actuation signals have recently attracted attention. The purpose of this dissertation is to expose the weaknesses of cyber-physical systems against low-power Intentional Electromagnetic Interference (IEMI) attacks and provide hardware-level countermeasures. Actuators are irreplaceable components of electronic systems that control physically moving sections, e.g., servo motors that control robot arms. In Chapter 2, the potential effects of IEMI attacks on actuation control are presented. The Pulse Width Modulation (PWM) signal, the industry standard for actuation control, is observed to be vulnerable to IEMI with specific frequencies and modulated waveforms. Additionally, an advanced attacker with limited information about the victim can prevent actuation, e.g., stop the rotation of a DC or servo motor. For some specific actuator models, the attacker can even take control of the actuators, and consequently the motion of the CPS, e.g., the flight trajectory of a UAV. The attacks are demonstrated on a fixed-wing unmanned aerial vehicle (UAV) during varying flight scenarios, and it is observed that the attacker can block or take control of the flight surfaces (e.g., aileron), which results in a crash of the UAV or a controllable change in its trajectory, respectively.
Serial communication protocols such as UART or SPI are widely employed in electronic systems to establish communication between peripherals (e.g., sensors) and controllers. It is observed that an adversary with the reported three-phase attack mechanism can replace the original victim data with 'desired' false data. In the detection phase, the attacker listens to the EM leakage of the victim system. In the signal processing phase, the exact timing of the victim data is determined from the victim EM leakage, and in the transmission phase, the radiated attack waveform replaces the original data with the 'desired' false data. The attack waveform is a narrowband signal at the victim baud rate, and in a proof-of-concept demonstration, the attacks are observed to be over 98% effective at inducing a desired bit sequence into pseudorandom UART frames. Countermeasures such as twisted cables are discussed and experimentally validated in high-IEMI scenarios. In Chapter 4, a state-of-the-art electric vehicle (EV) charger is assessed in IEMI attack scenarios, and it is observed that an attacker can use low-cost RF components to inject false current or voltage sensor readings into the system. The manipulated sensor data results in a drastic increase in the current supplied to the EV, which can easily result in physical damage due to thermal runaway of the batteries. The current switches, which control the output current of the EV charger, can be controlled (i.e., turned on) by relatively high-power IEMI, giving the attacker direct control of the current supplied to the EV. The attacks on UAVs, communication systems, and EV chargers show that additional hardware countermeasures should be added to state-of-the-art system designs to alleviate the effect of IEMI attacks. Fiber-optic transmission and low-frequency magnetic-field shielding can be used to transmit significant signals, or the PCB-level countermeasures reported in Chapter 5 can be utilized.
- An Empirical Method of Ascertaining the Null Points from a Dedicated Short-Range Communication (DSRC) Roadside Unit (RSU) at a Highway On/Off-Ramp. Walker, Jonathan Bearnarr (Virginia Tech, 2018-09-26). The deployment of dedicated short-range communications (DSRC) roadside units (RSUs) allows a connected or automated vehicle to acquire information from the surrounding environment using vehicle-to-infrastructure (V2I) communication. However, wireless communication using DSRC has been shown to exhibit null points at repeatable distances: significant, unexpected losses in wireless signal strength along the V2I communication path. If the wireless connection is poor or non-existent, a V2I safety application will not obtain sufficient data to perform its operational services. In other words, a poor wireless connection between a vehicle and infrastructure (e.g., an RSU) can hamper the performance of a safety application. For example, a designer of a V2I safety application may require a minimum rate of data (or packet count) over 1,000 meters to effectively implement a Reduced Speed/Work Zone Warning (RSZW) application. The RSZW safety application is aimed at alerting or warning drivers, in a Cooperative Adaptive Cruise Control (CACC) platoon, who are approaching a work zone. Therefore, the packet count and/or signal strength threshold criterion must be determined by the developer of the V2I safety application. Thus, we selected an arbitrary criterion to develop an empirical method of ascertaining the null points from a DSRC RSU. The research motivation focuses on developing an empirical method of calculating the null points of a DSRC RSU for V2I communication at a highway on/off-ramp. The intent is to improve safety, mobility, and environmental applications, since a map of the null points can be plotted against the distance between the DSRC RSU and a vehicle's onboard unit (OBU).
The main research question asks: 'What is a more robust empirical method, compared to the horizontal and vertical laws of reflection formula, in determining the null points from a DSRC RSU on a highway on/off ramp?' The research objectives are as follows: 1. Explain where and why null points occur from a DSRC RSU (Chapter 2) 2. Apply the existing horizontal and vertical polarization model and discuss the limitations of the model in a real-world scenario for a DSRC RSU on a highway on/off ramp (Chapter 3 and Appendix A) 3. Introduce an extended horizontal and vertical polarization null point model using empirical data (Chapter 4) 4. Discuss the conclusion, limitations of work, and future research (Chapter 5). The simplest way to understand where and why null points occur is to depict two sinusoidal waves: a direct and a reflective wave (i.e., the two-ray model). The null points for a DSRC RSU occur because the direct and reflective waves produce destructive interference (i.e., a decrease in signal strength) when they collide. Moreover, the null points can be located using the Pythagorean theorem for the direct and reflective waves. Two existing models were leveraged to analyze null points: 1) a signal strength loss model (i.e., the free space path loss model, or FSPL, in Appendix A) and 2) the existing horizontal and vertical polarization null point model for a DSRC RSU. Using empirical data from two different field tests, the existing horizontal and vertical polarization null point model was shown to have limitations at short distances from the DSRC RSU. Moreover, the existing horizontal and vertical polarization model for null points was extremely challenging to replicate across more than 15 DSRC RSU data sets. 
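The two-ray picture above can be made concrete. A short numerical sketch, assuming the 5.9 GHz DSRC carrier and illustrative antenna heights (the null locations follow from the Pythagorean path difference; the specific heights are not the thesis's test setup):

```python
import math

# Two-ray (direct + ground-reflected) model sketch. 5.9 GHz is the DSRC
# band; the antenna heights below are illustrative assumptions.
C = 3e8                  # speed of light, m/s
F = 5.9e9                # DSRC carrier frequency, Hz
LAM = C / F              # wavelength, ~5.1 cm

def path_difference(d: float, ht: float, hr: float) -> float:
    """Extra distance travelled by the reflected ray (Pythagorean theorem)."""
    direct = math.sqrt(d ** 2 + (ht - hr) ** 2)
    reflected = math.sqrt(d ** 2 + (ht + hr) ** 2)
    return reflected - direct

def two_ray_gain(d: float, ht: float, hr: float) -> float:
    """Relative field magnitude |1 + e^{j(pi + dphi)}| = 2|sin(dphi/2)|.
    The ground bounce adds a pi phase shift, so the rays cancel (a null)
    whenever the path difference is a whole number of wavelengths."""
    dphi = 2 * math.pi * path_difference(d, ht, hr) / LAM
    return 2 * abs(math.sin(dphi / 2))

def null_distance(n: int, ht: float, hr: float) -> float:
    """Far-field approximation: path difference ~ 2*ht*hr/d, so the n-th
    null sits near d = 2*ht*hr / (n * wavelength)."""
    return 2 * ht * hr / (n * LAM)

ht, hr = 6.096, 1.5      # RSU at 20 ft; assumed OBU antenna height
d1 = null_distance(1, ht, hr)
print(f"first null ~ {d1:.0f} m, gain there = {two_ray_gain(d1, ht, hr):.3f}")
```

Under these assumptions the outermost null falls several hundred meters from the RSU, with further nulls packed progressively closer in, which is consistent with nulls appearing at repeatable distances.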
After calculating the null point for several DSRC RSU heights, a limitation of the existing horizontal and vertical polarization null point model was observed across more than 15 DSRC RSU data sets (i.e., the model does not account for null points along the full length of the FSPL model). An extended horizontal and vertical polarization model is therefore proposed to calculate the null points from a DSRC RSU. There are 18 model comparisons of the packet counts and signal strengths at various thresholds as prospective extended horizontal and vertical polarization models. This paper compares the predictive ability of the 18 models and measures their fit. Finally, a prediction graph is depicted with the neural network's probability profile for packet counts = 1 when greater than or equal to 377. Likewise, a Python script of the extended horizontal and vertical polarization model is provided in Appendix C. Consequently, the neural network model was applied to 10 different DSRC RSU data sets at 10 unique locations around a circular test track with packet counts ranging from 0 to 11. Neural network models were generated for 10 DSRC RSUs using three thresholds with the objective of comparing the predictive ability of each model and measuring its fit. Based on 30 models at 10 unique locations, the highest misclassification rate was 0.1248, while the lowest was 0.000. There were six RSUs mounted at 3.048 meters (or 10 feet) from the ground with misclassification rates that ranged from 0.1248 to 0.0553. Out of 18 models, seven had a misclassification rate greater than 0.110, while the remaining misclassification rates were less than 0.0993. There were four RSUs mounted at 6.096 meters (or 20 feet) from the ground with misclassification rates that ranged from 0.919 to 0.000. Out of 12 models, four had a misclassification rate greater than 0.0590, while the remaining misclassification rates were less than 0.0412. 
Finally, there are two major limitations in the research: 1) the most effective key parameter is packet count, which often requires expensive data acquisition equipment to obtain, and 2) the categorical model type (i.e., decision tree, logistic regression, or neural network) will vary based on the packet count or signal strength threshold dictated by the threshold criterion. There are at least two future research areas that correspond to this body of work: 1) there is a need to leverage the extended horizontal and vertical polarization null point model on multiple DSRC RSUs along a highway on/off ramp, and 2) there is a need to apply and validate different electric and magnetic (or propagation) models.
- Enhancing Software Security through Code Diversification Verification, Control-flow Restriction, and Automatic CompartmentalizationJang, Jae-Won (Virginia Tech, 2024-07-26)In today's digital age, computer systems are prime targets for adversaries due to the vast amounts of sensitive information stored digitally. This ongoing cat-and-mouse game between programmers and adversaries forces security researchers to continually develop novel security measures. Widely adopted schemes like NX bits have safeguarded systems against traditional memory exploits such as buffer overflows, but new threats like code-reuse attacks quickly bypass these defenses. Code-reuse attacks exploit existing code sequences, known as gadgets, without injecting new malicious code, making them challenging to counter. Additionally, input-based vulnerabilities pose significant risks by exploiting external inputs to trigger malicious paths. Languages like C and C++ are often considered unsafe due to their tendency to cause issues like buffer overflows and use-after-free errors. Addressing these complex vulnerabilities requires extensive research and a holistic approach. This dissertation initially introduces a methodology for verifying the functional equivalence between an original binary and its diversified version. The Verification of Diversified Binary (VDB) algorithm is employed to determine whether the two binaries—the original and the diversified—maintain functional equivalence. Code diversification techniques modify the binary compilation process to produce functionally equivalent yet different binaries from the same source code. Most code diversification techniques focus on analyzing non-functional properties, such as whether the technique improves security. The objective of this contribution is to enable the use of untrusted diversification techniques in essential applications. 
Our evaluation demonstrates that the VDB algorithm can verify the functional equivalence of 85,315 functions within binaries from the GNU Coreutils 8.31 benchmark suite. Next, this dissertation proposes a binary-level tool that modifies binaries to protect against control-flow hijacking attacks. Traditional approaches to guarding against return-oriented programming (ROP) attacks either introduce significant overhead, require hardware support, or need intimate knowledge of the binary, such as source code. In contrast, this contribution relies on neither source code nor the latest hardware technology (e.g., Intel Control-flow Enforcement Technology). Instead, we show that we can precisely restrict control-flow transfers from reaching non-intended paths even without these features. To that end, this contribution proposes a novel control-flow integrity policy based on a deny list, called Control-flow Restriction (CFR). CFR determines which control-flow transfers are allowed in the binary without requiring source code. Our implementation and evaluation of CFR show that it achieves this goal with an average runtime performance overhead for commercial off-the-shelf (COTS) binaries in the range of 5.5% to 14.3%. In contrast, a state-of-the-art binary-level solution such as BinCFI has an average overhead of 61.5%. Additionally, this dissertation explores leveraging the latest hardware security primitives to compartmentalize sensitive data. Specifically, we use a tagged memory architecture introduced by ARM called the Memory Tagging Extension (MTE), which assigns a metadata tag to a memory location that is associated with pointers referencing that memory location. Although promising, ARM MTE suffers from predictable tag allocation on stack data, vulnerable plain-text metadata tags, and a lack of fine-grained memory access control. 
Therefore, this contribution introduces Shroud to enhance data security through compartmentalization using MTE and to protect MTE's vulnerable tagged pointers through encryption. Evaluation of Shroud demonstrates its security effectiveness against non-control-data attacks like Heartbleed and Data-Oriented Programming, with performance evaluations showing an average overhead of 4.2% on lighttpd and 2% on UnixBench. Finally, the NPB benchmark measured Shroud's overhead, showing an average runtime overhead of 2.57%. The vulnerabilities highlighted by exploits like Heartbleed capitalize on external inputs, underscoring the need for enhanced input-driven security measures. Therefore, this dissertation describes a method to improve upon the limitations of traditional compartmentalization techniques. This contribution introduces the Input-Based Compartmentalization System (IBCS), a comprehensive toolchain that uses user input to automatically identify data for memory protection. Based on user inputs, IBCS employs hybrid taint analysis to generate sensitive code paths and further analyzes each piece of tainted data using novel assembly analyses to identify and enforce selective targets. Evaluations of IBCS demonstrate its security effectiveness through adversarial analysis and report an average overhead of 3% on Nginx. Finally, this dissertation concludes by revisiting the problem of implementing a classical technique known as Software Fault Isolation (SFI) on the x86-64 architecture. Prior works attempting to implement SFI on x86-64 have suffered from supporting only a limited number of sandboxes, high context-switch overhead, and the need for extensive toolchain modifications, jeopardizing maintainability and introducing compatibility issues due to the need for specific hardware. 
This dissertation describes x86-based Fault Isolation (XFI), an efficient SFI scheme implemented on an x86-64 architecture with minimal modifications to the toolchain, which reduces the complexity of enforcing SFI policies with low performance overhead (22.48% average) and binary size overhead (2.65% average). XFI initializes the sandbox environment for the rewritten binary and, depending on the instructions, enforces data-access and control-flow policies to ensure safe execution. XFI provides the security benefits of a classical SFI scheme and offers additional protection against several classes of side-channel attacks, and it can be further extended to enhance its protection capabilities.
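The data-access policy at the heart of a classical SFI scheme can be illustrated with simple address masking. This is a generic sketch of the technique, not XFI's actual sandbox layout or instruction sequence; the sandbox base and size are assumed values:

```python
# Sketch of the classic SFI data-access policy: every memory address the
# sandboxed code computes is masked so it can only land inside the sandbox
# region. The base and size here are illustrative assumptions.

SANDBOX_BASE = 0x0000_2000_0000_0000   # assumed power-of-two-aligned base
SANDBOX_SIZE = 1 << 32                 # assumed 4 GiB sandbox region

def sfi_mask(addr: int) -> int:
    """Force an address into [SANDBOX_BASE, SANDBOX_BASE + SANDBOX_SIZE):
    keep only the low bits, then OR in the sandbox base."""
    return SANDBOX_BASE | (addr & (SANDBOX_SIZE - 1))

inside = SANDBOX_BASE + 0x1234
outside = 0x0000_7FFF_DEAD_BEEF
assert sfi_mask(inside) == inside      # legal accesses pass through unchanged
assert SANDBOX_BASE <= sfi_mask(outside) < SANDBOX_BASE + SANDBOX_SIZE
```

In a real implementation the mask is a couple of inline machine instructions emitted before each load or store, which is where the runtime overhead comes from.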
- Establishment of a Cyber-Physical Systems (CPS) Test Bed to Explore Traffic Collision Avoidance System (TCAS) Vulnerabilities to Cyber AttacksGraziano, Timothy Michael (Virginia Tech, 2021-08-10)Traffic Collision Avoidance Systems (TCAS) are safety-critical, unauthenticated ranging systems required in commercial aircraft. Previous work has identified TCAS vulnerabilities to attacks from malicious actors with low-cost software defined radios (SDRs) and inexpensive open-source software (GNU Radio), where spoofing TCAS radio signals is now possible. This paper outlines a proposed threat model for several TCAS vulnerabilities from an adversarial perspective. Periodic and aperiodic attack models are explored as possible low-latency solutions to spoof TCAS range estimation. A TCAS test bed is established with commercial avionics to demonstrate the efficacy of the proposed vulnerabilities. SDRs and Vector Waveform Generators (VWGs) are used to achieve the desired latency. Sensor inputs to the TCAS system are spoofed with micro-controllers. These include radar altimeter, barometric altimeter, and Air Data Computer (ADC) heading and attitude information transmitted via the Aeronautical Radio INC (ARINC) 429 encoding protocol. TCAS spoofing is attempted against the test bed, and analysis is conducted on the timing results and test bed performance indicators. The threat model is analyzed qualitatively and quantitatively.
- Explainable and Network-based Approaches for Decision-making in Emergency ManagementTabassum, Anika (Virginia Tech, 2021-10-19)Critical Infrastructures (CIs), such as power, transportation, and healthcare, refer to systems, facilities, technologies, and networks vital to national security, public health, and the socio-economic well-being of people. CIs play a crucial role in emergency management. For example, Hurricane Ida, the Texas winter storm, and the Colonial Pipeline cyber-attack, all of which occurred in the US during 2021, show that CIs are highly inter-dependent, with complex interactions. Power system failures and the shutdown of natural gas pipelines, in turn, led to debilitating impacts on communication, waste systems, public health, etc. Consider power failures during a disaster, such as a hurricane. Subject Matter Experts (SMEs) such as emergency management authorities may be interested in several decision-making tasks. Can we identify disaster phases in terms of the severity of damage by analyzing changes in power failures? Can we tell the SMEs which power grids or regions are the most affected during each disaster phase and need immediate action to recover? Answering these questions can help SMEs respond quickly and send resources for fast recovery from damage. Can we systematically show how the failure of different power grids may impact the whole CI network due to inter-dependencies? This can help SMEs better prepare for and mitigate risks by improving system resiliency. In this thesis, we explore problems to efficiently operate decision-making tasks during a disaster for emergency management authorities. Our research has two primary directions: guiding decision-making in resource allocation, and planning to improve system resiliency. Our work is done in collaboration with Oak Ridge National Laboratory to contribute impactful research on real-life CIs and disaster power-failure data. 1. Explainable resource allocation: In contrast to current interpretable or explainable models, which provide answers to understand a model's output, we view explanations as answers that guide resource-allocation decision-making. In this thesis, we focus on developing a novel model and algorithm to identify disaster phases from changes in power failures, and to pinpoint the regions most affected at each disaster phase so the SMEs can send resources for fast recovery. 2. Networks for improving system resiliency: We view CIs as a large heterogeneous network with nodes as infrastructure components and dependencies as edges. Our goal is to construct a visual analytic tool and develop a domain-inspired model to identify the important components and connections on which the SMEs need to focus and better prepare to mitigate the risk of a disaster.
- Exploring the Vulnerabilities of Traffic Collision Avoidance Systems (TCAS) Through Software Defined Radio (SDR) ExploitationBerges, Paul Martin (Virginia Tech, 2019-06-13)Traffic Collision Avoidance Systems (TCAS) are safety-critical systems that are deployed on most commercial aircraft in service today. However, TCAS transactions were not designed to account for malicious actors. While in the past it may have been infeasible for an attacker to craft arbitrary radio signals, attackers today have access to open-source digital signal processing software like GNU Radio and inexpensive Software Defined Radios (SDRs). Therefore, this thesis presents analytical and experimental motivation for further investigation of TCAS from a security perspective. Methods for analyzing TCAS both qualitatively and quantitatively from an adversarial perspective are presented, and an experimental attack is developed in GNU Radio under a well-defined threat model.
- Extensions to Radio Frequency FingerprintingAndrews, Seth Dixon (Virginia Tech, 2019-12-05)Radio frequency fingerprinting, a type of physical layer identification, allows identifying wireless transmitters based on their unique hardware. Every wireless transmitter has slight manufacturing variations and differences due to the layout of components. These are manifested as differences in the signal emitted by the device. A variety of techniques have been proposed for identifying transmitters at the physical layer based on these differences. This has been successfully demonstrated on a large variety of transmitters and other devices. However, some situations still pose challenges. Some types of fingerprinting features are very dependent on the modulated signal, especially features based on the frequency content of a signal. This means that changes in transmitter configuration, such as bandwidth or modulation, will prevent wireless fingerprinting. Such changes may occur frequently with cognitive radios and in dynamic spectrum access networks. A method is proposed to transform features to be invariant with respect to changes in transmitter configuration. With the transformed features it is possible to re-identify devices with a high degree of certainty. Next, improving performance with limited data by identifying devices using observations crowdsourced from multiple receivers is examined. Combinations of three types of observations are defined: combinations of fingerprinter output, features extracted from multiple signals, and raw observations of multiple signals. Performance is demonstrated, although the best method depends on the feature set. Other practical factors are also considered, including processing power and the amount of data needed. Finally, drift in fingerprinting features caused by changes in temperature is examined. 
Drift results from gradual changes in the physical-layer behavior of transmitters and can have a substantial negative impact on fingerprinting. Even small changes in temperature are found to cause drift, with the oscillator as the primary source of this drift (and other variation) in the fingerprints used. Various methods are tested to compensate for these changes. It is shown that frequency-based features not dependent on the carrier are unaffected by drift but are unable to distinguish between devices. Several models are examined that can improve performance when drift is present.
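As a concrete example of an oscillator-derived fingerprinting feature, the carrier frequency offset (CFO) caused by crystal error can be estimated from raw IQ samples. A hedged sketch, with a synthetic complex tone standing in for a captured burst (the sample rate and offset are illustrative, not values from the thesis):

```python
import cmath
import math

# Sketch: carrier frequency offset (CFO) as a fingerprinting feature.
# Oscillator error shifts the whole signal by a small frequency; the mean
# sample-to-sample phase advance recovers it.

FS = 1e6   # sample rate in Hz (assumed)

def estimate_cfo(iq, fs=FS):
    """Estimate frequency offset from the average phase step between
    consecutive complex samples (phase of the lag-1 autocorrelation)."""
    acc = sum(iq[n] * iq[n - 1].conjugate() for n in range(1, len(iq)))
    return cmath.phase(acc) * fs / (2 * math.pi)

# Synthetic tone with a 1.5 kHz offset, standing in for a received burst.
true_cfo = 1500.0
iq = [cmath.exp(2j * math.pi * true_cfo * n / FS) for n in range(1000)]
print(round(estimate_cfo(iq)))   # ~1500
```

Because the estimate tracks the oscillator directly, it is exactly the kind of feature that drifts with temperature, which is why temperature compensation matters.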
- Incorporating Obfuscation Techniques in Privacy Preserving Database-Driven Dynamic Spectrum Access SystemsZabransky, Douglas Milton (Virginia Tech, 2018-09-11)Modern innovation is a driving force behind increased spectrum crowding. Several studies performed by the National Telecommunications and Information Administration (NTIA), Federal Communications Commission (FCC), and other groups have proposed Dynamic Spectrum Access (DSA) as a promising solution to alleviate spectrum crowding. The spectrum assignment decisions in DSA will be made by a centralized entity referred to as a spectrum access system (SAS); however, maintaining spectrum utilization information in SAS presents privacy risks, as sensitive Incumbent User (IU) operation parameters are required to be stored by SAS in order to perform spectrum assignments properly. These sensitive operation parameters may potentially be compromised if SAS is the target of a cyber attack or an inference attack executed by a secondary user (SU). In this thesis, we explore the operational security of IUs in SAS-based DSA systems and propose a novel privacy-preserving SAS-based DSA framework, Suspicion Zone SAS (SZ-SAS), the first such framework that protects against both the scenario of inference attacks in an area with sparsely distributed IUs and the scenario of untrusted or compromised SAS. We then define modifications to the SU inference attack algorithm, which demonstrate the necessity of applying obfuscation to SU query responses. Finally, we evaluate obfuscation schemes which are compatible with SZ-SAS, verifying the effectiveness of such schemes in preventing an SU inference attack. Our results show SZ-SAS is capable of utilizing compatible obfuscation schemes to prevent the SU inference attack, while operating using only homomorphically encrypted IU operation parameters.
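The intuition behind obfuscating SU query responses can be sketched generically: deny some truly available channels at random, so that the pattern of denials no longer outlines incumbent locations. This is an illustrative stand-in with an assumed flip rate, not one of the SZ-SAS-compatible schemes actually evaluated in the thesis:

```python
import random

# Sketch of query-response obfuscation for a spectrum access system:
# protected (unavailable) channels are always denied, and a random
# fraction of available channels is denied as well, blurring the spatial
# boundary an inference attack relies on. The 0.2 flip rate is assumed.

def obfuscated_response(truly_available, flip_prob=0.2, rng=None):
    """Return the (possibly obfuscated) grant/deny answer to an SU query."""
    rng = rng or random.Random()
    if not truly_available:
        return False                      # protecting incumbents is absolute
    return rng.random() >= flip_prob      # available, but sometimes denied

rng = random.Random(0)
granted = sum(obfuscated_response(True, 0.2, rng) for _ in range(1000))
print(granted)   # roughly 800 of 1000 available channels granted
```

The cost of the obfuscation is the spectrum utilization lost to false denials, which is the trade-off any such scheme has to balance against privacy.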
- Latent Walking Techniques for Conditioning GAN-Generated MusicEisenbeiser, Logan Ryan (Virginia Tech, 2020-09-21)Artificial music generation is a rapidly developing field focused on the complex task of creating neural networks that can produce realistic-sounding music. Generating music is very difficult; components like long- and short-term structure introduce temporal complexity that can be difficult for neural networks to capture. Additionally, the acoustics of musical features like harmonies and chords, as well as timbre and instrumentation, require complex representations for a network to generate them accurately. Various techniques for both music representation and network architecture have been used in the past decade to address these challenges in music generation. The focus of this thesis extends beyond generating music to the challenge of controlling and/or conditioning that generation. Conditional generation involves an additional piece or pieces of information which are input to the generator and constrain aspects of the results. Conditioning can be used to specify a tempo for the generated song, increase the density of notes, or even change the genre. Latent walking is one of the most popular techniques in conditional image generation, but its effectiveness on music-domain generation is largely unexplored. This paper focuses on latent walking techniques for conditioning the music generation network MuseGAN and examines the impact of this conditioning on the generated music.
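Latent walking itself is straightforward to sketch: interpolate between two latent vectors and feed each intermediate point to the generator. A minimal illustration (plain Python lists stand in for latent tensors; the trained generator, e.g. MuseGAN, is omitted):

```python
# Sketch of latent walking: linear interpolation between two latent
# vectors yields a smooth path of generator inputs, so the generated
# outputs morph gradually from one to the other.

def lerp(z0, z1, t):
    """Point a fraction t of the way from z0 to z1 in latent space."""
    return [(1 - t) * a + t * b for a, b in zip(z0, z1)]

def latent_walk(z0, z1, steps):
    """`steps` evenly spaced latent vectors from z0 to z1, inclusive."""
    return [lerp(z0, z1, i / (steps - 1)) for i in range(steps)]

path = latent_walk([0.0, 0.0], [1.0, 2.0], 5)
print(path[2])   # midpoint: [0.5, 1.0]
```

In practice each vector in `path` would be passed through the generator, and spherical interpolation is often preferred over linear when latents are drawn from a Gaussian.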
- Model Predictive Adaptive Cruise Control with Consideration of Comfort and Energy SavingsRyan, Timothy Patrick (Virginia Tech, 2021-06-09)The Hybrid Electric Vehicle Team (HEVT) of Virginia Tech is taking part in the four-year EcoCAR Mobility Challenge organized by Argonne National Laboratory. The objective of this competition is to modify a stock 2019 internal-combustion-engine Chevrolet Blazer and transform the vehicle into a P4 hybrid. In the P4 hybrid architecture, the HEVT vehicle has an internal combustion engine on the front axle and an electric motor on the rear axle. The goal of this competition is to create a vehicle that achieves better fuel economy and increases customer appeal. The general target market for hybrids is smaller vehicles; as a midsize sport utility vehicle (SUV), the Blazer offers a larger vehicle with the perk of better fuel economy. In the competition, the vehicle is assessed on its ability to integrate advanced vehicle technology, improve consumer appeal, and provide comfort for the passenger. The research of this paper is centered around the design of a full-range longitudinal Adaptive Cruise Control (ACC) algorithm. Initially, research is conducted on various linear and nonlinear control strategies that provide the necessary functionality. Based on its ability to predict future time instances in an optimal manner, the Model Predictive Control (MPC) algorithm is chosen and combined with other standard control strategies to create an ACC system. The main objective of this research is the implementation of Adaptive Cruise Control features that provide comfort and energy savings to the rider while maintaining safety as the priority. Rider comfort is achieved by placing constraints on acceleration and jerk. Lastly, a proper energy analysis is conducted to showcase the potential energy savings from the implementation of the Adaptive Cruise Control system. 
This implementation includes tuning the algorithm so that the best energy consumption at the wheel is achieved without compromising vehicle safety. The scope of this paper expands on current knowledge of Adaptive Cruise Control by using a simplified nonlinear vehicle system model in MATLAB to simulate different conditions. For each condition, comfort and energy consumption are analyzed. The city 505 simulation of a traditional ACC system shows a 14% (42 Wh/mi) reduction in energy at the wheel. The city 505 simulation of the environmentally friendly ACC system shows a 29% (88 Wh/mi) reduction in energy at the wheel. Furthermore, these simulations confirm that maximum acceleration and jerk are bounded. Specifically, peak jerk is reduced by 90% (8 m/s³) during a jerky US06 drive cycle. The main objective of this analysis is to demonstrate that, with proper implementation, this ACC system effectively reduces tractive energy consumption while improving rider comfort for any vehicle.
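The comfort constraints described above can be checked directly from a speed trace by finite differencing. A small sketch, with illustrative acceleration and jerk bounds rather than the thesis's tuned values:

```python
# Sketch of the comfort check: finite-difference acceleration and jerk
# from a sampled speed trace, compared against bounds. The limits below
# are illustrative assumptions, not the tuned MPC constraints.

A_MAX = 2.0    # m/s^2, assumed comfort bound on acceleration
J_MAX = 0.9    # m/s^3, assumed comfort bound on jerk

def diff(series, dt):
    """First-order finite difference of a sampled series."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

def comfortable(speeds, dt):
    """True if every acceleration and jerk sample stays within bounds."""
    accel = diff(speeds, dt)
    jerk = diff(accel, dt)
    return all(abs(a) <= A_MAX for a in accel) and all(abs(j) <= J_MAX for j in jerk)

# Constant 0.5 m/s^2 ramp from rest to 2 m/s: well inside both bounds.
print(comfortable([0.0, 0.5, 1.0, 1.5, 2.0], dt=1.0))   # True
```

In the MPC formulation these same quantities appear as hard constraints on the optimization rather than a post-hoc check, which is what keeps the commanded trajectory smooth in the first place.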