Browsing by Author "Beling, Peter A."
Now showing 1 - 10 of 10
- Active Learning with Combinatorial Coverage. Katragadda, Sai Prathyush (Virginia Tech, 2022-08-04). Active learning is a practical subfield of machine learning because labeling data, or determining which data to label, can be a time-consuming and inefficient task. Active learning automates the process of selecting which data to label, but current methods are heavily model-reliant. As a result, sampled data often transfer poorly to new models, and sampling bias can arise; both issues are of crucial concern in machine learning deployment. We propose active learning methods that use combinatorial coverage to overcome these issues. The proposed methods are data-centric, and our experiments show that incorporating coverage into active learning yields sampled data that tend to transfer best to different models while achieving sampling bias competitive with benchmark methods.
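For a concrete sense of what coverage-driven selection can look like, here is a minimal Python sketch that greedily picks unlabeled samples adding the most previously uncovered t-way feature-value combinations. The function names, the discretized-feature assumption, and the greedy scoring rule are illustrative choices, not the thesis's actual implementation.

```python
from itertools import combinations
import numpy as np

def tway_combos(row, t=2):
    """All t-way (feature-index, value) combinations present in one discretized row."""
    items = list(enumerate(row))
    return {frozenset(c) for c in combinations(items, t)}

def coverage_select(labeled, unlabeled, k=10, t=2):
    """Greedily pick k unlabeled rows that add the most uncovered t-way combinations.

    `labeled` and `unlabeled` are integer arrays of discretized features
    (rows = samples). This is an illustrative, data-centric selection rule,
    not the paper's exact algorithm.
    """
    covered = set()
    for row in labeled:
        covered |= tway_combos(row, t)

    chosen = []
    pool = list(range(len(unlabeled)))
    for _ in range(min(k, len(pool))):
        # Score each remaining candidate by how many new combinations it covers.
        gains = [len(tway_combos(unlabeled[i], t) - covered) for i in pool]
        best = pool[int(np.argmax(gains))]
        covered |= tway_combos(unlabeled[best], t)
        chosen.append(best)
        pool.remove(best)
    return chosen
```

Because the score depends only on the data, not on any particular classifier, the selected batch can be labeled once and reused to train different downstream models.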
- Boundary Resilience: A New Approach to Analyzing Behavior in Complex Systems. Wilhelm, Julia Claire Wolf (Virginia Tech, 2024-04-30). Systems engineering has many subdisciplines that would be useful for studying complex system behavior; however, it is the interactions between a complex system and its operating environment that motivate this analysis. Specifically, this work introduces a new approach to assessing these interactions called "boundary resilience." Whereas classical resilience theory measures a system's internal reaction to an adverse event, boundary resilience evaluates the impacts such an event may have on the surrounding environment. Because the scope of this analysis is quite large, a case study was conducted to determine the fundamental tenets of boundary resilience. SpaceX's satellite Internet mega-constellation (Starlink) was chosen because of its large potential to impact the space environment as well as its size and complexity. The study produced two boundary resilience measures: one for the local boundary resilience of a single component and one for the global boundary behavior of the entire system. The local metric captures the likelihood of an adverse event occurring at that boundary location as well as its potential to impact the surrounding environment, while the global metric reflects a nonlinear relationship among the system components.
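The abstract does not give formulas for the two measures, so the sketch below is only a hypothetical shape of such metrics: a local risk-style score combining event likelihood with environmental impact, and a nonlinear (noisy-OR style) aggregation across components. All names and the aggregation rule are assumptions for illustration, not the thesis's definitions.

```python
import numpy as np

def local_boundary_score(p_event, impact):
    """Illustrative local score: likelihood of an adverse event at a boundary
    location combined with its potential environmental impact.
    Higher values indicate a riskier (less resilient) boundary."""
    return p_event * impact

def global_boundary_score(local_scores):
    """Illustrative nonlinear aggregation: the system-level boundary behavior is
    dominated by its worst boundaries rather than the average (noisy-OR style)."""
    s = np.asarray(local_scores, dtype=float)
    return 1.0 - np.prod(1.0 - s)

# Hypothetical example: three subsystems with different likelihood/impact pairs.
locals_ = [local_boundary_score(p, i) for p, i in [(0.01, 0.9), (0.05, 0.4), (0.2, 0.1)]]
print(global_boundary_score(locals_))
```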
- Closed System Precepts in Systems Engineering for Artificial Intelligence (SE4AI). Shadab, Niloofar (Virginia Tech, 2024-01-08). Intelligent systems ought to be distinguished as a special type of system that requires distinctive engineering processes. While this distinction is informally acknowledged by some, practical systems engineering (SE) methodologies for intelligent systems remain primarily rooted in traditional SE paradigms centered on component aggregation. This dissertation posits that the traditional approach is grounded in the notion of open systems as its fundamental precept, whereas engineering intelligent systems necessitates an alternative approach founded on the principles of closed systems. The dissertation endeavors to identify potential gaps within current SE foundations concerning the accommodation of the unique characteristics of intelligent systems, such as continuous learning and sensitivity to environmental changes, and argues for the mitigation of these gaps through the formalization of closed systems precepts. It adopts a systems-theoretic perspective to elucidate the phenomena of closed systems and their intricate interplay with engineering intelligent systems. This research contends that, given the intricate coupling between intelligent systems and their environments, the incorporation of closed systems precepts into SE represents a pivotal pathway to constructing engineered intelligence. In pursuit of this objective, the dissertation establishes a formal foundation to delineate closed systems precepts and other fundamental practices. It then provides formalism to discern two important categories of closed systems, informationally and functionally closed systems, and their relevance to engineering and design across diverse levels of system abstraction. Additionally, it explores the practical application of the closed systems precepts through the novel paradigm of core and periphery, followed by its examination within real-world contexts. The dissertation is organized as follows: Chapter 1 presents the problem formulation and motivation, then delves into a thorough literature review and outlines the research's scope and objectives. Chapter 2 provides a narrative elucidating how each of the included papers aligns with and furthers the overarching goals set forth in Chapter 1. Chapter 3 offers a summary of the accomplishments, acknowledges limitations, and delineates potential avenues for future research within this domain. Paper A is devoted to substantiating the closed notion of the intelligence property. In the realm of artificial intelligence (AI), systems are often expected to exert influence upon their environments and, reciprocally, to be influenced by their surroundings. Consequently, a profound interdependence exists between the system and its environment, transcending the confines of conventional input-output relations. Paper A therefore postulates that the engineering of intelligent systems mandates an approach that elevates closed systems as foundational precepts for characterizing intelligence as a property contingent upon the system's relationship with its context.
The ensuing discussion juxtaposes the viewpoints of open and closed systems, illustrating the limitations of the open system perspective in representing intelligence as a relational property. In response, the paper advocates adopting the closed system view to establish intelligence as an inherent relational property arising from the system's dynamic interactions with its environment. Paper B is dedicated to the formalization of the closed systems paradigm within SE. In this paper, formalism is proffered for the closed systems precepts, drawing upon systems theory, cybernetics, and information theory. A comprehensive comparison of two closure types within closed systems, informational and functional closure, is presented, underpinned by a common systems-theoretic formal framework. The dissertation contends that by grounding these initiatives in the core and periphery concept, we can facilitate the design and engineering of intelligent systems across multiple levels of abstraction. These levels may span a spectrum from informational closure to a synthesis of informational and functional openness. It posits that this approach represents a versatile, method-agnostic solution to some of the principal challenges encountered when engineering multiple tiers of intelligence for complex systems. Paper C traces the emergence of the core-periphery concept from cybernetics principles, such as variety and the Law of Requisite Variety, and provides a formalism derived from those principles. It then elaborates on the practical implications of these concepts for intelligent systems, drawing on biological systems, and engages a CNN model to explore the core and periphery concept within AI-enabled systems. Paper D proposes the practical implementation of the closed systems doctrine in SE, offering frameworks that rigorously define the boundaries between closed systems and their environment. These frameworks are designed to account for stakeholder requirements and the inherent design constraints of the system. The paper illustrates practical applications of informational and functional closure within SE processes, leveraging a hypothetical example for elucidation, and focuses on two aspects of engineering intelligence, scope and scale, to provide a platform for the utilization of closed systems precepts.
- Cyberphysical Security Through Resiliency: A Systems-Centric Approach. Fleming, Cody H.; Elks, Carl R.; Bakirtzis, Georgios; Adams, Stephen C.; Carter, Bryan; Beling, Peter A.; Horowitz, Barry M. (2021-06). Cyberphysical systems require resiliency techniques for defense, and multicriteria resiliency problems need an approach that evaluates systems for current threats and potential design solutions. A systems-oriented view of cyberphysical security, termed Mission Aware, is proposed based on a holistic understanding of mission goals, system dynamics, and risk.
- Deep-Learning-Based Digitization of Protein-Self-Assembly to Print Biodegradable Physically Unclonable Labels for Device Security. Pradhan, Sayantan; Rajagopala, Abhi D.; Meno, Emma; Adams, Stephen; Elks, Carl R.; Beling, Peter A.; Yadavalli, Vamsi K. (MDPI, 2023-08-28). The increasingly pervasive problem of counterfeiting affects both individuals and industry. In particular, public health and medical fields face threats to device authenticity and patient privacy, especially in the post-pandemic era. Physical unclonable functions (PUFs) present a modern solution using counterfeit-proof security labels to securely authenticate and identify physical objects. PUFs harness innately entropic information generators to create a unique fingerprint for an authentication protocol. This paper proposes a facile protein self-assembly process as an entropy generator for a unique biological PUF. The posited image digitization process applies a deep learning model to extract a feature vector from the self-assembly image. This is then binarized and debiased to produce a cryptographic key. The NIST SP 800-22 Statistical Test Suite was used to evaluate the randomness of the generated keys, which proved sufficiently stochastic. To facilitate deployment on physical objects, the PUF images were printed on flexible silk-fibroin-based biodegradable labels using functional protein bioinks. Images from the labels were captured using a cellphone camera and referenced against the source image for error rate comparison. The deep-learning-based biological PUF has potential as a low-cost, scalable, highly randomized strategy for anti-counterfeiting technology.
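A minimal sketch of the key-derivation half of the pipeline described above: a feature vector (assumed to come from the deep model's embedding of the self-assembly image, which is not shown here) is binarized against its median and then debiased. The von Neumann extractor used below is one standard debiasing choice and is an assumption; the paper's exact binarization and debiasing steps may differ.

```python
import numpy as np

def binarize(features):
    """Threshold each feature against the median to obtain a raw bit string."""
    f = np.asarray(features, dtype=float)
    return (f > np.median(f)).astype(np.uint8)

def von_neumann_debias(bits):
    """Classic von Neumann extractor: scan non-overlapping bit pairs, keep the
    first bit of each 01/10 pair, and discard 00 and 11 pairs. The output is
    shorter but unbiased if the input bits are independent."""
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return np.array(out, dtype=np.uint8)

def puf_key(features, n_bits=128):
    """Derive key bits from a CNN feature vector extracted from the
    self-assembly image (the feature extractor itself is not shown)."""
    key_bits = von_neumann_debias(binarize(features))
    return key_bits[:n_bits]

# Example with a stand-in feature vector; a real run would use the deep
# model's embedding of the printed label image.
rng = np.random.default_rng(0)
print(puf_key(rng.normal(size=2048))[:16])
```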
- Essays on Innovation and Dynamic Capabilities: Evidence from Public Sector Operations and Cybersecurity. Miller, Marcus Soren (Virginia Tech, 2024-08-16). The public sector needs the capacity for continual improvement and innovation. Cybersecurity threats against U.S. federal civilian agencies and national critical infrastructure stand out as a major problem area requiring agile and timely responses. Moreover, curbing ransomware attacks directed towards uniquely vulnerable domains, such as healthcare, education, and local government, poses a particularly vexing policy challenge for government leaders. In three discrete essays, this dissertation examines management theories applied to the public sector and cybersecurity. The first two essays investigate a public management approach for improvement and innovation based on dynamic capabilities - that is, the organizational capacity to observe, understand, learn, and react in a transformational manner. The first essay presents a systematic literature review of empirical research on dynamic capabilities in the public sector, which indicates clear benefits from the employment of dynamic capabilities through impacts on organizational capabilities, innovation, organizational change, operational performance, and public value. Building upon that literature review, the second essay applies archival data research and first-person interviews to examine the pivotal role played by dynamic capabilities in facilitating the generation and deployment of innovative cybersecurity approaches among federal civilian agencies. This research identified and categorized dynamic capabilities in action and assessed their operational influence, specifically inter- and intra-agency collaboration, strategic planning, governance, and signature processes. The third essay presents the first documented system dynamics model of the ransomware ecosystem, built to understand incident trend patterns and provide insight into policy decisions. Simulation showed improvement from mandating incident reporting, reducing reporting delays, and strengthening passive defenses, but, unexpectedly, not from capping ransom payments.
- An ontological metamodel for cyber-physical system safety, security, and resilience coengineering. Bakirtzis, Georgios; Sherburne, Tim; Adams, Stephen C.; Horowitz, Barry M.; Beling, Peter A.; Fleming, Cody H. (2021-06-01). Cyber-physical systems are complex systems that require the integration of diverse software, firmware, and hardware to be practical and useful. This increased complexity is impacting the management of the models necessary for designing cyber-physical systems that take into account a number of "-ilities", such that they are safe and secure and ultimately resilient to disruption of service. We propose an ontological metamodel for system design that augments an existing industry metamodel to capture the relationships between various model elements (requirements, interfaces, physical, and functional) and safety, security, and resilience considerations. Employing this metamodel leads to more cohesive and structured modeling efforts, with an overall increase in scalability, usability, and unification of existing models. In turn, this leads to a mission-oriented perspective in designing security defenses and resilience mechanisms to combat undesirable behaviors. We illustrate this metamodel in an open-source GraphQL implementation, which can interface with a number of modeling languages, and we support the proposed metamodel with a detailed demonstration using an oil and gas pipeline model.
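The published implementation is a GraphQL schema; the fragment below instead uses plain Python to illustrate, at a rough level, the kind of typed relationships such a metamodel ties together (requirements, functions, components, interfaces, hazards, attack vectors). The element kinds, relation names, and the pipeline example are illustrative assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A generic metamodel element (requirement, function, component, interface,
    hazard, or attack vector), identified by a type tag and a name."""
    kind: str
    name: str
    links: dict = field(default_factory=dict)  # relation name -> list of Element

    def link(self, relation, other):
        self.links.setdefault(relation, []).append(other)

# Illustrative fragment of a pipeline model: a requirement is satisfied by a
# function allocated to a component; the component exposes an interface that
# an attack vector targets, and a hazard can arise from that component.
req = Element("requirement", "Maintain pipeline pressure within limits")
fn = Element("function", "Regulate pump speed")
comp = Element("component", "Pump controller")
iface = Element("interface", "Modbus/TCP link")
attack = Element("attack_vector", "Command injection over Modbus")
hazard = Element("hazard", "Overpressure event")

req.link("satisfied_by", fn)
fn.link("allocated_to", comp)
comp.link("exposes", iface)
attack.link("targets", iface)
hazard.link("arises_from", comp)
```

Traversing these links in either direction is what supports the mission-oriented view: from a requirement down to the interfaces an attacker could target, or from an attack vector back up to the mission functions it threatens.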
- RESONANT: Reinforcement Learning Based Moving Target Defense for Detecting Credit Card Fraud. Abdel Messih, George Ibrahim (Virginia Tech, 2023-12-20). According to security.org, as of 2023, 65% of credit card (CC) users in the US have been subjected to fraud at some point in their lives, which equates to about 151 million Americans. The proliferation of advanced machine learning (ML) algorithms has also contributed to detecting credit card fraud (CCF). However, using a single or static ML-based defense model against a constantly evolving adversary forfeits the defender's structural advantage, enabling the adversary to reverse engineer the defense's strategy over the rounds of an iterated game. This paper proposes an adaptive moving target defense (MTD) approach based on deep reinforcement learning (DRL), termed RESONANT, that identifies the optimal points at which to switch to another ML classifier for credit card fraud detection. It identifies optimal moments to strategically switch between different ML-based defense models (i.e., classifiers) to invalidate any adversarial progress and always stay a step ahead of the adversary. We frame the approach as an iterated game in which the adversary and defender take turns acting in the CCF detection context. Via extensive simulation experiments, we investigate the performance of our proposed RESONANT against that of existing state-of-the-art counterparts in terms of the mean and variance of detection accuracy and attack success ratio, used to measure defensive performance. Our results demonstrate the superiority of RESONANT over other counterparts, including static and naive ML and an MTD that selects a defense model at random (i.e., Random-MTD), showing that RESONANT can outperform the existing counterparts by up to two times in detection accuracy, measured by AUC (i.e., the Area Under the Receiver Operating Characteristic (ROC) curve), and in system security against attacks, measured by attack success ratio (ASR).
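To make the switching idea concrete, here is a toy sketch in which the defender treats "which classifier to deploy this round" as an action and learns from round-by-round detection reward, while a simulated adversary gradually reverse engineers whichever model stays deployed. This is a simple epsilon-greedy bandit stand-in for illustration, not the DRL formulation of RESONANT; all numbers and names are assumptions.

```python
import numpy as np

class ClassifierSwitcher:
    """Toy epsilon-greedy learner over a pool of fraud classifiers.
    Each round the defender deploys one classifier; the reward is that
    round's detection performance (e.g., 1 - attack success ratio)."""

    def __init__(self, n_classifiers, epsilon=0.1, lr=0.2, seed=0):
        self.q = np.zeros(n_classifiers)   # estimated value of deploying each model
        self.epsilon = epsilon
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def choose(self):
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.q)))  # explore: random switch
        return int(np.argmax(self.q))                   # exploit: best-performing model

    def update(self, action, reward):
        self.q[action] += self.lr * (reward - self.q[action])

# Iterated game: the adversary slowly adapts to whichever model stays deployed,
# so rewards for a static choice decay and the defender learns to keep switching.
switcher = ClassifierSwitcher(n_classifiers=3)
adapt = np.zeros(3)
for round_ in range(200):
    a = switcher.choose()
    reward = 0.9 - adapt[a] + switcher.rng.normal(0, 0.02)
    switcher.update(a, reward)
    adapt[a] = min(adapt[a] + 0.01, 0.5)   # adversary reverse engineers the deployed model
    adapt[np.arange(3) != a] *= 0.95       # progress against undeployed models decays
print(np.round(switcher.q, 3))
```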
- Study of Equivalence in Systems Engineering within the Frame of Verification. Wach, Paul F. (Virginia Tech, 2023-01-20). This dissertation contributes to the theoretical foundations of systems engineering (SE) and exposes an unstudied area of SE: the definition of verification models. In practice, verification models are largely qualitatively defined based on heuristic assumptions rather than a science-based approach. For example, we may state the desire for representativeness of a verification model in qualitative terms of low, medium, or high fidelity in early phases of a system lifecycle, when verification requirements are typically defined. Given that fidelity is a measure of approximation from reality, and that the (real) final product does not yet exist in early phases, we are stating a desire for, and making assumptions of, representative equivalence that may not hold. This dissertation therefore contends that verification models can and should be defined on the scientific basis of systems-theoretic principles. Furthermore, the practice of SE is undergoing a digital transformation and a corresponding push to strengthen SE educationally and as a discipline, which this research proposes to address through a science-based approach grounded in the mathematical formalism of systems theory. The maturity of engineering disciplines is reflected in their science-based approaches, such as computational fluid dynamics and finite element analysis. Much of the discipline of SE remains qualitatively descriptive, which can suffer from interpretation discrepancies, rather than being grounded in inherently analytical theoretical foundations, as is a stated goal of the SE professional organization, the International Council on Systems Engineering (INCOSE). Additionally, the increased complexity of modern engineered systems makes verification through traditional means impractical, which has led to verification being described as broken and in need of theoretical foundations. The relationships used to define verification models are explored by building on the systems-theoretic lineage of A. Wayne Wymore, including computational systems theory, the theory of system design, and the theory of problem formulation. The core systems-theoretic concepts used to frame the relationship-based definition of verification models are the notion of system morphisms that characterize equivalence between pairs, problem spaces of functions that bound the acceptability of solution systems, and the hierarchy of system specification that characterizes stratification. The research question concerned how verification models should be defined, with the hypothesis that verification models should be defined through a combination of systems-theoretic relationships between verification artifacts: system requirements, system designs, verification requirements, and verification models. The conclusions of this research provide a science-based metamodel for defining verification models through systems-theoretic principles. Verification models were shown to be indirectly defined from system requirements, through system designs and verification requirements. Verification models are expected to be morphically equivalent to corresponding system designs; however, infinitely many equivalent models may exist, which can be narrowed by defining bounding conditions.
These bounding conditions were found to be defined through verification requirements formed as (1) verification requirement problem spaces that characterize the verification activity on the basis of morphic equivalence to the system requirements and (2) morphic conditions that specify the desired equivalence between a system design and a verification model. An output of this research is a systems-theoretic metamodel of verification artifacts, which may be used as a science-based approach for defining verification models and for advancing the maturity of the SE discipline.
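For readers unfamiliar with the morphism language, the block below states the standard commuting condition for a homomorphism between two discrete transition systems, which is the flavor of "morphic equivalence" invoked here; the notation is generic systems theory and not necessarily the dissertation's exact formalism.

```latex
% Systems S = (X, U, \delta) and S' = (X', U', \delta') with state sets X, X',
% input sets U, U', and transition functions \delta, \delta'.
% A pair of maps (g, h), with g: U \to U' and h: X \to X', is a homomorphism
% when transitions commute with the maps:
\[
  h\bigl(\delta(x, u)\bigr) \;=\; \delta'\bigl(h(x),\, g(u)\bigr)
  \qquad \forall\, x \in X,\ u \in U .
\]
% A verification model S' is then morphically equivalent to the design S
% (up to the chosen abstraction) when such maps exist and are invertible on
% the states and inputs of interest; bounding conditions restrict which of
% the many possible S' are acceptable.
```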
- A survey of inverse reinforcement learning. Adams, Stephen; Cody, Tyler; Beling, Peter A. (Springer, 2022-08). Learning from demonstration, or imitation learning, is the process of learning to act in an environment from examples provided by a teacher. Inverse reinforcement learning (IRL) is a specific form of learning from demonstration that attempts to estimate the reward function of a Markov decision process from examples provided by the teacher. The reward function is often considered the most succinct description of a task. In simple applications, the reward function may be known or easily derived from properties of the system and hard coded into the learning process. However, in complex applications, this may not be possible, and it may be easier to learn the reward function by observing the actions of the teacher. This paper provides a comprehensive survey of the literature on IRL. This survey outlines the differences between IRL and two similar methods - apprenticeship learning and inverse optimal control. Further, this survey organizes the IRL literature based on the principal method, describes applications of IRL algorithms, and provides areas of future research.
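One concrete formulation of the problem the survey covers is the classic linear-algebraic characterization, due to Ng and Russell, of reward functions consistent with an observed optimal policy in a finite MDP, sketched below in generic notation.

```latex
% Finite MDP with one transition matrix P_a per action, discount factor \gamma,
% and a state reward vector R. If the observed (expert) policy always takes
% action a_1, then R is consistent with that policy being optimal iff
\[
  (P_{a_1} - P_a)\,(I - \gamma P_{a_1})^{-1} R \;\succeq\; 0
  \qquad \text{for all actions } a \neq a_1 ,
\]
% where \succeq denotes elementwise inequality. Because R = 0 trivially
% satisfies these constraints, IRL methods add an objective, e.g. maximizing
% the margin by which a_1 beats the alternatives minus a sparsity penalty on R,
% to select a useful reward among the many consistent ones.
```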