Browsing by Author "Bansal, Manish"
- Active Learning with Combinatorial Coverage. Katragadda, Sai Prathyush (Virginia Tech, 2022-08-04). Active learning is a practical field of machine learning as labeling data or determining which data to label can be a time consuming and inefficient task. Active learning automates the process of selecting which data to label, but current methods are heavily model reliant. This has led to the inability of sampled data to be transferred to new models as well as issues with sampling bias. Both issues are of crucial concern in machine learning deployment. We propose active learning methods utilizing Combinatorial Coverage to overcome these issues. The proposed methods are data-centric, and through our experiments we show that the inclusion of coverage in active learning leads to sampling data that tends to be the best in transferring to different models and has a competitive sampling bias compared to benchmark methods.
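The coverage-driven selection idea can be sketched in a few lines; this is a hedged illustration assuming binary features and pairwise (t = 2) combinatorial coverage, not the dissertation's exact algorithm:

```python
# A minimal sketch of coverage-driven sample selection, assuming binary
# features and pairwise (t = 2) combinatorial coverage; the greedy rule
# and toy data are illustrative, not the dissertation's exact method.
from itertools import combinations

def pairwise_interactions(sample):
    """All (feature-index pair, value pair) interactions in one sample."""
    return {((i, j), (sample[i], sample[j]))
            for i, j in combinations(range(len(sample)), 2)}

def select_batch(pool, labeled, batch_size):
    """Greedily pick unlabeled samples that cover the most interactions
    not yet covered by the labeled set (a data-centric criterion)."""
    covered = set().union(*(pairwise_interactions(s) for s in labeled))
    pool, batch = list(pool), []
    for _ in range(batch_size):
        best = max(pool, key=lambda s: len(pairwise_interactions(s) - covered))
        covered |= pairwise_interactions(best)
        pool.remove(best)
        batch.append(best)
    return batch
```

Because the criterion depends only on the data, not on any model's uncertainty scores, the selected batch is reusable across models, which is the transferability property the abstract highlights.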
- Age of Information: Fundamentals, Distributions, and Applications. Abd-Elmagid, Mohamed Abd-Elaziz (Virginia Tech, 2023-07-11). A typical model for real-time status update systems consists of a transmitter node that generates real-time status updates about some physical process(es) of interest and sends them through a communication network to a destination node. Such a model can be used to analyze the performance of a plethora of emerging Internet of Things (IoT)-enabled real-time applications including healthcare, factory automation, autonomous vehicles, and smart homes, to name a few. The performance of these applications highly depends upon the freshness of the information status at the destination node about its monitored physical process(es). Because of that, the main design objective of such real-time status update systems is to ensure timely delivery of status updates from the transmitter node to the destination node. To measure the freshness of information at the destination node, the Age of Information (AoI) has been introduced as a performance metric that accounts for the generation time of each status update (which was ignored by conventional performance metrics, specifically throughput and delay). Since then, there have been two main research directions in the AoI research area. The first direction aimed to analyze/characterize AoI in different queueing-theoretic models/disciplines, and the second direction was focused on the optimization of AoI in different communication systems that deal with time-sensitive information. However, the prior queueing-theoretic analyses of AoI have mostly been limited to the characterization of the average AoI and the prior studies developing AoI/age-aware scheduling/transmission policies have mostly ignored the energy constraints at the transmitter node(s). 
Motivated by these limitations, this dissertation develops new queueing-theoretic methods that allow the characterization of the distribution of AoI in several classes of status updating systems as well as novel AoI-aware scheduling policies accounting for the energy constraints at the transmitter nodes (for several settings of communication networks) in the process of decision-making using tools from optimization theory and reinforcement learning. The first part of this dissertation develops a stochastic hybrid system (SHS)-based general framework to facilitate characterizing the distribution of AoI in several classes of real-time status updating systems. First, we study a general setting of status updating systems, where a set of source nodes provide status updates about some physical process(es) to a set of monitors. For this setting, the continuous state of the system is formed by the AoI/age processes at different monitors, the discrete state of the system is modeled using a finite-state continuous-time Markov chain, and the coupled evolution of the continuous and discrete states of the system is described by a piecewise linear SHS with linear reset maps. Using the notion of tensors, we derive a system of linear equations for the characterization of the joint moment generating function (MGF) of an arbitrary set of age processes in the network. Afterwards, we study a general setting of gossip networks in which a source node forwards its measurements (in the form of status updates) about some observed physical process to a set of monitoring nodes according to independent Poisson processes. Furthermore, each monitoring node sends status updates about its information status (about the process observed by the source) to the other monitoring nodes according to independent Poisson processes. For this setup, we develop SHS-based methods that allow the characterization of higher-order marginal/joint moments of the age processes in the network. 
Finally, our SHS-based framework is applied to derive the stationary marginal and joint MGFs for several queueing disciplines and gossip network topologies, using which we derive closed-form expressions for marginal/joint high-order statistics of age processes, such as the variance of each age process and the correlation coefficients between all possible pairwise combinations of age processes. In the second part of this dissertation, our analysis is focused on understanding the distributional properties of AoI in status updating systems powered by energy harvesting (EH). In particular, we consider a multi-source status updating system in which an EH-powered transmitter node has multiple sources generating status updates about several physical processes. The status updates are then sent to a destination node where the freshness of each status update is measured in terms of AoI. The status updates of each source and harvested energy packets are assumed to arrive at the transmitter according to independent Poisson processes, and the service time of each status update is assumed to be exponentially distributed. For this setup, we derive closed-form expressions of MGF of AoI under several queueing disciplines at the transmitter, including non-preemptive and source-agnostic/source-aware preemptive in service strategies. The generality of our analysis is demonstrated by recovering several existing results as special cases. A key insight from our characterization of the distributional properties of AoI is that it is crucial to incorporate the higher moments of AoI in the implementation/optimization of status updating systems rather than just relying on its average (as has been mostly done in the existing literature on AoI). In the third and final part of this dissertation, we employ AoI as a performance metric for several settings of communication networks, and develop novel AoI-aware scheduling policies using tools from optimization theory and reinforcement learning. 
First, we investigate the role of an unmanned aerial vehicle (UAV) as a mobile relay to minimize the average peak AoI for a source-destination pair. For this setup, we formulate an optimization problem to jointly optimize the UAV's flight trajectory as well as energy and service time allocations for packet transmissions. This optimization problem is subject to the UAV's mobility constraints and the total available energy constraints at the source node and UAV. In order to solve this non-convex problem, we propose an efficient iterative algorithm and establish its convergence analytically. A key insight obtained from our results is that the optimal design of the UAV's flight trajectory achieves significant performance gains especially when the available energy at the source node and UAV is limited and/or when the size of the update packet is large. Afterwards, we study a generic system setup for an IoT network in which radio frequency (RF)-powered IoT devices are sensing different physical processes and need to transmit their sensed data to a destination node. For this generic system setup, we develop a novel reinforcement learning-based framework that characterizes the optimal sampling policy for IoT devices with the objective of minimizing the long-term weighted sum of average AoI values in the network. Our analytical results characterize the structural properties of the age-optimal policy, and demonstrate that it has a threshold-based structure with respect to the AoI values for different processes. They further demonstrate that the structures of the age-optimal and throughput-optimal policies are different. Finally, we analytically characterize the structural properties of the AoI-optimal joint sampling and updating policy for wireless powered communication networks while accounting for the costs of generating status updates in the process of decision-making. 
Our results demonstrate that the AoI-optimal joint sampling and updating policy has a threshold-based structure with respect to different system state variables.
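For context, the contrast between AoI and conventional metrics can be made concrete with a classical closed form from the broader AoI literature (Kaul, Yates, and Gruteser, 2012), not one of this dissertation's new results: the average AoI of an M/M/1 first-come-first-served status-update queue.

```python
# A worked instance of a classical AoI result (from the wider literature,
# not this dissertation): the average AoI of an M/M/1 FCFS status-update
# queue is  avg_AoI = (1/mu) * (1 + 1/rho + rho^2 / (1 - rho)),
# where rho = lambda/mu is the server load.
def average_aoi_mm1_fcfs(lam, mu):
    rho = lam / mu
    assert 0 < rho < 1, "queue must be stable"
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho**2 / (1.0 - rho))

# The age-minimizing load is neither 0 nor 1: sweeping rho locates the
# optimum near rho = 0.53, i.e., freshness is optimized neither by
# maximizing throughput nor by minimizing delay alone.
best_rho = min((r / 100 for r in range(1, 100)),
               key=lambda r: average_aoi_mm1_fcfs(r, 1.0))
```

The interior optimum is exactly why AoI behaves differently from throughput (which favors high load) and delay (which favors low load).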
- Balancing of Parallel U-Shaped Assembly Lines with Crossover Points. Rattan, Amanpreet (Virginia Tech, 2017-09-06). This research introduces parallel U-shaped assembly lines with crossover points. Crossover points are connecting points between two parallel U-shaped lines making the lines interdependent. The assembly lines can be employed to manufacture a variety of products belonging to the same product family. This is achieved by utilizing the concepts of crossover points, multi-line stations, and regular stations. The binary programming formulation presented in this research can be employed for any scenario (e.g. task times, cycle times, and the number of tasks) in the configuration that includes a crossover point. The comparison of numerical problem solutions based on the proposed heuristic approach with the traditional approach highlights the possible reduction in the quantity of workers required. The conclusion from this research is that a wider variety of products can be manufactured at the same capital expense using parallel U-shaped assembly lines with crossover points, leading to a reduction in the total number of workers.
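For intuition about line balancing generally, a single-line greedy heuristic (not the paper's binary program or its crossover-point model) shows how tasks are packed into stations under a cycle-time limit; the task data are invented:

```python
# An illustrative single-line balancing heuristic, not the paper's binary
# program: assign tasks to stations without exceeding the cycle time,
# respecting precedence. Assumes every task time fits in one cycle.
def balance_line(times, preds, cycle_time):
    """Greedy assignment: repeatedly open a station and fill it with the
    longest feasible task whose predecessors are already assigned."""
    assigned, stations = set(), []
    while len(assigned) < len(times):
        load, station = 0.0, []
        while True:
            ready = [t for t in times if t not in assigned
                     and preds[t] <= assigned
                     and times[t] <= cycle_time - load]
            if not ready:
                break
            task = max(ready, key=lambda t: times[t])
            station.append(task); assigned.add(task); load += times[task]
        stations.append(station)
    return stations
```

The number of stations returned is the worker count a heuristic would report; the paper's exact formulation instead minimizes this over two interdependent U-shaped lines.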
- A Carbon-Conscious Closed-Loop Bi-Objective p-hub Location Problem. Iyer, Arjun (Virginia Tech, 2024-05-22). Closed-loop supply chains (CLSC), though present for decades, have seen significant research in optimization only in the last five years. Traditional sustainable CLSCs have generally implemented a Carbon Cap Trading (CCT), Carbon Cap (CC), or Carbon Taxes methodology to set carbon emissions limits but fail to minimize these emissions explicitly. Moreover, the traditional CCT model discourages investment in greener technologies by favoring established logistics over eco-friendly alternatives. This research tackles the sustainable CLSC problem by proposing a mixed-integer linear programming (MILP) carbon-conscious p-hub location model having the objective of minimizing emissions subject to profit constraints. The model is then extended to incorporate multi-periodicity, transportation modes, and end-of-life periods with a bi-objective cost and emissions function. Additionally, the model accounts for long-term planning and optimization, considering changes in demand and returns over time by incorporating a time dimension. The model's robustness and solving capabilities were tested for the case of electric vehicle (EV) battery supply chains. Demand for EVs is projected to increase by 18% annually, and robust supply chain designs are crucial to meet this demand, making this sector an important test case for the model to solve. Two baseline cases with minimum cost and minimum emissions objectives were tested, revealing a significant gap in emissions and underlining the need for an emissions objective. A sensitivity analysis was conducted on key parameters focusing on minimizing emissions; the analysis revealed that demand, return rates, and recycling costs greatly impact CLSC dynamics. The results showcase the model's capability of tackling real-world case scenarios, thus facilitating comprehensive decision-making goals in carbon-conscious CLSC design.
- Computational Simulation and Machine Learning for Quality Improvement in Composites Assembly. Lutz, Oliver Tim (Virginia Tech, 2023-08-22). In applications spanning across aerospace, marine, automotive, energy, and space travel domains, composite materials have become ubiquitous because of their superior stiffness-to-weight ratios as well as corrosion and fatigue resistance. However, from a manufacturing perspective, these advanced materials have introduced new challenges that demand the development of new tools. Due to the complex anisotropic and nonlinear material properties, composite materials are more difficult to model than conventional materials such as metals and plastics. Furthermore, there exist ultra-high precision requirements in safety critical applications that are yet to be reliably met in production. Towards developing new tools addressing these challenges, this dissertation aims to (i) build high-fidelity numerical simulations of composite assembly processes, (ii) bridge these simulations to machine learning tools, and (iii) apply data-driven solutions to process control problems while identifying and overcoming their shortcomings. This is accomplished in case studies that model the fixturing, shape control, and fastening of composite fuselage components. Therein, simulation environments are created that interact with novel implementations of modified proximal policy optimization, based on a newly developed reinforcement learning algorithm. The resulting reinforcement learning agents are able to successfully address the underlying optimization problems that underpin the process and quality requirements.
- Discrete Approximations, Relaxations, and Applications in Quadratically Constrained Quadratic Programming. Beach, Benjamin Josiah (Virginia Tech, 2022-05-02). We present works on theory and applications for Mixed Integer Quadratically Constrained Quadratic Programs (MIQCQP). We introduce new mixed integer programming (MIP)-based relaxation and approximation schemes for general Quadratically Constrained Quadratic Programs (QCQP's), and also study practical applications of QCQP's and Mixed-integer QCQP's (MIQCQP). We first address a challenging tank blending and scheduling problem regarding operations for a chemical plant. We model the problem as a discrete-time nonconvex MIQCP, then approximate this model as a MILP using a discretization-based approach. We combine a rolling horizon approach with the discretization of individual chemical property specifications to deal with long scheduling horizons, time-varying quality specifications, and multiple suppliers with discrete arrival times. Next, we study optimization methods applied to minimizing forces for poses and movements of chained Stewart platforms (SPs). These SPs are parallel mechanisms that are stiffer, and more precise, on average, than their serial counterparts at the cost of a smaller range of motion. The robot will be used in concert with several other types of robots to perform complex assembly missions in space. We develop algorithms and optimization models that can efficiently decide on favorable poses and movements that reduce force loads on the robot, hence reducing wear on this machine, and allowing for a larger workspace and a greater overall payload capacity. In the third work, we present a technique for producing valid dual bounds for nonconvex quadratic optimization problems. The approach leverages an elegant piecewise linear approximation for univariate quadratic functions and formulates this approximation using mixed-integer programming (MIP). 
Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the strength (or sharpness) of our formulation and the tightness of its approximation. We computationally demonstrate that our model outperforms existing MIP relaxations, and on hard instances can compete with state-of-the-art solvers. Finally, we study piecewise linear relaxations for solving quadratically constrained quadratic programs (QCQP's). We introduce new relaxation methods based on univariate reformulations of nonconvex variable products, leveraging the relaxation from the third work to model each univariate quadratic term. We also extend the NMDT approach (Castro, 2015) to leverage discretization for both variables in a bilinear term, squaring the resulting precision for the same number of binary variables. We then present various results related to the relative strength of the various formulations.
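The discretization idea behind such MIP relaxations can be illustrated on the simplest univariate case, y = x². This toy sketch (not the dissertation's formulation) measures the gap of the piecewise secant overestimator, which shrinks as (segment width)²/4:

```python
# A toy version of the discretization idea behind piecewise-linear MIP
# relaxations of quadratics: on each segment, the secant (chord)
# overestimates y = x^2, and the worst gap on a segment of width h is
# exactly h^2 / 4, so doubling the segment count quarters the error.
def secant_gap(lo, hi, n_seg, samples=1000):
    """Largest sampled vertical gap between the piecewise secant
    overestimator of x^2 and x^2 itself over [lo, hi]."""
    h = (hi - lo) / n_seg
    worst = 0.0
    for k in range(samples + 1):
        x = lo + (hi - lo) * k / samples
        i = min(int((x - lo) / h), n_seg - 1)   # segment containing x
        a, b = lo + i * h, lo + (i + 1) * h
        secant = a * a + (a + b) * (x - a)      # chord through (a, a^2), (b, b^2)
        worst = max(worst, secant - x * x)
    return worst
```

In a MIP relaxation, binary variables select the active segment; the h²/4 bound is what lets the formulation trade binaries for approximation accuracy.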
- Distributionally risk-receptive and risk-averse network interdiction problems with general ambiguity set. Kang, Sumin; Bansal, Manish (Wiley, 2022-06). We introduce generalizations of the stochastic network interdiction problem with distributional ambiguity. Specifically, we consider a distributionally risk-averse (or robust) network interdiction problem (DRA-NIP) and a distributionally risk-receptive network interdiction problem (DRR-NIP) where a leader maximizes a follower's minimal expected objective value for either the worst-case or the best-case, respectively, probability distribution belonging to an ambiguity set (a set of distributions). The DRA-NIP arises in applications where a risk-averse leader interdicts a follower to cause delays in their supply convoy. In contrast, the DRR-NIP provides network vulnerability analysis where a network-user seeks to identify vulnerabilities in the network against potential disruptions by an adversary (or leader) who is receptive to risk for improving the expected objective values. We present finitely convergent algorithms for solving DRA-NIP and DRR-NIP with a general ambiguity set. To evaluate their performance, we provide results of our extensive computational experiments performed on instances known for (risk-neutral) stochastic NIP.
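With a finite ambiguity set, the risk-averse versus risk-receptive distinction reduces to a min/max over expectations; the scenario numbers below are invented for illustration:

```python
# A toy contrast of the risk-averse vs risk-receptive viewpoints with a
# finite ambiguity set: the same interdiction decision is scored against
# the best-case and worst-case distribution over follower costs. All
# numbers are invented for illustration.
def dr_values(costs, ambiguity_set):
    """costs[s] = follower's optimal value in scenario s;
    each distribution in ambiguity_set is a list of scenario weights."""
    expectations = [sum(p * c for p, c in zip(dist, costs))
                    for dist in ambiguity_set]
    return min(expectations), max(expectations)  # (best-case, worst-case)

costs = [10.0, 4.0, 7.0]
ambiguity = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2], [1/3, 1/3, 1/3]]
best, worst = dr_values(costs, ambiguity)
# DRA-NIP evaluates the leader against `worst`; DRR-NIP against `best`.
```

General ambiguity sets (e.g., moment-based) replace this enumeration with an inner optimization, which is where the paper's finitely convergent algorithms come in.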
- Embeddings for Disjunctive Programs with Applications to Political Districting and Rectangle Packing. Fravel III, William James (Virginia Tech, 2024-11-08). This dissertation comprises three papers which have been submitted for publication. The first chapter deals with a non-convex knapsack problem inspired by a simplified political districting problem. We present and derive a constant-time solution to the problem via a reduced-dimensional reformulation, the Karush-Kuhn-Tucker optimality conditions, and gradient descent. The second chapter covers a more complete form of the political districting problem. We attempt to overcome the non-convex objective function and combinatorially massive solution space through a variety of linearization techniques and cutting planes. Our focus on dual bounds is novel in the space. The final chapter develops a framework for identifying ideal mixed binary linear programs and applies it to several rectangle packing formulations. These include both existing and novel formulations for the underlying disjunctive program. Additionally, we investigate the poor performance of branch-and-cut on the example problems.
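The first chapter's recipe (reduced-dimensional reformulation, KKT conditions, gradient descent) can be mimicked on a smooth stand-in problem; this sketch uses an assumed concave surrogate, maximize Σ wᵢ√xᵢ subject to Σ xᵢ = B, whose KKT conditions give the closed form xᵢ = B·wᵢ²/Σⱼ wⱼ², not the dissertation's actual knapsack:

```python
# A hedged illustration (not the chapter's model) of the KKT +
# gradient-ascent recipe on a smooth resource-allocation surrogate:
# maximize sum(w_i * sqrt(x_i)) s.t. sum(x_i) = B, x >= 0.
import math

def kkt_solution(w, budget):
    """Closed form from stationarity: w_i/(2*sqrt(x_i)) equal for all i."""
    s = sum(wi * wi for wi in w)
    return [budget * wi * wi / s for wi in w]

def projected_gradient_ascent(w, budget, steps=5000, lr=0.01):
    n = len(w)
    x = [budget / n] * n                      # feasible start
    for _ in range(steps):
        g = [wi / (2.0 * math.sqrt(xi)) for wi, xi in zip(w, x)]
        mean_g = sum(g) / n                   # project gradient onto sum(x) = B
        x = [max(xi + lr * (gi - mean_g), 1e-9) for xi, gi in zip(x, g)]
    return x
```

Subtracting the mean gradient is the projection onto the budget hyperplane, so every iterate stays feasible while ascending; the iterates converge to the KKT point.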
- Fast and Scalable Power System Learning, Analysis, and Planning. Taheri Hosseinabadi, Sayedsina (Virginia Tech, 2022-02-01). With the integration of renewable and distributed energy resources (DER) and advances in metering infrastructure, power systems are undergoing rapid modernization that brings forward new challenges and possibilities, which call for more advanced learning, analysis, and planning tools. While there are numerous problems present in the modern power grid, this work addresses four of the most prominent challenges and shows how new advances in generation and metering can be leveraged to address the challenges they introduce. With regard to learning in power systems, we first tackle power distribution system topology identification, since knowing the topology of the power grid is a crucial piece in any meaningful optimization and control task. The topology identification presented in this work is based on the idea of `prob-to-learn': perturbing the power grid with small power injections and using the metered response to learn the topology. Using maximum-likelihood estimation, we formulate the topology identification problem as a mixed-integer linear program. We next tackle the prominent challenge of finding the optimal flexibility of aggregators in distribution systems, which is a crucial step in utilizing the capacity of distributed energy resources and flexible loads and in helping transmission systems become more efficient and reliable. We show that the aggregate flexibility of a group of devices with uncertainties and non-convex models can be captured with a quadratic classifier, and that using this classifier we can design a virtual battery model that best describes the aggregate flexibility. 
For power system analysis and planning, we address fast probabilistic hosting capacity analysis (PHCA), which studies how DERs and the intermittency they bring can impact power grid operation in the long term. We show that interconnection studies can be sped up by a factor of 20 without losing any accuracy. By formulating a penalized optimal power flow (OPF), we pose PHCA as an instance of multiparametric programming (MPP) and then leverage the properties of MPP to efficiently solve a large number of OPFs. Regarding planning in power systems, we tackle the problem of strategic investment in energy markets, utilizing the toolbox of multiparametric programming to develop two algorithms for strategic investment: an MPP-aided grid search algorithm, useful when the investor is considering only a few locations, and an MPP-aided gradient descent algorithm, useful for investing in a large number of locations. We next present a data-driven approach to finding the flexibility of aggregators in power systems. Finding aggregate flexibility is an important step in utilizing the full potential of smart and controllable loads in the power grid, and it is challenging since an aggregator controls a large group of time-coupled devices that operate with non-convex models and are subject to random externalities. We show that the aggregate flexibility can be accurately captured with an ellipsoid, and then use Farkas' lemma to fit a maximal-volume polytope inside the ellipsoid. Numerical tests showcase that we can capture 10 times the volume that conventional virtual generator models can capture.
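As a deliberately simplified, single-period illustration of aggregate flexibility (the dissertation handles time coupling, non-convexity, and uncertainty), independent device power limits aggregate by a Minkowski sum of intervals, which is itself an interval: a one-dimensional "virtual battery" power rating.

```python
# A deliberately simplified, single-period view of aggregate flexibility:
# with independent power limits and no time coupling, the aggregator's
# feasible set is the Minkowski sum of the device intervals. The fleet
# data are invented for illustration.
def aggregate_flexibility(device_limits):
    """device_limits: list of (p_min, p_max) in kW; returns the
    aggregate (p_min, p_max) the aggregator can offer upstream."""
    lo = sum(p_min for p_min, _ in device_limits)
    hi = sum(p_max for _, p_max in device_limits)
    return lo, hi

fleet = [(-5.0, 5.0), (0.0, 3.0), (-2.0, 0.0)]  # battery, load, solar (made up)
```

Once devices are time-coupled and non-convex, the exact Minkowski sum becomes intractable, which motivates the ellipsoid and inscribed-polytope approximations in the dissertation.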
- Machine Learning and Quantum Computing for Optimization Problems in Power Systems. Gupta, Sarthak (Virginia Tech, 2023-01-26). While optimization problems are ubiquitous in all domains of engineering, they are of critical importance to power systems engineers. A safe and economical operation of the power systems entails solving many optimization problems such as security-constrained unit commitment, economic dispatch, optimal power flow, optimal planning, etc. Although traditional optimization solvers and software have been successful so far in solving these problems, there is a growing need to accelerate the solution process. This need arises on account of several aspects of grid modernization, such as distributed energy resources, renewable energy, smart inverters, batteries, etc., that increase the number of decision variables involved. Moreover, the technologies entail faster dynamics and unpredictability, further demanding a solution speedup. Yet another concern is the growing communication overhead that accompanies this large-scale, high-speed, decision-making process. This thesis explores three different directions to address such concerns. The first part of the thesis explores the learning-to-optimize paradigm whereby instead of solving the optimization problems, machine learning (ML) models such as deep neural networks (DNNs) are trained to predict the solution of the optimization problems. The second part of the thesis also employs deep learning, but in a different manner. DNNs are utilized to model the dynamics of IEEE 1547.8 standard-based local Volt/VAR control rules, and then leverage efficient deep learning libraries to solve the resulting optimization problem. The last part of the thesis dives into the evolving field of quantum computing and develops a general strategy for solving stochastic binary optimization problems using variational quantum eigensolvers (VQE).
- Modeling, Analysis, and Algorithmic Development of Some Scheduling and Logistics Problems Arising in Biomass Supply Chain, Hybrid Flow Shops, and Assembly Job Shops. Singh, Sanchit (Virginia Tech, 2019-07-15). In this work, we address a variety of problems with applications to `ethanol production from biomass', `agile manufacturing' and `mass customization' domains. Our motivation stems from the potential use of biomass as an alternative to non-renewable fuels, the prevalence of `flexible manufacturing systems', and the popularity of `mass customization' in today's highly competitive markets. Production scheduling and design and optimization of logistics network mark the underlying topics of our work. In particular, we address three problems, Biomass Logistics Problem, Hybrid Flow Shop Scheduling Problem, and Stochastic Demand Assembly Job Scheduling Problem. The Biomass Logistics Problem is a strategic cost analysis for setup and operation of a biomass supply chain network that is aimed at the production of ethanol from switchgrass. We discuss the structural components and operations for such a network. We incorporate real-life GIS data of a geographical region in a model that captures this problem. Consequently, we develop and demonstrate the effectiveness of a `Nested Benders' based algorithm for an efficient solution to this problem. The Hybrid Flow Shop Scheduling Problem concerns the production scheduling of a lot over a two-stage hybrid flow shop configuration of machines, and is often encountered in `flexible manufacturing systems'. We incorporate the use of `lot-streaming' in order to minimize the makespan value. Although a general case of this problem is NP-hard, we develop a pseudo-polynomial time algorithm for a special case of this problem when the sublot sizes are treated to be continuous. 
The case of discrete sublot sizes is also discussed for which we develop a branch-and-bound-based method and experimentally demonstrate its effectiveness in obtaining a near-optimal solution. The Stochastic Demand Assembly Job Scheduling Problem deals with the scheduling of a set of products in a production setting where manufacturers seek to fulfill multiple objectives such as `economy of scale' together with achieving the flexibility to produce a variety of products for their customers while minimizing delivery lead times. We design a novel methodology that is geared towards these objectives and propose a Lagrangian relaxation-based algorithm for efficient computation.
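The benefit of lot streaming is easy to demonstrate numerically on the simpler two-machine flow shop; the data below are invented, and the geometric sublot shape (ratio p2/p1) is the classical continuous-optimal choice for two machines (Potts and Baker), not this work's hybrid-shop algorithm:

```python
# A toy two-machine lot-streaming calculation with invented data: splitting
# a lot into sublots lets machine 2 start before machine 1 finishes the
# whole lot, shrinking the makespan.
def makespan(sublots, p1, p2):
    """Two-machine flow shop: sublot of size s needs p1*s on machine 1,
    then p2*s on machine 2; sublots flow in order."""
    c1 = c2 = 0.0
    for s in sublots:
        c1 += p1 * s                  # machine 1 finishes this sublot
        c2 = max(c2, c1) + p2 * s     # machine 2 starts when both are ready
    return c2

def geometric_sublots(total, n, ratio):
    """Sublot sizes in geometric progression s, s*r, s*r^2, ... (the
    continuous-optimal shape for two machines with r = p2/p1)."""
    weights = [ratio**i for i in range(n)]
    return [total * w / sum(weights) for w in weights]
```

With a lot of 12 units, p1 = 1, p2 = 2, and three sublots, the geometric split gives makespan 180/7 ≈ 25.7 versus 28 for equal sublots, because machine 2 never idles between sublots.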
- Multi-Robot Coordination for Hazardous Environmental Monitoring. Sung, Yoonchang (Virginia Tech, 2019-10-24). In this thesis, we propose algorithms designed for monitoring hazardous agents. Because hazardous environmental monitoring is either tedious or dangerous for human operators, we seek a fully automated robotic system that can help humans. However, there are still many challenges, from hardware design to algorithm design, that prevent robots from being deployed in practical applications. Among these challenges, we are particularly interested in the algorithmic challenges primarily caused by the sensing and communication limitations of robots. We develop algorithms with provable guarantees that map and track hazards using a team of robots. Our contributions are as follows. First, we address a situation where the number of hazardous agents is unknown and varies over time. We propose a search and tracking framework that can extract individual target tracks as well as estimate the number and the spatial density of targets. Second, we consider a team of robots tracking individual targets under limited bandwidth. We develop distributed algorithms that can find solutions in a bounded amount of time. Third, we propose an algorithm for aerial robots that explores a translating hazardous plume of unknown size and shape. We present a recursive depth-first search-based algorithm that yields a constant competitive ratio for exploring a translating plume. Last, we take into account a heterogeneous team of robots to map and sample a translating plume. These contributions can be applied to a team of aerial robots and a robotic boat monitoring and sampling a translating hazardous plume over a lake. In this application, the aerial robots coordinate with each other to explore the plume and to inform the robotic boat while the robotic boat collects water samples for offline analysis. 
We demonstrate the performance of our algorithms through simulations and proof-of-concept field experiments for real-world environmental monitoring.
- Multidisciplinary Design Optimization of Composite Spacecraft Structures using Lamination Parameters and Integer Programming. Borwankar, Pranav Sanjay (Virginia Tech, 2023-07-03). The digital transformation of engineering design processes is essential for the aerospace industry to remain competitive in the global market. Multidisciplinary design optimization (MDO) frameworks play a crucial role in this transformation by integrating various engineering disciplines and enabling the optimization of complex spacecraft structures. Since the design team consists of multiple entities from different domains working together to build the final product, the design and analysis tools must be readily available and compatible. An integrated approach is required to handle the problem's complexity efficiently. Additionally, most aerospace structures are made from composite panels. It is challenging to optimize such panels as they require the satisfaction of constraints where the design ply thicknesses and orientations can only take discrete values prescribed by the manufacturers. Heuristics such as particle swarm or genetic algorithms are inefficient because they provide sub-optimal solutions when the number of design variables is large. They also are computationally expensive in handling the combinatorial nature of the problem. To overcome these challenges, this work proposes a two-fold solution that integrates multiple disciplines and efficiently optimizes composite spacecraft structures by building a rapid design framework. The proposed model-based design framework for spacecraft structures integrates commercially available software from Siemens packages such as NX and HEEDS and open-source Python libraries. The framework can handle multiple objectives, constraint non-linearities, and discrete design variables efficiently using a combination of black-box global optimization algorithms and Mixed Integer Programming (MIP)-based optimization techniques developed in this work. 
Lamination parameters and MIP are adopted to optimize composite panels efficiently. The framework integrates structural, thermal and acoustic analysis to optimize the spacecraft's overall performance while satisfying multiple design constraints. Its capabilities are demonstrated in optimizing a small spacecraft structure for required structural performance under various static and dynamic loading conditions when the spacecraft is inside the launch vehicle or operating in orbit.
- Optimal Risk-based Pooled Testing in Public Health Screening, with Equity and Robustness Considerations. Aprahamian, Hrayer Yaznek Berg (Virginia Tech, 2018-05-03). Group (pooled) testing, i.e., testing multiple subjects simultaneously with a single test, is essential for classifying a large population of subjects as positive or negative for a binary characteristic (e.g., presence of a disease, genetic disorder, or a product defect). While group testing is used in various contexts (e.g., screening donated blood or for sexually transmitted diseases), a lack of understanding of how an optimal grouping scheme should be designed to maximize classification accuracy under a budget constraint hampers screening efforts. We study Dorfman and Array group testing designs under subject-specific risk characteristics, operational constraints, and imperfect tests, considering classification accuracy-, efficiency-, robustness-, and equity-based objectives, and characterize important structural properties of optimal testing designs. These properties provide us with key insights and allow us to model the testing design problems as network flow problems, develop efficient algorithms, and derive insights on equity and robustness versus accuracy trade-off. One of our models reduces to a constrained shortest path problem, for a special case of which we develop a polynomial-time algorithm. We also show that determining an optimal risk-based Dorfman testing scheme that minimizes the expected number of tests is tractable, resolving an open conjecture. Our case studies, on chlamydia screening and screening of donated blood, demonstrate the value of optimal risk-based testing designs, which are shown to be less expensive, more accurate, more equitable, and more robust than current screening practices.
- Optimization Methods for Distribution Systems: Market Design and Resiliency EnhancementBedoya Ceballos, Juan Carlos (Virginia Tech, 2020-08-05)The increasing penetration of proactive agents in distribution systems (DS) has opened new possibilities to make the grid more resilient and to increase the participation of responsive loads and non-conventional generation resources. On the resiliency side, plug-in hybrid electric vehicles (PHEV), energy storage systems (ESS), microgrids (MG), and distributed energy resources (DER) can be leveraged to restore critical load when the utility system is unavailable for extended periods of time. Critical load restoration is a key factor in achieving a resilient distribution system. On the other hand, existing DERs and responsive loads can be coordinated in a market environment to contribute to efficient electricity consumption and fair electricity tariffs, incentivizing proactive agents' participation in the distribution system. Resiliency and market applications for distribution systems are highly complex decision-making problems that can be addressed using modern optimization techniques. The complexity of these problems arises from non-linear relations, integer decision variables, scalability, and asynchronous information. On the resiliency side, existing models rely on optimization approaches that consider the system's available information but neglect the asynchrony of data arrival. As a consequence, these models can lead to underutilization of critical resources during system restoration, and they can become computationally intractable for large-scale systems. In the market design problem, existing approaches are based on centralized or computationally distributed schemes that are not only limited by hardware requirements but also restrict the active participation of market agents.
In this context, the work of this dissertation makes major contributions in the form of new optimization algorithms for market design and resiliency improvement in distribution systems. On the DS market side, two novel contributions are presented: 1) a computationally distributed coordination framework based on bilateral transactions, in which social welfare is maximized, and 2) a fully decentralized transactive framework in which power suppliers, in a simultaneous-auction environment, bid strategically using a Markowitz portfolio optimization approach. On the resiliency side, this research proposes a system restoration approach that accounts for uncertain devices and the associated asynchronous information by means of a two-module optimization model based on binary programming and three-phase unbalanced optimal power flow. Furthermore, a reinforcement learning method combined with a Monte Carlo tree search algorithm is proposed to address the scalability problem in resiliency enhancement.
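To give a flavor of Markowitz-style bidding, consider the standard two-asset mean-variance trade-off (a generic textbook illustration with hypothetical numbers, not the dissertation's supplier model): an agent splitting its offer between two transactions with expected returns μ1, μ2, variances σ1², σ2², and covariance σ12 maximizes μᵀw − λ·wᵀΣw subject to the weights summing to one, which admits a closed form in the two-asset case:

```python
def mean_variance_weight(mu1, mu2, var1, var2, cov, lam):
    """Closed-form optimal weight w on asset 1 (and 1 - w on asset 2)
    maximizing mu'w - lam * w'Sigma w subject to w1 + w2 = 1, obtained by
    substituting w2 = 1 - w1 and setting the derivative to zero."""
    return ((mu1 - mu2) / (2.0 * lam) + var2 - cov) / (var1 + var2 - 2.0 * cov)
```

With equal expected returns the rule reduces to the pure minimum-variance split w = (σ2² − σ12)/(σ1² + σ2² − 2σ12); raising μ1 or lowering the risk aversion λ shifts weight toward the riskier, higher-return transaction, which is the risk-return balancing behavior the strategic bidders in the decentralized framework exploit.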
- Optimization Models Addressing Emergency Management Decisions During a Mass Casualty Incident ResponseBartholomew, Paul Roche (Virginia Tech, 2021-11-17)Emergency managers are often faced with the toughest decisions that can ever be made; people's lives hang in the balance. Nevertheless, these tough decisions have to be made, and made quickly, usually with too much information to process to make the best choices. Decision support systems can relieve a significant amount of this onus by making decisions while considering the complex interweaving of constraints and resources that defines the boundary of the problem. We study these complex emergency management problems, approaching them with discrete optimization. Using operations research techniques to model mass casualty incidents (MCIs), we seek to provide solutions and insights for emergency managers. This dissertation proposes a novel deterministic model to optimize casualty transportation and treatment decisions in response to an MCI. This deterministic model expands on the current state of the art by (1) including multiple dynamic resources that impact the various interconnected decisions, (2) further refining a survival function to measure expected survivors, (3) defining novel objective functions that consider competing priorities, including maximizing survivors and balancing equity, and finally (4) developing an MCI response simulation that provides insights into how optimization models could be used as decision-support mechanisms.
- Partial Discharges: Experimental Investigation, Model Development, and Data AnalyticsRazavi Borghei, Seyyed Moein (Virginia Tech, 2022-02-11)The insulation system is an inseparable part of electrical equipment. This study targets one of the most important aging factors in insulation systems, known as partial discharge (PD). The PD phenomenon has been studied for more than a century, yet new technologies still demand investigation of its impact. Nowadays, electrification is penetrating various fossil-fuel-based industries, such as transportation, which demand reliable electrical equipment under harsh environmental conditions. Because of the gaps in our knowledge of insulation-system behavior, research in this area is urgently needed. The current study probes the partial discharge phenomenon from two aspects, and the groundwork for both is provided by experiments on multiple PD types. For the first goal, a finite-element analysis (FEA) approach is developed from measurement data to estimate the electric field distribution. The FEA model is coupled with a programming scheme to evaluate PD conditions, calculate PD metrics, and perform statistical analysis of the results. The second goal is to use deep neural networks to identify and discriminate different sources of PD. The measurement data are used to generate thousands of phase-resolved PD (PRPD) images for training deep learning models. To match the characteristics of the dataset, a deep residual neural network is designed and optimized to discriminate PD sources in an accurate, stable, and time-efficient way. The outcome of this research enhances the reliability of electrical apparatus through a better understanding of PD behavior and lays a foundation for automatic monitoring of PD sources.
- Prediction and Control of Thermal History in Laser Powder Bed FusionRiensche, Alexander Ray (Virginia Tech, 2024-09-09)
- Robust and Data-Efficient Metamodel-Based Approaches for Online Analysis of Time-Dependent SystemsXie, Guangrui (Virginia Tech, 2020-06-04)Metamodeling is regarded as a powerful analysis tool for learning the input-output relationship of a system from a limited amount of data when experiments with real systems are costly or impractical. As a popular metamodeling method, Gaussian process regression (GPR) has been successfully applied to analyses of various engineering systems. However, GPR-based metamodeling for time-dependent systems (TDSs) is especially challenging for three reasons. First, TDSs require an appropriate account of temporal effects; however, standard GPR cannot address temporal effects easily and satisfactorily. Second, TDSs typically require analytics tools with sufficiently high computational efficiency to support online decision making, but standard GPR may not be adequate for real-time implementation. Lastly, reliable uncertainty quantification is key to successful operational planning of TDSs in the real world, yet research on how to construct adequate error bounds for GPR-based metamodeling is sparse. Inspired by the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs), this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques for enhancing the computational and statistical efficiency of GPR-based metamodeling to meet the requirements of practical implementations. Furthermore, an in-depth investigation of building uniform error bounds for stochastic kriging is conducted, which sets up a foundation for developing robust GPR-based metamodeling techniques for analyses of TDSs under the impact of strong heteroscedasticity.
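For readers unfamiliar with GPR, the standard time-independent posterior equations that such work builds on — mean k*ᵀK⁻¹y and variance k(x*, x*) − k*ᵀK⁻¹k* — can be written out explicitly for two observations. This is a textbook sketch with a squared-exponential kernel and an explicit 2×2 solve, not the dissertation's temporal or stochastic-kriging formulation:

```python
import math

def k(a, b, ls=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def gp_posterior(x1, y1, x2, y2, xs, ls=1.0, noise=1e-6):
    """Posterior mean and variance at xs for a zero-mean GP conditioned on
    two observations (x1, y1) and (x2, y2), using an explicit 2x2 inverse."""
    a = k(x1, x1, ls) + noise       # K + noise*I, entrywise
    b = k(x1, x2, ls)
    d = k(x2, x2, ls) + noise
    det = a * d - b * b
    # alpha = K^{-1} y via the 2x2 inverse [[d, -b], [-b, a]] / det
    alpha1 = (d * y1 - b * y2) / det
    alpha2 = (-b * y1 + a * y2) / det
    k1, k2 = k(xs, x1, ls), k(xs, x2, ls)
    mean = k1 * alpha1 + k2 * alpha2
    # v = K^{-1} k*, then var = k(xs, xs) - k*' v
    v1 = (d * k1 - b * k2) / det
    v2 = (-b * k1 + a * k2) / det
    var = k(xs, xs, ls) - (k1 * v1 + k2 * v2)
    return mean, var
```

The posterior mean interpolates the observations, and the posterior variance collapses near the data and reverts to the prior variance far from it; constructing *reliable* versions of exactly these uncertainty estimates, under temporal effects and heteroscedastic noise, is the error-bound question the dissertation investigates.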
- Scenario-based cuts for structured two-stage stochastic and distributionally robust p-order conic mixed integer programsBansal, Manish; Zhang, Yingqiu (Springer, 2021-01-22)In this paper, we derive the (partial) convex hull for deterministic multi-constraint polyhedral conic mixed integer sets with multiple integer variables using the conic mixed integer rounding (CMIR) cut-generation procedure of Atamtürk and Narayanan (Math Prog 122:1–20, 2008), thereby extending their result for a simple polyhedral conic mixed integer set with a single constraint and one integer variable. We then introduce two-stage stochastic p-order conic mixed integer programs (denoted TSS-CMIPs), in which the second-stage problems have sums of lp-norms in the objective function along with integer variables. First, we present sufficient conditions under which adding scenario-based nonlinear cuts to the extensive formulation of a TSS-CMIP suffices to relax the integrality restrictions on the second-stage integer variables without impacting the integrality of the optimal solution of the TSS-CMIP. We utilize scenario-based CMIR cuts for TSS-CMIPs and their distributionally robust generalizations with structured CMIPs in the second stage, and prove that these cuts provide a conic/linear programming equivalent or approximation for the second-stage CMIPs. We also perform extensive computational experiments by solving stochastic and distributionally robust capacitated facility location problems and randomly generated structured TSS-CMIPs with polyhedral CMIPs and second-order CMIPs in the second stage, i.e., p = 1 and p = 2, respectively. We observe a significant reduction in the total time taken to solve these problems after adding the scenario-based cuts.
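The building block behind CMIR cuts is the classical one-row mixed-integer rounding (MIR) inequality: for the set {(x, y) : x + y ≥ b, x ≥ 0, y integer}, the inequality x ≥ f·(⌈b⌉ − y) with f = b − ⌊b⌋ is valid for all mixed-integer feasible points and cuts off fractional LP points. The sketch below illustrates this basic MIR idea on a small instance; it is not the paper's multi-constraint conic extension:

```python
import math

def mir_cut(b):
    """Coefficients of the MIR cut x >= f * (ub - y), valid for the set
    {(x, y): x + y >= b, x >= 0, y integer}, with f = b - floor(b)
    and ub = ceil(b)."""
    f = b - math.floor(b)
    return f, math.ceil(b)

def satisfies_cut(x, y, b, tol=1e-9):
    """Check whether (x, y) satisfies the MIR cut for right-hand side b."""
    f, ub = mir_cut(b)
    return x >= f * (ub - y) - tol
```

For b = 2.5 the cut is x ≥ 0.5(3 − y): every integer-feasible point satisfies it, but the fractional LP vertex (x, y) = (0, 2.5) violates it. Stacking scenario-indexed copies of inequalities of this flavor in the extensive formulation is, roughly, what the paper's scenario-based cuts do for the structured second-stage CMIPs.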