Browsing by Author "Bish, Douglas R."
- Adaptive Sampling Line Search for Simulation Optimization. Ragavan, Prasanna Kumar (Virginia Tech, 2017-03-08). This thesis is concerned with the development of algorithms for simulation optimization (SO), a special case of stochastic optimization where the objective function can only be evaluated through noisy observations from a simulation. Deterministic techniques, when applied directly to simulation optimization problems, fail to converge due to their inability to handle randomness, thus requiring more sophisticated algorithms. However, many existing algorithms dedicated to simulation optimization often show poor performance in practice as they require extensive parameter tuning. To overcome these shortfalls of existing SO algorithms, we develop ADALINE, a line-search-based algorithm that eliminates the need for any user-defined parameters. ADALINE is designed to identify a local minimum on continuous and integer-ordered feasible sets. On a continuous feasible set, ADALINE mimics deterministic line search algorithms, while on integer-ordered feasible sets it iterates between a line search and an enumeration procedure in its quest to identify a local minimum. ADALINE improves upon many existing SO algorithms by determining the sample size adaptively as a trade-off between the error due to estimation and the optimization error; that is, the algorithm expends simulation effort proportional to the quality of the incumbent solution. We also show that ADALINE converges "almost surely" to the set of local minima. Finally, our numerical results suggest that ADALINE converges to a local minimum faster, outperforming other advanced SO algorithms that utilize variable sampling strategies. To demonstrate the performance of our algorithm on a practical problem, we apply ADALINE to a surgery rescheduling problem, where the objective is to minimize the cost of disruptions to an existing schedule shared between multiple surgical specialties while accommodating semi-urgent surgeries that require expedited intervention. The disruptions to the schedule are determined using a threshold-based heuristic, and ADALINE identifies the best threshold levels for the various surgical specialties that minimize the expected total cost of disruption. A comparison of the solutions obtained using a Sample Average Approximation (SAA) approach and ADALINE is provided. We find that the adaptive sampling strategy in ADALINE identifies a better solution more quickly than SAA.
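As a rough illustration of the adaptive-sampling idea in this abstract (a sketch, not ADALINE itself), the snippet below halves the search step when neither neighbor improves and simultaneously doubles the replication count, so estimation error keeps pace with optimization error; the quadratic objective, noise model, and all constants are hypothetical:

```python
import random

def simulate(x, n):
    """Estimate f(x) = (x - 2)^2 from n noisy replications (hypothetical oracle)."""
    return sum((x - 2.0) ** 2 + random.gauss(0, 1.0) for _ in range(n)) / n

def adaptive_line_search(x, step=1.0, tol=1e-2):
    """Shrink the step on failure and grow the sample size, so simulation
    effort increases as the incumbent solution gets better."""
    n = 8
    fx = simulate(x, n)
    while step > tol:
        improved = False
        for cand in (x - step, x + step):
            if simulate(cand, n) < fx:
                x, fx, improved = cand, simulate(cand, n), True
                break
        if not improved:
            step /= 2.0           # smaller optimization error from here on...
            n *= 2                # ...so spend more simulation effort per point
            fx = simulate(x, n)   # re-estimate the incumbent at higher fidelity
    return x

random.seed(0)
print(adaptive_line_search(0.0))  # should land near the true minimizer x = 2
```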
- Benefits of integrated screening and vaccination for infection control. Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K. (2021-12). Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics. Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations. Design, Setting, and Participants: We construct a compartmental model of disease spread for vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial routine screening, vaccination with full routine screening), and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members, with 11 undetected, asymptomatic SARS-CoV-2 infections at the start of an 80-day semester. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccination coverage, and screening and vaccination compliance; and consider three scenarios that represent low/medium/high transmission rates and test efficacy. Model parameters come from publicly available or published sources. Results: With low initial vaccination coverage, even aggressive vaccination and screening result in a high number of infections: 1,024/2,040 (1,532/1,773) with routine daily (every other day) screening of the unvaccinated; 275/895 with daily screening extended to the newly vaccinated in base- and worst-case scenarios, with reproduction numbers 4.75 and 6.75, respectively, representative of the COVID-19 Delta variant. With the emergence of the Omicron variant, the reproduction number may increase and/or effective vaccine coverage may decrease if a booster shot is needed to maximize vaccine efficacy. Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when initial vaccination coverage is sufficiently high. The interventions need to be customized considering the initial vaccination coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.
- Benefits of integrated screening and vaccination for infection control. Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K. (PLOS, 2022-04-21). Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics. Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations. Design, setting, and participants: We construct a compartmental model of disease spread under vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial or full routine screening), and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members during the Fall 2021 academic semester, when the Delta variant was the predominant strain. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccine coverage, and screening and vaccination compliance; and consider scenarios that represent low/medium/high transmission and test efficacy. Model parameters come from publicly available or published sources. Results: With low initial vaccine coverage (30% in our study), even aggressive vaccination and screening result in a high number of infections: 1,020 to 2,040 (1,530 to 2,480) with routine daily (every other day) screening of the unvaccinated; 280 to 900 with daily screening extended to the newly vaccinated in base- and worst-case scenarios, which respectively consider reproduction numbers of 4.75 and 6.75 for the Delta variant. Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when both the vaccine effectiveness and the initial vaccine coverage are sufficiently high. The interventions need to be customized considering the initial vaccine coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.
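Both versions of this work rest on a compartmental model with screening and vaccination flows. A minimal discrete-time sketch of that mechanism appears below; the rates are illustrative placeholders, not the paper's calibrated parameters:

```python
# Toy SIR-style dynamics on the hypothetical campus from the abstract.
N = 5455                       # 5,000 students + 455 faculty
S, I, R = N - 11.0, 11.0, 0.0  # 11 undetected infections at semester start
beta, gamma = 0.475, 0.1       # transmission and recovery rates (R0 = 4.75)
screen = 0.5                   # every-other-day screening removes detected cases
vax = 0.01                     # assumed daily vaccination rate (S -> R)
cum = 11.0                     # cumulative infections, seeded cases included

for day in range(80):          # 80-day semester
    new_inf = beta * S * I / N
    recovered = gamma * I
    detected = screen * I      # detected cases are isolated out of circulation
    vaccinated = vax * S
    S -= new_inf + vaccinated
    I += new_inf - recovered - detected
    R += recovered + detected + vaccinated
    cum += new_inf

print(round(cum))              # rough count of semester infections
```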
- Comparative Statics Analysis of Some Operations Management Problems. Zeng, Xin (Virginia Tech, 2012-08-08). We propose a novel analytic approach for the comparative statics analysis of operations management problems on the capacity investment decision and the influenza (flu) vaccine composition decision. Our approach involves exploiting the properties of the underlying mathematical models, and linking those properties to the concept of stochastic orders relationship. The use of stochastic orders allows us to establish our main results without restriction to a specific distribution. A major strength of our approach is that it is "scalable," i.e., it applies to the capacity investment decision problem with any number of non-independent (i.e., demand or resource sharing) products and resources, and to the influenza vaccine composition problem with any number of candidate strains, without a corresponding increase in computational effort. This is unlike the current approaches commonly used in the operations management literature, which typically involve a parametric analysis followed by the use of the implicit function theorem. Providing a rigorous framework for comparative statics analysis, which can be applied to other problems that are not amenable to traditional parametric analysis, is our main contribution. We demonstrate this approach on two problems: (1) the capacity investment decision, and (2) the influenza vaccine composition decision. A comparative statics analysis is integral to the study of these problems, as it allows answers to important questions such as: Does the firm acquire more or less of the different resources available as demand uncertainty increases? Does the firm benefit from an increase in demand uncertainty? How does the vaccine composition change as the yield uncertainty increases? Using our proposed approach, we establish comparative statics results on how the newsvendor's expected profit and optimal capacity decision change with demand risk and demand dependence in multi-product multi-resource newsvendor networks; and how the societal vaccination benefit, the manufacturer's profit, and the vaccine output change with the risk of random yield of strains.
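For a flavor of the comparative-statics questions posed above, here is a toy single-product newsvendor check (standard closed-form normal-demand formulas; prices, costs, and distributions invented) showing the optimal capacity and expected profit responding to a mean-preserving spread in demand:

```python
from statistics import NormalDist

price, cost, mu = 10.0, 6.0, 100.0
fractile = (price - cost) / price               # newsvendor critical ratio
std = NormalDist()                              # standard normal for pdf/cdf
for sigma in (10.0, 20.0, 40.0):                # increasing demand risk
    q = NormalDist(mu, sigma).inv_cdf(fractile)           # optimal capacity
    z = (q - mu) / sigma
    expected_short = sigma * (std.pdf(z) + z * std.cdf(z))  # E[(q - D)^+]
    profit = price * (q - expected_short) - cost * q        # E[p*min(q,D)] - c*q
    print(f"sigma={sigma:5.1f}  q*={q:7.2f}  expected profit={profit:8.2f}")
```

With these numbers the expected profit falls as sigma grows, the single-product analogue of one monotonicity result the stochastic-orders machinery generalizes to networks.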
- Cross-Layer Optimization and Distributed Algorithm Design for Frequency-Agile Radio Networks. Feng, Zhenhua (Virginia Tech, 2010-11-19). Recent advancements in frequency-agile radio technology and dynamic spectrum access networks have created a huge space for improving the utilization efficiency of wireless spectrum. Existing algorithms and protocols, however, have not taken full advantage of the new technologies due to obsolete design ideologies inherited from conventional network design, such as static spectrum access and static channelization. In this dissertation, we propose new resource management models and algorithms that capitalize on the frequency-agility of next-generation radios and dynamic spectrum access concepts to increase the utilization efficiency of wireless spectrum. We first propose a new analytical model for Dynamic Spectrum Access (DSA) networks. Compared to previous models, the new model captures essential DSA mechanisms, such as spectrum sensing and primary interference avoidance, in a solid mathematical representation, thus drastically increasing its accuracy. The subsequent numerical study conforms well with existing empirical studies and provides fundamental insights into the design of future DSA networks. We then take advantage of partially overlapped channels in frequency-agile radio networks and propose a simple joint channel scheduling and flow routing optimization algorithm that maximizes network throughput. The model quantifies the impact of fundamental network settings, such as node density and traffic load, on the performance of networks based on partially overlapped channels. We then propose a cross-layer radio resource allocation algorithm, JSSRC (Joint Spectrum Sharing and end-to-end data Rate Control), that iteratively adapts a frequency-agile radio network to the optimum with regard to aggregate network spectrum utilization. Subsequently, we extend JSSRC to include routing and present TRSS (joint Transport, Routing and Spectrum Sharing) to solve the much more complex joint transport, routing, and spectrum sharing optimization problem. Both JSSRC and TRSS enjoy theoretical convergence and achieve the optimum with appropriate scheduling algorithms. Together, these works strive to improve the efficiency of spectrum utilization in frequency-agile radio networks. Numerical and simulation studies show the effectiveness of our designs in reducing the so-called spectrum shortage problem.
- Decision Support for Casualty Triage in Emergency Response. Kamali, Behrooz (Virginia Tech, 2016-05-04). Mass-casualty incidents (MCI) cause a sudden increase in demand for medical resources in a region. The most important and challenging task in addressing an MCI is managing overwhelmed resources with the goal of increasing the total number of survivors. Currently, most of the decisions following an MCI are made in an ad-hoc manner or by following static guidelines that do not account for the amount of available resources and the number of casualties. The purpose of this dissertation is to introduce and analyze sophisticated service prioritization and resource allocation tools. These tools can be used to produce service order strategies that increase the overall number of survivors. Several proposed models account for the number and mix of casualties and the amount and type of resources available. The large number of elements involved in this problem makes the model very complex; thus, in order to gain some insight into the structure of the optimal solutions, some of the proposed models are developed under simplifying assumptions. These assumptions include limitations on the number of casualty types, and the handling of deaths, servers, and types of resources. Under these assumptions, several characteristics of the optimal policies are identified, and optimal algorithms for various scenarios are developed. We also develop an integrated model that addresses service order, transportation, and hospital selection. A comprehensive set of computational results and comparisons with related work in the literature are provided in order to demonstrate the efficacy of the proposed methodologies.
- Demand Management in Evacuation: Models, Algorithms, and Applications. Bish, Douglas R. (Virginia Tech, 2006-07-31). Evacuation planning is an important disaster management tool. A large-scale evacuation of a region by automobile is a difficult task, especially as demand is often greater than supply. This is made more difficult as the imbalance of supply and demand actually reduces supply due to congestion. Currently, most of the emphasis in evacuation planning is on supply management. The purpose of this dissertation is to introduce and study sophisticated demand management tools, specifically, staging and routing of evacuees. These tools can be used to produce evacuation strategies that reduce or eliminate congestion. A strategic planning model is introduced that accounts for evacuation dynamics and the non-linearities in travel times associated with congestion, yet is tractable and can be applied to large-scale networks. Objective functions of potential interest in evacuation planning are introduced and studied in the context of this model. Insights into the use of staging and routing in evacuation management are delineated and solution techniques are developed. Two different strategic approaches are studied in the context of this model. The first strategic approach is to control the evacuation at a disaggregate level, where customized staging and routing plans are produced for each individual or family unit. The second strategic approach is to control the evacuation at a more aggregate level, where evacuation plans are developed for a larger group of evacuees, based on pre-defined geographic areas. In both approaches, shelter requirements and preferences can also be considered. Computational experience using these two strategic approaches, and their respective solution techniques, is provided using a real network pertaining to Virginia Beach, Virginia, in order to demonstrate the efficacy of the proposed methodologies.
- Design of Tactical and Operational Decisions for Biomass Feedstock Logistics Chain. Ramachandran, Rahul (Virginia Tech, 2016-07-12). The global energy requirement is increasing at a rapid pace, and fossil fuels have been one of the major players in meeting this growing energy demand. However, the resources for fossil fuels are finite. Therefore, it is essential to develop renewable energy sources like biofuels to help address growing energy needs. A key aspect in the production of biofuel is the biomass logistics chain, which constitutes a complex collection of activities that must be judiciously executed for a cost-effective operation. In this thesis, we introduce a two-phase optimization-simulation approach to determine tactical biomass logistics-related decisions cost-effectively in view of the uncertainties encountered in real life. These decisions include the number of trucks to haul biomass from storage locations to a bio-refinery, the number of unloading equipment sets required at storage locations, and the number of satellite storage locations required to serve as collection points for the biomass secured from the fields. Later, an operational-level decision support tool is introduced to aid the "feedstock manager" at the bio-refinery by recommending which satellite storage facilities to unload, how much biomass to ship, how to allocate existing resources (trucks and unloading equipment sets) during each time period, and how to route unloading equipment sets between storage facilities. Another problem studied is the "Bale Collection Problem" associated with the farmgate operation. It is essentially a capacitated vehicle routing problem with unit demand (CVRP-UD), and its solution defines a cost-effective sequence for collecting bales from the field after harvest.
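To make the CVRP-UD structure concrete, a greedy construction for bale collection might look like the following; the truck capacity, coordinates, and nearest-neighbor rule are hypothetical simplifications for illustration, not the thesis's method:

```python
import math

def collect(bales, cap, depot=(0.0, 0.0)):
    """Each truck holds `cap` bales (unit demand), repeatedly visits the
    nearest remaining bale, and returns to the depot when full."""
    left, routes = set(range(len(bales))), []
    while left:
        pos, route, dist = depot, [], 0.0
        for _ in range(min(cap, len(left))):
            nxt = min(left, key=lambda i: math.dist(pos, bales[i]))
            dist += math.dist(pos, bales[nxt])
            pos = bales[nxt]
            route.append(nxt)
            left.remove(nxt)
        routes.append((route, dist + math.dist(pos, depot)))
    return routes

bales = [(1, 2), (2, 1), (5, 5), (6, 4), (1, 5), (7, 7)]   # made-up field layout
for route, dist in collect(bales, cap=3):
    print(route, round(dist, 2))
```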
- Development of New Network-Level Optimization Model for Salem District Pavement Maintenance Programming. Akyildiz, Sercan (Virginia Tech, 2008-08-22). Infrastructure systems are critical to sustaining and improving economic growth. Poor condition of infrastructure systems results in lost productivity and reduces the quality of life. Today's global economy forces governments to sustain and renew infrastructure systems already in place in order to remain competitive and productive (GAO, 2008). Therefore, civil engineers and policymakers have been quite interested in the overall quality of the highways and bridges throughout the US (Miller, 2007). Transportation networks are essential parts of the Nation's infrastructure systems. Deterioration due to age and use is the main threat to the level of service observed in surface transportation networks. Thus, highway agencies throughout the United States strive to maintain, repair, and renew transportation systems already in place (Miller, 2007). A recent disaster, the collapse of the Minneapolis I-35W Bridge, once again revealed the importance of infrastructure preservation programs and resulted in debates as to how state departments of transportation (DOTs) should and can preserve existing infrastructure systems. Therefore, it is essential to establish effective maintenance programs to preserve aging infrastructure systems. The major challenge facing state highway maintenance managers today is to preserve road networks at an acceptable level of serviceability subject to stringent yearly maintenance and rehabilitation (M&R) budgets. Maintenance managers must allocate such limited budgets among competing alternatives, which makes the situation even more challenging. Insufficient use of available smart decision-making tools impedes the development of effective and efficient maintenance programs. Hence, this thesis presents the development and implementation of a network-level pavement maintenance optimization model which can be used by maintenance managers as a decision-making tool to address the maintenance budget allocation issue. The network-level optimization model is formulated using linear programming and is subject to budget constraints and the agencies' pavement performance goals in terms of total lane-miles in each pavement condition state. The tool, developed in Microsoft Office Excel, can compute the optimal amount of investment for each pavement treatment type in a given funding period. Thus, the model enables maintenance managers in highway agencies to develop alternative network-level pavement maintenance strategies through an automated and optimized process rather than using what-if analysis.
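A miniature version of such a budget-constrained allocation model can be written as an LP in a few lines; the treatment types, costs, and effectiveness coefficients below are invented for illustration only:

```python
# Allocate a yearly budget across treatment types to maximize condition gain.
from scipy.optimize import linprog

cost = [40_000, 120_000, 400_000]   # $/lane-mile: preventive, corrective, rehab
gain = [0.3, 0.7, 1.0]              # condition improvement per lane-mile treated
budget = 10_000_000
need = [60, 40, 25]                 # lane-miles eligible for each treatment

res = linprog(c=[-g for g in gain],             # maximize total gain
              A_ub=[cost], b_ub=[budget],       # stay within the M&R budget
              bounds=list(zip([0] * 3, need)),  # treat at most what's eligible
              method="highs")
print(res.x, -res.fun)   # lane-miles per treatment and total condition gain
```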
- Discrete and Continuous Nonconvex Optimization: Decision Trees, Valid Inequalities, and Reduced Basis Techniques. Dalkiran, Evrim (Virginia Tech, 2011-03-31). This dissertation addresses the modeling and analysis of a strategic risk management problem via a novel decision tree optimization approach, as well as the development of enhanced Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for solving nonconvex polynomial programming problems, through the generation of valid inequalities and reduced representations, along with the design and implementation of efficient algorithms. We first conduct a quantitative analysis for a strategic risk management problem that involves allocating certain available failure-mitigating and consequence-alleviating resources to reduce the failure probabilities of system safety components and subsequent losses, respectively, together with selecting optimal strategic decision alternatives, in order to minimize the risk or expected loss in the event of a hazardous occurrence. Using a novel decision tree optimization approach to represent the cascading sequences of probabilistic events as controlled by key decisions and investment alternatives, the problem is modeled as a nonconvex mixed-integer 0-1 factorable program. We develop a specialized branch-and-bound algorithm in which lower bounds are computed via tight linear relaxations of the original problem that are constructed by utilizing a polyhedral outer-approximation mechanism in concert with two alternative linearization schemes having different levels of tightness and complexity. We also suggest three alternative branching schemes, each of which is proven to guarantee convergence to a global optimum for the underlying problem. Extensive computational results and sensitivity analyses are presented to provide insights and to demonstrate the efficacy of the proposed algorithm. In particular, our methodology outperformed the commercial software BARON (Version 8.1.5), yielding a more robust performance along with an 89.9% savings in effort on average. Next, we enhance RLT-based LP relaxations for polynomial programming problems by developing two classes of valid inequalities: v-semidefinite cuts and bound-grid-factor constraints. The first of these uses concepts derived from semidefinite programming. Given an RLT relaxation, we impose positive semidefiniteness on suitable dyadic variable-product matrices, and correspondingly derive implied semidefinite cuts. In the case of polynomial programs, there are several possible variants for selecting such dyadic variable-product matrices for imposing positive semidefiniteness restrictions in order to derive implied valid inequalities, which leads to a new class of cutting planes that we call v-semidefinite cuts. We explore various strategies for generating such cuts within the context of an RLT-based branch-and-cut scheme, and exhibit their relative effectiveness towards tightening the RLT relaxations and solving the underlying polynomial programming problems, using a test-bed of randomly generated instances as well as standard problems from the literature. Our results demonstrate that these cutting planes achieve a significant tightening of the lower bound in contrast with using RLT as a stand-alone approach, thereby enabling an appreciable reduction in the overall computational effort, even in comparison with the commercial software BARON. Empirically, our proposed cut-enhanced algorithm reduced the computational effort required by the latter two approaches by 44% and 77%, respectively, over a test-bed of 60 polynomial programming problems. As a second cutting-plane strategy, we introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on the effect of these valid inequalities in driving the discrepancies between RLT variables and their associated nonlinear products to zero. To preserve computational expediency while promoting efficiency, we propose certain concurrent and sequential cut generation routines and various grid-factor selection rules. The results indicate a significant tightening of lower bounds, which yields an overall reduction in computational effort of 21% for solving a test-bed of 15 challenging polynomial programming problems to global optimality in comparison with the basic RLT procedure, and over a 100-fold speed-up in comparison with the commercial software BARON. Finally, we explore equivalent, reduced-size RLT-based formulations for polynomial programming problems. Utilizing a basis partitioning scheme for an embedded linear equality subsystem, we show that a strict subset of the RLT defining equalities implies the remaining ones. Applying this result, we derive significantly reduced RLT representations and develop certain coherent associated branching rules that assure convergence to a global optimum, along with static as well as dynamic basis selection strategies to implement the proposed procedure. In addition, we enhance the RLT relaxations with v-semidefinite cuts, which are empirically shown to further improve the relative performance of the reduced RLT method over the usual RLT approach. Computational results presented using a test-bed of 10 challenging polynomial programs to evaluate the different reduction strategies demonstrate that our best-performing approach achieved more than a four-fold improvement in computational effort in comparison with both the commercial software BARON and a recently developed open-source code, Couenne, for solving nonconvex mixed-integer nonlinear programming problems. Moreover, our approach robustly solved all the test cases to global optimality, whereas BARON and Couenne were jointly able to solve only a single instance to optimality within the set computational time limit, having unresolved average optimality gaps of 260% and 437%, respectively, for the other nine instances. This dissertation makes several broader contributions to the field of nonconvex optimization, including factorable, nonlinear mixed-integer programming problems. The proposed decision tree optimization framework can serve as a versatile management tool in the arenas of homeland security and healthcare. Furthermore, we have advanced the frontier for tackling formidable nonconvex polynomial programming problems that arise in emerging fields such as signal processing, biomedical engineering, materials science, and risk management. An open-source software package using the proposed reduced RLT representations, semidefinite cuts, bound-grid-factor constraints, and range reduction strategies is currently under preparation. In addition, the different classes of challenging polynomial programming test problems utilized in the computational studies conducted in this dissertation have been made available to other researchers via the Web page http://filebox.vt.edu/users/dalkiran/website/. It is our hope and belief that the modeling and methodological contributions made in this dissertation will serve society in a broader context through the myriad of widespread applications they support.
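To make the RLT bound-factor machinery concrete, the following toy relaxes a single bilinear term w = xy by multiplying the four bound factors pairwise and linearizing, which yields the familiar McCormick inequalities (the simplest RLT bound-factor products); the bounds and objective are invented, and this is only the degree-2 special case of the technique:

```python
from scipy.optimize import linprog

lx, ux, ly, uy = 0.0, 2.0, 0.0, 3.0   # variable bounds (illustrative)
# Variables [x, y, w]; maximize w, an LP relaxation of maximizing x*y.
# Linearized products of bound factors (x-lx), (ux-x), (y-ly), (uy-y):
#   w >= lx*y + ly*x - lx*ly      w >= ux*y + uy*x - ux*uy
#   w <= ux*y + ly*x - ux*ly      w <= lx*y + uy*x - lx*uy
A_ub = [[ ly,  lx, -1],   # ly*x + lx*y - w <= lx*ly
        [ uy,  ux, -1],   # uy*x + ux*y - w <= ux*uy
        [-ly, -ux,  1],   # w - ly*x - ux*y <= -ux*ly
        [-uy, -lx,  1]]   # w - uy*x - lx*y <= -lx*uy
b_ub = [lx * ly, ux * uy, -ux * ly, -lx * uy]
res = linprog(c=[0, 0, -1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(lx, ux), (ly, uy), (None, None)], method="highs")
print(res.x)   # w reaches ux*uy = 6 at (x, y) = (2, 3), matching max x*y here
```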
- Discrete Approximations, Relaxations, and Applications in Quadratically Constrained Quadratic Programming. Beach, Benjamin Josiah (Virginia Tech, 2022-05-02). We present work on theory and applications for Mixed Integer Quadratically Constrained Quadratic Programs (MIQCQPs). We introduce new mixed-integer programming (MIP)-based relaxation and approximation schemes for general Quadratically Constrained Quadratic Programs (QCQPs), and also study practical applications of QCQPs and Mixed-Integer QCQPs (MIQCQPs). We first address a challenging tank blending and scheduling problem regarding operations for a chemical plant. We model the problem as a discrete-time nonconvex MIQCP, then approximate this model as an MILP using a discretization-based approach. We combine a rolling-horizon approach with the discretization of individual chemical property specifications to deal with long scheduling horizons, time-varying quality specifications, and multiple suppliers with discrete arrival times. Next, we study optimization methods applied to minimizing forces for poses and movements of chained Stewart platforms (SPs). These SPs are parallel mechanisms that are stiffer and more precise, on average, than their serial counterparts, at the cost of a smaller range of motion. The robot will be used in concert with several other types of robots to perform complex assembly missions in space. We develop algorithms and optimization models that can efficiently decide on favorable poses and movements that reduce force loads on the robot, hence reducing wear on the machine and allowing for a larger workspace and a greater overall payload capacity. In the third work, we present a technique for producing valid dual bounds for nonconvex quadratic optimization problems. The approach leverages an elegant piecewise-linear approximation for univariate quadratic functions and formulates this approximation using mixed-integer programming (MIP). Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the strength (or sharpness) of our formulation and the tightness of its approximation. We computationally demonstrate that our model outperforms existing MIP relaxations, and on hard instances can compete with state-of-the-art solvers. Finally, we study piecewise-linear relaxations for solving quadratically constrained quadratic programs (QCQPs). We introduce new relaxation methods based on univariate reformulations of nonconvex variable products, leveraging the relaxation from the third work to model each univariate quadratic term. We also extend the NMDT approach (Castro, 2015) to leverage discretization for both variables in a bilinear term, squaring the resulting precision for the same number of binary variables. We then present various results related to the relative strength of the various formulations.
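The univariate building block referenced above can be pictured with a secant (piecewise-linear) approximation of x²; a real MIP relaxation would encode segment selection with binaries or SOS2 constraints, but the toy below simply evaluates the approximation on an arbitrary breakpoint grid:

```python
def pwl_square(x, lo=0.0, hi=4.0, pieces=8):
    """Secant approximation of f(x) = x^2 through equally spaced breakpoints."""
    width = (hi - lo) / pieces
    k = min(int((x - lo) / width), pieces - 1)   # active segment index
    x0, x1 = lo + k * width, lo + (k + 1) * width
    y0, y1 = x0 * x0, x1 * x1                    # chord through the breakpoints
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

for x in (0.3, 1.7, 3.95):
    print(x, x * x, round(pwl_square(x), 4))     # over-estimates by <= width^2/4
```

Halving the segment width cuts the worst-case gap by a factor of four, which is the precision-versus-binaries tradeoff the discretization schemes above negotiate.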
- Distributed Wireless Resource Management in the Internet of Things. Park, Taehyeun (Virginia Tech, 2020-06-18). The Internet of Things (IoT) is a promising networking technology that will interconnect a plethora of heterogeneous wireless devices. To support connectivity across a massive-scale IoT, the scarce wireless communication resources must be appropriately allocated among the IoT devices, while considering the technical challenges that arise from the unique properties of the IoT, such as device heterogeneity, strict communication requirements, and limited device capabilities in terms of computation and memory. The primary goal of this dissertation is to develop novel resource management frameworks with which resource-constrained IoT devices can operate autonomously in a dynamic environment. First, a comprehensive overview of the use of various learning techniques for wireless resource management in an IoT is provided, and potential applications for each learning framework are proposed. Moreover, to capture the heterogeneity among IoT devices, a framework based on cognitive hierarchy theory is discussed, and its implementation with learning techniques of different complexities for IoT devices with varying capabilities is analyzed. Next, the problem of dynamic, distributed resource allocation in an IoT is studied when there are heterogeneous messages. In particular, a novel finite-memory multi-state sequential learning framework is proposed to enable diverse IoT devices to reallocate the limited communication resources in a self-organizing manner to satisfy the delay requirement of critical messages, while minimally affecting the delay-tolerant messages. The proposed learning framework is shown to be effective for IoT devices with limited memory and observation capabilities to learn the number of critical messages. The results show that the performance of the learning framework depends on the memory size and observation capability of the IoT devices, and that the learning framework can realize low-delay transmission in a massive IoT. Subsequently, the problem of one-to-one association between resource blocks and IoT devices is studied when the IoT devices have partial information. The one-to-one association is formulated as a Kolkata Paise Restaurant (KPR) game in which an IoT device tries to choose a resource block with the highest gain, while avoiding duplicate selection. Moreover, a Nash equilibrium (NE) of the IoT KPR game is shown to coincide with the socially optimal solution. A proposed learning framework for the IoT KPR game is shown to significantly increase the number of resource blocks used for successful transmission compared to a baseline. The KPR game is then extended to consider the age of information (AoI), a metric that quantifies the freshness of information from the perspective of the destination. Moreover, to capture heterogeneity in an IoT, non-linear AoI is introduced. To minimize AoI, centralized and distributed approaches for resource allocation are proposed to enable the sharing of limited communication resources, while delivering messages to the destination in a timely manner. Moreover, the proposed distributed resource allocation scheme is shown to converge to an NE and to significantly lower the average AoI compared to a baseline. Finally, the problem of dynamically partitioning the transmit power levels in non-orthogonal multiple access is studied when there are heterogeneous messages. In particular, an optimization problem is formulated to determine the number of power levels for different message types, and an estimation framework is proposed to enable the network base station to adjust power level partitioning to satisfy the performance requirements. The proposed framework is shown to effectively increase the transmission success probability compared to a baseline. Furthermore, an optimization problem is formulated to increase sum-rate and reliability by adjusting target received powers. Under different fading channels, the optimal target received powers are analyzed, and a tradeoff between reliability and sum-rate is shown. In conclusion, the theoretical and performance analyses of the frameworks proposed in this dissertation will prove essential for implementing appropriate distributed resource allocation mechanisms for dynamic, heterogeneous IoT environments.
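A back-of-the-envelope simulation of the KPR flavor of the resource-block association problem is given below; the sticky-winner re-picking rule is a stand-in for the dissertation's learning framework, and all sizes and probabilities are arbitrary:

```python
import random

def kpr(n_devices, n_blocks, stickiness, rounds=50):
    """Each device picks a block; a pick succeeds iff it is the block's sole
    picker. Winners keep their block with probability `stickiness`; everyone
    else re-picks uniformly at random."""
    picks = [random.randrange(n_blocks) for _ in range(n_devices)]
    wins = 0
    for _ in range(rounds):
        counts = [0] * n_blocks
        for p in picks:
            counts[p] += 1
        wins = sum(1 for c in counts if c == 1)
        picks = [p if counts[p] == 1 and random.random() < stickiness
                 else random.randrange(n_blocks) for p in picks]
    return wins

random.seed(1)
for s in (0.0, 0.95):
    avg = sum(kpr(100, 100, s) for _ in range(20)) / 20
    print(f"stickiness={s}: ~{avg:.0f} of 100 blocks carry one transmission")
```

Pure random re-picking leaves roughly a 1/e fraction of blocks successfully used, while letting winners persist pushes utilization far higher, the qualitative effect the learning result above formalizes.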
- A Downtown Space Reservation System: Its Design and Evaluation. Zhao, Yueqin (Virginia Tech, 2009-09-07). This research explores the feasibility of providing innovative and effective solutions for traffic congestion. The design of reservation systems is being considered as an alternative and/or complementary travel demand management (TDM) strategy. A reservation indicates that a user will follow a booking procedure defined by the reservation system before traveling so as to obtain the right to access a facility or resource. In this research, the reservation system is introduced for a cordon-based downtown road network, hereafter called the Downtown Space Reservation System (DSRS). The research is executed in three steps. In the first step, the DSRS is developed using classic optimization techniques in conjunction with an artificial intelligence technology. The development of this system is the foundation of the entire research, and the second and third steps build upon it. In the second step, traffic simulation models are executed so as to assess the impact of the DSRS on a hypothetical transportation road network. A simulation model provides various transportation measures and helps the decision maker analyze the system from a transportation perspective. In this step, multiple simulation runs (demand scenarios) are conducted and performance insights are generated. However, additional performance measurement and system design issues need to be addressed beyond the simulation paradigm. First, it is not the absolute representation of performance that matters, but the concept of relative performance that is important. Moreover, a simulation does not directly demonstrate how key performance measures interact with each other, which is critical when trying to understand a system structure. To address these issues, in the third step, a comprehensive performance measurement framework is applied: network Data Envelopment Analysis (DEA), an analytical technique for measuring the relative efficiency of organizational units, or, in this case, demand scenarios. The network model combines the perspectives of the transportation service provider, the user, and the community, who are the major stakeholders in the transportation system. This framework enables the decision maker to gain an in-depth appreciation of the system design and performance measurement issues.
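The network DEA used here generalizes the basic DEA ratio model. As a minimal single-input, single-output illustration (not the dissertation's network formulation), the input-oriented CCR score of each demand scenario can be computed with one small LP per unit, using invented data:

```python
from scipy.optimize import linprog

inputs  = [2.0, 3.0, 4.0]      # e.g., average user delay per scenario (DMU)
outputs = [10.0, 18.0, 16.0]   # e.g., vehicles served per scenario

def ccr_score(j):
    # Variables [u, v]: maximize u*y_j subject to v*x_j = 1 and
    # u*y_i - v*x_i <= 0 for every DMU i (so no unit can score above 1).
    A_ub = [[y, -x] for x, y in zip(inputs, outputs)]
    res = linprog(c=[-outputs[j], 0.0], A_ub=A_ub, b_ub=[0.0] * len(inputs),
                  A_eq=[[0.0, inputs[j]]], b_eq=[1.0],
                  bounds=[(0, None), (0, None)], method="highs")
    return -res.fun

for j in range(3):
    print(j, round(ccr_score(j), 3))   # the best-ratio scenario (18/3) scores 1.0
```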
- Effective screening strategies for safe opening of universities under Omicron and Delta variants of COVID-19. Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K. (Springer Nature, 2022-12-09). As new COVID-19 variants emerge, and disease and population characteristics change, screening strategies may also need to change. We develop a decision-making model that can assist a college to determine an optimal screening strategy based on their characteristics and resources, considering COVID-19 infections/hospitalizations/deaths; peak daily hospitalizations; and the tests required. We also use this tool to generate screening guidelines for the safe opening of college campuses. Our compartmental model simulates disease spread on a hypothetical college campus under co-circulating variants with different disease dynamics, considering: (i) the heterogeneity in disease transmission and outcomes for faculty/staff and students based on vaccination status and level of natural immunity; and (ii) variant- and dose-dependent vaccine efficacy. Using the Spring 2022 academic semester as a case study, we study routine screening strategies, and find that screening the faculty/staff less frequently than the students, and/or the boosted and vaccinated less frequently than the unvaccinated, may avert a higher number of infections per test, compared to universal screening of the entire population at a common frequency. We also discuss key policy issues, including the need to revisit the mitigation objective over time, effective strategies that are informed by booster coverage, and if and when screening alone can compensate for low booster coverage.
- Efficiency-Driven Enterprise Design. Herrera-Restrepo, Oscar A. (Virginia Tech, 2016-06-01). This dissertation explores the use of the efficiency performance measurement paradigm (EM), in terms of its concepts and applications, as an ex-ante mechanism to evaluate enterprise performance and inform enterprise design. The design of an enterprise is driven by decisions that include, but are not limited to, which strategies to implement, how to allocate resources, how to shift operating patterns, and how to boost coordination among enterprises. To date, EM has mainly been used as a descriptive mechanism, but the fundamental reason for measuring performance in an ex-post fashion, i.e., how well an enterprise does, is also valid in the context of design decisions, i.e., ex-ante evaluation. The contrast between the ex-post and ex-ante uses of EM relates to the measurement purpose, i.e., why to measure. Ex-post measurement focuses on evaluating 'what happened' (non-disruptive), while ex-ante measurement emphasizes informing design decisions that explore changes to current settings (more disruptive). Within this context, and to achieve the purpose above, this dissertation is supported by theoretical insights and complemented by three empirical studies. The theoretical insights relate to facts that support, connect to, and challenge (i.e., facilitate or impede) the ex-ante use of EM for enterprise evaluation and for informing enterprise design. Those insights are based on the efficiency performance measurement, organizational design, and enterprise systems engineering literatures. Meanwhile, the three empirical studies situate the application of EM as an ex-ante mechanism to inform evacuation management, bank branch management, and power plants. The theoretical and empirical results indicate that EM is well suited both for evaluating enterprise performance and for informing design decisions. The main contribution of this dissertation to enterprise stakeholders is that EM can be used not only to answer how well the enterprise did, but also how well it could do if certain design decisions are taken.
- Efficient Prevalence Estimation for Emerging and Seasonal Diseases Under Limited Resources. Nguyen, Ngoc Thu (Virginia Tech, 2019-05-30). Estimating the prevalence rate of a disease is crucial for controlling its spread and for planning of healthcare services. Due to limited testing budgets and resources, prevalence estimation typically entails pooled, or group, testing, where specimens (e.g., blood, urine, tissue swabs) from a number of subjects are combined into a testing pool, which is then tested via a single test. Testing outcomes from multiple pools are analyzed so as to assess the prevalence of the disease. The accuracy of prevalence estimation relies on the testing pool design, i.e., the number of pools to test and the pool sizes (the number of specimens to combine in a pool). Determining an optimal pool design for prevalence estimation can be challenging, as it requires prior information on the current status of the disease, which can be highly unreliable, or simply unavailable, especially for emerging and/or seasonal diseases. We develop and study frameworks for prevalence estimation under highly unreliable prior information on the disease and limited testing budgets. Embedded into each estimation framework is an optimization model that determines the optimal testing pool design, considering the trade-off between testing cost and estimation accuracy. We establish important structural properties of optimal testing pool designs in various settings, and develop efficient and exact algorithms. Our numerous case studies, ranging from prevalence estimation of the human immunodeficiency virus (HIV) in various parts of Africa, to prevalence estimation of diseases in plants and insects, including the Tomato Spotted Wilt virus in thrips and West Nile virus in mosquitoes, indicate that the proposed estimation methods substantially outperform current approaches developed in the literature, and produce robust testing pool designs that can hedge against the uncertainty in model inputs. Our research findings indicate that the proposed prevalence estimation frameworks are capable of producing accurate prevalence estimates, and are highly desirable, especially for emerging and/or seasonal diseases under limited testing budgets.
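The core pooled-testing estimator is easy to state: with perfect tests, pools of size s, and k positive pools out of n, the maximum-likelihood prevalence estimate is p_hat = 1 - (1 - k/n)^(1/s). A quick simulation check of that textbook formula, with made-up numbers (this does not reproduce the dissertation's optimal pool designs):

```python
import random

def estimate(p_true, pool_size, n_pools):
    """Simulate pooled tests (a pool is positive iff any specimen is positive,
    test errors ignored) and return the MLE of the prevalence."""
    k = sum(any(random.random() < p_true for _ in range(pool_size))
            for _ in range(n_pools))
    return 1.0 - (1.0 - k / n_pools) ** (1.0 / pool_size)

random.seed(7)
print(round(estimate(p_true=0.02, pool_size=10, n_pools=200), 4))  # near 0.02
```

The quality of the estimate depends strongly on the pool size relative to the unknown prevalence, which is exactly why the pool-design optimization above matters when prior information is unreliable.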
- Exploiting Spatial Degrees-of-Freedom for Energy-Efficient Next Generation Cellular Systems. Yao, Miao (Virginia Tech, 2017-04-12). This research addresses green communication issues, including energy efficiency, peak-to-average power ratio (PAPR) reduction, and power amplifier (PA) linearization. Green communication is expected to be a primary goal in next-generation cellular systems because it promises to reduce operating costs. The first key issue is the energy efficiency of distributed antenna systems (DASs). The power consumption of high power amplifiers (HPAs) used in wireless communication systems is determined by the transmit power and drain efficiency. For unequal power allocation in orthogonal frequency division multiplexing (OFDM), the drain efficiency of the PA is determined by the PAPR and hence by the power distribution. This research proposes a PAPR-aware energy-efficient resource allocation scheme for joint orthogonal frequency division multiple access (OFDMA)/space division multiple access (SDMA) downlink transmission from DASs. Grouping-based SDMA is applied to exploit spatial diversity while avoiding performance degradation from correlated channels. The developed scheme considers the impact of both the system data rate and the effective power consumption on the PAPR during resource allocation. We also present a suboptimal joint subcarrier and power allocation algorithm to facilitate implementation of power-efficient multi-channel wireless communications. By solving the Karush-Kuhn-Tucker conditions, a closed-form solution for the power allocation of each remote radio head is obtained. The second key issue relates to PAPR reduction in massive multiple-input multiple-output (MIMO) systems. The large number of PAs in next-generation massive MIMO cellular communication systems requires using inexpensive PAs at the base station to keep array cost reasonable. Large-scale multiuser (MU) MIMO systems can provide extra spatial degrees-of-freedom (DoFs) for PAPR reduction. This work applies both recurrent neural network (RNN)- and semidefinite relaxation (SDR)-based schemes, for different purposes, to reduce PAPR. The highly parallel structure of the RNN is proposed in this work to address the issues of scalability and the stringent requirements on computational time in the PAPR-aware precoding problem. An SDR-based framework is proposed to reduce PAPR while accommodating channel uncertainties and intercell coordination. Both of the proposed structures reduce linearity requirements and enable the use of lower-cost RF components for the large-scale MU-MIMO-OFDM downlink. The third key issue is digital predistortion (DPD) in massive MIMO systems. The primary source of nonlinear distortion in wireless transmitters is the PA, which is commonly modeled using polynomials. Conventional DPD schemes use high-order polynomials to accurately approximate and compensate for the nonlinearity of the PA. This is impractical for scaling to tens or hundreds of PAs in massive MIMO systems. This work therefore proposes a scalable DPD method, achieved by exploiting the massive DoFs of next-generation front ends. We propose a novel indirect learning structure which adapts the channel and PA distortion iteratively by cascading adaptive zero-forcing precoding and DPD. Experimental results show that the proposed solution saves over 70% of the computational complexity, and that a 3rd-order polynomial with the new solution achieves the same performance as conventional DPD using an 11th-order polynomial for a 100x10 massive MIMO configuration.
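The PAPR quantity driving the PA-efficiency discussion above can be computed directly: an OFDM symbol is the inverse DFT of modulated subcarriers, and its peak power typically sits far above its average power. A small self-contained illustration (naive O(n²) DFT, arbitrary sizes and seeding, QPSK assumed):

```python
import cmath, math, random

def ofdm_papr(n_subcarriers=256):
    """PAPR in dB of one OFDM symbol with random QPSK subcarriers."""
    qpsk = [random.choice([1+1j, 1-1j, -1+1j, -1-1j])
            for _ in range(n_subcarriers)]
    time = [sum(X * cmath.exp(2j * math.pi * k * t / n_subcarriers)
                for k, X in enumerate(qpsk)) / n_subcarriers
            for t in range(n_subcarriers)]          # inverse DFT (no FFT, for clarity)
    power = [abs(s) ** 2 for s in time]
    return 10 * math.log10(max(power) / (sum(power) / len(power)))

random.seed(3)
print(round(ofdm_papr(), 2), "dB")   # typically on the order of 8-12 dB
```

An amplifier must be backed off by roughly this margin to stay linear, which is why the precoding schemes above spend spatial degrees-of-freedom to push the peak down.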
- Fall Risk Assessment by Measuring Determinants of Gait. Zhang, Xiaoyue (Virginia Tech, 2013-12-12). Fall accidents are one of the most serious problems leading to unintentional injuries and fatalities among older adults. However, it is difficult to assess individuals' fall risk and to determine who is at risk of falls and in need of fall interventions. Therefore, this study was motivated by a need to provide a cogent fall risk assessment strategy that lends itself to various wireless platforms. It aimed at developing a fall risk assessment method for evaluating individuals' fall risk by providing diagnostic modalities associated with gait. In this study, a "determinants of gait" model was adopted to analyze gait characteristics and associate them with fall risk. As a proof of concept, this study concentrated on slip-induced falls and slip initiation risks. Two important parameters of the determinants of gait, i.e., pelvic rotation and knee flexion, were found to be associated with slip initiation severity. This relationship appeared to be capable of differentiating fallers from non-fallers among older adults, as well as differentiating normal walking conditions from constrained walking conditions. Furthermore, this study also leveraged portable wireless sensor techniques and investigated whether miniature inertial measurement units could effectively measure the important parameters of the determinants of gait, and therefore assess slip and fall risk. Results of this study suggested that pelvic rotation and knee flexion measured by the inertial measurement units can be used as a substitute for the traditional motion capture system and can assess slip and fall risk with fairly good accuracy. In summary, the findings of this study filled the knowledge gap about how critical gait characteristics can influence slip and fall risk, and demonstrated a new solution for assessing slip and fall risk with low cost and high efficiency.
- A Framework for Data Quality for Synthetic Information. Gupta, Ragini (Virginia Tech, 2014-07-24). Data quality has been an area of increasing interest for researchers in recent years due to the rapid emergence of 'big data' processes and applications. In this work, the data quality problem is viewed from the standpoint of synthetic information. Based on the structure and complexity of synthetic data, a need for a data quality framework specific to it was recognized. This thesis presents this framework, along with implementation details and results from a large synthetic dataset to which the developed testing framework was applied. A formal conceptual framework was designed for assessing the data quality of synthetic information. This framework involves developing analytical methods and software for assessing data quality for synthetic information. It includes dimensions of data quality that check the inherent properties of the data as well as evaluate it in the context of its use. The framework is a software framework designed with scalability, generality, integrability, and modularity in mind. A data abstraction layer has been introduced between the synthetic data and the tests. This abstraction layer has multiple benefits over direct access to the data by the tests: it decouples the tests from the data so that the details of storage and implementation are kept hidden from the user. We have implemented data quality measures for several quality dimensions: accuracy and precision, reliability, completeness, consistency, and validity. The particular tests and quality measures implemented span a range from low-level syntactic checks to high-level semantic quality measures. In each case, in addition to the results of the quality measure itself, we also present results on the computational performance (scalability) of the measure.
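The abstraction-layer idea might be sketched as follows; the class and field names are invented, and the completeness measure shown is just one of the dimensions listed above:

```python
class DataSource:
    """Abstraction layer: hides how synthetic records are stored."""
    def records(self):
        raise NotImplementedError

class InMemorySource(DataSource):
    """One possible backend; a file- or database-backed source would expose
    the same records() interface, so the tests below need not change."""
    def __init__(self, rows):
        self.rows = rows
    def records(self):
        return iter(self.rows)

def completeness(source, required):
    """Fraction of records with all required fields present (non-None)."""
    total = ok = 0
    for rec in source.records():
        total += 1
        ok += all(rec.get(f) is not None for f in required)
    return ok / total if total else 1.0

people = InMemorySource([{"id": 1, "age": 34}, {"id": 2, "age": None}])
print(completeness(people, required=["id", "age"]))   # 0.5
```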
- Green Design of a Cellulosic Bio-butanol Supply Chain Network with Life Cycle Assessment. Liang, Li (Virginia Tech, 2017-10-03). The incentives and policies spearheaded by the U.S. government have created abundant opportunities for renewable fuel production and commercialization. Bio-butanol is a very promising renewable fuel for the future transportation market. Many efforts have been made to improve its production process, but bio-butanol research has seldom discussed the integration and optimization of a cellulosic bio-butanol supply chain network. This study focused on the development of a physical supply chain network and the optimization of a green supply chain network for cellulosic bio-butanol. To develop the physical supply chain network, the production process, material flow, physical supply chain participants, and supply chain logistics activities of cellulosic bio-butanol were identified by conducting an onsite visit and a survey of current bio-fuel stakeholders. To optimize the green supply chain network for cellulosic bio-butanol, life cycle analysis was integrated into a multi-objective linear programming model. With the objectives of maximizing economic profit and minimizing greenhouse gas emissions, the proposed model can optimize the location and size of a bio-butanol production plant. The mathematical model was applied to a case study in the state of Missouri, resolving the tradeoff between the feedstock and market availability of sorghum stem bio-butanol. The results of this research can be used to support decision making at the strategic, tactical, and operational levels of cellulosic bio-butanol commercialization and supply chain optimization, and can also serve as an introductory guideline for beginners interested in cellulosic bio-butanol commercialization and supply chain design.
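The profit-versus-emissions tension in such a multi-objective model can be previewed with a weighted-sum sweep over two hypothetical plant options (all numbers invented; a full model would also choose capacities and flows):

```python
# Candidate plant designs: (annual profit in $M, annual GHG in ktCO2e).
plants = {
    "small, near feedstock": (8.0, 20.0),
    "large, near market":   (14.0, 55.0),
}
for w in (0.0, 0.5, 0.9, 1.0):   # weight on profit; (1 - w) penalizes emissions
    best = max(plants, key=lambda p: w * plants[p][0] - (1 - w) * plants[p][1])
    print(f"w={w:.1f} -> {best}")
```

Sweeping the weight traces out the efficient frontier between the two objectives; the preferred design flips once the decision maker values profit heavily enough.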