Browsing by Author "Fraticelli, Barbara M. P."
- The Approach-dependent, Time-dependent, Label-constrained Shortest Path Problem and Enhancements for the CART Algorithm with Application to Transportation Systems. Jeenanunta, Chawalit (Virginia Tech, 2004-05-10). In this dissertation, we consider two important problems pertaining to the analysis of transportation systems. The first of these is an approach-dependent, time-dependent, label-constrained shortest path problem that arises in the context of the Route Planner Module of the Transportation Analysis Simulation System (TRANSIMS), which has been developed by the Los Alamos National Laboratory for the Federal Highway Administration. This is a variant of the shortest path problem defined on a transportation network composed of a set of nodes and a set of directed arcs, such that each arc has an associated label designating a mode of transportation and an associated travel time function that depends on the time of arrival at the tail node, as well as on the node via which this node was approached. The lattermost feature is a new concept injected into the time-dependent, label-constrained shortest path problem, and is used to model turn penalties in transportation networks. The time spent at an intersection before entering the next link depends on whether we travel straight through the intersection, make a right turn at it, or make a left turn at it. Accordingly, we model this situation by incorporating within each link's travel time function a dependence on the link via which its tail node was approached. We propose two effective algorithms to solve this problem by adapting two efficient existing algorithms to handle time dependency and label constraints: the Partitioned Shortest Path (PSP) algorithm and the Heap-Dijkstra (HP-Dijkstra) algorithm, and we present related theoretical complexity results. In addition, we explore various heuristic methods to curtail the search: an Augmented Ellipsoidal Region Technique (A-ERT) and a Distance-Based A-ERT, along with some variants, that restrict the search for an optimal path between a given origin and destination to more promising subsets of the network. This speeds up computation without sacrificing optimality. We also incorporate an approach-dependent delay estimation function and, in concert with a search-tree level-based technique, derive a total estimated travel time that is used as a key to prioritize node selections or to sort elements in the heap. As soon as we reach the destination node while it is within some p% of the minimum key value of the heap, we terminate the search. We name the versions of PSP and HP-Dijkstra that employ this method the Early Terminated PSP (ET-PSP) and Early Terminated Heap-Dijkstra (ETHP-Dijkstra) algorithms. All of these procedures are compared with the original Route Planner Module within TRANSIMS, which is implemented in the Linux operating system, using C++ along with the g++ GNU compiler. Extensive computational testing has been conducted using available data from the Portland, Oregon, and Blacksburg, Virginia, transportation networks to investigate the efficacy of the developed procedures. In particular, we have tested twenty-five different combinations of network curtailment and algorithmic strategies on three test networks: the Blacksburg-light, the Blacksburg-full, and the BigNet network. The results indicate that the Heap-Dijkstra algorithm implementations are much faster than the PSP algorithmic approaches for solving the underlying problem exactly.
Furthermore, among the curtailment schemes, ETHP-Dijkstra with p = 5% yields the best overall results, producing solutions within 0.37-1.91% of optimality while decreasing CPU effort by 56.68% on average, as compared with applying the best available exact algorithm. The second part of this dissertation is concerned with the Classification and Regression Tree (CART) algorithm and its application to the Activity Generator Module of TRANSIMS. The CART algorithm has been widely used in various contexts by transportation engineers and planners to correlate a set of independent household demographic variables with certain dependent activity or travel-time variables. However, the algorithm lacks an automated mechanism for deriving classification trees based on optimizing specified objective functions and handling desired side-constraints that govern the structure of the tree and the statistical and demographic nature of its leaf nodes. Using a novel set-partitioning formulation, we propose new tree development and, more importantly, optimal pruning strategies to accommodate the consideration of such objective functions and side-constraints, and we establish the theoretical validity of our approach. This general enhancement of the CART algorithm is then applied to the Activity Generator Module of TRANSIMS. Related computational results are presented using real data pertaining to the Portland, Oregon, and Blacksburg, Virginia, transportation networks to demonstrate the flexibility and effectiveness of the proposed approach in classifying data, as well as to examine its numerical performance. The results indicate that a variety of objective functions and constraints can be readily accommodated to efficiently control the structural information captured by the developed classification tree, as desired by the planner or analyst, depending on the scope of the application at hand.
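A rough sketch of the search idea underlying these algorithms may help fix intuition: Dijkstra-style label setting over states that remember the node via which each node was approached, with time- and approach-dependent arc travel times. This is an illustrative reconstruction, not the dissertation's code; the full label-constraint (mode-sequence) machinery is reduced to a simple mode whitelist, and FIFO travel-time functions are assumed so that label setting remains valid.

```python
import heapq
import itertools

def td_ald_shortest_path(arcs, origin, dest, t0, allowed_modes):
    """Approach-dependent, time-dependent shortest path (sketch).

    arcs: dict tail -> list of (head, mode, ttf), where
          ttf(arrival_time_at_tail, approach_node) gives the arc
          traversal time; the approach_node argument models
          turn penalties.
    """
    tie = itertools.count()                     # tie-breaker for the heap
    best = {(origin, None): t0}                 # state = (node, approached_via)
    heap = [(t0, next(tie), origin, None)]
    while heap:
        t, _, node, via = heapq.heappop(heap)
        if node == dest:
            return t                            # earliest arrival time
        if t > best.get((node, via), float("inf")):
            continue                            # stale heap entry
        for head, mode, ttf in arcs.get(node, []):
            if mode not in allowed_modes:       # simplified label constraint
                continue
            t_new = t + ttf(t, via)             # time- and approach-dependent
            if t_new < best.get((head, node), float("inf")):
                best[(head, node)] = t_new
                heapq.heappush(heap, (t_new, next(tie), head, node))
    return None                                 # destination unreachable
```

The early-termination variants described above would stop this loop as soon as the destination's key is within p% of the minimum key in the heap, trading a bounded optimality loss for speed.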
- Branch-and-Price Method for Stochastic Generalized Assignment Problem, Hospital Staff Scheduling Problem and Stochastic Short-Term Personnel Planning Problem. Kim, Seon Ki (Virginia Tech, 2009-03-05). The work presented in this dissertation has been focused on exploiting the branch-and-price (BNP) method for the solution of various stochastic mixed-integer programming problems (MIPs). In particular, we address the stochastic generalized assignment problem (SGAP), a hospital staff scheduling problem (HSSP), a stochastic hospital staff scheduling problem (SHSSP), and a stochastic short-term personnel planning problem (SSTPP). The BNP method has been developed in concert with the dual stabilization technique and other enhancements of this method for each of these problems. In view of the excessive number of scenarios that arise for these problems, we also implement the Monte Carlo method within the BNP scheme. The superiority of the BNP-based method over the branch-and-cut (BNC) method is demonstrated for all of these problems. The first problem that we address is the SGAP, for which the processing time of a job on a machine is assumed to be stochastic. Even though the generalized assignment problem (GAP) has been solved using the BNP method, no study has been reported in the literature on the use of the BNP method for the solution of the SGAP. Our work has been motivated by the desire to fill this gap. We begin by showing that it is better to solve the SGAP as a stochastic program than by using the expected values of the times required to process the jobs on the machines. Then, we show that the stochastic model of the SGAP is a complete recourse model, a useful property that permits the first-stage decisions to produce feasible solutions for the recourse problems. We develop three BNP-based methods for the solution of the SGAP. The first of these is BNP-SGAP, which is a combination of branch-and-bound and column generation methods. The pricing problem of BNP-SGAP is separable with regard to each machine, and it is a multiple-constraint knapsack problem. The second method is BNP-SGAP implemented in concert with the dual stabilization technique (DST), designated BNPDST-SGAP. We have introduced a new DST by modifying the Boxstep method of Pigatti et al. [76], and we have shown that our method performs better than that of Pigatti et al. [76], resulting in over two-fold savings in CPU times on average. The third method that we develop for the solution of the SGAP is BNPDST-SGAP implemented with an advanced start to obtain an initial feasible solution. We use a greedy heuristic to obtain this solution; this heuristic is a modification of a similar method used for the knapsack problem, and it relies on the information available at a node of the underlying branch-and-bound tree. We have shown that this procedure obtains an initial feasible solution, if one exists at that node. We designate this method BNPDSTKP-SGAP. We have also developed a BNC method to solve the SGAP using CPLEX 9.0. We have compared the performances of the BNP and BNC methods on various problem instances obtained by varying the number of machines, the ratio of the number of machines to the number of jobs, the machine capacity, and the penalty cost per unit of extra resource required at each machine. Our results show that all BNP-based methods perform better than the BNC method, with the best performance obtained for BNPDSTKP-SGAP.
An issue with the scenario-based methods that we have employed for the solution of the SGAP is that the number of scenarios generally grows exponentially in the problem parameters, which gives rise to a large-size problem. To overcome the complexity caused by the presence of a large number of scenarios, we introduce the use of the Monte Carlo method (MCM) within the BNP scheme, designating this method BNPDSTKP-SGAP with MCM. It affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Replications of the subsets of scenarios are carried out until the objective function value satisfies a stopping criterion. We have established theoretical results for the use of the MCM. These pertain to determining unbiased estimates of: (i) lower and upper bounds on the "true" optimal objective function value, (ii) the "true" optimal solution, and (iii) the optimality gap. We have also provided the 100(1-α)% confidence interval on the optimality gap. Our experimental investigation has shown the efficacy of this method: it obtains almost optimal solutions, with the objective function value lying within 5% of the "true" optimal objective function value, while giving almost ten-fold savings in CPU time. Our experimentation has also revealed that an increase in the number of scenarios in each replication has a greater impact on the quality of the solution obtained than an increase in the number of replications. We have also observed the impact of a change in the variance of a processing time distribution on CPU time. As expected, the optimal objective function value increases with increasing processing-time variability. Also, by comparing the results with the expected-value solution, it is observed that the greater the variability in the data, the better it is to use the stochastic program. The second problem that we study is the hospital staff scheduling problem. We address the following three versions of this problem: HSSP (General): implementation of a schedule incorporating the four principal elements, namely, surgeons, operations, operating rooms, and operation times; HSSP (Priority): inclusion of priority for some surgeons over other surgeons regarding the use of the facility in HSSP (General); HSSP (Pre-arranged): implementation of a completely pre-fixed schedule for some surgeons. The consideration of priority among the surgeons mimics reality. Our BNP method for the solution of these problems is similar to that for the SGAP, except for the following: (i) a feasible solution at a node is obtained with no additional assignment, i.e., it consists of the assignments made in the preceding nodes of that node in the branch-and-bound tree; (ii) the columns with positive reduced cost are candidates for augmentation in the CGM; and (iii) a new branching-variable selection strategy is introduced, which selects as the branching variable a fractional variable whose fixing forces the largest number of variables to either 0 or 1. The priority problem is separable in the surgeons. The results of our experimentation have shown the efficacy of the BNP-based method for the solution of each HSSP, as it takes advantage of the inherent structure of each of these problems. We have also compared their performances with that of the BNC method developed using CPLEX.
For the formulations HSSP (General), HSSP (Priority), and HSSP (Pre-arranged), the BNP method gives better results than the BNC method for 22 out of 30, 29 out of 34, and 20 out of 32 experiments, respectively. Furthermore, while the BNC method fails to obtain an optimal solution for 15 experiments, the BNP method obtains optimal solutions for all 96 experiments conducted. Thus, the BNP method consistently outperforms the BNC method for all of these problems. The third problem that we have investigated in this study is the stochastic version of the HSSP, designated the Stochastic HSSP (SHSSP), in which the operation times are assumed to be stochastic. We have introduced a formulation for this problem, designated SHSSP2 (General), which allows for overlapping of schedules for surgeons and operating rooms, and also allows a surgeon to be assigned to perform an operation that takes less than a pre-arranged operation time, in each case incurring appropriate penalty costs. A comparison of the solution of SHSSP2 (General) and its value with those obtained by using expected values (the corresponding problem is designated Expected-SHSSP2 (General)) reveals that Expected-SHSSP2 (General) may end up with inferior and infeasible schedules. We show that the recourse model for SHSSP2 (General) is a relatively complete recourse model. Consequently, we use the Monte Carlo method (MCM) to reduce the complexity of solving SHSSP2 (General) by considering fewer scenarios. We employ the branch-and-cut (BNC) method in concert with the MCM for solving SHSSP2 (General). The solution obtained is evaluated using the tolerance ratio, closeness to optimality, length of confidence interval, and CPU time. The MCM substantially reduces computational effort while producing almost optimal solutions and small confidence intervals. We have also considered a special case of SHSSP2 (General), which allows no overlapping schedules for surgeons and operating rooms and assigns exactly the same operation time to each assignment under each scenario, and we designate it SHSSP2 (Special). With this, we consider another formulation that relies on the longest operation time among all scenarios for each assignment of a surgeon to an operation in order to avoid scheduling conflicts, and we designate this problem SHSSP (Longest). We show SHSSP (Longest) to be equivalent to a deterministic HSSP, designated HSSP (Equivalent), and we further prove it to be equivalent to SHSSP (General) in terms of the optimal objective function value and the optimal assignments of operations to surgeons. The schedule produced by HSSP (Equivalent) does not allow any overlap among the operations performed in an operating room. That is, a new operation cannot be performed if a previous operation scheduled in that room takes longer than expected. However, the schedule generated by HSSP (Equivalent) may turn out to be a conservative one, and may end up with voids due to unused resources in case an operation in an operating room is completed earlier than the longest time allowed. Nevertheless, the schedule is still feasible. In such a case, the schedule can be left-shifted, if possible, because the scenarios are by then revealed. Moreover, such voids could be used to perform other procedures (e.g., emergency operations) that are outside the scope of the SHSSP addressed here. Besides, such a schedule can provide useful guidelines for planning resources ahead of time.
The fourth problem that we have addressed in this dissertation is the stochastic short-term personnel planning problem, designated the Stochastic STPP (SSTPP). This problem arises from the need to find appropriate temporary contractors (workers) to perform requisite jobs. We incorporate uncertainty in the processing time or amount of resource required by a contractor to perform a job. Contrary to the SGAP, the recourse model for this problem is not a relatively complete recourse model. As a result, we cannot employ the MCM for the solution of this problem, as it may give rise to an infeasible solution. The BNP method for the SSTPP employs the DST and the advanced start procedure developed for the SGAP, and, due to extra constraints and the presence of binary decision variables, we use the branching-variable selection strategy developed for the HSSP models. Because of the distinctive properties of the SSTPP, we have introduced a new node selection strategy. We have compared the performances of the BNC-based and BNP-based methods based on the CPU time required. The BNP method outperforms the BNC method in 75% of the experiments conducted, and the BNP method is found to be quite stable, with smaller variance in CPU times than that for the BNC method. It affords the solution of difficult problems in smaller CPU times than those required by the BNC method.
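The Monte Carlo scheme used across these stochastic models follows the general sample-average-approximation pattern for minimization problems: the average of sampled optimal values estimates a lower bound on the "true" optimum, while evaluating a fixed feasible solution on fresh scenarios estimates an upper bound. A minimal sketch under those assumptions; `solve_sampled`, `evaluate`, `sample_scenario`, and `candidate` are hypothetical placeholders for the user's own routines and solution:

```python
import statistics

def mcm_gap_estimate(solve_sampled, evaluate, sample_scenario,
                     candidate, n_scenarios, n_replications, alpha=0.05):
    """Estimate the optimality gap of `candidate` with a one-sided
    100(1-alpha)% confidence limit (SAA-style sketch);
    requires n_replications >= 2."""
    lower, upper = [], []
    for _ in range(n_replications):
        scen = [sample_scenario() for _ in range(n_scenarios)]
        lower.append(solve_sampled(scen))        # E[sampled optimum] <= true optimum
        upper.append(evaluate(candidate, scen))  # any feasible solution: upper bound
    gap = statistics.mean(upper) - statistics.mean(lower)
    se = (statistics.variance(upper) / n_replications
          + statistics.variance(lower) / n_replications) ** 0.5
    z = statistics.NormalDist().inv_cdf(1 - alpha)
    return gap, gap + z * se                     # point estimate, upper CI limit
```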
- Cognitive RF Front-end Control. Imana, Eyosias Yoseph (Virginia Tech, 2014-12-09). This research addresses the performance degradation in receivers due to poor selectivity. Poor selectivity is expected to be a primary limitation on the performance of Dynamic-Spectrum-Access (DSA) and millimeter wave (mmWave) technologies. Both DSA and mmWave are highly desired technologies because they can address the spectrum-deficit problem that is currently challenging the wireless industry. Accordingly, addressing poor receiver selectivity is necessary to expedite the adoption of these technologies into the mainstream of wireless. This research develops two receiver design concepts to enhance the performance of poorly-selective receivers. The first concept is called cognitive RF front-end control (CogRF). CogRF operates by cognitively controlling the local-oscillator and sampling frequencies in receivers. This research shows that CogRF can fulfil the objective of pre-selectors by minimizing the effects of weak and moderately-powered neighboring-channel signals on the desired signal. This research shows that CogRF can be an alternative to high-performance pre-selectors, and hence CogRF is a viable architecture for implementing reliable DSA and mmWave receivers. The theoretical design and hardware implementation of a cognitive engine and a spectrum sensor for CogRF are reported in this dissertation. Measurement results show that CogRF significantly reduces the rate of communication outage due to interference from neighboring-channel signals in poorly-selective receivers. The results also indicate that CogRF can enable a poorly-selective receiver to behave like a highly-selective receiver. The second receiver design concept addresses very strong neighboring-channel signals. The performance of poorly selective receivers can easily suffer due to a strong, unfiltered neighboring-channel signal; such a signal is likely for a DSA radio operating in military radar bands. Traditionally, strong neighboring signals are addressed using an Automatic-Gain-Control (AGC) that attempts to accommodate the strong received signal within the dynamic range of the receiver. However, this technique potentially desensitizes the receiver because it sacrifices the Signal-to-Noise-Ratio (SNR) of the desired signal. This research proposes the use of an auxiliary receive path to address strong neighboring-channel signals with minimal penalty on the SNR of the desired signal. Through simulation-based analysis and hardware-based measurements, this research shows that the proposed technique can provide significant improvement in the neighboring-channel-interference handling capability of the receiver.
- Collection-and-Delivery-Points: A Variation on a Location-Routing Problem. Savage, Laura Elizabeth (Virginia Tech, 2019-09-20). Missed deliveries are a major issue for package carriers and a source of great hassle for their customers. Either the carrier attempts to redeliver the package, incurring the additional expense of visiting the same house up to three times, or it leaves the package on the doorstep, vulnerable to package thieves. In this dissertation, a system of collection-and-delivery-points (CDPs) is proposed to improve customer service and reduce carrier costs. A CDP is a place, either in an existing business or at a new location, where the carrier drops any missed deliveries and customers can pick up the packages at their convenience. To examine the viability of a CDP system in North America, a variation on a location-routing problem (LRP) was created: a mixed-integer programming model called the CDP-LRP. Unlike standard LRPs, the CDP-LRP takes into account both the delivery-truck route distance and the direct customer travel to the CDPs. Also, the facilities being placed are not located at the beginnings and ends of the truck routes, but are stops along the routes. After testing, it became clear that, because of the size and complexity of the problem, the CDP-LRP cannot be solved exactly in a reasonable amount of time. Heuristics developed for the standard LRP cannot be applied to the CDP-LRP because of the differences between the models. Therefore, three heuristics were created to approximate the solution to the CDP-LRP, each with two different embedded modified vehicle routing problem (VRP) algorithms, the Clarke-Wright and the Sweep, modified to handle the additional restrictions imposed by the CDPs. The first is an improvement heuristic, in which each closed CDP is tested as a replacement for each open CDP, and the move that creates the greatest savings is implemented. The second begins with every CDP open and closes them one at a time, while the third does the reverse, beginning with only one open CDP and then opening the others one by one. In each case, a penalty is applied if the customer travel distance is too long. Each heuristic was tested for each possible number of open CDPs, and the least expensive solution was chosen as the best. Each heuristic and VRP algorithm combination was tested using three delivery failure rates and different data sets: three small data sets drawn from the VRP literature, and randomly generated clustered and uniformly distributed data sets with three different numbers of customers. OpenAll and OpenOne produced better solutions than Replacement in most instances, and the Sweep algorithm outperformed the Clarke-Wright in both solution quality and time in almost every test. To judge the quality of the heuristic solutions, the results were compared with those of a simple locate-first, route-second sequential algorithm that represents the way the decision would commonly be made in industry today. The CDPs were located using a simple facility location model, and the delivery routes were then created with the Sweep algorithm. These results were mixed: for the uniformly distributed data sets, if the customer travel penalty threshold and customer density are low enough, the heuristics outperform the sequential algorithm.
For the clustered data sets, the sequential algorithm produces solutions as good as or slightly better than the heuristics, because the location of the potential CDPs inside the clusters means that the penalty has less impact, and the addition of more open CDPs has less effect on the delivery route distances. The heuristic solutions were also compared with a second value, the route costs incurred by the carrier under the current system of redeliveries, calculated by placing additional customers in the routes and running the Sweep algorithm, to judge the potential savings that could be realized by implementing a CDP system in North America. Though in some circumstances the current system is less expensive, depending on the geographic distribution of the customers and the delivery failure rate, in other circumstances the cost savings to the carrier could be as high as 27.1%. Though the decision of whether or not to set up a CDP system in an area would need to be made on a case-by-case basis, the results of this study suggest that such a system could be successful in North America.
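The embedded Sweep routine referenced above is the classic polar-angle construction: order customers by angle around the depot and cut a new route whenever vehicle capacity would be exceeded. A minimal sketch, without the CDP-related restrictions the dissertation adds:

```python
import math

def sweep_routes(depot, customers, demands, capacity):
    """customers: dict name -> (x, y); demands: dict name -> demand."""
    dx, dy = depot
    order = sorted(customers, key=lambda c: math.atan2(customers[c][1] - dy,
                                                       customers[c][0] - dx))
    routes, route, load = [], [], 0.0
    for c in order:
        if route and load + demands[c] > capacity:
            routes.append(route)        # close the full route
            route, load = [], 0.0
        route.append(c)
        load += demands[c]
    if route:
        routes.append(route)
    return routes
```

Each angular cluster would then typically be sequenced (e.g., by a TSP heuristic) to obtain the final delivery routes.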
- The Design and Modeling of Ultra-Wideband Position-Location Networks. Venkatesh, Swaroop (Virginia Tech, 2007-02-20). Impulse-based Ultrawideband (UWB) is a form of signaling which uses streams of pulses of very short duration, typically on the order of a nanosecond. Impulse-based UWB systems possess the ability to fuse accurate position-location with low-data-rate communication, and provide covertness for tactical applications and robustness in dense multipath propagation environments. These features can be leveraged in the design of wireless ad hoc position-location networks (PoLoNets) for accurate location tracking and monitoring where GPS is not available, especially indoors. Location information is sequentially propagated through a network of reference nodes in order to create a framework for the tracking of mobile nodes, as well as a multi-hop message-passing infrastructure between mobile nodes and control nodes located outside the area of deployment. The applications of such networks include the location and command-and-control of fire-fighters in emergency scenarios, the location of military personnel deployed in urban or indoor environments, and the guidance of robots through large multi-room indoor environments. The main objective of this dissertation is to derive design principles, techniques, and analytical models for UWB PoLoNets that are useful in the development of practical solutions. Some of the fundamental obstacles to obtaining accurate location information in indoor environments are non-line-of-sight (NLOS) signal propagation, limited connectivity between nodes, and the propagation of localization inaccuracies when using sequential estimation approaches in ad hoc scenarios. Several techniques and algorithms that mitigate these effects, thereby allowing the design of PoLoNets with the requisite localization accuracy, are presented. Although these techniques are developed from the perspective of a UWB physical layer, the majority are applicable to generic PoLoNets.
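For orientation, the basic position fix that a range-based node computes can be written as a linearized least-squares problem over time-of-arrival ranges to known reference nodes. This sketch deliberately omits the NLOS mitigation and sequential location-propagation techniques that are the dissertation's actual contributions:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """anchors: (k, 2) array of reference positions, k >= 3;
    ranges: (k,) measured distances.  Subtracting the first range
    equation from the others linearizes the system."""
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0] ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos                          # estimated (x, y)
```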
- Discrete and Continuous Nonconvex Optimization: Decision Trees, Valid Inequalities, and Reduced Basis Techniques. Dalkiran, Evrim (Virginia Tech, 2011-03-31). This dissertation addresses the modeling and analysis of a strategic risk management problem via a novel decision tree optimization approach, as well as development of enhanced Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for solving nonconvex polynomial programming problems, through the generation of valid inequalities and reduced representations, along with the design and implementation of efficient algorithms. We first conduct a quantitative analysis for a strategic risk management problem that involves allocating certain available failure-mitigating and consequence-alleviating resources to reduce the failure probabilities of system safety components and subsequent losses, respectively, together with selecting optimal strategic decision alternatives, in order to minimize the risk or expected loss in the event of a hazardous occurrence. Using a novel decision tree optimization approach to represent the cascading sequences of probabilistic events as controlled by key decisions and investment alternatives, the problem is modeled as a nonconvex mixed-integer 0-1 factorable program. We develop a specialized branch-and-bound algorithm in which lower bounds are computed via tight linear relaxations of the original problem that are constructed by utilizing a polyhedral outer-approximation mechanism in concert with two alternative linearization schemes having different levels of tightness and complexity. We also suggest three alternative branching schemes, each of which is proven to guarantee convergence to a global optimum for the underlying problem. Extensive computational results and sensitivity analyses are presented to provide insights and to demonstrate the efficacy of the proposed algorithm. In particular, our methodology outperformed the commercial software BARON (Version 8.1.5), yielding a more robust performance along with an 89.9% savings in effort on average. Next, we enhance RLT-based LP relaxations for polynomial programming problems by developing two classes of valid inequalities: v-semidefinite cuts and bound-grid-factor constraints. The first of these uses concepts derived from semidefinite programming. Given an RLT relaxation, we impose positive semidefiniteness on suitable dyadic variable-product matrices, and correspondingly derive implied semidefinite cuts. In the case of polynomial programs, there are several possible variants for selecting such dyadic variable-product matrices for imposing positive semidefiniteness restrictions in order to derive implied valid inequalities, which leads to a new class of cutting planes that we call v-semidefinite cuts. We explore various strategies for generating such cuts within the context of an RLT-based branch-and-cut scheme, and exhibit their relative effectiveness towards tightening the RLT relaxations and solving the underlying polynomial programming problems, using a test-bed of randomly generated instances as well as standard problems from the literature. Our results demonstrate that these cutting planes achieve a significant tightening of the lower bound in contrast with using RLT as a stand-alone approach, thereby enabling an appreciable reduction in the overall computational effort, even in comparison with the commercial software BARON.
Empirically, our proposed cut-enhanced algorithm reduced the computational effort relative to these two approaches (stand-alone RLT and BARON) by 44% and 77%, respectively, over a test-bed of 60 polynomial programming problems. As a second cutting-plane strategy, we introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on the effect of these valid inequalities in driving the discrepancies between RLT variables and their associated nonlinear products to zero. To preserve computational expediency while promoting efficiency, we propose certain concurrent and sequential cut generation routines and various grid-factor selection rules. The results indicate a significant tightening of lower bounds, which yields an overall reduction in computational effort of 21% for solving a test-bed of 15 challenging polynomial programming problems to global optimality in comparison with the basic RLT procedure, and over a 100-fold speed-up in comparison with the commercial software BARON. Finally, we explore equivalent, reduced-size RLT-based formulations for polynomial programming problems. Utilizing a basis partitioning scheme for an embedded linear equality subsystem, we show that a strict subset of the RLT defining equalities implies the remaining ones. Applying this result, we derive significantly reduced RLT representations and develop certain coherent associated branching rules that assure convergence to a global optimum, along with static as well as dynamic basis selection strategies to implement the proposed procedure. In addition, we enhance the RLT relaxations with v-semidefinite cuts, which are empirically shown to further improve the relative performance of the reduced RLT method over the usual RLT approach. Computational results presented using a test-bed of 10 challenging polynomial programs to evaluate the different reduction strategies demonstrate that our best-performing approach achieved more than a four-fold improvement in computational effort in comparison with both the commercial software BARON and a recently developed open-source code, Couenne, for solving nonconvex mixed-integer nonlinear programming problems. Moreover, our approach robustly solved all the test cases to global optimality, whereas BARON and Couenne were jointly able to solve only a single instance to optimality within the set computational time limit, leaving an unresolved average optimality gap of 260% and 437%, respectively, for the other nine instances. This dissertation makes several broader contributions to the field of nonconvex optimization, including factorable, nonlinear mixed-integer programming problems. The proposed decision tree optimization framework can serve as a versatile management tool in the arenas of homeland security and health-care. Furthermore, we have advanced the frontier for tackling formidable nonconvex polynomial programming problems that arise in emerging fields such as signal processing, biomedical engineering, materials science, and risk management. An open-source software package using the proposed reduced RLT representations, semidefinite cuts, bound-grid-factor constraints, and range reduction strategies is currently under preparation.
In addition, the different classes of challenging polynomial programming test problems that are utilized in the computational studies conducted in this dissertation have been made available for other researchers via the Web-page http://filebox.vt.edu/users/dalkiran/website/. It is our hope and belief that the modeling and methodological contributions made in this dissertation will serve society in a broader context through the myriad of widespread applications they support.
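For readers unfamiliar with RLT, the simplest (level-1, two-variable) instance of the bound-factor products that the bound-grid-factor constraints generalize is the following: for a bilinear term xy with bounds on x and y, multiply the nonnegative bound factors pairwise and replace the product xy by a new variable X. This is only the base case of the machinery described above:

```latex
\text{Given } l_x \le x \le u_x,\; l_y \le y \le u_y,\; X \triangleq xy: \\
(x-l_x)(y-l_y) \ge 0 \;\Rightarrow\; X \ge l_y x + l_x y - l_x l_y \\
(u_x-x)(u_y-y) \ge 0 \;\Rightarrow\; X \ge u_y x + u_x y - u_x u_y \\
(x-l_x)(u_y-y) \ge 0 \;\Rightarrow\; X \le u_y x + l_x y - l_x u_y \\
(u_x-x)(y-l_y) \ge 0 \;\Rightarrow\; X \le l_y x + u_x y - u_x l_y
```

These four inequalities form the familiar McCormick envelope; RLT extends the same idea to higher-degree products, and the grid-factor constraints above refine it by using intermediate grid points between the bounds.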
- Enhanced Air Transportation Modeling Techniques for Capacity Problems. Spencer, Thomas Louis (Virginia Tech, 2016-09-02). Effective and efficient air transportation systems are crucial to a nation's economy and connectedness. These systems involve capital-intensive facilities and equipment and move millions of people and tonnes of freight every day. As air traffic continues to increase, the systems necessary to ensure safe and efficient operation will grow more and more complex. Hence, it is imperative that air transport analysts are equipped with the best tools to properly predict and respond to expected air transportation operations. This dissertation aims to improve on the tools currently available to air transportation analysts, while offering new ones. Specifically, it offers the following: 1) a model for predicting arrival runway occupancy times (AROT); 2) a model for predicting departure runway occupancy times (DROT); and 3) a flight planning model. It also explores the use of unmanned aerial vehicles for providing wireless communications services. For the predictive models of AROT and DROT, we fit hierarchical Bayesian regression models to the data, grouped by aircraft type, using airport physical and aircraft operational parameters as the regressors. Recognizing that many existing air transportation models require distributions of AROT and DROT, Bayesian methods are preferred since their outputs are distributions that can be input directly into air transportation modeling programs. Additionally, we show how analysts can decouple AROT and DROT predictions from the traditional 4 or 5 groupings of aircraft currently in use. Lastly, for flight planning, we present a 2-D model using presently available wind data that provides wind-optimal flight routings. We improve over current models by allowing free flight unconnected to pre-existing airways and by offering finer resolution than the current 2.5-degree norm.
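The appeal of the hierarchical (grouped-by-aircraft-type) Bayesian approach can be illustrated with a toy partial-pooling computation: each type's mean AROT is shrunk toward the fleet-wide mean in proportion to how little data the type has. This is only a caricature of the full regression models described above; `tau2`, the between-type variance, is a hypothetical fixed hyperparameter here rather than an estimated quantity:

```python
import statistics

def shrunken_group_means(arot_by_type, tau2):
    """arot_by_type: dict aircraft_type -> list of observed AROTs (s)."""
    grand = statistics.mean(t for obs in arot_by_type.values() for t in obs)
    out = {}
    for typ, obs in arot_by_type.items():
        s2 = statistics.variance(obs) if len(obs) > 1 else tau2
        w = tau2 / (tau2 + s2 / len(obs))   # weight on the type's own data
        out[typ] = w * statistics.mean(obs) + (1 - w) * grand
    return out
```

A full hierarchical model would estimate tau2 from the data and, as the abstract notes, yield entire posterior distributions of AROT and DROT rather than point estimates.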
- A Framework for Data Quality for Synthetic Information. Gupta, Ragini (Virginia Tech, 2014-07-24). Data quality has been an area of increasing interest for researchers in recent years due to the rapid emergence of 'big data' processes and applications. In this work, the data quality problem is viewed from the standpoint of synthetic information. Based on the structure and complexity of synthetic data, we identified the need for a data quality framework specific to it. This thesis presents that framework, along with implementation details and results from a large synthetic dataset to which the developed testing framework was applied. A formal conceptual framework was designed for assessing the data quality of synthetic information. This involves developing analytical methods and software for assessing data quality for synthetic information, and it includes dimensions of data quality that check the inherent properties of the data as well as evaluate it in the context of its use. The framework is a software framework designed with software engineering principles such as scalability, generality, integrability, and modularity in mind. A data abstraction layer has been introduced between the synthetic data and the tests. This abstraction layer has multiple benefits over direct access to the data by the tests: it decouples the tests from the data so that the details of storage and implementation are kept hidden from the user. We have implemented data quality measures for several quality dimensions: accuracy and precision, reliability, completeness, consistency, and validity. The particular tests and quality measures implemented span a range from low-level syntactic checks to high-level semantic quality measures. In each case, in addition to the results of the quality measure itself, we also present results on the computational performance (scalability) of the measure.
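The decoupling the abstract describes, tests that see records through an abstraction layer rather than touching storage directly, might be organized roughly as follows. All class and method names here are illustrative, not the thesis's actual API:

```python
from abc import ABC, abstractmethod

class SyntheticDataSource(ABC):
    """Abstraction layer: hides storage and implementation details."""
    @abstractmethod
    def records(self):
        """Yield records as dicts, wherever and however they are stored."""

class QualityTest(ABC):
    @abstractmethod
    def run(self, source: SyntheticDataSource) -> float:
        """Return a quality score in [0, 1]."""

class CompletenessTest(QualityTest):
    """Fraction of records with no missing required fields."""
    def __init__(self, required_fields):
        self.required_fields = required_fields

    def run(self, source):
        total = ok = 0
        for rec in source.records():
            total += 1
            ok += all(rec.get(f) is not None for f in self.required_fields)
        return ok / total if total else 1.0
```

A new storage backend then only requires a new `SyntheticDataSource` subclass, and a new quality dimension only a new `QualityTest`, which is the modularity and integrability the framework aims for.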
- General Aviation Demand Forecasting Models and a Microscopic North Atlantic Air Traffic Simulation Model. Li, Tao (Virginia Tech, 2015-01-06). This thesis is focused on two topics. The first topic is General Aviation (GA) demand forecasting models. The contributions to this topic are threefold: 1) we calibrated an econometric model to investigate the impact of fuel price on the utilization rate of GA piston-engine aircraft; 2) we adopted a logistic model to identify the relationship between fuel price and an aircraft's probability of staying active; and 3) we developed an econometric model to forecast airport-level itinerant and local GA operations. Our calibration results are compared with those reported in the literature. Demand forecasts are made with these models and compared with those prepared by the Federal Aviation Administration. The second topic is modeling the air traffic in the Organized Track System (OTS) over the North Atlantic. We developed a discrete-time event model to simulate the air traffic that uses the OTS. We proposed four new operational procedures to improve flight operations for the OTS. Two procedures aim to improve the OTS assignments in the OTS entry area, and the other two aim to benefit flights once they are inside the OTS. The four procedures are implemented in the simulation model and their benefits are analyzed. Several implementation issues are discussed and recommendations are given.
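The logistic model in contribution 2) has the standard form: the probability that an aircraft stays active is a logistic function of fuel price. A one-line sketch, with coefficients b0 and b1 standing in for values that would be estimated from fleet data (not the dissertation's calibrated values):

```python
import math

def p_active(fuel_price, b0, b1):
    """P(aircraft stays active | fuel price), logistic form."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * fuel_price)))
```

With b1 < 0, the estimated probability of remaining active falls as fuel price rises, which is the relationship the calibration investigates.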
- Incentive Mechanism Design for Systems with Many Agents: A Multiscale Decision Theory Approach. Kulkarni, Aditya Umesh (Virginia Tech, 2018-08-14). Incentives are an effective mechanism to align the interests of decision-makers. For example, employers use incentives to motivate their employees to take actions that benefit both the employers and the employees. Incentives also play a role in the interaction between firms, for example in a supply chain network. The prevalent approach to analyzing interactions between decision-makers is through principal-agent models. Due to mathematical intractability, the majority of these models are restricted to the interaction between two decision-makers. However, modern organizations have many decision-makers that interact with each other in a network. Therefore, effective incentive mechanisms for systems with many decision-makers (agents) must account for the numerous network interdependencies. The objective of this dissertation is to design incentive mechanisms for systems with many agents, with a focus on teams and multi-firm networks. Methodologically, our approach applies and builds upon multiscale decision theory (MSDT). MSDT can effectively and efficiently model the interdependencies between decision-makers and their optimal response to incentives. This dissertation consists of three parts. The first part focuses on incentives in teams, where multiple subordinates work under a single supervisor. The contribution of the team model to MSDT is the introduction of continuous decision variables; prior MSDT models have only used discrete decision variables. In the second and third parts of this dissertation, we analyze a network of collaborating firms in a systems engineering project and focus on verification decisions. We introduce a belief-based model, which is a novel approach for both MSDT and verification modeling in systems engineering. Using MSDT, we determine how incentives can be used by a contractor to motivate a subcontractor to verify its design when the subcontractor prefers not to do so. We extend this two-firm model to a general multi-firm network model for verification coordination and incentivization. This firm network resembles the inter-firm collaboration present in most large-scale systems engineering projects. Through better aligned verification activities, system-wide verification costs decrease, while the reliability of the final system improves.
- Integrated Aircraft Fleeting, Routing, and Crew Pairing Models and Algorithms for the Airline Industry. Shao, Shengzhi (Virginia Tech, 2013-01-23). The air transportation market has been growing steadily for the past three decades since the airline deregulation in 1978. With competition also becoming more intense, airline companies have been trying to enhance their market shares and profit margins by composing favorable flight schedules and by efficiently allocating their resources of aircraft and crews so as to reduce operational costs. In practice, this is achieved based on demand forecasts and resource availabilities through a structured airline scheduling process that is comprised of four decision stages: schedule planning, fleet assignment, aircraft routing, and crew scheduling. The outputs of this process are flight schedules along with associated assignments of aircraft and crews that maximize the total expected profit. Traditionally, airlines deal with these four operational scheduling stages in a sequential manner. However, there exist obvious interdependencies among these stages, so that restrictive solutions from preceding stages are likely to limit the scope of decisions for succeeding stages, thus leading to suboptimal results and even infeasibilities. To overcome this drawback, we first study the aircraft routing problem, and develop some novel modeling foundations based on which we construct and analyze an integrated model that incorporates fleet assignment, aircraft routing, and crew pairing within a single framework. Given a set of flights to be covered by a specific fleet type, the aircraft routing problem (ARP) determines a flight sequence for each individual aircraft in this fleet, while incorporating specific considerations of minimum turn-time and maintenance checks, as well as restrictions on the total accumulated flying time, the total number of takeoffs, and the total number of days between two consecutive maintenance operations. This stage is significant to airline companies as it directly assigns routes and maintenance breaks for each aircraft in service. Most approaches for solving this problem adopt set partitioning formulations that include exponentially many variables, thus requiring the design of specialized column generation or branch-and-price algorithms. In this dissertation, however, we present a novel compact polynomially sized representation for the ARP, which is then linearized and lifted using the Reformulation-Linearization Technique (RLT). The resulting formulation remains polynomial in size, and we show that it can be solved very efficiently by commercial software without complicated algorithmic implementations. Our numerical experiments using real data obtained from United Airlines demonstrate significant savings in computational effort; for example, for a daily network involving 344 flights, our approach required only about 10 CPU seconds for deriving an optimal solution. We next extend Model ARP to incorporate its preceding and succeeding decision stages, i.e., fleet assignment and crew pairing, within an integrated framework. We formulate a suitable representation for the integrated fleeting, routing, and crew pairing problem (FRC), which accommodates a set of fleet types in a compact manner similar to that used for constructing the aforementioned aircraft routing model, and we generate eligible crew pairings on-the-fly within a set partitioning framework.
Furthermore, to better represent industrial practice, we incorporate itinerary-based passenger demands for different fare classes. The large size of the resulting model obviates a direct solution using off-the-shelf software; hence, we design a solution approach based on Benders decomposition and column generation using several acceleration techniques along with a branch-and-price heuristic for effectively deriving a solution to this model. In order to demonstrate the efficacy of the proposed model and solution approach and to provide insights for the airline industry, we generated several test instances using historical data obtained from United Airlines. Computational results reveal that the massively-sized integrated model can be effectively solved in reasonable times ranging from several minutes to about ten hours, depending on the size and structure of the instance. Moreover, our benchmark results demonstrate an average of 2.73% improvement in total profit (which translates to about 43 million dollars per year) over a partially integrated approach that combines the fleeting and routing decisions, but solves the crew pairing problem sequentially. This improvement is observed to accrue due to the fact that the fully integrated model effectively explores alternative fleet assignment decisions that better utilize available resources and yield significantly lower crew costs.
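At the core of any aircraft routing representation is the connection network: flight j can follow flight i on the same aircraft only if the stations match and the minimum turn time is respected. A minimal sketch of that feasibility logic, leaving out the maintenance, flying-time, and takeoff-count restrictions that the model above handles:

```python
def connection_arcs(flights, min_turn):
    """flights: list of dicts with keys 'id', 'origin', 'dest',
    'dep', 'arr' (times in minutes); returns feasible (i, j) pairs."""
    return [(i["id"], j["id"])
            for i in flights for j in flights
            if i["id"] != j["id"]
            and i["dest"] == j["origin"]
            and i["arr"] + min_turn <= j["dep"]]
```

A compact formulation can then be built over variables on such connection arcs instead of enumerating whole routes, which is the contrast with set-partitioning approaches drawn above.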
- Interference Avoidance based Underlay Techniques for Dynamic Spectrum Sharing. Menon, Rekha (Virginia Tech, 2007-04-30). Dynamic spectrum sharing (DSS) is a new paradigm for spectrum allocation that is expected to lead to more efficient spectrum usage and alleviate the spectrum scarcity that has been perceived in recent years. DSS refers to the opportunistic, dynamic, and uncoordinated use of the spectrum by multiple, possibly non-cooperating, systems. It allows bands which may be underutilized by incumbent or legacy systems to be shared by agile or cognitive radios on a "do no harm" basis. An ideal DSS technique is one which efficiently uses the allocated spectrum and maximizes the performance of the DSS network while causing no interference to the legacy radio system with which it coexists. We address this issue in our work by investigating desirable features for DSS with respect to the impact on a legacy radio system as well as the performance of a DSS network. It is found that "ideal" DSS techniques with respect to both objectives are characterized by the removal of the strongest interferers in the system and averaging of the remaining interference. This motivates the use of an interference avoidance (IA) based underlay technique for DSS. The performance benefit provided by this technique over an IA-based overlay technique is shown to increase with the transmission bandwidth available to the DSS system. It is also shown that this technique is more robust to inaccuracies in the system knowledge required for implementing IA. An example of an IA-based underlay technique is a spreading-sequence-based transmission scheme that employs sequence adaptation to avoid interference. We use game-theoretic tools to design such schemes for distributed or ad hoc networks. The designed schemes can also be used to avoid interfering with other agile or static radios. We then extend this work to Ultra-Wideband systems, which can maximally exploit the gains from the proposed scheme due to their large transmission bandwidths.
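A concrete instance of sequence adaptation for interference avoidance is the classic eigen-update: each user replaces its spreading sequence with the minimum-eigenvalue eigenvector of the interference-plus-noise covariance it observes. This sketch assumes unit transmit powers and synchronous users, and illustrates the family of IA schemes discussed above rather than the dissertation's specific game-theoretic design:

```python
import numpy as np

def ia_sequence_update(S, user, noise_var=1.0):
    """S: (N, K) float array whose columns are length-N spreading
    sequences; replaces column `user` with the least-interfered
    direction and returns S."""
    others = np.delete(S, user, axis=1)
    R = others @ others.T + noise_var * np.eye(S.shape[0])
    eigvals, eigvecs = np.linalg.eigh(R)     # eigenvalues in ascending order
    s_new = eigvecs[:, 0]                    # minimum-eigenvalue eigenvector
    S[:, user] = s_new / np.linalg.norm(s_new)
    return S
```

Cycling this update over the users monotonically reduces total squared correlation, which yields the strongest-interferer removal and interference-averaging behavior identified as desirable above.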
- Lot Streaming in Two-Stage Flow Shops and Assembly Systems. Mukherjee, Niloy Jeet (Virginia Tech, 2014-10-09). The research work presented in this dissertation relates to lot streaming in two-stage flow shops and assembly shops. Lot streaming refers to the process of splitting a production lot into sublots and then processing the sublots on different machines simultaneously, in an overlapping manner. Such a strategy allows finished material at each stage to be transferred downstream sooner than if production and transfer batches were restricted to be the same size. In the case when each sublot consists of just one item, a single-piece flow is obtained. Such a continuous flow is a key element of the Toyota Production System. However, single-piece flow increases the number of transfers and the total transportation cost (time). As a result, it may not be economically justifiable in many cases, and therefore material may have to be transferred in batches (called transfer batches, or sublots). Lot streaming addresses the problems of determining optimal sublot sizes for use in various machine environments and optimizes different performance measures. Given this relationship between lot streaming and the Toyota Production System, lot streaming can be considered a generalization of lean principles. In this dissertation, we first provide a comprehensive review of the existing literature related to lot streaming. We show that two-stage flow shop problems have been studied more frequently than other machine environments. In particular, single-lot two-machine flow shops have been very well researched, and efficient solution techniques have been discovered for a large variety of problems. While two-stage flow shop lot streaming problems have been studied extensively, we find that the existing literature assumes that production rates at each stage remain constant. Such an assumption is not valid when processing rates change, for example, due to learning. Learning here refers to the improvement in processing rates achieved due to experience gained from processing units. We consider the case when the phenomenon of learning affects processing and setup times in a two-stage flow shop processing a single lot, and when sublot-attached setup times exist. The decrease in unit processing time, or sublot-attached setup time, is given by Wright's learning curve. We find closed-form expressions or simple search techniques to obtain optimal sublot sizes that minimize the makespan when the effect of learning reduces processing times, sublot-attached setup times, or both. Then, we provide a general method to transform a large family of scheduling problems related to lot streaming in the presence of learning into their equivalent counterparts that are not influenced by learning. This transformation is valid for all integrable learning functions (including Wright's learning curve). As a result, a large variety of new problems involving learning can be solved using existing solution techniques. We then consider lot streaming in stochastic environments in the context of sourcing material. Such problems have been well studied in the literature related to lot streaming for cost-based objective functions when demand is continuous and processing times are deterministic, or for material sourcing problems when the time required to procure a lot is stochastic but independent of the lot size.
We extend this study to the case when the time required to produce a given quantity of products is stochastic and dependent on the number of units produced. We consider the case when two sublots are used, and also compare the performance of lot streaming with the case when each sublot is sourced from an independent supplier. Next, we address a new problem related to lot streaming in a two-stage assembly shop, where we minimize a weighted sum of material handling costs and makespan. We consider the case when several suppliers provide material to a single manufacturer, who then assembles units from the different suppliers into a single item. We assume deterministic, but not necessarily constant, lead times for each supplier, who may use lot streaming to provide material to the manufacturer. Lead times are defined as the length of the time interval between a supplier beginning to process material and the time when the first sublot is delivered to the manufacturer; subsequent sublots must be transported early enough that the manufacturer is not starved of material. A supplier may reduce this lead time by using lot streaming, but at an increased material handling cost. The decrease in lead time is also affected by other factors, such as lot-attached/detached setup times, transportation times, etc. We allow these factors to differ for each supplier and for each lot processed by the same supplier. We refer to this problem as the Assembly Lot Streaming Problem (ALSP). We show that the ALSP can be solved in two steps. The first step consists of solving several two-stage, single-lot, flow shop, makespan minimization problems; the solutions to these problems generate prospective sublot sizes, and solution methods outlined in the existing literature can be used to complete this step. The second step obtains the optimal number of sublots and the production sequence. For a given production sequence, this step can be executed in polynomial time; otherwise, the second-step problem is NP-hard, and integer programming formulations and decomposition-based methodologies are investigated for its solution. We make very limited assumptions regarding the handling cost and the relationship between supplier lead time and the number of sublots used. As a result, our solution methodology has a wide scope.
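Wright's learning curve, used above, makes the n-th unit's processing time t1 * n^b with b = log2(r), so a learning rate of r = 0.8 cuts unit time by 20% at each doubling of cumulative output. Below is a toy makespan evaluation for given sublot sizes on a two-machine flow shop with sublot-attached setups; it is purely illustrative (the dissertation derives closed-form optimal sublot sizes), and all parameter names are ours:

```python
import math

def unit_time(n, t1, rate):
    """Time for the n-th unit under Wright's learning curve."""
    return t1 * n ** math.log2(rate)

def makespan_two_stage(sublots, t1_m1, t1_m2, rate, setup=0.0):
    """Makespan of one lot split into `sublots` on machines 1 -> 2."""
    done1 = done2 = 0.0
    n1 = n2 = 0
    for s in sublots:
        done1 += setup + sum(unit_time(n1 + k, t1_m1, rate)
                             for k in range(1, s + 1))
        n1 += s
        done2 = max(done2, done1) + setup + sum(
            unit_time(n2 + k, t1_m2, rate) for k in range(1, s + 1))
        n2 += s
    return done2
```

The cumulative sums over `unit_time` also hint at the transformation result mentioned above: integrating the learning function converts the problem into an equivalent one with learning-free processing times, valid for any integrable learning function.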
- Modeling, Analysis and Solution Approaches for Some Optimization Problems: High Multiplicity Asymmetric Traveling Salesman, Primary Pharmaceutical Manufacturing Scheduling, and Lot Streaming in an Assembly System. Yao, Liming (Virginia Tech, 2008-05-27). This dissertation is devoted to the modeling, analysis, and development of solution approaches for some optimization-related problems encountered in industrial and manufacturing settings. We begin by introducing a special type of traveling salesman problem called the "High Multiplicity Asymmetric Traveling Salesman Problem" (HMATSP). We propose a new formulation for this problem, which embraces a flow-based subtour elimination structure, and establish its validity for this problem. The model is then incorporated as a substructure in our formulation for a lot-sizing problem involving parallel machines and sequence-dependent setup costs, also known as the "Chesapeake Problem". Computational results are presented to demonstrate the efficacy of our modeling approach for both the generic HMATSP and its application within the context of the Chesapeake Problem. Next, we investigate an integrated lot-sizing and scheduling problem that is encountered in the primary manufacturing facilities of pharmaceutical manufacturing. This problem entails the determination of production lot sizes for multiple products and the sequence in which to process the products on machines, which can process lots (batches) of a fixed size (due to the limited capacity of containers) in the presence of sequence-dependent setup times/costs. We approach this problem via a two-stage optimization procedure. The lot-sizing decision is considered at stage 1, followed by the sequencing of production lots at stage 2. Our aim for the stage 1 problem is to allocate batches of products to time periods in order to minimize the sum of the inventory and backordering costs, subject to the available capacity in each period. The consideration of batches of final products, in addition to those for the intermediate products that comprise a final product, further complicates the lot-sizing problem. The objective for the stage 2 problem is to minimize sequence-dependent setup costs. We present a novel unifying model and a column generation-based optimization approach for this class of lot-sizing and sequencing problems. Computational experience is first provided using randomly generated data sets to test the performances of several variants of our proposed approach. The efficacy of the best of these variants is further demonstrated by applying it to real-life data collected with the collaboration of a pharmaceutical manufacturing company. Then, we address a single-lot, lot streaming problem for a two-stage assembly system. This assembly system is different from the traditional flow shop configuration. It consists of m parallel subassembly machines at stage 1, each of which is devoted to the production of a component. A single assembly machine at stage 2 then assembles products after the components (one each from the subassembly machines at the first stage) have been completed. Lot-detached setups are encountered on the machines at the first and second stages. Given a fixed number of transfer batches (or sublots) from each of the subassembly machines at stage 1 to the assembly machine at stage 2, our problem is to find sublot sizes so as to minimize the makespan.
We develop optimality conditions to determine sublot sizes for the general problem, and present polynomial-time algorithms to determine optimal sublot sizes for the assembly system with two and three subassembly machines at stage 1. Finally, we extend the above single-lot, lot streaming problem for the two-stage assembly system to multiple lots, still for the objective of minimizing the makespan. Due to the presence of multiple lots, we need to address the issue of sequencing the lots along with lot-splitting, which adds complexity to the problem. Some results derived for the single-lot version of this problem have successfully been generalized for this case. We develop a branch-and-bound-based methodology for this problem. It relies on effective lower bounds and dominance properties, which are also derived. We conclude by presenting results of computational experimentation that demonstrate the effectiveness of our branch-and-bound-based methodology. Because of the tightness of our upper and lower bounds, a vast majority of the problems can be solved to optimality at the root node itself, while for the others, the average gap between the upper and lower bounds computed at node zero is within 0.0001%. For a majority of these problems, our dominance properties then effectively truncate the branch-and-bound tree, yielding an optimal solution within 500 seconds.
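To make the single-lot problem concrete, here is a minimal sketch of the makespan evaluation for given sublot sizes, under the simplifying assumption (ours, for illustration) that every subassembly machine uses the same sublot sizes; the dissertation treats the more general case.

```python
def assembly_makespan(x, p, q, setup1, setup2):
    """Makespan of a two-stage assembly system for given sublot sizes.

    x:      sublot sizes (units), assumed common to all subassembly machines.
    p:      per-unit processing times of the m stage-1 (subassembly) machines.
    q:      per-unit assembly time at stage 2.
    setup1: lot-detached setup times of the stage-1 machines.
    setup2: lot-detached setup time of the stage-2 assembly machine,
            performed in parallel before the first components arrive.
    """
    m = len(p)
    cum = 0.0            # units completed so far in sublots 1..k
    finish2 = setup2     # assembly machine is ready once its setup is done
    for size in x:
        cum += size
        # The k-th sublot of every component becomes available at stage 2
        # when the slowest subassembly machine finishes its first cum units.
        ready = max(setup1[i] + p[i] * cum for i in range(m))
        finish2 = max(finish2, ready) + q * size
    return finish2

# Example: three subassembly machines, two sublots of a 10-unit lot.
print(assembly_makespan(x=[4, 6], p=[1.0, 1.2, 0.8], q=0.9,
                        setup1=[2.0, 1.0, 3.0], setup2=5.0))
```

Searching over sublot sizes with an evaluation like this is the core of the single-lot problem; the optimality conditions above characterize the minimizers directly.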
- Modeling, Analysis, and Exact Algorithms for Some Biomass Logistics Supply Chain Design and Routing Problems
Aguayo Bustos, Maichel Miguel (Virginia Tech, 2016-07-28)
This dissertation focuses on supply chain design and logistics problems with emphasis on biomass logistics and routing problems. In biomass logistics, we have studied problems arising in a switchgrass-based bio-ethanol supply chain encountered in the Southeastern United States, and a corn stover harvest scheduling problem faced in the Midwestern United States, both pertaining to the production of cellulosic ethanol. The main contributions of our work have been in introducing new problems to the literature that lie at the interface of the lot-sizing and routing problems, and in developing effective exact algorithms for their solution. In the routing area, we have addressed extensions of the well-known traveling salesman and vehicle routing problems. We have proposed new formulations and have developed exact algorithms for the single and multiple asymmetric traveling salesman problems (ATSP and mATSP), the high-multiplicity asymmetric traveling salesman problem (HMATSP) and its extensions, and the fixed-destination multi-depot traveling salesman problem with load balancing (FD-MTSPB). Furthermore, we have introduced a new strategy to reduce routing costs in the classical vehicle routing problem (VRP).
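As background for the flow-based subtour elimination structure mentioned in the two preceding entries, a standard single-commodity-flow construct for the ATSP on n cities reads as follows (a textbook sketch in our notation, not necessarily the exact formulations developed in these dissertations):

```latex
\begin{align*}
& \sum_{j \neq i} x_{ij} = 1, \qquad \sum_{j \neq i} x_{ji} = 1 && \forall\, i, \\
& \sum_{j \neq 1} y_{1j} = n - 1, \\
& \sum_{j \neq i} y_{ji} - \sum_{j \neq i} y_{ij} = 1 && \forall\, i \neq 1, \\
& 0 \le y_{ij} \le (n-1)\, x_{ij}, \qquad x_{ij} \in \{0,1\} && \forall\, (i,j).
\end{align*}
```

Here x selects arcs and y ships n-1 units of a fictitious commodity out of city 1, with every other city consuming one unit; since flow may travel only on selected arcs, each city must be connected to city 1, which eliminates subtours with polynomially many constraints.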
- Models and Algorithms for Some Combinatorial Optimization Problems: University Course Timetabling, Facility Layout and Integrated Production-Distribution Scheduling
Wang, Yuqiang (Virginia Tech, 2007-08-08)
In this dissertation, we address three different combinatorial optimization problems (COPs), each of which has specific real-life applications. Owing to their specific nature, these problems are different from those discussed in the literature. For each of these problems, we present a mathematical programming formulation, analyze the problem to determine its useful, inherent structural properties, and develop an efficient methodology for its solution by exploiting these properties. The first problem that we address is the course timetabling problem encountered at Virginia Tech. The course timetabling problem for a university is a difficult problem and has been studied by many researchers over the years. As a result, a plethora of models and approaches have been reported in the literature. However, most of these studies have focused on applications pertaining to course scheduling for a single department, or at most a few departments, of a university. The sheer size of the university-wide timetabling problem that we address, involving thousands of courses to be scheduled in hundreds of classrooms in each semester, makes it particularly challenging. We employ an appropriate decomposition technique that relies on some inherent structural properties of the problem during both the modeling and algorithmic development phases. We show the superiority of the schedules generated by our methodology over those that are currently being used. Also, our methodology requires only a reasonable amount of computational time in solving this large-size problem. A facility layout problem involving arbitrary-shaped departments is the second problem that we investigate in this dissertation. We designate this problem as the arbitrary-shaped facility layout problem (ASFLP). The ASFLP deals with arranging a given set of departments (facilities, workstations, machines) within the confines of a given floor space, in order to optimize a desired metric, which invariably relates to the material handling cost. The topic of facility planning has been addressed rather extensively in the literature. However, a major limitation of most of the work reported in the literature is the assumption that the shape of a department is a rectangle (or even a square). An approach that relies on approximating an arbitrary-shaped department by a rectangle might result in an unattractive solution. The key research questions for the ASFLP are: (1) how to accurately model the arbitrary-shaped departments, and (2) how to effectively and efficiently determine the desired layout. We present a mixed-integer programming model that maintains the arbitrary shapes of the departments. We use a meta-heuristic to solve large-size instances of the ASFLP in a reasonable amount of time. The third problem that we investigate is a supply chain scheduling problem. This problem involves two stages of a supply chain, specifically, a manufacturer and one or more customers. The key issue is to achieve an appropriate coordination between the production and distribution functions of the manufacturer so as to minimize the sum of the shipping and job tardiness costs. We first address a single-customer problem, and then extend our analysis to the case of multiple customers. For the single-customer problem, we present a polynomial-time algorithm to solve it to optimality.
For the multiple-customer problem, we prove that this problem is NP-hard and solve it by appropriately decomposing it into subproblems, one of which is solvable in polynomial time. We propose a branch-and-bound-based methodology for this problem that exploits its structural properties. Results of extensive computational experimentation are presented that show the following: (1) our algorithms are both efficient and effective; and (2) significant benefits accrue as a result of integrating the production and distribution functions.
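For flavor, here is a minimal sketch of a classical dynamic program for single-customer production-distribution coordination, in the spirit of Hall and Potts; the dissertation's exact algorithm and cost structure may differ. Jobs are processed on one machine in EDD order (optimal in the unweighted case; with general weights, fixing this order is itself an assumption of the sketch), each shipment incurs a fixed cost K, and a job's tardiness is measured at the departure of the shipment carrying it.

```python
def schedule_shipments(jobs, K):
    """Minimize shipping cost plus total weighted tardiness on one machine.

    jobs: list of (processing_time, due_date, weight); a batch of consecutive
          jobs departs when its last job finishes processing.
    K:    fixed cost per shipment.
    Returns the best cost attainable with the jobs processed in EDD order.
    """
    jobs = sorted(jobs, key=lambda j: j[1])          # EDD order
    n = len(jobs)
    comp = [0.0] * (n + 1)                           # cumulative completion times
    for j, (p, _, _) in enumerate(jobs, 1):
        comp[j] = comp[j - 1] + p
    INF = float("inf")
    f = [INF] * (n + 1)                              # f[j]: best cost for jobs 1..j
    f[0] = 0.0
    for j in range(1, n + 1):
        for i in range(j):                           # last shipment = jobs i+1..j
            depart = comp[j]                         # leaves when job j completes
            tard = sum(w * max(0.0, depart - d) for (_, d, w) in jobs[i:j])
            f[j] = min(f[j], f[i] + K + tard)
    return f[n]

# Example: three jobs given as (processing time, due date, weight), K = 5.
print(schedule_shipments([(2, 3, 1.0), (3, 6, 2.0), (1, 4, 1.5)], K=5))
```

The O(n^2) recursion chooses where each shipment ends, trading an extra fixed shipping charge against the tardiness saved by departing earlier, which is exactly the production-distribution tension described above.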
- Modifying TRANSIMS (Transportation Analysis and Simulation) to Include Dynamic Value Pricing and Departure Time Choice
Lee, Kwang-Sub (Virginia Tech, 2009-06-10)
Value pricing is now an accepted strategy for congestion and demand management in metropolitan areas. Along with alternate congestion management strategies, many transportation agencies have started looking at value pricing as a method to help offset the financial shortfalls of new congestion management projects. Value pricing generates revenue from toll facilities, reduces operational concerns with underutilized High Occupancy Vehicle (HOV) facilities, and relieves environmental concerns by reducing travel demand. Recently, transportation agencies have become increasingly interested in high-occupancy toll (HOT) lane value pricing systems with time-dependent or dynamic tolls that change with the congestion level. However, there is a lack of proper travel demand forecasting tools that can evaluate and determine the impacts of pricing on travelers' decisions in relation to congestion. The current methods use aggregate, zone-based approaches that lack the capability of tracing individual travelers through the supply network in order to capture their travel decisions as they pertain to the estimated cost of toll usage. The conventional models do not consider individual travelers' socio-economic characteristics, particularly the heterogeneous value of time (VOT). TRANSIMS (Transportation Analysis Simulation System) differs from current travel demand forecasting methods in its underlying concepts and structure. These differences include a consistent and continuous representation of time, a detailed representation of persons and households, time-dependent routing, and a person-based Microsimulator. The TRANSIMS Microsimulator is the only simulation tool that maintains the identity of the traveler throughout the simulation and is capable of accessing the database of each individual (e.g., income, age, trip purpose). It traces the movement of people as well as vehicles on a second-by-second basis. Although the TRANSIMS environment has improved significantly over the past few years, there are still issues to be addressed, including the pricing of a HOT lane with dynamic tolls and the rescheduling of activities (i.e., a departure time choice model) in response to network conditions. The primary objectives of this study are to improve the functionality of TRANSIMS by modifying its source code in order to utilize a non-linear, individual VOT function in route choice for a HOT lane value pricing system, to implement 15-min dynamic tolls that vary by level of service (i.e., volume/capacity ratio) in the HOT lane(s), and to develop a departure time choice model. Testing the proposed methodologies using real-world data as case studies and evaluating the impacts of dynamic tolls and/or the departure time choice model are further objectives of this study. The test site for the HOT lane system is a segment of I-5 northbound from Hwy 217 to I-405 near the central business district (CBD) in the Portland metropolitan region, Oregon. The experimental analyses of the application of dynamic tolls and individual VOT demonstrate the feasibility of the proposed simulation methodology. The outputs from the microscopic analysis clearly indicate the effectiveness of the analysis in scrutinizing travelers' route choice behavior based on different socio-economic and travel characteristics when different toll rates are applied.
The effects of individual VOT on route choice are consistent with intuition; that is, travelers with higher VOTs are more likely to choose the HOT lane(s). In addition, the impacts of various tolls on route choice are analyzed on the basis of the socio-economic and trip characteristics of each traveler. Beyond the development of dynamic value pricing with individual VOT, a departure time choice model is also developed. The proposed method is a post-processing step applied after route choice, and it represents the sequential decision-making process of travelers who want to depart early or late based on congestion, individual attributes, and activity characteristics. This study presents the results of the departure time choice model and its impacts on a HOT lane system using Portland, Oregon as a case study. The results show that 13.9% of households changed their departure times because of congestion and/or tolls.
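As an illustration of the kind of 15-minute dynamic toll logic described above, the following minimal sketch maps the observed volume/capacity ratio of a HOT lane in each interval to a toll and applies a simple VOT-based route choice test; the tier boundaries, rates, and function names are hypothetical, not those calibrated in the study.

```python
# Hypothetical level-of-service tiers: (upper bound on v/c, toll in dollars).
TOLL_TIERS = [(0.35, 0.50), (0.55, 1.00), (0.75, 2.00),
              (0.90, 3.50), (float("inf"), 5.00)]

def toll_for_interval(volume, capacity):
    """Toll for the next 15-minute interval from the current v/c ratio."""
    vc = volume / capacity
    for bound, toll in TOLL_TIERS:
        if vc <= bound:
            return toll

def takes_hot_lane(vot_per_hour, time_saved_min, toll):
    """A traveler chooses the HOT lane if the time saved is worth the toll."""
    return vot_per_hour * time_saved_min / 60.0 > toll

# A traveler valuing time at $18/hr who saves 9 minutes at a v/c of about 0.69:
toll = toll_for_interval(volume=620, capacity=900)
print(toll, takes_hot_lane(18.0, 9.0, toll))   # 2.0 True
```

Making the threshold a function of each traveler's own (possibly non-linear) VOT, rather than a single average, is what the person-based microsimulation enables.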
- Nondifferentiable Optimization of Lagrangian Dual Formulations for Linear Programs with Recovery of Primal Solutions
Lim, Churlzu (Virginia Tech, 2004-05-28)
This dissertation is concerned with solving large-scale, ill-structured linear programming (LP) problems via Lagrangian dual (LD) reformulations. A principal motivation for this work arises in the context of solving mixed-integer programming (MIP) problems, where LP relaxations, sometimes in higher-dimensional spaces, are widely used for bounding and cut-generation purposes. Often, such relaxations turn out to be large-sized, ill-conditioned problems for which simplex as well as interior-point-based methods can tend to be ineffective. In contrast, Lagrangian relaxation or dual formulations, when applied in concert with suitable primal recovery strategies, have the potential for providing quick bounds as well as enabling useful branching mechanisms. However, the objective function of the Lagrangian dual is nondifferentiable, and hence, we cannot apply popular gradient- or Hessian-based optimization techniques that are commonly used in differentiable optimization. Moreover, the subgradient methods that are popularly used are typically slow to converge and tend to stall while still remote from optimality. On the other hand, more advanced methods, such as the bundle method and the space dilation method, involve additional computational and storage requirements that make them impractical for large-scale applications. Furthermore, although we might derive an optimal or near-optimal solution for the LD, depending on the dual-adequacy of the methodology used, a primal solution may not be available. While some algorithmically simple primal solution recovery schemes have been developed in theory to accompany Lagrangian dual optimization, their practical performance has been disappointing. Rectifying these inadequacies is a challenging task that constitutes the focal point of this dissertation. Many practical applications dealing with production planning and control, engineering design, and decision-making in different operational settings fall within the purview of this context and stand to gain from advances in this technology. With this motivation, our primary interests in the present research effort are to develop effective nondifferentiable optimization (NDO) methods for solving Lagrangian duals of large-sized linear programs, and to design practical primal solution recovery techniques. This contribution would then facilitate the derivation of quick bounds/cuts and branching mechanisms in the context of branch-and-bound/cut methodologies for solving mixed-integer programming problems. We begin our research by adapting the Volume Algorithm (VA) of Barahona and Anbil (2000), developed at IBM, as a direction-finding strategy within the variable target value method (VTVM) of Sherali et al. (2000). This adaptation makes VA resemble a deflected subgradient scheme, in contrast with the bundle-type interpretation afforded by the modification of VA proposed by Bahiense et al. (2002). Although VA was originally developed in order to recover a primal optimal solution, we first present an example to demonstrate that it might indeed converge to a nonoptimal primal solution. However, under a suitable condition on the geometric moving average factor, we establish the convergence of the proposed algorithm in the dual space.
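For orientation, a minimal sketch of a deflected subgradient iteration with a volume-style geometric-moving-average direction follows; the toy dual function, parameter values, and step rule are ours for illustration and are not the schemes analyzed in the dissertation.

```python
import numpy as np

def deflected_direction(g, d_prev, alpha=0.1):
    """Volume-style deflected direction: a geometric moving average that
    combines the fresh subgradient g with the previous direction d_prev."""
    return alpha * g + (1.0 - alpha) * d_prev

# Toy nondifferentiable dual function theta(pi) = -|pi - 3|, maximized at pi = 3.
pi, d = 0.0, 0.0
for t in range(200):
    g = 1.0 if pi < 3.0 else -1.0          # a subgradient of theta at pi
    d = deflected_direction(g, d)
    pi += (2.0 / (t + 1)) * np.sign(d)     # diminishing steps along d
print(round(pi, 2))                         # close to 3.0 (mild oscillation)
```

The moving average damps the zig-zagging of raw subgradients; the condition on the moving-average factor mentioned above plays exactly this stabilizing role in the convergence analysis.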
A detailed computational study reveals that this approach yields a competitive procedure as compared with alternative strategies, including the average direction strategy (ADS) of Sherali and Ulular (1989), a modified Polyak-Kelley cutting-plane strategy (PKC) designed by Sherali et al. (2001), and the modified Volume Algorithm routines RVA and BVA proposed by Bahiense et al. (2002), all embedded within the same VTVM framework. As far as CPU times are concerned, the VA strategy consumed the least computational effort for most problems to attain a near-optimal objective value. Moreover, the VA, ADS, and PKC strategies revealed considerable savings in CPU effort over a popular commercial linear programming solver, CPLEX Version 8.1, when used to derive near-optimal solutions. Next, we consider two variable target value methods, the Level Algorithm of Brännlund (1993) and VTVM, which require no prior knowledge of upper bounds on the optimal objective value while guaranteeing convergence to an optimal solution. We evaluate these two algorithms in conjunction with various direction-finding and step-length strategies such as PS, ADS, VA, and PKC. Furthermore, we generalize the PKC strategy by further modifying the cuts' right-hand-side values and additionally performing sequential projections onto some previously generated Polyak-Kelley cutting-planes. We call this the generalized PKC (GPKC) strategy. Moreover, we point out some latent deficiencies in the two aforementioned variable target value algorithms in regard to their target value update mechanisms, and we suggest modifications to alleviate these shortcomings. We further explore an additional local search procedure to strengthen the performance of the algorithms. Noting that no related convergence analyses have been presented, we prove the convergence of the Level Algorithm when used in conjunction with the ADS, VA, or GPKC schemes. We also establish the convergence of VTVM when employing GPKC. Based on our computational study, the modified VTVM algorithm produced the best-quality solutions when implemented with the GPKC strategy, where the latter performs sequential projections onto the four most recently generated Polyak-Kelley cutting-planes, as available. Also, we demonstrate that the proposed modifications and the local search technique significantly improve the overall performance. Moreover, the VTVM procedure was observed to consistently outperform the Level Algorithm as well as the popular heuristic subgradient method of Held et al. (1974) that is widely used in practice. As far as CPU times are concerned, the modified VTVM procedure in concert with the GPKC strategy revealed the best performance, providing near-optimal solutions in, on average, about 27.84% of the effort required by CPLEX 8.1 to produce solutions of the same quality. We next consider the Lagrangian dual of a bounded-variable, equality-constrained linear programming problem. We develop two novel approaches for solving this problem, which attempt to circumvent or obviate the nondifferentiability of the objective function. First, noting that the Lagrangian dual function is differentiable almost everywhere, whenever the NDO algorithm encounters a nondifferentiable point, we employ a proposed perturbation technique (PT) in order to detect a differentiable point in the vicinity of the current solution from which a further search can be conducted.
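The sequential-projection ingredient of GPKC can be pictured with the following minimal sketch, which projects the current dual iterate onto a short list of Polyak-Kelley-style cuts, each modeled as a halfspace; the data are hypothetical and the routine is a textbook projection, not the dissertation's implementation.

```python
import numpy as np

def project_onto_cuts(pi, cuts):
    """Sequentially project pi onto halfspaces {x : g.x >= b}, one halfspace
    per cutting plane, in the given order."""
    for g, b in cuts:
        viol = b - np.dot(g, pi)
        if viol > 0.0:                         # cut violated: project onto it
            pi = pi + (viol / np.dot(g, g)) * g
    return pi

# Two hypothetical cuts in R^2.
cuts = [(np.array([1.0, 0.0]), 2.0), (np.array([0.0, 1.0]), 1.0)]
print(project_onto_cuts(np.array([0.0, 0.0]), cuts))   # [2. 1.]
```

Each projection is a closed-form rank-one update, so cycling through the few most recent cuts, as in the best-performing variant above, adds negligible cost per iteration.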
In a second approach, called the barrier-Lagrangian dual reformulation (BLR) method, the primal problem is reformulated by constructing a barrier function for the set of bounding constraints such that an optimal solution to the original problem can be recovered by suitably adjusting the barrier parameter. However, instead of solving the barrier problem itself, we dualize the equality constraints to formulate a Lagrangian dual function, which is shown to be twice differentiable. Since differentiable pathways are made available via these two proposed techniques, we can advantageously utilize differentiable optimization methods along with popular conjugate gradient schemes. Based on these constructs, we propose an algorithmic procedure that consists of two sequential phases. In Phase I, the PT and BLR methods along with various deflected gradient strategies are utilized, and then, in Phase II, we switch to the modified VTVM algorithm in concert with GPKC (VTVM-GPKC), which revealed the best performance in the previous study. We also designed two target value initialization methods to commence Phase II, based on the output from Phase I. The computational results reveal that Phase I indeed helps to significantly improve the algorithmic performance as compared with implementing VTVM-GPKC alone, even though the latter was run for twice as many iterations as used in the two-phase procedures. Among the implemented procedures, the PT method in concert with certain prescribed deflection and Phase II initialization schemes yielded the best overall solution quality and CPU time performance, consuming only 3.19% of the effort required by CPLEX 8.1 to produce comparable solutions. Moreover, we also tested some ergodic primal recovery strategies with and without employing BLR as a warm-start, and observed that an initial BLR phase can significantly enhance the convergence of such primal recovery schemes. Having observed that the VTVM algorithm requires the fine-tuning of several parameters for different classes of problems in order to improve its performance, we next focus on developing a robust variable target value framework that entails the management of only a few parameters. We therefore design a novel algorithm, called the Trust Region Target Value (TRTV) method, in which a trust region is constructed in the dual space, and its center and size are adjusted in a manner that eventually induces a dual optimum to lie at the center of the hypercube trust region. A related convergence analysis has also been conducted for this procedure. We additionally examined a variation of TRTV, in which the hyperrectangular trust region is adjusted more effectively to normalize the effects of the dual variables. In our computational study, we compared the performance of TRTV with that of the VTVM-GPKC procedure. For four direction-finding strategies (PS, VA, ADS, and GPKC), the TRTV algorithm consistently produced better-quality solutions than did VTVM-GPKC. The best performance was obtained when TRTV was employed in concert with the PS strategy. Moreover, we observed that the solution quality produced by TRTV was consistently better than that obtained via VTVM, hence lending a greater degree of robustness. As far as computational effort is concerned, the TRTV-PS combination consumed, on average, only 4.94% of the CPU time required by CPLEX 8.1 to find solutions of comparable quality.
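In outline, and in our own notation rather than the dissertation's, the BLR construct for a problem min{c^T x : Ax = b, 0 <= x <= u} replaces the bounds by a logarithmic barrier with parameter mu > 0 and dualizes the equalities:

```latex
\theta_{\mu}(\pi) \;=\; \min_{0 < x < u} \;\; c^{\top}x
  \;-\; \mu \sum_{j}\bigl[\ln x_{j} + \ln(u_{j} - x_{j})\bigr]
  \;+\; \pi^{\top}(b - Ax).
```

The inner minimization is strictly convex and separable, so its unique solution x(π) varies smoothly with π; this is what makes θ_μ twice differentiable, and driving μ toward 0 recovers the original Lagrangian dual.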
Therefore, based on our extensive set of test problems, it appears that TRTV along with the PS strategy is the best and most robust procedure among those tested. Finally, we explore an outer-linearization (or cutting-plane) method along with a trust region approach for refining available dual solutions and recovering a primal optimum in the process. This method enables us to escape from the jamming phenomenon experienced at a non-optimal point, which commonly occurs when applying NDO methods, as well as to refine the available dual solution toward a dual optimum. Furthermore, we can recover a primal optimal solution when the resulting dual solution is close enough to a dual optimum, without generating a potentially excessive set of constraints. In our computational study, we tested two such trust region strategies, the Box-step (BS) method of Marsten et al. (1975) and a new Box Trust Region (BTR) approach, both appended to the foregoing TRTV-PS dual optimization methodology. Furthermore, we also experimented with deleting nonbinding constraints when the number of cuts exceeds a prescribed threshold value. This proposed refinement was able to further improve the solution quality, reducing the near-zero relative optimality gap for TRTV-PS by 20.6-32.8%. The best strategy turned out to be using the BTR method while deleting nonbinding constraints (BTR-D). As far as the recovery of optimal solutions is concerned, the BTR-D scheme resulted in the best measure of primal feasibility, and although it was terminated after it had accumulated only 50 constraints, it revealed a better performance than the ergodic primal recovery scheme of Shor (1985), which was run for 2000 iterations while also assuming knowledge of the optimal objective value in the dual scheme. In closing, we mention that many complex systems, such as communication network design, semiconductor manufacturing, and supply chain management, give rise to optimization problems that have been formulated as large-sized mixed-integer programs, but for which deriving even near-optimal solutions has been elusive due to their exorbitant computational requirements. Noting that the computational effort for solving mixed-integer programs via branch-and-bound/cut methods strongly depends on the effectiveness with which the underlying linear programming relaxations can be solved, applying theoretically convergent and practically effective NDO methods, in concert with efficient primal recovery procedures, to suitable Lagrangian dual reformulations of these relaxations can significantly enhance the overall computational performance of these methods. We therefore hope that the methodologies designed and analyzed in this research effort will have a notable positive impact on analyzing such complex systems.
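To round out this entry, the outer-linearization step described above can be summarized, in our notation and as a sketch only, by the master problem solved at iteration k over a box of radius Δ around the current stability center π̄:

```latex
\begin{aligned}
\max_{\pi,\,\eta}\;\; & \eta \\
\text{s.t.}\;\; & \eta \;\le\; \theta(\pi_{t}) + g_{t}^{\top}(\pi - \pi_{t}),
  \qquad t = 1,\dots,k, \\
& \lVert \pi - \bar{\pi} \rVert_{\infty} \;\le\; \Delta .
\end{aligned}
```

Each cut uses a subgradient g_t of the dual function θ at a previously visited point π_t; the box keeps the next iterate near the center, curbing the jamming behavior noted above, and once the model is accurate enough, the multipliers of the active cuts can furnish a primal estimate for recovery.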
- Optimal and Approximate Algorithms for the Multiple-Lots-per-Carrier Scheduling and Integrated Automated Material Handling and Lot Scheduling Problems in 300mm Wafer Fabs
Wang, Lixin (Virginia Tech, 2008-08-20)
The latest generation of semiconductor wafer fabs produces Integrated Circuits (ICs) on silicon wafers of 300mm diameter. In this dissertation, we address the following two types of (new) scheduling problems that are encountered in this generation of wafer fabs: the multiple-lots-per-carrier scheduling problem (MLCSP) and the integrated automated material handling and lot scheduling problem (IMHLSP). We consider several variations of the MLCSP depending upon the number of machines used, the prevailing processing technology of the machines, and the type of objective function involved. For the IMHLSP, we study two instances, one with an infinite number of vehicles and the other with a finite number of vehicles. We begin by introducing a single-machine, multiple-lots-per-carrier scheduling problem with single-wafer processing technology for the objective of minimizing the total completion time (MLCSP1). The wafer carrier is a front-opening unified pod (FOUP) that can hold a limited number of wafers. The problem is easy to solve when all the lots are of the same size. For the case of different lot sizes, we first relax the carrier (FOUP) capacity and propose a dynamic programming-based algorithm, called RelaxFOUP-DP, which enables a quick determination of an optimal solution that serves as a lower bound for the problem with limited FOUP capacity. Then, a branch-and-bound algorithm, designated MLCSP1-B&B, is developed that relies on the lower bound determined by the RelaxFOUP-DP algorithm. Numerical tests indicate that MLCSP1-B&B finds optimal solutions much faster than the direct solution of the MLCSP1 model by the AMPL CPLEX 10.1 solver. In fact, for the medium- and low-density problems, the MLCSP1-B&B algorithm finds optimal solutions at the starting node (node zero) itself. Next, we consider a single-machine, multiple-lots-per-carrier scheduling problem with single-carrier processing technology for the objective of minimizing the total completion time (MLCSP2). As for MLCSP1, the optimal solution for the case in which all the lots are of the same size can be obtained easily. For the case of different lot sizes, we determine a lower bound and an upper bound for the problem and prove worst-case ratios for them. Subsequently, we analyze a two-machine flow shop, multiple-lots-per-carrier scheduling problem with single-wafer processing technology for the objective of minimizing the makespan (MLCSP3). We first consider a relaxed version of this problem, and transform the original problem into a two-machine flow shop lot streaming problem. Then, we propose algorithms to find the optimal capacitated sublot sizes for the case of lots with (1) the same ratio of processing times, and (2) different ratios of processing times on the machines. Since the optimal solutions obtained from the lot streaming problem may not be feasible for the MLCSP3, we develop heuristic methods based on heuristic procedures for the bin packing problem. We develop four heuristic procedures for lots with the same ratio of processing times, and another four for lots with different ratios of processing times on the machines. Results of our numerical experimentation show that our heuristic procedures generate near-optimal solutions in a matter of seconds.
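The bin-packing connection can be made concrete with a first-fit-decreasing sketch for loading lots into capacity-limited FOUPs; this is a generic textbook heuristic offered for illustration, not one of the dissertation's eight procedures.

```python
def pack_lots_ffd(lot_sizes, foup_capacity):
    """First-fit-decreasing assignment of lots to FOUPs.

    Returns a list of FOUPs, each a list of lot sizes, with no FOUP
    exceeding its wafer capacity.
    """
    foups, loads = [], []
    for size in sorted(lot_sizes, reverse=True):   # largest lots first
        for i, load in enumerate(loads):
            if load + size <= foup_capacity:       # first FOUP that fits
                foups[i].append(size)
                loads[i] += size
                break
        else:                                      # no fit: open a new FOUP
            foups.append([size])
            loads.append(size)
    return foups

# Example: wafer lots packed into 25-wafer FOUPs.
print(pack_lots_ffd([12, 7, 20, 5, 9, 3], foup_capacity=25))
```

In the MLCSP3 heuristics, a packing step of this flavor is what converts the (possibly infeasible) lot streaming sublot sizes into carrier loads that respect the FOUP capacity.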
Next, we address the integrated automated material handling and lot scheduling problem (IMHLSP) in the presence of an infinite number of vehicles. We first propose a new strong hybrid model, which has the advantages of both the segregate and direct models. In the segregate model, a job is always transferred to the stocker after its completion at a station, while in the direct model, it is transferred to the next machine if that machine can accommodate it; otherwise, the job stays at the current station. The decisions involved in the strong hybrid model are the sequence in which to process the lots and a selection between the segregate and direct models for each lot, whichever optimizes system performance. We show that, under certain conditions on the processing times of the lots, the problem can be approximated by the cases of either infinite or zero buffers at the machines. Hence, we consider all three cases of the IMHLSP, namely, infinite, zero, and limited buffer sizes. For the strong hybrid model with limited buffer sizes, we propose a branch-and-bound algorithm, which uses a modified Johnson's algorithm to determine a lower bound. Two upper bounds for this algorithm are also determined. Results of our numerical investigation indicate that our algorithm finds optimal solutions faster than the direct solution of the IMHLSP model by the AMPL CPLEX 10.1 solver. Experimental results also indicate that, for the same problem size, the times required to solve the IMHLSP model with interbay movements are larger than those for intrabay movements. Finally, we investigate the IMHLSP in the presence of a limited number of vehicles. Due to the complex nature of the underlying problem, we analyze small-size versions of this problem and develop algorithms for their solution. For some of these problems, we can find optimal solutions in polynomial time. Also, based on our analysis of small-size systems, we show why some real-time dispatching (RTD) rules used in real fabs can be expected to perform well while others cannot. We also propose some new and promising RTD rules based on our study.
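Since a modified Johnson's algorithm supplies the lower bound above, the classical rule it builds on is sketched below for reference; the job data are made up, and the dissertation's modification is not reproduced here.

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop: jobs with p1 <= p2 go
    first in increasing order of p1; the rest go last in decreasing order
    of p2. The resulting sequence minimizes the makespan.

    jobs: list of (name, p1, p2).
    """
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

def makespan(seq):
    """Makespan of a two-machine permutation flow shop schedule."""
    c1 = c2 = 0.0
    for _, p1, p2 in seq:
        c1 += p1                    # machine 1 works without idling
        c2 = max(c2, c1) + p2       # machine 2 waits for machine 1 if needed
    return c2

jobs = [("A", 3, 6), ("B", 5, 2), ("C", 1, 2), ("D", 7, 5)]
seq = johnson_sequence(jobs)
print([j[0] for j in seq], makespan(seq))   # ['C', 'A', 'D', 'B'] 18.0
```

A relaxation that ignores the buffer and transport restrictions reduces to a two-machine flow shop, which is why a Johnson-style schedule can serve as a valid lower bound in the branch-and-bound scheme.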
- Optimal dynamic pricing for two perishable and substitutable products
Li, Feng (Virginia Tech, 2003-11-18)
This thesis presents a dynamic pricing model in which a seller offers two types of a generic product to a random number of customers. Customers arrive sequentially. When a customer arrives, he will, depending on the prices, either purchase one unit of the type 1 product, purchase one unit of the type 2 product, or leave empty-handed. The sale ends either when the entire stock is sold out or when no customers remain. The seller's task is to post the optimal prices for the two product types to each customer so as to maximize the expected total revenue. We use dynamic programming to formulate this problem, and derive the optimal policy for special cases. For general cases, we develop an algorithm to approximate the optimal policy and use numerical examples to demonstrate the efficiency of the algorithm. Finally, we apply the results to a continuous-time model where customers arrive according to a Poisson process. We develop a heuristic policy and use numerical examples to show that the heuristic policy is very effective.
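A minimal discrete sketch of such a pricing dynamic program follows, assuming for illustration a known number of customers, a discrete price grid, and a multinomial-logit choice model; all parameter values and names are hypothetical, and the thesis's demand model and solution approach may differ.

```python
import math
from functools import lru_cache

# Hypothetical logit choice parameters (attractiveness and price sensitivity).
A1, A2, B = 2.0, 1.5, 0.8

def purchase_probs(p1, p2):
    """Probability the next customer buys type 1 or type 2 at prices (p1, p2)."""
    e1, e2 = math.exp(A1 - B * p1), math.exp(A2 - B * p2)
    z = 1.0 + e1 + e2                      # "1" is the no-purchase option
    return e1 / z, e2 / z

PRICE_GRID = [0.5 * k for k in range(1, 13)]   # candidate prices $0.50..$6.00

@lru_cache(maxsize=None)
def V(k, i, j):
    """Max expected revenue with k customers to come and stock (i, j)."""
    if k == 0 or (i == 0 and j == 0):
        return 0.0
    best = 0.0
    for p1 in PRICE_GRID:
        for p2 in PRICE_GRID:
            q1, q2 = purchase_probs(p1, p2)
            if i == 0: q1 = 0.0            # sold out: buyers of type 1 leave
            if j == 0: q2 = 0.0            # (a simplifying assumption)
            v = ((q1 * (p1 + V(k - 1, i - 1, j)) if i else 0.0)
                 + (q2 * (p2 + V(k - 1, i, j - 1)) if j else 0.0)
                 + (1.0 - q1 - q2) * V(k - 1, i, j))
            best = max(best, v)
    return best

print(round(V(10, 3, 2), 2))   # expected revenue: 10 customers, stock (3, 2)
```

The Bellman recursion prices each arriving customer against the option value of holding stock for later customers, which is the substitution-versus-scarcity trade-off at the heart of the model.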