Browsing by Author "Jacobson, Sheldon H."
Now showing 1 - 18 of 18
- An adaptive strategy for providing dynamic route guidance under non-recurrent traffic congestion
  Lee, Sang-Keon (Virginia Tech, 1996-05-28)
  Traffic congestion on urban road networks has been recognized as one of the most serious problems with which modern cities are confronted. It is generally anticipated that Dynamic Route Guidance Systems (DRGS) will play an important role in reducing urban traffic congestion and improving traffic flow and safety. One of the most critical issues in designing these systems is the development of optimal routing strategies that maximize the benefits to the overall system as well as to individual users. Infrastructure-based DRGS have the advantage of pursuing a system optimal routing strategy, which is more essential under abnormal traffic conditions such as non-recurrent congestion and natural disasters. However, user compliance can be a problem under such a strategy, particularly when some equipped drivers are urged not to choose the minimum travel time path for the sake of improving the total network travel time. On the other hand, in-vehicle based DRGS can utilize user-specified route selection criteria to avoid the "Braess Paradox" under normal traffic conditions, but may be of little use under abnormal traffic conditions and high DRGS market penetration. In a comparative analysis between the system optimal strategy and the user equilibrium strategy, significant differences were found within the mid-range of traffic demand. The maximum total travel time difference occurs when the level of traffic demand is half of the system capacity; at this point, the system optimal route guidance strategy can save more than 11% of the total travel time relative to the user equilibrium route guidance strategy. The research proposes an adaptive routing strategy as an efficient form of dynamic route guidance under non-recurrent traffic congestion.
  Computational results show that there is no need to implement the system optimal routing strategy at the initial stage of an incident. However, it becomes critical to use the system optimal routing strategy as the freeway and arterial become congested and the queue delay on the freeway increases. The adaptive routing strategy is evaluated using the INTEGRATION traffic simulation model. According to simulation results on an idealized network, the travel time saving ratio is greatest when both the arterial and the freeway carry normal traffic demand during an incident. On a realistic network, the adaptive routing strategy also saves 3% to 10% of total travel time over the traditional user equilibrium routing strategy, and the reduction in total travel time increases as the incident duration increases. Consequently, it is concluded that the adaptive routing strategy for DRGS is more efficient than using the user equilibrium routing strategy alone.
- Assessing the Finite-Time Performance of Local Search Algorithms
  Henderson, Darrall (Virginia Tech, 2001-05-13)
  Identifying a globally optimal solution for an intractable discrete optimization problem is often cost prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions, where B is a predetermined threshold for the objective function value. It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance difficult. This dissertation introduces the B-acceptable solution probability in terms of B-acceptable solutions as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects how effectively an algorithm has performed to date and how effectively an algorithm can be expected to perform in the future. The B-acceptable solution probability is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, these expressions provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution for a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution, as well as to estimate this objective function value.
Computational experiments are reported with logistic regression for pure local search, simulated annealing and threshold accepting applied to instances of the TSP with known optimal solutions.
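  The estimation idea in this abstract can be sketched with a toy Monte Carlo experiment (the instance, search routine, and parameter values below are illustrative assumptions, not the dissertation's actual test problems): run a local search algorithm many times for a fixed number of iterations and record the fraction of runs that reach a B-acceptable solution.

```python
import random

def estimate_b_acceptable_prob(run_search, objective, b, n_runs=200, n_iters=100):
    """Monte Carlo estimate of the B-acceptable solution probability:
    the fraction of independent runs whose solution after n_iters
    iterations has objective value within the threshold B."""
    hits = sum(objective(run_search(n_iters)) <= b for _ in range(n_runs))
    return hits / n_runs

# Toy instance (an assumption for illustration): pure local search
# minimizing x^2 over integer states with +/-1 moves.
def toy_search(n_iters, lo=-50, hi=50):
    x = random.randint(lo, hi)
    for _ in range(n_iters):
        y = x + random.choice((-1, 1))
        if y * y <= x * x:          # accept only non-worsening moves
            x = y
    return x

random.seed(1)
p_hat = estimate_b_acceptable_prob(toy_search, lambda x: x * x, b=4)
```

  Repeating this estimate for a grid of B values is what makes a logistic-regression fit of probability versus B possible.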
- A Convergence Analysis of Generalized Hill Climbing Algorithms
  Sullivan, Kelly Ann (Virginia Tech, 1999-04-12)
  Generalized hill climbing (GHC) algorithms provide a unifying framework for describing several discrete optimization problem local search heuristics, including simulated annealing and tabu search. A necessary and a sufficient convergence condition for GHC algorithms are presented. The convergence conditions presented in this dissertation are based upon a new iteration classification scheme for GHC algorithms. The convergence theory for particular formulations of GHC algorithms is presented and the implications discussed. Examples are provided to illustrate the relationship between the new convergence conditions and previously existing convergence conditions in the literature. The contributions of the necessary and the sufficient convergence conditions for GHC algorithms are discussed and future research endeavors are suggested.
- Evaluation of the Design of a Family Practice Healthcare Clinic Using Discrete-Event Simulation
  Swisher, James R. (Virginia Tech, 1999-04-14)
  With increased pressures from governmental and insurance agencies, today's physician devotes less time to patient care and more time to administration. To alleviate this problem, Biological & Popular Culture, Inc. (Biopop) proposed building partnerships with healthcare professionals to provide high-quality, cost-effective medical care in a physician network setting. To assist Biopop in evaluating potential operating procedures, a discrete-event simulation model has been constructed. The model is built in an object-oriented, visual manner utilizing the Visual Simulation Environment (VSE). The model examines both internal Biopop operations and external clinic operations. The research presented herein describes the design of the simulation model and details the analysis of the clinical environment. A methodology for determining appropriate staffing and physical resources in a clinical environment is presented. This methodology takes advantage of several simulation-based statistical techniques, including batch means; fractional factorial design; and simultaneous ranking, selection, and multiple comparisons. An explanation of the experimental design is provided and results of the experimentation are presented. Based upon the experimental results, conclusions are drawn and recommendations are made for an appropriate staffing and facility size for a two-physician family practice healthcare clinic.
- Generalized hill climbing algorithms for discrete optimization problems
  Johnson, Alan W. (Virginia Tech, 1996-10-01)
  Generalized hill climbing (GHC) algorithms are introduced as a tool to address difficult discrete optimization problems. Particular formulations of GHC algorithms include simulated annealing (SA), local search, and threshold accepting (TA), among others. A proof of convergence of GHC algorithms is presented that relaxes the sufficient conditions for the most general proof of convergence for stochastic search algorithms in the literature (Anily and Federgruen [1987]). Proofs of convergence for SA are based on the concept that deteriorating (hill climbing) transitions between neighboring solutions are accepted by comparing a deterministic function of both the solution change cost and a temperature parameter to a uniform (0,1) random variable. GHC algorithms represent a more general model, whereby deteriorating moves are accepted according to a general random variable. Computational results are reported that illustrate relationships that exist between the GHC algorithm's finite-time performance on three problems, and the general random variable formulations used. The dissertation concludes with suggestions for further research.
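  The acceptance mechanism described in this abstract can be sketched as follows (a minimal illustration of the framework, not the dissertation's formal definition): a candidate neighbor is accepted whenever its cost increase does not exceed a draw from a "hill-climbing" random variable, and different choices of that random variable recover the familiar special cases.

```python
import math
import random

def ghc_step(current, objective, neighbor, hill_climb_rv):
    """One generalized hill climbing iteration: accept the candidate if
    its cost increase does not exceed a draw from the hill-climbing
    random variable; otherwise keep the current solution."""
    candidate = neighbor(current)
    delta = objective(candidate) - objective(current)
    return candidate if delta <= hill_climb_rv() else current

# Special cases via the random-variable formulation (illustrative):
pure_local_search = lambda: 0.0                # never accept a worse move
threshold_accepting = lambda q=5.0: q          # fixed deterministic threshold Q
simulated_annealing = (                        # -T ln(U), U uniform on (0,1)
    lambda t=10.0: -t * math.log(1.0 - random.random())
)
```

  With `pure_local_search`, only non-worsening moves pass; with `simulated_annealing`, a worse move of cost increase delta is accepted with probability exp(-delta/T), the classical SA rule.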
- Information theory and the finite-time behavior of the simulated annealing algorithm: Experimental results
  Fleischer, M.; Jacobson, Sheldon H. (INFORMS, 1999)
  This article presents an empirical approach that demonstrates a theoretical connection between (information theoretic) entropy measures and the finite-time performance of the simulated annealing algorithm. The methodology developed leads to several computational approaches for creating problem instances useful in testing and demonstrating the entropy/performance connection: use of generic configuration spaces, polynomial transformations between NP-hard problems, and modification of penalty parameters. In particular, the computational results show that higher entropy measures are associated with superior finite-time performance of the simulated annealing algorithm.
- Integrated Model to Plan Advanced Public Transportation Systems
  Bang, Chulho (Virginia Tech, 1998-08-11)
  The primary objective of this study is to develop an integrated public transportation planning framework to evaluate and plan Advanced Public Transportation Systems (APTS). To this end, a systems approach is adopted to study the influence of new APTS technology on transit supply and demand variables. In this project the Systems Dynamics methodology is adopted to track the dynamic behavior of model variables and the feedback loops forming among them. The proposed framework is illustrated in a case study involving automated vehicle location (AVL) systems applied to a small transit community. The proposed approach follows the steps of the Systems Dynamics method. First, identify key variables that are not only susceptible to AVL technology but also affect the supply-demand relationship of a bus transit environment. Second, trace and simplify the causal relationships of the variables, considering the impacts of facility supply changes on passenger demand responses and vice versa. To accomplish this, four detailed sub-models representing parts of the transit system are developed and combined under the Systems Dynamics methodology. These sub-models are: 1) demography; 2) urban transportation planning; 3) bus operations; and 4) evaluation. Finally, to validate the model procedure, the model is applied to a case study. This study attempts to encompass as many factors around a bus transit system environment as possible that can be impacted by new APTS technology, to illustrate the use of the proposed framework. Some of these factors include: 1) demographic characteristics; 2) urban or social activity of the study area; and 3) changes to transportation facilities.
  The case study illustrates how the physical characteristics of the transit system, such as traffic demand, traffic conditions along the transit route, route layout, and bus performance, can be affected by the new technology. Since APTS impacts are time dependent, a continuous multi-loop simulation technique is adopted to track dynamic changes of all model variables. The analysis of the transit system is carried out over a 20-year life cycle to illustrate the long-term dynamics of the feedback structures inherent in the model. [Vita removed Aug. 2, 2010. GMc]
- Intractability results in discrete-event simulation
  Jacobson, Sheldon H.; Yücesan, E. (EDP Sciences, 1995)
  Simulation is often viewed as a modeling methodology of last resort. This is due to the lack of automated algorithms and procedures that exist to aid in the construction and analysis of simulation models. Jacobson and Yücesan (1994) present four structural issue search problems associated with simulation model building, and prove them to be NP-hard, hence intractable under the worst-case analysis of computational complexity theory. In this article, three new structural issue search problems are presented and proven to be NP-hard. The consequences and implications of these results are discussed.
- Lagrangian Relaxation / Dual Approaches For Solving Large-Scale Linear Programming Problems
  Madabushi, Ananth R. (Virginia Tech, 1997-02-17)
  This research effort focuses on large-scale linear programming problems that arise in the context of solving various problems such as discrete linear or polynomial, and continuous nonlinear, nonconvex programming problems, using linearization and branch-and-cut algorithms for the discrete case, and using polyhedral outer-approximation methods for the continuous case. These problems arise in various applications in production planning, location-allocation, game theory, economics, and many engineering and systems design problems. During the solution process of discrete or continuous nonconvex problems using polyhedral approaches, one has to contend with repeatedly solving large-scale linear programming (LP) relaxations. Thus, it becomes imperative to employ an efficient method for solving these problems. It has been amply demonstrated that solving LP relaxations using a simplex-based algorithm, or even an interior-point type of procedure, can be inadequately slow (especially in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning) in comparison with a Lagrangian relaxation approach. With this motivation, we present a practical primal-dual subgradient algorithm that incorporates a dual ascent, a primal recovery, and a penalty function approach to recover a near optimal and feasible pair of primal and dual solutions. The proposed primal-dual approach comprises three stages. Stage I deals with solving the Lagrangian dual problem by using various subgradient deflection strategies such as the Modified Gradient Technique (MGT), the Average Direction Strategy (ADS), and a new direction strategy called the Modified Average Direction Strategy (M-ADS).
In the latter, the deflection parameter is determined based on the process of projecting the unknown optimal direction onto the space spanned by the current subgradient direction and the previous direction. This projected direction approximates the desired optimal direction as closely as possible using the conjugate subgradient concept. The step-length rules implemented in this regard are the Quadratic Fit Line Search Method and a new line search method called the Directional Derivative Line Search Method in which we start with a prescribed step-length and then ascertain whether to increase or decrease the step-length value based on the right-hand and left-hand derivative information available at each iteration. In the second stage of the algorithm (Stage II), a sequence of updated primal solutions is generated using some convex combinations of the Lagrangian subproblem solutions. Alternatively, a starting primal optimal solution can be obtained using the complementary slackness conditions. Depending on the extent of feasibility and optimality attained, Stage III applies a penalty function method to improve the obtained primal solution toward a near feasible and optimal solution. We present computational experience using a set of randomly generated, structured, linear programming problems of the type that might typically arise in the context of discrete optimization.
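  A rough sketch of one deflected subgradient ascent step in the spirit of the Average Direction Strategy may make the Stage I mechanics concrete (the exact M-ADS deflection and step-length rules in the dissertation differ; this classical ADS form is an illustrative assumption):

```python
import math

def deflected_subgradient_step(lam, subgrad, prev_dir, step_length):
    """One deflected subgradient ascent step on the Lagrangian dual.
    ADS-style deflection: the new direction is the raw subgradient plus
    the previous direction rescaled to the subgradient's norm."""
    norm_g = math.sqrt(sum(g * g for g in subgrad)) or 1.0
    if prev_dir is None:
        direction = list(subgrad)          # first iteration: plain subgradient
    else:
        norm_d = math.sqrt(sum(d * d for d in prev_dir)) or 1.0
        direction = [g + (norm_g / norm_d) * d for g, d in zip(subgrad, prev_dir)]
    new_lam = [x + step_length * d for x, d in zip(lam, direction)]
    return new_lam, direction
```

  Deflection damps the zig-zagging of the plain subgradient method by averaging in the previous search direction, which is why such strategies tend to ascend the dual function more smoothly.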
- Network Design and Analysis Problems in Telecommunication, Location-Allocation, and Intelligent Transportation Systems
  Park, Taehyung (Virginia Tech, 1998-06-22)
  This research is concerned with the development of algorithmic approaches for solving problems that arise in the design and analysis of telecommunication networks, location-allocation distribution contexts, and intelligent transportation networks. Specifically, the corresponding problems addressed in these areas are a local access and transport area (LATA) network design problem, the discrete equal-capacity p-median problem (PMED), and the estimation of dynamic origin-destination path flows or trip tables in a general network. For the LATA network problem, we develop a model and apply the Reformulation-Linearization Technique (RLT) to construct various enhanced tightened versions of the proposed model. We also design efficient Lagrangian dual schemes for solving the linear programming relaxation of the various enhanced models, and construct an effective heuristic procedure for deriving good quality solutions in this process. Extensive computational results are provided to demonstrate the progressive tightness resulting from the enhanced formulations and their effect on providing good quality feasible solutions. The results indicate that the proposed procedures typically yield solutions having an optimality gap of less than 2% with respect to the derived lower bound, within a reasonable effort that involves the solution of a single linear program. For the discrete equal-capacity p-median problem, we develop various valid inequalities, a separation routine for generating cutting planes via specific members of such inequalities, as well as an enhanced reformulation that constructs a partial convex hull representation that subsumes an entire class of valid inequalities via its linear programming relaxation.
  We also propose suitable heuristic schemes for solving this problem, based on sequentially rounding the continuous relaxation solutions obtained for the various equivalent formulations of the problem. Extensive computational results are provided to demonstrate the effectiveness of the proposed valid inequalities, enhanced formulations, and heuristic schemes. The results indicate that the proposed schemes for tightening the underlying relaxations play a significant role in enhancing the performance of both exact and heuristic solution methods for solving this class of problems. For the estimation of dynamic path flows in a general network, we propose a parametric optimization approach to estimate time-dependent path flows, or origin-destination trip tables, using available data on link traffic volumes for a general road network. Our model assumes knowledge of certain time-dependent link flow contribution factors that are a dynamic generalization of the path-link incidence matrix for the static case. We propose a column generation approach that uses a sequence of dynamic shortest path subproblems in order to solve this problem. Computational results are presented on several variants of two sample test networks from the literature. These results indicate the viability of the proposed approach for use in an on-line mode in practice. Finally, we present a summary of our developments and results, and offer several related recommendations for future research.
- On Finding the Location of an Underwater Mobile Robot Using Optimization Techniques
  Tunuguntla, Sai S. (Virginia Tech, 1998-08-12)
  This research aims at solving an engineering design problem encountered in the field of robotics using mathematical programming techniques. The problem addressed is an indispensable part of designing the operation of Ursula, an underwater mobile robot, and involves finding its location as it moves along the circumference of a nuclear reactor vessel. The study has been conducted with an intent to aid a laser based global positioning system to make this determination. The physical nature of this problem enables it to be conceptualized as a position and orientation determination problem. Ursula tests the weldments in the reactor vessel, and its position and orientation need to be found continuously in real time. The kinematic errors in the setup and the use of a laser based positioning system distinguish this from traditional position and orientation determination problems. The aim of this research effort is to construct a suitable representative mathematical model for this problem, and to design and compare various solution methodologies that are computationally competitive, numerically stable, and accurate.
- Optimization Models and Analysis of Routing, Location, Distribution, and Design Problems on Networks
  Subramanian, Shivaram (Virginia Tech, 1999-04-13)
  A variety of practical network optimization problems arising in the context of public supply and commercial transportation, emergency response and risk management, engineering design, and industrial planning are addressed in this study. The decisions to be made in these problems include the location of supply centers, the routing, allocation, and scheduling of flow between supply and demand locations, and the design of links in the network. This study is concerned with the development of optimization models and the analysis of five such problems, and the subsequent design and testing of exact and heuristic algorithms for solving these various network optimization problems. The first problem addressed is the time-dependent shortest pair of disjoint paths problem. We examine computational complexity issues, models, and algorithms for the problem of finding a shortest pair of disjoint paths between two nodes of a network such that the total travel delay is minimized, given that the individual arc delays are time-dependent. It is shown that this problem, and many variations of it, are NP-hard, and a 0-1 linear programming model that can be used to solve this problem is developed. This model can accommodate various degrees of disjointedness of the pair of paths, from complete to partial with respect to specific arcs. Next, we examine a minimum-risk routing problem and pursue the development, analysis, and testing of a mathematical model for determining a route that attempts to reduce the risk of low-probability, high-consequence accidents associated with the transportation of hazardous materials (hazmat).
  More specifically, the problem addressed in this study involves finding a path that minimizes the conditional expectation of a consequence, given that an accident occurs, subject to the expected value of the consequence being less than or equal to a specified level n, and the probability of an accident on the path also being constrained to be no more than some value h. Various insights into related modeling issues are also provided. The values n and h are user-prescribed and could be prompted by the solution of shortest path problems that minimize the respective corresponding linear risk functions. The proposed model is a discrete, fractional programming problem that is solved using a specialized branch-and-bound approach. The model is also tested using realistic data associated with a case concerned with routing hazmat through the roadways of Bethlehem, Pennsylvania. The third problem deals with the development of a resource allocation strategy for emergency and risk management. An important and novel issue addressed in modeling this problem is the effect of loss in coverage due to the non-availability of emergency response vehicles that are currently serving certain primary incidents. This is accommodated within the model by including in the objective function a term that reflects the opportunity cost for serving an additional incident that might occur probabilistically on the network. A mixed-integer programming model is formulated for the multiple incident-multiple response (MIMR) problem, and we show how its solution capability can be significantly enhanced by injecting a particular structure into the constraints that results in an equivalent alternative model representation. Furthermore, for certain special cases of the MIMR problem, efficient polynomial-time solution approaches are prescribed.
  An algorithmic module composed of these procedures, used in concert with a computationally efficient LP-based heuristic scheme that is developed, has been incorporated into an area-wide incident management decision support system (WAIMSS) at the Center for Transportation Research, Virginia Tech. The fourth problem addressed in this study deals with the development of global optimization algorithms for designing a water distribution network, or expanding an already existing one, that satisfies specified flow demands at stated pressure head requirements. The nonlinear, nonconvex network problem is transformed into the space of certain design variables. By relaxing the nonlinear constraints in the transformed space via suitable polyhedral outer approximations and applying the Reformulation-Linearization Technique (RLT), a tight linear lower bounding problem is derived. This problem provides an enhancement and a more precise representation of previous lower bounding relaxations that use similar approximations. Computational experience on three standard test problems from the literature is provided. For all these problems, a proven global optimal solution within a tolerance of 10^-4% and/or within $1 of optimality is obtained. For the two larger instances, dealing with the Hanoi and New York test networks, which have been open for nearly three decades, the solutions derived represent significant improvements, and global optimality has been verified at the stated level of accuracy for these problems for the very first time in the literature. A new real network design test problem based on the Town of Blacksburg Water Distribution System is also offered for inclusion in the available library of test cases, and related computational results on deriving global optimal solutions are presented.
  The final problem addressed in this study is concerned with a global optimization approach for solving capacitated Euclidean distance multifacility location-allocation problems, as well as the development of a new algorithm for solving the generalized lp distance location-allocation problem. No global optimization algorithm has previously been developed and tested for this class of problems, aside from a total enumeration approach. Beginning with the Euclidean distance problem, we design depth-first and best-first branch-and-bound algorithms, based on a partitioning of the allocation space, that finitely converge to a global optimum for this nonconvex problem. For deriving lower bounds at node subproblems in these partial enumeration schemes, we employ two types of procedures. The first approach computes a lower bound via a simple projected location space lower bounding (PLSB) subproblem. The second approach derives a significantly enhanced lower bound by using a Reformulation-Linearization Technique (RLT) to transform an equivalent representation of the original nonconvex problem into a higher dimensional linear programming relaxation. In addition, certain cut-set inequalities generated in the allocation space, objective function based cuts derived in the location space, and tangential linear supporting hyperplanes for the distance function are added to further tighten the lower bounding relaxation. The RLT procedure is then extended to the general lp distance problem for 1 < p < 2. Various issues related to the selection of branching variables, the design of heuristics via special selective backtracking mechanisms, and the study of the sensitivity of the proposed algorithm to the value of p in the lp-norm are computationally investigated. Computational experience is also provided on a set of test problems to investigate both the PLSB and the RLT-based lower bounding schemes.
The results indicate that the proposed global optimization approach using the RLT-based scheme offers a promising viable solution procedure. In fact, among the problems solved, for the only two test instances previously available in the literature for the Euclidean distance case that were posed in 1979, we report proven global optimal solutions within a tolerance of 0.1% for the first time. It is hoped that the modeling, analysis, insights, and concepts provided for these various network based problems that arise in diverse routing, location, distribution, and design contexts, will provide guidelines for studying many other problems that arise in related situations.
- Optimizing Aviation Security Architectures using the SAFE Model
  Savage, Cynthia Leigh (Virginia Tech, 2003-03-03)
  The Federal Aviation Administration (FAA) wishes to minimize the overall operational costs of their aviation security detection systems. These systems consist of a collection of security devices. The objective of this research is to develop an algorithm to design the optimal system of devices. The Secure Air Flight Effectiveness (SAFE) Model accomplishes this objective by using the probability of detection and the probability of giving a false alarm for each individual device. A Generalized Hill Climbing (GHC) algorithm was implemented to identify the system with the minimum operational cost. Suggestions for future research directions are also included.
- A Probabilistic Study of 3-SATISFIABILITY
  Aytemiz, Tevfik (Virginia Tech, 2001-06-28)
  Discrete optimization problems are defined by a finite set of solutions together with an objective function value assigned to each solution. Local search algorithms provide useful tools for addressing a wide variety of intractable discrete optimization problems. Each such algorithm offers a distinct set of rules to intelligently exploit the solution space with the hope of finding an optimal/near optimal solution using a reasonable amount of computing time. This research studies and analyses randomly generated instances of 3-SATISFIABILITY to gain insights into the structure of the underlying solution space. Two random variables are defined and analyzed to assess the probability that a fixed solution will be assigned a particular objective function value in a randomly generated instance of 3-SATISFIABILITY. Then, a random vector is defined and analyzed to investigate how the solutions in the solution space are distributed over their objective function values. These results are then used to define a stopping criterion for local search algorithms applied to MAX 3-SATISFIABILITY. This research also analyses and compares the effectiveness of two local search algorithms, tabu search and random restart local search, on MAX 3-SATISFIABILITY. Computational results with tabu search and random restart local search on randomly generated instances of 3-SATISFIABILITY are reported. These results suggest that, given a limited computing budget, tabu search offers an effective alternative to random restart local search. On the other hand, these two algorithms yield similar results in terms of the best solution found. The computational results also suggest that for randomly generated instances of 3-SATISFIABILITY (of the same size), the globally optimal solution objective function values are typically concentrated over a narrow range.
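  The concentration result at the end of this abstract can be illustrated concretely (the generator below is one common random 3-SAT model, assumed here for illustration): for a fixed assignment, a clause over three distinct variables with independent negations is falsified only when all three literals are false, so each clause is satisfied with probability 7/8 and the objective value over m independent clauses is Binomial(m, 7/8), which concentrates narrowly around 7m/8.

```python
import random

def random_3sat_instance(n_vars, n_clauses, rng):
    """Random 3-SAT: each clause picks 3 distinct variables and negates
    each one independently with probability 1/2."""
    inst = []
    for _ in range(n_clauses):
        vars3 = rng.sample(range(n_vars), 3)
        inst.append([(v, rng.random() < 0.5) for v in vars3])  # (var, negated)
    return inst

def num_satisfied(instance, assignment):
    """MAX 3-SAT objective: count clauses with at least one true literal."""
    return sum(
        any(assignment[v] != neg for v, neg in clause) for clause in instance
    )

# Each clause fails for a fixed assignment with probability (1/2)^3 = 1/8,
# so the satisfied fraction should hover near 7/8.
rng = random.Random(42)
m = 2000
inst = random_3sat_instance(50, m, rng)
frac = num_satisfied(inst, [True] * 50) / m
```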
- Simultaneous Generalized Hill Climbing Algorithms for Addressing Sets of Discrete Optimization Problems
  Vaughan, Diane Elizabeth (Virginia Tech, 2000-07-31)
  Generalized hill climbing (GHC) algorithms provide a framework for using local search algorithms to address intractable discrete optimization problems. Many well-known local search algorithms can be formulated as GHC algorithms, including simulated annealing, threshold accepting, Monte Carlo search, and pure local search (among others). This dissertation develops a mathematical framework for simultaneously addressing a set of related discrete optimization problems using GHC algorithms. The resulting algorithms, termed simultaneous generalized hill climbing (SGHC) algorithms, can be applied to a wide variety of sets of related discrete optimization problems. The SGHC algorithm probabilistically moves between these discrete optimization problems according to a problem generation probability function. This dissertation establishes that the problem generation probability function is a stochastic process that satisfies the Markov property. Therefore, given a SGHC algorithm, movement between these discrete optimization problems can be modeled as a Markov chain. Sufficient conditions that guarantee that this Markov chain has a uniform stationary probability distribution are presented. Moreover, sufficient conditions are obtained that guarantee that a SGHC algorithm will visit the globally optimal solution over all the problems in a set of related discrete optimization problems. Computational results are presented with SGHC algorithms for a set of traveling salesman problems. For comparison purposes, GHC algorithms are also applied individually to each traveling salesman problem. These computational results suggest that optimal/near optimal solutions can often be reached more quickly using a SGHC algorithm.
- A Stochastic Approach to Modeling Aviation Security Problems Using the KNAPSACK Problem. Simms, Amy E. (Virginia Tech, 1997-06-20). Designers, operators, and users of multiple-device access control security systems are challenged by the false alarm/false clear tradeoff. Given a particular access control security system and a prespecified false clear standard, there is an optimal (minimal) false alarm rate that can be achieved. The objective of this research is to develop methods that can be used to determine this false alarm rate. Meeting this objective requires knowledge of the joint conditional probability density functions for the security device responses. Two sampling procedures, the static grid estimation procedure and the dynamic grid estimation procedure, are proposed to estimate these functions. The concept of a system response function is introduced, and the problem of determining the optimal system response function that minimizes the false alarm rate while meeting the false clear standard is formulated as a decision problem and proven to be NP-complete. Two heuristic procedures, a greedy algorithm and a dynamic programming algorithm, are formulated to address this problem. Computational results using simulated security data are reported. These results are compared to analytical results obtained for a prespecified system response function form. Suggestions for future research are also included.
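The knapsack-like structure of the false alarm/false clear tradeoff can be illustrated with a toy greedy heuristic. This is a hypothetical sketch in the same spirit as, but not identical to, the dissertation's greedy algorithm: discretize the joint device-response space into cells, estimate each cell's probability mass under threat and non-threat conditions, and flag the cells with the highest threat-to-innocent ratio as "alarm" until the false clear standard is met. The cell probabilities below are invented example numbers.

```python
def greedy_alarm_cells(cells, false_clear_std):
    """cells: list of (p_threat, p_innocent) probability masses per
    response-space cell. Greedily flag high threat-to-innocent-ratio cells
    as alarms until threat coverage >= 1 - false_clear_std; the accumulated
    innocent mass is the resulting false alarm rate. Illustrative only."""
    order = sorted(cells, key=lambda c: c[0] / max(c[1], 1e-12), reverse=True)
    covered = false_alarm = 0.0
    for p_threat, p_innocent in order:
        if covered >= 1.0 - false_clear_std:
            break  # false clear standard already met
        covered += p_threat
        false_alarm += p_innocent
    return covered, false_alarm
```

The greedy ordering mirrors the classical knapsack ratio rule; because the underlying decision problem is NP-complete (as the abstract states), such a heuristic trades optimality for speed.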
- Tactical Network Flow and Discrete Optimization Models and Algorithms for the Empty Railcar Transportation Problem. Suharko, Arief Bimantoro (Virginia Tech, 1997-12-05). Prior to 1980, the practice in multilevel autorack management was to load the railcars at various origin points, ship them to the destination ramps, unload them, and then return each car to the loading point where it originated. Recognizing the inefficiency of this practice with respect to the fleet size that had to be maintained, and the poor utilization due to the excessive empty miles logged, a consolidation of the railcars was initiated and completed by February 1982. Under this pooling program, a central management was established to control the repositioning of three types of empty railcars for eight principal automobile manufacturers. Today, the practice is to consolidate the fleets of all automobile manufacturers for each equipment type, and to solve the distribution problem of repositioning empty multilevel autoracks of each type from the points at which they are unloaded to the automobile assembly facilities where they need to be reloaded. Each such problem is referred to in the railroad industry as a repositioning scenario. In this dissertation, we present two tactical models to assist in the task of centrally managing the distribution of empty railcars on a day-to-day basis for each repositioning scenario. These models take into account various practical issues such as uncertainties, priorities with respect to time and demand locations, multiple objectives related to minimizing different types of lateness in delivery, and blocking issues. It is also of great practical interest to the central management team to be able to conduct various sensitivity analyses in its operation.
Accordingly, the system provides the capability to investigate various what-if scenarios, such as fixing decisions on running a specified block of cars (control orders) along certain routes as dictated by business needs, and handling changes in supplies, demands, priorities, and transit time characteristics. Moreover, the solution methodology provides a flexible decision-making capability by permitting a series of runs based on a sequential decision-fixing process in a real-time operational mode. A turn-around response of about five minutes per scenario (on a Pentium PC or equivalent) is desired in practice. This dissertation begins by developing several progressive formulations that incorporate many practical considerations into the empty railcar distribution planning system. We investigate the performance of two principal models in this progression to gain insight into the implementation aspects of our approach. The first model (TDSS1: Tactical Decision Support System-1) considers all the identified features of the problem except for blocking, and results in a network formulation of the problem. This model examines various practical issues, such as time- and demand-location-based priorities as well as uncertainty in the data, within a multiple-objective framework. The second model (TDSS2: Tactical Decision Support System-2) adds a substantial degree of complexity by addressing blocking considerations. Enforcing block formation renders the model a network flow problem with side constraints and discrete side variables. We show how the resulting mixed-integer programming formulation can be enhanced via partial convex hull constructions using the Reformulation-Linearization Technique (RLT). This tightening of the underlying linear programming relaxation is shown to permit the solution of larger problem instances, and it enables the exact solution of certain scenarios having 5,000-8,000 arcs.
However, in order to meet the strict run-time limits imposed in practice for larger scenarios having about 150,000 arcs, various heuristics are developed to solve this problem. Using a combination of the proposed strategies, 23 principal heuristics, plus other hybrid variants, are composed for testing. By examining the performance of the various exact and heuristic procedures with respect to speed of operation and the quality of the solutions produced on a test-bed of real problems, we prescribe recommendations for a production code to be used in practice. Besides providing a tool to aid in the decision-making process, a principal utility of the developed system is that it provides the opportunity to conduct various what-if analyses. The effects of many of the practical considerations incorporated in TDSS2 can be studied via such sensitivity analyses. A special graphical user interface has been implemented that permits railcar distributors to investigate the effects of varying supplies, demands, and routes, retrieving railcars from storage, diverting en-route railcars, and exploring various customer- or user-driven fixed dispositions. The user therefore has the flexibility to sequentially compose a decision to implement on a daily basis by using business judgment to make suggestions and studying the consequent response prompted by the model. This system is currently in use by the TTX Company, Chicago, Illinois, to make distribution decisions for the railroad and automobile industries. The dissertation concludes by presenting a system flowchart for the overall implemented approach, a summary of our research, and recommendations for future algorithmic enhancements based on Lagrangian relaxation techniques. NOTE (03/2011): An updated copy of this ETD was added after patron reports of problems with the file.
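At its core, the repositioning scenario described above is a supply-demand flow problem: surplus ramps hold empty railcars, assembly plants need them, and each lane has a transit cost. The following toy least-cost allocation heuristic is a hypothetical illustration of that structure only; the dissertation's actual models (TDSS1/TDSS2) are far richer, with priorities, uncertainty, and blocking side constraints.

```python
def least_cost_allocation(supply, demand, cost):
    """Toy transportation heuristic: repeatedly ship along the cheapest
    remaining (surplus ramp, assembly plant) lane until supplies or demands
    are exhausted. supply/demand are unit counts; cost[i][j] is the lane
    cost from ramp i to plant j. Returns {(i, j): units shipped}."""
    supply, demand = supply[:], demand[:]   # don't mutate caller's lists
    lanes = sorted((cost[i][j], i, j)
                   for i in range(len(supply)) for j in range(len(demand)))
    flow = {}
    for _, i, j in lanes:
        q = min(supply[i], demand[j])       # ship as much as the lane allows
        if q > 0:
            flow[(i, j)] = q
            supply[i] -= q
            demand[j] -= q
    return flow
```

Such a greedy pass gives a feasible but generally suboptimal flow; exact network flow or mixed-integer formulations of the kind described in the abstract are needed when side constraints such as blocking must be enforced.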
- A Visual Simulation Life-Cycle of the Queston Physician Network. Jun, Jong Brian (Virginia Tech, 1999-05-07). This research develops a discrete-event simulation model of the Queston Physician Network using the Visual Simulation Environment (VSE), an object-oriented simulation software program. The Queston Physician Network, a subsidiary of Biological & Popular Culture, Inc., attempts to centralize the administrative, financial, and telecommunication needs of a network of primary care physicians located throughout the United States. The VSE, running on the NeXTSTEP operating system, is a discrete-event simulation software package capable of tackling the complexities associated with such a network design. The advantages of the VSE over other simulation languages include its visualization of objects, domain independence, and object-oriented design and modeling. The objective of the Queston simulation model is to assess the performance capabilities of the physician network as operationally centralized in the Queston Information Center. Additionally, the model can be used to analyze a physician-patient encounter at a generic clinic to identify recommended staffing and scheduling schemes. Object-oriented programming allows the objects in the model to be instantiated at the time of execution; this feature permits the creation of one flexible generic clinic that can be reused to produce several identical clinics at program execution. In this model, one generic family-practice clinic and the Queston Information Center are created. Input data provided by both medical experts and a time study are used. Verification and validation techniques are applied in all phases of the modeling effort. Results using different configurations are presented, and recommendations for future research are discussed.
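The kind of clinic dynamics this abstract describes, patients arriving, waiting, and being served, can be sketched in miniature. The following single-physician FIFO model uses the standard Lindley recursion (start time = max(arrival, previous finish)); it is a generic illustration of discrete-event clinic simulation, not the VSE/Queston model, and the example arrival and service times are invented.

```python
def simulate_clinic(arrivals, service_times):
    """Single-physician FIFO clinic: for each patient, the consultation
    starts at max(arrival time, previous patient's finish time).
    Returns the list of per-patient waiting times."""
    finish = 0.0
    waits = []
    for arrival, service in zip(arrivals, service_times):
        start = max(arrival, finish)       # wait if the physician is busy
        waits.append(start - arrival)
        finish = start + service
    return waits
```

Average wait under different arrival and service-time inputs is the kind of staffing/scheduling performance measure the abstract mentions; a multi-physician or multi-clinic model would extend this with a resource pool and an event queue.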