Browsing by Author "Bish, Ebru K."
Now showing 1 - 20 of 74
- Analysis and Approximations of Time Dependent Queueing Models. Nasr, Walid (Virginia Tech, 2008-01-18). Developing equations to compute congestion measures for the general G/G/s/c queueing model and networks of such nodes has always been a challenge. One approach to analyzing such systems is to approximate the model-specified general input processes and distributions by processes and distributions from the more computationally friendly family of phase-type processes and distributions. We develop numerical approximation methods for analysis of general time-dependent queueing nodes by introducing new approximations for the time-dependent first two moments of the number-in-system and departure-count processes.
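The Ph(t)/Ph(t)/s/c analysis described above is beyond a short illustration, but the flavor of a time-dependent moment computation can be sketched for the simpler Markovian M(t)/M(t)/s/c case by integrating the Kolmogorov forward equations. The function names, rates, and time grid below are illustrative assumptions, not the dissertation's method.

```python
# Baseline sketch: integrate the Kolmogorov forward equations of an M(t)/M(t)/s/c queue
# to obtain time-dependent state probabilities, from which the first two moments of the
# number in system follow. Illustrative only; not the Ph(t)/Ph(t) approximation above.
import numpy as np
from scipy.integrate import solve_ivp

def mtmtsc_moments(lam, mu, s, c, t_grid):
    """lam(t), mu(t): time-dependent arrival/service rates; s servers, capacity c."""
    def rhs(t, p):
        dp = np.zeros(c + 1)
        for n in range(c + 1):
            arr_in = lam(t) * p[n - 1] if n >= 1 else 0.0          # flow in from state n-1
            dep_in = min(n + 1, s) * mu(t) * p[n + 1] if n < c else 0.0   # flow in from n+1
            out = (lam(t) * (n < c) + min(n, s) * mu(t)) * p[n]    # flow out of state n
            dp[n] = arr_in + dep_in - out
        return dp
    p0 = np.zeros(c + 1); p0[0] = 1.0                              # start empty
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), p0, t_eval=t_grid, max_step=0.1)
    n = np.arange(c + 1)
    return n @ sol.y, (n ** 2) @ sol.y                             # E[N(t)], E[N(t)^2]

mean, second = mtmtsc_moments(lambda t: 2 + np.sin(t), lambda t: 1.5, s=2, c=20,
                              t_grid=np.linspace(0, 10, 101))
```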
- Analysis and Improvement of Cross-dock Operations in Less-than-Truckload Freight Transportation Industry. Tong, Xiangshang (Virginia Tech, 2009-08-11). The less-than-truckload (LTL) transportation industry is highly competitive with low profit margins. Carriers in this industry strive to reduce costs and improve customer service to remain profitable. LTL carriers rely on a network of hubs and service centers to transfer freight. A hub is typically a cross docking terminal in which shipments from inbound trailers are unloaded, reassigned, and consolidated onto outbound trailers going to the correct destinations. Freight handling in a hub is labor intensive, and workers must quickly transfer freight during a short time period to support customer service levels. Reducing shipment transfer time in hubs offers the opportunity to reduce labor costs, improve customer service, and increase competitive advantages for carriers. This research focuses on improving the efficiency of hub operations in order to decrease the handling costs and increase service levels for LTL carriers. Specifically, the following two decision problems are investigated: (1) assigning trailers to dock doors to minimize the total time required to transfer shipments from inbound trailers to destination trailers and (2) sequencing unloading and loading of freight to minimize the time required by dock employees. The trailer-to-door assignment problem is modeled as a Quadratic Assignment Problem (QAP). Both semi-permanent and dynamic layouts for the trailer-to-door assignment problem are evaluated. Improvement based heuristics, including pair-wise exchange, simulated annealing, and genetic algorithms, are explored to solve the trailer-to-door assignment problem. The freight sequencing problem is modeled as a Rural Postman Problem (RPP). A Balance and Connect Algorithm (BCA) and an Assign First and Route Second Algorithm (AFRSA) are investigated and compared to Balanced Trailer-at-a-Time (BTAAT), Balanced Trailer-at-a-Time with Offloading (BTAATWO), and Nearest Neighbor (NN). The heuristics are evaluated using data from two LTL carriers. For these data sets, both the total travel distance and the transfer time of hub operations are reduced using a dynamic layout with efficient freight sequencing approaches, such as the Balance and Connect Algorithm (BCA), the Balanced Trailer-at-a-Time with Offloading (BTAATWO), and the Nearest Neighbor (NN). Specifically, with a dynamic layout, the BCA reduces travel distance by 10% to 27% over BTAAT and reduces the transfer time by 17% to 68% over BTAAT. A simulation study is also conducted for hub operations in a dynamic and stochastic environment. The solutions from the door assignment and freight sequencing approaches are evaluated in a simulation model to determine their effectiveness in this environment. The simulation results further demonstrate that the performance measures of hub operations are improved using a dynamic layout and efficient freight sequencing approaches. The main contributions of this research are the integer programming models developed for the freight sequencing problem (FSP), based on the Rural Postman Problem (RPP). This is the first known application of the RPP to the FSP. Efficient heuristics are developed for the FSP for a single worker and for multiple workers. These heuristics are analyzed and compared to previous research using industry data.
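To give a concrete sense of the pair-wise exchange idea used for the trailer-to-door QAP above, the sketch below applies a plain 2-exchange improvement step to a toy door assignment. The flow and distance matrices, function names, and instance are hypothetical; the dissertation's simulated annealing and genetic algorithm variants are not shown.

```python
# 2-exchange (pairwise swap) improvement for a QAP-style trailer-to-door assignment.
import itertools
import numpy as np

def qap_cost(perm, flow, dist):
    """Total freight-movement distance when trailer i is assigned to door perm[i]."""
    n = len(perm)
    return sum(flow[i, j] * dist[perm[i], perm[j]] for i in range(n) for j in range(n))

def pairwise_exchange(perm, flow, dist):
    """Keep swapping pairs of door assignments while the total cost improves."""
    perm = perm.copy()
    best = qap_cost(perm, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(perm)), 2):
            perm[i], perm[j] = perm[j], perm[i]
            cost = qap_cost(perm, flow, dist)
            if cost < best:
                best, improved = cost, True
            else:
                perm[i], perm[j] = perm[j], perm[i]   # undo the swap
    return perm, best

# Toy instance: 4 trailers, 4 doors (matrices are illustrative only).
rng = np.random.default_rng(0)
flow = rng.integers(0, 10, (4, 4)); dist = rng.integers(1, 5, (4, 4))
print(pairwise_exchange(list(range(4)), flow, dist))
```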
- Analysis of Decision Postponement Strategies for Aircraft Assignment under Uncertainty. Suwandechochai, Rawee (Virginia Tech, 2002-04-14). The ability to effectively match supply and demand can lead to significant revenue benefits in the airline industry. Airline supply management deals with assigning the right resources (i.e., aircraft and crew) to the right routes in the flight network. Due to certain crew regulations, operating characteristics, and constraints of the airline companies, these supply management decisions need to be made well in advance of departures, at a time when demand is highly uncertain. However, demand forecasts improve markedly over time, as more information on demand patterns is gathered. Thus, exploiting the flexibilities in the system that allow the partial postponement of supply decisions to a later time, when more accurate demand information is obtained, can significantly improve the airline's revenue. In this thesis, we propose and analyze the Demand Driven Swapping (DDS) approach that aims at improving the airline's revenue by reducing the supply-demand mismatches through dynamically swapping aircraft as departures approach. This research has been done in collaboration with our industrial partner, the United Airlines Research and Development Division. Due to the proximity to departures, the DDS problem is restricted by two main constraints: 1) the initial crew schedule needs to be kept intact (due to certain union contracts); and 2) airport services and operations need to be preserved to the greatest extent possible. As a result, only a limited number of simple swaps can be performed between aircraft types of the same family (i.e., crew-compatible aircraft types). However, the swaps can potentially be performed on a daily basis given the initial fleet assignments. Clearly, the swapping criteria, frequency, and timing will highly impact the revenue benefits of the DDS approach. When the swapping decisions are made several weeks prior to departures (i.e., 4-6 weeks before departures), they will not cause much disturbance to the operations, but will be performed under highly uncertain demand information. On the other hand, swapping decisions that are delayed to a later time (i.e., 1-3 weeks before departures) will decrease the possibility of bad swaps, but will result in larger costs due to the higher disruptions to airport services and operations. Thus our research objective is to provide guidelines and principles on how the flexible capacity should be managed in the system. For this purpose, we study the effectiveness of different swapping strategies, characterized in terms of their frequency and timing, for hedging against the demand uncertainty. We first study stylized analytical models to gain insights into the critical parameters that affect these benefits. Simulation models are then developed to test the validity of our analytical findings as well as to analyze more complex strategies and assess the dynamic performance of these strategies. The analytical results indicate that strategies that make the swapping decision early in time (in order to minimize disturbances to the operations) perform very well on routes where the demand uncertainty is low and the expected demands on the legs are well-balanced. Otherwise, a swapping strategy which revises the swapping decision over time should be implemented. Our simulation results, based on real data obtained from United Airlines, confirm the analytical findings.
- Analysis of the Benefits of Resource Flexibility, Considering Different Flexibility Structures. Hong, Seong-Jong (Virginia Tech, 2004-04-30). We study the benefits of resource flexibility, considering two different flexibility structures. First, we want to understand the impact of the firm's pricing strategy on its resource investment decision, considering a partially flexible resource. Secondly, we study the benefits of a flexible resource strategic approach, considering a resource flexibility structure that has not been studied in the previous literature. First, we study the capacity investment decision faced by a firm that offers two products/services and that is a price-setter for both products/services. The products offered by the firm are of varying levels (complexities), such that the resources that can be used to produce the higher level product can also be used to produce the lower level one. Although the firm needs to make its capacity investment decision under high demand uncertainty, it can utilize this limited (downward) resource flexibility, in addition to pricing, to more effectively match its supply with demand. Sample applications include a service company, whose technicians are of different capabilities, such that a higher level technician can perform all tasks performed by a lower level technician; a firm that owns a main plant, satisfying both end-product and intermediate-product demand, and a subsidiary, satisfying the intermediate-product demand only. We formulate this decision problem as a two-stage stochastic programming problem with recourse, and characterize the structural properties of the firm's optimal resource investment strategy when resource flexibility and pricing flexibility are considered in the investment decision. We show that the firm's optimal resource investment strategy follows a threshold policy. This structure allows us to understand the impact of coordinated decision-making, when the resource flexibility is taken into account in the investment decision, on the firm's optimal investment strategy, and establish the conditions under which the firm invests in the flexible resource. We also study the impact of demand correlation on the firm's optimal resource investment strategy, and show that it may be optimal for the firm to invest in both flexible and dedicated resources when product demand patterns are perfectly positively correlated. Our results offer managerial principles and insights on the firm's optimal resource investment strategy as well as extend the newsvendor problem with pricing, by allowing for multiple resources (suppliers), multiple products, and resource pooling. Secondly, we study the benefits of a delayed decision making strategy under demand uncertainty, considering a system that satisfies two demand streams with two capacitated and flexible resources. Resource flexibility allows the firm to delay its resource allocation decision to a time when partial information on demands is obtained and demand uncertainty is reduced. We characterize the structure of the firm's optimal delayed resource allocation strategy. This characterization allows us to study how the revenue benefits of the delayed resource allocation strategy depend on demand and capacity parameters, and the length of the selling season. Our study shows that the revenue benefits of this strategy can be significant, especially when demand rates of the different types are close, while resource capacities are much different.
Based on our analysis, we provide guidelines on the utilization of such strategies. Finally, we incorporate the uncertainty in demand parameters into our models and study the effectiveness of several delayed capacity allocation mechanisms that utilize the resource flexibility. In particular, we consider that demand forecasts are uncertain at the start of the selling season and are updated using a Bayesian framework as early demand figures are observed. We propose several heuristic capacity allocation policies that are easy to implement as well as a heuristic procedure that relies on a stochastic dynamic programming formulation and perform a numerical study. Our study determines the conditions under which each policy is effective.
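As a point of reference for the two-stage structure described in this entry, a generic capacity-investment-with-pricing skeleton can be written as follows. This is a textbook-style template under assumed notation (capacity vector $K$, prices $p$, quantities $q$, demand shock $\xi$), not the dissertation's exact formulation.

\[
\max_{K \ge 0} \; -c^{\top} K \;+\; \mathbb{E}_{\xi}\!\left[\; \max_{p,\, q \ge 0} \; p^{\top} q \quad \text{s.t.} \quad q \le D(p, \xi), \;\; A q \le K \right]
\]

Here the first-stage capacity $K$ is chosen before the demand shock $\xi$ is observed, and the recourse stage sets prices and production subject to the realized demand curves $D(p,\xi)$ and the capacity-consumption matrix $A$.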
- The Approach-dependent, Time-dependent, Label-constrained Shortest Path Problem and Enhancements for the CART Algorithm with Application to Transportation Systems. Jeenanunta, Chawalit (Virginia Tech, 2004-05-10). In this dissertation, we consider two important problems pertaining to the analysis of transportation systems. The first of these is an approach-dependent, time-dependent, label-constrained shortest path problem that arises in the context of the Route Planner Module of the Transportation Analysis Simulation System (TRANSIMS), which has been developed by the Los Alamos National Laboratory for the Federal Highway Administration. This is a variant of the shortest path problem defined on a transportation network comprised of a set of nodes and a set of directed arcs such that each arc has an associated label designating a mode of transportation, and an associated travel time function that depends on the time of arrival at the tail node, as well as on the node via which this node was approached. The lattermost feature is a new concept injected into the time-dependent, label-constrained shortest path problem, and is used to model turn-penalties in transportation networks. The time spent at an intersection before entering the next link would depend on whether we travel straight through the intersection, or make a right turn at it, or make a left turn at it. Accordingly, we model this situation by incorporating within each link's travel time function a dependence on the link via which its tail node was approached. We propose two effective algorithms to solve this problem by adapting two efficient existing algorithms to handle time dependency and label constraints: the Partitioned Shortest Path (PSP) algorithm and the Heap-Dijkstra (HP-Dijkstra) algorithm, and present related theoretical complexity results. In addition, we also explore various heuristic methods to curtail the search. We explore an Augmented Ellipsoidal Region Technique (A-ERT) and a Distance-Based A-ERT, along with some variants to curtail the search for an optimal path between a given origin and destination to more promising subsets of the network. This helps speed up computation without sacrificing optimality. We also incorporate an approach-dependent delay estimation function, and in concert with a search tree level-based technique, we derive a total estimated travel time and use this as a key to prioritize node selections or to sort elements in the heap. As soon as we reach the destination node, while it is within some p% of the minimum key value of the heap, we then terminate the search. We name the versions of PSP and HP-Dijkstra that employ this method as Early Terminated PSP (ET-PSP) and Early Terminated Heap-Dijkstra (ETHP-Dijkstra) algorithms. All of these procedures are compared with the original Route Planner Module within TRANSIMS, which is implemented in the Linux operating system, using C++ along with the g++ GNU compiler. Extensive computational testing has been conducted using available data from the Portland, Oregon, and Blacksburg, Virginia, transportation networks to investigate the efficacy of the developed procedures. In particular, we have tested twenty-five different combinations of network curtailment and algorithmic strategies on three test networks: the Blacksburg-light, the Blacksburg-full, and the BigNet network. The results indicate that the Heap-Dijkstra algorithm implementations are much faster than the PSP algorithmic approaches for solving the underlying problem exactly.
Furthermore, among the curtailment schemes, ETHP-Dijkstra with p=5% yields the best overall results. This method produces solutions within 0.37-1.91% of optimality, while decreasing CPU effort by 56.68% on average, as compared with applying the best available exact algorithm. The second part of this dissertation is concerned with the Classification and Regression Tree (CART) algorithm, and its application to the Activity Generation Module of TRANSIMS. The CART algorithm has been popularly used in various contexts by transportation engineers and planners to correlate a set of independent household demographic variables with certain dependent activity or travel time variables. However, the algorithm lacks an automated mechanism for deriving classification trees based on optimizing specified objective functions and handling desired side-constraints that govern the structure of the tree and the statistical and demographic nature of its leaf nodes. Using a novel set partitioning formulation, we propose new tree development, and more importantly, optimal pruning strategies to accommodate the consideration of such objective functions and side-constraints, and establish the theoretical validity of our approach. This general enhancement of the CART algorithm is then applied to the Activity Generator module of TRANSIMS. Related computational results are presented using real data pertaining to the Portland, Oregon, and Blacksburg, Virginia, transportation networks to demonstrate the flexibility and effectiveness of the proposed approach in classifying data, as well as to examine its numerical performance. The results indicate that a variety of objective functions and constraints can be readily accommodated to efficiently control the structural information that is captured by the developed classification tree as desired by the planner or analyst, depending on the scope of the application at hand.
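The following sketch illustrates the core label-setting idea for a time-dependent, approach-dependent search of the kind described in this entry, assuming FIFO (non-overtaking) arc travel times so that a Dijkstra-style expansion remains valid. The data structures, function names, and toy network are illustrative assumptions, not the PSP or Heap-Dijkstra implementations from the dissertation.

```python
# Time-dependent shortest path with approach-dependent (turn-penalty-style) travel times.
# Each heap label is (arrival time, tie-breaker, node, arc used to approach the node).
import heapq
import itertools

def td_shortest_path(arcs, travel_time, origin, dest, depart):
    """arcs: {node: [(next_node, arc_id), ...]};
       travel_time(arc_id, t, approach_arc) -> nonnegative duration, assumed FIFO."""
    counter = itertools.count()                      # tie-breaker so the heap never compares labels
    heap = [(depart, next(counter), origin, None)]
    settled = {}                                     # earliest arrival per (node, approach) label
    while heap:
        t, _, node, approach = heapq.heappop(heap)
        if node == dest:
            return t
        if settled.get((node, approach), float("inf")) <= t:
            continue
        settled[(node, approach)] = t
        for nxt, arc in arcs.get(node, []):
            heapq.heappush(heap, (t + travel_time(arc, t, approach), next(counter), nxt, arc))
    return float("inf")

# Toy network: arc "bc" is slower when approached via "ab" (a turn penalty).
arcs = {"a": [("b", "ab")], "b": [("c", "bc")], "c": []}
def tt(arc, t, approach):
    return 2.0 + (1.0 if (arc, approach) == ("bc", "ab") else 0.0)
print(td_shortest_path(arcs, tt, "a", "c", depart=0.0))
```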
- Approximating Deterministic Changes to Ph(t)/Ph(t)/1/c and Ph(t)/M(t)/s/c Queueing Models. Kulkarni, Aditya Umesh (Virginia Tech, 2012-05-25). A deterministic change to a time-varying queueing model is described as either changing the number of entities, the queue capacity, or the number of servers in the system at selected times. We use a surrogate distribution for N(t), the number of entities in the system at time t, to approximate deterministic changes to the Ph(t)/Ph(t)/1/c and the Ph(t)/M(t)/s/c queueing models. We develop a solution technique to minimize the number of state probabilities to be approximated.
- Assessing the Finite-Time Performance of Local Search Algorithms. Henderson, Darrall (Virginia Tech, 2001-05-13). Identifying a globally optimal solution for an intractable discrete optimization problem is often cost prohibitive. Therefore, solutions that are within a predetermined threshold are often acceptable in practice. This dissertation introduces the concept of B-acceptable solutions, where B is a predetermined threshold for the objective function value. It is difficult to assess a priori the effectiveness of local search algorithms, which makes the process of choosing parameters to improve their performance challenging. This dissertation introduces the B-acceptable solution probability in terms of B-acceptable solutions as a finite-time performance measure for local search algorithms. The B-acceptable solution probability reflects how effectively an algorithm has performed to date and how effectively an algorithm can be expected to perform in the future. The B-acceptable solution probability is also used to obtain necessary asymptotic convergence (with probability one) conditions. Upper and lower bounds for the B-acceptable solution probability are presented. These expressions assume particularly simple forms when applied to specific local search strategies such as Monte Carlo search and threshold accepting. Moreover, these expressions provide guidelines on how to manage the execution of local search algorithm runs. Computational experiments are reported to estimate the probability of reaching a B-acceptable solution for a fixed number of iterations. Logistic regression is applied as a tool to estimate the probability of reaching a B-acceptable solution for values of B close to the objective function value of a globally optimal solution as well as to estimate this objective function value. Computational experiments are reported with logistic regression for pure local search, simulated annealing, and threshold accepting applied to instances of the TSP with known optimal solutions.
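For orientation, one natural way to formalize the finite-time measure described above (stated here as an assumed reading of the abstract, not a quotation of the dissertation's definition) is

\[
P_B(k) \;=\; \Pr\!\Big\{ \min_{1 \le i \le k} f(X_i) \le B \Big\},
\]

the probability that a local search algorithm has visited at least one B-acceptable solution within its first $k$ iterations, where $X_1, X_2, \ldots$ are the iterates and $f$ is the objective function. Asymptotic convergence with probability one then corresponds to $P_B(k) \to 1$ as $k \to \infty$.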
- Asymptotic Worst-Case Analyses for the Open Bin Packing Problem. Ongkunaruk, Pornthipa (Virginia Tech, 2005-10-07). The open bin packing problem (OBPP) is a new variant of the well-known bin packing problem. In the OBPP, items are packed into bins so that the total content before the last item in each bin is strictly less than the bin capacity. The objective is to minimize the number of bins used. Applications of the OBPP can be found in the subway station systems in Hong Kong and Taipei and in scheduling in manufacturing industries. We show that the OBPP is NP-hard and propose two heuristic algorithms instead of solving the problem to optimality. We propose two offline algorithms in which the information of the items is known in advance. First, we consider the First Fit Decreasing (FFD) algorithm, which is a good approximation algorithm for the bin packing problem. We prove that its asymptotic worst-case performance ratio is no more than 3/2. We observe that its performance for the OBPP is worse than it is for the BPP. Consequently, we modify it so that the set of largest items serves as the set of last items in the bins. Then, we propose the Modified First Fit Decreasing (MFFD) as an alternative and prove that its asymptotic worst-case performance ratio is no more than 91/80. We conduct empirical tests to show their average-case performance. The results show that, in general, the FFD and MFFD algorithms use no more than 33% and 1% more bins than an optimal packing, respectively. In addition, the MFFD is asymptotically optimal when the sizes of items are uniformly distributed on (0,1).
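Below is a minimal sketch of First Fit Decreasing under the open-bin rule described above: an item may join a bin as long as the bin's content before that item is strictly below the capacity, so the last item in a bin may overflow. The item sizes and capacity are toy values, and the MFFD refinement (placing the largest items last) is not shown.

```python
# First Fit Decreasing adapted to the open-bin rule of the OBPP (illustrative sketch).
def ffd_open(items, capacity=1.0):
    bins = []                                   # each bin is a list of item sizes
    for item in sorted(items, reverse=True):    # consider items in decreasing size order
        for b in bins:
            if sum(b) < capacity:               # open-bin rule: content so far < capacity
                b.append(item)
                break
        else:
            bins.append([item])                 # no bin can take it; open a new bin
    return bins

print(len(ffd_open([0.6, 0.5, 0.5, 0.4, 0.3, 0.2], capacity=1.0)))
```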
- Availability Analysis for the Quasi-Renewal Process. Rehmert, Ian Jon (Virginia Tech, 2000-10-09). The behavior of repairable equipment is often modeled under assumptions such as perfect repair, minimal repair, or negligible repair. However, the majority of equipment behavior does not fall into any of these categories. Rather, repair actions do take time and the condition of equipment following repair is not strictly "as good as new" or "as bad as it was" prior to repair. Non-homogeneous processes that reflect this type of behavior are not studied nearly as much as the minimal repair case, but they are far more realistic in many situations. For this reason, the quasi-renewal process provides an appealing alternative to many existing models for describing a non-homogeneous process. A quasi-renewal process is characterized by a parameter that indicates process deterioration or improvement by falling in the interval [0,1) or (1,∞), respectively. This parameter is the amount by which subsequent operation or repair intervals are scaled in terms of the immediately previous operation or repair interval. Two equivalent expressions for the point availability of a system with operation intervals and repair intervals that deteriorate according to a quasi-renewal process are constructed. In addition to general expressions for the point availability, several theoretical distributions on the operation and repair intervals are considered and specific forms of the quasi-renewal and point availability functions are developed. The two point availability expressions are used to provide upper and lower bounds on the approximated point availability. Numerical results and general behavior of the point availability and quasi-renewal functions are examined. The framework provided here allows for the description and prediction of the time-dependent behavior of a non-homogeneous process without the assumption of limiting behavior, a specific cost structure, or minimal repair.
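For reference, the defining property of a quasi-renewal process can be written compactly. This is the standard construction (successive intervals scaled in distribution by a constant factor), stated under assumed notation rather than quoted from the dissertation:

\[
X_n \;\stackrel{d}{=}\; \alpha^{\,n-1} X_1, \qquad n = 1, 2, \ldots,
\]

so that $\alpha \in (0,1)$ yields successively shorter (deteriorating) operation intervals, $\alpha > 1$ yields successively longer intervals (e.g., growing repair times on an aging system), and $\alpha = 1$ recovers the ordinary renewal process.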
- Availability Analysis for the Quasi-Renewal Process with an Age-Dependent Preventive Maintenance Policy. Intiyot, Boonyarit (Virginia Tech, 2007-09-05). A quasi-renewal process is more realistic in modeling the behavior of a repairable system than traditional models such as perfect repair and minimal repair since it reflects the deterioration process of the system over time while traditional models do not. The quasi-renewal parameter is set to a value between 0 and 1 to indicate the rate of deterioration. Moreover, a quasi-renewal process can also be used to model the increasing time of maintenance actions due to the increasing difficulty of maintaining an aging system by setting the parameter to a value larger than 1. We construct a model where the operating times follow a quasi-renewal process and the corrective/preventive maintenance times follow another quasi-renewal process. A quasi-renewal function and two equivalent point availability expressions are developed for the model described by a quasi-renewal process with an age-dependent preventive maintenance policy. In addition, numerical results from various theoretical distributions are obtained to illustrate the behavior of the models. The two equivalent point availability functions each contain an infinite sum and must be truncated to obtain a numerical approximation. The two approximated point availability functions form upper and lower bounds on the real value. The bounds are useful for determining the result accuracy, which can be arbitrarily increased by adding more terms to the truncated summation. Our framework provides a new time-dependent availability model for a non-stationary process with a preventive maintenance policy without any cost structure or optimization problem.
- Benefits of integrated screening and vaccination for infection control. Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K. (2021-12). Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics. Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations. Design, Setting, and Participants: We construct a compartmental model of disease spread for vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial routine screening, vaccination with full routine screening), and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members, with 11 undetected, asymptomatic SARS-CoV-2 infections at the start of an 80-day semester. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccination coverage, and screening and vaccination compliance; and consider three scenarios that represent low/medium/high transmission rates and test efficacy. Model parameters come from publicly available or published sources. Results: With low initial vaccination coverage, even aggressive vaccination and screening result in a high number of infections: 1,024/2,040 (1,532/1,773) with routine daily (every other day) screening of the unvaccinated; 275/895 with daily screening extended to the newly vaccinated in base- and worst-case scenarios, with reproduction numbers 4.75 and 6.75, respectively, representative of the COVID-19 Delta variant. With the emergence of the Omicron variant, the reproduction number may increase and/or effective vaccine coverage may decrease if a booster shot is needed to maximize vaccine efficacy. Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when initial vaccination coverage is sufficiently high. The interventions need to be customized considering the initial vaccination coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.
- Benefits of integrated screening and vaccination for infection control. Rabil, Marie Jeanne; Tunc, Sait; Bish, Douglas R.; Bish, Ebru K. (PLOS, 2022-04-21). Importance: Screening and vaccination are essential in the fight against infectious diseases, but need to be integrated and customized based on community and disease characteristics. Objective: To develop effective screening and vaccination strategies, customized for a college campus, to reduce COVID-19 infections, hospitalizations, deaths, and peak hospitalizations. Design, setting, and participants: We construct a compartmental model of disease spread under vaccination and routine screening, and study the efficacy of four mitigation strategies (routine screening only, vaccination only, vaccination with partial or full routine screening), and a no-intervention strategy. The study setting is a hypothetical college campus of 5,000 students and 455 faculty members during the Fall 2021 academic semester, when the Delta variant was the predominant strain. For sensitivity analysis, we vary the screening frequency, daily vaccination rate, initial vaccine coverage, and screening and vaccination compliance; and consider scenarios that represent low/medium/high transmission and test efficacy. Model parameters come from publicly available or published sources. Results: With low initial vaccine coverage (30% in our study), even aggressive vaccination and screening result in a high number of infections: 1,020 to 2,040 (1,530 to 2,480) with routine daily (every other day) screening of the unvaccinated; 280 to 900 with daily screening extended to the newly vaccinated in base- and worst-case scenarios, which respectively consider reproduction numbers of 4.75 and 6.75 for the Delta variant. Conclusion: Integrated vaccination and routine screening can allow for a safe opening of a college when both the vaccine effectiveness and the initial vaccine coverage are sufficiently high. The interventions need to be customized considering the initial vaccine coverage, estimated compliance, screening and vaccination capacity, disease transmission and adverse outcome rates, and the number of infections/peak hospitalizations the college is willing to tolerate.
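The compartmental models in the two entries above are not reproduced here, but a minimal sketch of the mechanism (susceptibles vaccinated over time, infectious individuals removed by routine screening) is given below. The compartments, rates, and parameter values are illustrative assumptions only, not the published model or its calibrated parameters.

```python
# Toy compartmental sketch: vaccination of susceptibles plus screening-based isolation
# of infectious individuals, stepped daily with a simple Euler update.
def simulate(days=80, N=5455, I0=11, beta=0.6, gamma=1/7, nu=0.02,
             screen_freq=1.0, sensitivity=0.8, vaccine_eff=0.85):
    S, V, I, R = N - I0, 0.0, float(I0), 0.0
    history = []
    for _ in range(days):
        new_inf_S = beta * S * I / N                      # infections among the unvaccinated
        new_inf_V = (1 - vaccine_eff) * beta * V * I / N  # breakthrough infections
        detected = screen_freq * sensitivity * I          # isolated via routine screening
        recovered = gamma * I
        vaccinated = nu * S                               # daily vaccination of susceptibles
        S += -new_inf_S - vaccinated
        V += vaccinated - new_inf_V
        I += new_inf_S + new_inf_V - detected - recovered
        R += detected + recovered
        history.append(I)
    return history

print(max(simulate()))   # peak number of circulating infections in this toy run
```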
- Branch-and-Price Method for Stochastic Generalized Assignment Problem, Hospital Staff Scheduling Problem and Stochastic Short-Term Personnel Planning Problem. Kim, Seon Ki (Virginia Tech, 2009-03-05). The work presented in this dissertation has been focused on exploiting the branch-and-price (BNP) method for the solution of various stochastic mixed integer programming problems (MIPs). In particular, we address the stochastic generalized assignment problem (SGAP), a hospital staff scheduling problem (HSSP), a stochastic hospital staff scheduling problem (SHSSP), and a stochastic short-term personnel planning problem (SSTPP). The BNP method has been developed in concert with the dual stabilization technique and other enhancements of this method for each of these problems. In view of an excessive number of scenarios that arise for these problems, we also implement the Monte Carlo method within the BNP scheme. The superiority of the BNP-based method over the branch-and-cut (BNC) method is demonstrated for all of these problems. The first problem that we address is the SGAP, for which the processing time of a job on a machine is assumed to be stochastic. Although the generalized assignment problem (GAP) has been solved using the BNP method, no study has been reported in the literature on the use of the BNP method for the solution of the SGAP. Our work has been motivated by the desire to fill this gap. We begin by showing that it is better to solve the SGAP as a stochastic program in contrast to solving it by using the expected values of the times required to process the jobs on the machines. Then, we show that the stochastic model of the SGAP is a complete recourse model, a useful property that permits the first-stage decisions to produce feasible solutions for the recourse problems. We develop three BNP-based methods for the solution of the SGAP. The first of these is BNP-SGAP, which is a combination of branch-and-bound and column generation methods. The pricing problem of BNP-SGAP is separable with regard to each machine, and it is a multiple-constraint knapsack problem. The second method is BNP-SGAP implemented in concert with the dual stabilization technique (DST), and it is designated as BNPDST-SGAP. We have introduced a new DST by modifying the Boxstep method of Pigatti et al. [76]. We have shown that our method performs better than the method of Pigatti et al. [76], resulting in over two-fold savings in cpu times on average. The third method that we develop for the solution of the SGAP is BNPDST-SGAP implemented with an advanced start to obtain an initial feasible solution. We use a greedy heuristic to obtain this solution, and this heuristic is a modification of a similar method used for the knapsack problem. It relies on the information available at a node of the underlying branch-and-bound tree. We have shown that this procedure obtains an initial feasible solution, if it exists at that node. We designate this method as BNPDSTKP-SGAP. We have also developed a BNC method to solve the SGAP using CPLEX 9.0. We have compared the performances of the BNP and BNC methods on various problem instances obtained by varying the number of machines, the ratio of the number of machines to the number of jobs, the machine capacity, and the penalty cost per unit of extra resource required at each machine. Our results show that all BNP-based methods perform better than the BNC method, with the best performance obtained for BNPDSTKP-SGAP.
An issue with the use of the scenario-based methods that we have employed for the solution of the SGAP is that the number of scenarios generally grows exponentially in problem parameters, which gives rise to a large-size problem. To overcome the complexity caused by the presence of a large number of scenarios for the solution of the SGAP, we introduce the use of the Monte Carlo method (MCM) within the BNP scheme. We designate this method as BNPDSTKP-SGAP with MCM. It affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Replications of the subsets of scenarios are carried out until the objective function value satisfies a stopping criterion. We have established theoretical results for the use of the MCM. These pertain to determining unbiased estimates of: (i) lower and upper bounds of the "true" optimal objective function value, (ii) the "true" optimal solution, and (iii) the optimality gap. We have also provided the 100(1-α)% confidence interval on the optimality gap. Our experimental investigation has shown the efficacy of using this method. It obtains almost optimal solutions, with the objective function value lying within 5% of the "true" optimal objective function value, while giving almost ten-fold savings in cpu time. Our experimentation has also revealed that an increment in the number of scenarios in each replication makes a greater impact on the quality of the solution obtained than an increment in the number of replications. We have also observed the impact of a change in the variance of a processing time distribution on cpu time. As expected, the optimal objective function value increases with an increase in processing time variability. Also, by comparing the results with the expected value solution, it is observed that the greater the variability in the data, the better it is to use the stochastic program. The second problem that we study is the hospital staff scheduling problem. We address the following three versions of this problem: HSSP (General): Implementation of a schedule incorporating the four principal elements, namely, surgeons, operations, operating rooms, and operation times; HSSP (Priority): Inclusion of priority for some surgeons over the other surgeons regarding the use of the facility in HSSP (General); HSSP (Pre-arranged): Implementation of a completely pre-fixed schedule for some surgeons. The consideration of priority among the surgeons mimics reality. Our BNP method for the solution of these problems is similar to that for the SGAP except for the following: (i) a feasible solution at a node is obtained with no additional assignment, i.e., it consists of the assignments made in the preceding nodes of that node in the branch-and-bound tree; (ii) the columns with positive reduced cost are candidates for augmentation in the CGM; and (iii) a new branching variable selection strategy is introduced, which selects as the branching variable a fractional variable whose fixing forces the largest number of variables to either 0 or 1. The priority problem is separable across surgeons. The results of our experimentation have shown the efficacy of using the BNP-based method for the solution of each HSSP, as it takes advantage of the inherent structure of each of these problems. We have also compared their performances with that of the BNC method developed using CPLEX.
For the formulations HSSP (General), HSSP (Priority), and HSSP (Pre-arranged), the BNP method gives better results for 22 out of 30, 29 out of 34, and 20 out of 32 experiments over the BNC method, respectively. Furthermore, while the BNC method fails to obtain an optimal solution for 15 experiments, the BNP method obtains optimal solutions for all 96 experiments conducted. Thus, the BNP method consistently outperforms the BNC method for all of these problems. The third problem that we have investigated in this study is the stochastic version of the HSSP, designated as the Stochastic HSSP (SHSSP), in which the operation times are assumed to be stochastic. We have introduced a formulation for this problem, designated as SHSSP2 (General), which allows for overlapping of schedules for surgeons and operating rooms, and also allows a surgeon to be assigned to perform an operation that takes less than a pre-arranged operation time, with both options incurring appropriate penalty costs. A comparison of the solution of SHSSP2 (General) and its value with those obtained by using expected values (the corresponding problem is designated as Expected-SHSSP2 (General)) reveals that Expected-SHSSP2 (General) may end up with inferior and infeasible schedules. We show that the recourse model for SHSSP2 (General) is a relatively complete recourse model. Consequently, we use the Monte Carlo method (MCM) to reduce the complexity of solving SHSSP2 (General) by considering fewer scenarios. We employ the branch-and-cut (BNC) method in concert with the MCM for solving SHSSP2 (General). The solution obtained is evaluated using tolerance ratio, closeness to optimality, length of confidence interval, and cpu time. The MCM substantially reduces computational effort while producing almost optimal solutions and small confidence intervals. We have also considered a special case of SHSSP2 (General), which considers no overlapping schedules for surgeons and operating rooms and assigns exactly the same operation time for each assignment under each scenario, and designate it as SHSSP2 (Special). With this, we consider another formulation that relies on the longest operation time among all scenarios for each assignment of a surgeon to an operation in order to avoid scheduling conflicts, and we designate this problem as SHSSP (Longest). We show SHSSP (Longest) to be equivalent to deterministic HSSP, designated as HSSP (Equivalent), and we further prove it to be equivalent to SHSSP (General) in terms of the optimal objective function value and the optimal assignments of operations to surgeons. The schedule produced by HSSP (Equivalent) does not allow any overlap among the operations performed in an operating room. That is, a new operation cannot be performed if a previous operation scheduled in that room takes longer than expected. However, the schedule generated by HSSP (Equivalent) may turn out to be a conservative one, and may end up with voids due to unused resources in case an operation in an operating room is completed earlier than the longest time allowed. Nevertheless, the schedule is still a feasible one. In such a case, the schedule can be left-shifted, if possible, because the scenarios are now revealed. Moreover, such voids could be used to perform other procedures (e.g., emergency operations) that have not been considered within the scope of the SHSSP addressed here. Besides, such a schedule can provide useful guidelines to plan for resources ahead of time.
The fourth problem that we have addressed in this dissertation is the stochastic short-term personnel planning problem, designated as Stochastic STPP (SSTPP). This problem arises due to the need for finding appropriate temporary contractors (workers) to perform requisite jobs. We incorporate uncertainty in the processing time or the amount of resource required by a contractor to perform a job. Contrary to the SGAP, the recourse model for this problem is not a relatively complete recourse model. As a result, we cannot employ the MCM for the solution of this problem, as it may give rise to an infeasible solution. The BNP method for the SSTPP employs the DST and the advanced start procedure developed for the SGAP, and due to extra constraints and the presence of binary decision variables, we use the branching variable selection strategy developed for the HSSP models. Because of the distinctive properties of the SSTPP, we have introduced a new node selection strategy. We have compared the performances of the BNC-based and BNP-based methods based on the cpu time required. The BNP method outperforms the BNC method in 75% of the experiments conducted, and the BNP method is found to be quite stable, with a smaller variance in cpu times than the BNC method. It affords solution of difficult problems in smaller cpu times than those required for the BNC method.
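To make the branch-and-price structure concrete, the standard column-generation decomposition of the deterministic GAP (shown here for orientation under assumed notation; the dissertation works with stochastic and scheduling variants of this scheme) pairs a set-partitioning master problem with one knapsack pricing problem per machine:

\[
\min \sum_{m}\sum_{k \in K_m} c_{mk}\,\lambda_{mk}
\quad \text{s.t.} \quad
\sum_{m}\sum_{k \in K_m} a_{jmk}\,\lambda_{mk} = 1 \;\; \forall j, \qquad
\sum_{k \in K_m} \lambda_{mk} = 1 \;\; \forall m, \qquad \lambda_{mk} \in \{0,1\},
\]

where column $k \in K_m$ encodes a feasible set of jobs for machine $m$ (with $a_{jmk}=1$ if job $j$ is in the column) and $c_{mk}$ is its cost. Given duals $\pi_j$ and $\mu_m$ of the restricted master, the pricing problem for machine $m$ is the knapsack

\[
\min_{x \in \{0,1\}^n} \; \sum_{j} (c_{mj} - \pi_j)\,x_j - \mu_m
\quad \text{s.t.} \quad \sum_{j} r_{mj}\, x_j \le b_m,
\]

and columns with favorable reduced cost are added to the master, with branching applied when the master solution is fractional.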
- Capacity Investment, Flexibility, and Product Substitution/Complementarity under Demand Uncertainty. Suwandechochai, Rawee (Virginia Tech, 2005-12-15). We provide a comprehensive characterization of the relationship between optimal capacity and the degree of product substitution/complementarity under price/production postponement, considering different business practices (holdback versus clearance, negative price policies) and different demand models. Specifically, we consider a firm that produces two products, which can be substitutable or complementary. The demand of each product is a linear function of the prices of both products (with the relationship depending on the substitution/complementarity structure), and is subject to an additive stochastic shock. We consider two types of linear demand functions that are commonly used in the economics and operations management literature. The firm operates in a monopolistic setting and acts as a price-setter for both products. Overall, the firm needs to make three sets of decisions: capacity, production quantities, and prices. While the capacity investment decision has to be made ex ante, before the demand curves are observed, price and/or quantity decisions can be postponed until after demand curves are observed. We consider two postponement strategies: price and quantity postponement, and price postponement only. We characterize the optimal pricing/production/investment decisions for each postponement strategy. Using these characterizations, we show that product substitution/complementarity is a key demand characteristic, which has a large impact on the optimal capacity. Our results show that how the optimal capacity behaves in the substitution/complementarity parameter is quite similar under both postponement strategies, and under holdback and clearance. However, this behavior depends highly on other underlying assumptions (i.e., whether or not negative prices are allowed) and on the demand model used.
- A Combined Inventory-Location Model for Distribution Network Design. Hodgdon, Tammy Jo (Virginia Tech, 2004-11-15). Two important areas of decision-making in distribution system design involve facility location and inventory policy determination. Facility location addresses questions such as how many facilities should be opened, where they should be located, and which customers should be assigned to which DCs. Inventory policy determination involves more tactical decisions such as the order quantities and frequencies at each level or echelon in the network. It is believed that these two decisions can influence each other significantly. Including a multi-echelon inventory policy decision in a location analysis allows a user to capitalize on the strengths that each DC has to offer (e.g., lower labor rates, land costs, etc.). Likewise, when the locations of two facilities are known, a multi-echelon inventory policy can be better designed to incorporate the exact lead times and fixed costs between the facilities at each level of the system. Despite this, the two problems are typically solved independently. This research addresses these problems together and investigates different heuristic methods for solving a combined inventory-location model. We begin by presenting the background and formulation for each problem. These formulations are then combined to show how the two problems can be mathematically formulated together. Rather than solve the problem exactly, two heuristic methods using different philosophies are tested. We apply these heuristic methods to the combined inventory-location problem to determine how much we can improve distribution network design solutions and what type of heuristic methodology is most effective in gaining these improvements. Our results show that the combined inventory-location model is capable of improving on the solutions obtained by a location model with a fixed inventory policy. The improvement based on the data sets tested in this research was approximately $60,000. However, in cases where the inventory costs are a larger portion of the total cost, the improvement made by the inventory-location model increased to over $1,000,000. We also found that our second heuristic method provided statistically significant improvements over our first heuristic method. Moreover, the second heuristic method typically ran 67% faster. The improved results, although small in a relative sense (the average improvement was 0.18%), would still represent a large absolute improvement in supply chain costs. As much as $174,000 was saved in the data sets tested for this research.
- Comparative Statics Analysis of Some Operations Management Problems. Zeng, Xin (Virginia Tech, 2012-08-08). We propose a novel analytic approach for the comparative statics analysis of operations management problems on the capacity investment decision and the influenza (flu) vaccine composition decision. Our approach involves exploiting the properties of the underlying mathematical models, and linking those properties to the concept of stochastic order relationships. The use of stochastic orders allows us to establish our main results without restriction to a specific distribution. A major strength of our approach is that it is "scalable," i.e., it applies to the capacity investment decision problem with any number of non-independent (i.e., demand or resource sharing) products and resources, and to the influenza vaccine composition problem with any number of candidate strains, without a corresponding increase in computational effort. This is unlike the current approaches commonly used in the operations management literature, which typically involve a parametric analysis followed by the use of the implicit function theorem. Providing a rigorous framework for comparative statics analysis, which can be applied to other problems that are not amenable to traditional parametric analysis, is our main contribution. We demonstrate this approach on two problems: (1) the capacity investment decision, and (2) the influenza vaccine composition decision. A comparative statics analysis is integral to the study of these problems, as it allows answers to important questions such as, "Does the firm acquire more or less of the different resources available as demand uncertainty increases? Does the firm benefit from an increase in demand uncertainty? How does the vaccine composition change as the yield uncertainty increases?" Using our proposed approach, we establish comparative statics results on how the newsvendor's expected profit and optimal capacity decision change with demand risk and demand dependence in multi-product multi-resource newsvendor networks; and how the societal vaccination benefit, the manufacturer's profit, and the vaccine output change with the risk of random yield of strains.
- Comparison of Scheduling Algorithms for a Multi-Product Batch-Chemical Plant with a Generalized Serial Network. Tra, Niem-Trung L. (Virginia Tech, 2000-01-24). Despite recent advances in computer power and the development of better algorithms, theoretical scheduling methodologies developed for batch-chemical production are seldom applied in industry (Musier & Evans 1989 and Grossmann et al. 1992). Scheduling decisions may have significant impact on overall company profitability by defining how capital is utilized, the operating costs required, and the ability to meet due dates. The purpose of this research is to compare different production scheduling methods by applying them to a real-world multi-stage, multi-product, batch-chemical production line. This research addresses the problem that the theoretical algorithms are seldom applied in industry and allows for performance analysis of several theoretical algorithms. The research presented in this thesis focuses on the development and comparison of several scheduling algorithms. The two objectives of this research are to: 1. modify different heuristic production scheduling algorithms to minimize tardiness for a multi-product batch plant involving multiple processing stages with several out-of-phase parallel machines in each stage; and 2. compare the robustness and performance of these production schedules using a stochastic discrete event simulation of a real-world production line. The following three scheduling algorithms are compared: 1. a modified Musier and Evans scheduling algorithm (1989); 2. a modified Ku and Karimi Sequence Building Algorithm (1991); and 3. a greedy heuristic based on an earliest-due-date (EDD) policy. Musier and Evans' heuristic improvement method (1989) is applied to the three algorithms. The computation times to determine the total tardiness of each schedule are compared. Finally, all the schedules are tested for robustness and performance in a stochastic setting with the use of a discrete event simulation (DES) model. Mignon, Honkomp, and Reklaitis' evaluation techniques (1995) and Multiple Comparison of the Best are used to help determine the best algorithm.
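As a minimal illustration of the EDD-based greedy policy mentioned above, the sketch below dispatches jobs to parallel machines in earliest-due-date order and totals the tardiness. It is a single-stage simplification with hypothetical job data, not the multi-stage batch-chemical implementation compared in the thesis.

```python
# Earliest-due-date (EDD) dispatch on parallel machines, tracking total tardiness.
import heapq

def edd_schedule(jobs, n_machines):
    """jobs: list of (job_id, processing_time, due_date); returns total tardiness."""
    machines = [(0.0, m) for m in range(n_machines)]        # (time machine frees up, id)
    heapq.heapify(machines)
    tardiness = 0.0
    for job_id, proc, due in sorted(jobs, key=lambda j: j[2]):   # EDD order
        free_at, m = heapq.heappop(machines)                 # earliest-available machine
        finish = free_at + proc
        tardiness += max(0.0, finish - due)
        heapq.heappush(machines, (finish, m))
    return tardiness

print(edd_schedule([("A", 4, 5), ("B", 3, 4), ("C", 6, 8), ("D", 2, 3)], n_machines=2))
```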
- Decision Support for Casualty Triage in Emergency Response. Kamali, Behrooz (Virginia Tech, 2016-05-04). Mass-casualty incidents (MCI) cause a sudden increase in demand for medical resources in a region. The most important and challenging task in addressing an MCI is managing overwhelmed resources with the goal of increasing the total number of survivors. Currently, most of the decisions following an MCI are made in an ad-hoc manner or by following static guidelines that do not account for the amount of available resources and the number of casualties. The purpose of this dissertation is to introduce and analyze sophisticated service prioritization and resource allocation tools. These tools can be used to produce service order strategies that increase the overall number of survivors. Several models are proposed that account for the number and mix of casualties and the amount and type of resources available. The large number of elements involved in this problem makes the model very complex, and thus, in order to gain some insights into the structure of the optimal solutions, some of the proposed models are developed under simplifying assumptions. These assumptions include limitations on the number of casualty types, handling of deaths, servers, and types of resources. Under these assumptions, several characteristics of the optimal policies are identified, and optimal algorithms for various scenarios are developed. We also develop an integrated model that addresses service order, transportation, and hospital selection. A comprehensive set of computational results and comparisons with related works in the literature are provided in order to demonstrate the efficacy of the proposed methodologies.
- A Demand Driven Re-fleeting Approach for Aircraft Assignment Under Uncertainty. Zhu, Xiaomei (Virginia Tech, 2001-07-23). The current airline practice is to assign aircraft capacity to scheduled flights well in advance of departure. At such an early stage in this process, the high uncertainty of demand poses a major impediment for airlines to best match the airplane capacities with the final demand. However, the accuracy of the demand forecast improves markedly over time, and revisions to the initial fleet assignment become naturally pertinent when the observed demand considerably differs from the assigned aircraft capacity. The Demand Driven Re-fleeting (DDR) approach proposed in this thesis offers a dynamic re-assignment of aircraft capacity to the flight network, as and when improved demand forecasts become available, so as to maximize the total revenue. Because of the need to preserve the initial crew schedule, this re-assignment approach is limited to a single family of aircraft and to the flights assigned to this particular family. This restriction significantly reduces the problem size. As a result, it becomes computationally tractable to include path-level demand information in the DDR model, although the problem size can then get very large because of the numerous combinations of composing paths from legs. As an extension, models considering path-class level differences, day-of-week demand variations, and re-capture effects are also presented. The DDR model for a single family with path-level demand considerations is formulated as a mixed-integer programming problem. The model's polyhedral structure is studied to explore ways for tightening its representation and for deriving certain classes of valid inequalities. Various approaches for implementing such reformulation techniques are investigated and tested. The best of these procedures for solving large-scale challenging instances of the problem turns out to be an integrated approach that uses certain selected model augmentations and valid inequalities generated via a suitable separation routine and a partial convex hull construction process. Using this strategy in concert with properly selected CPLEX options reduces the CPU time by an average factor of 7.48 over an initial model for a test-bed of problems each having 200 flights in total. Prompted by this integrated heuristic approach, a procedure for finding solutions within a prescribed limit of optimality is suggested. To demonstrate the effectiveness of these developed methodologies, we also solved two large-scale practical-sized networks that respectively involve 800 and 1060 flights, and 18196 and 33105 paths in total, with 300 and 396 flights belonging to the designated family. These problems were typically solved within 6 hours on a SUN Ultra 1 Workstation having 260 MB RAM and a clock-speed of 167 MHz, with one exception that required 14 hours of CPU time. This level of computational effort is acceptable considering that such models are solved at a planning stage in the decision process.
- Discrete Two-Stage Stochastic Mixed-Integer Programs with Applications to Airline Fleet Assignment and Workforce Planning Problems. Zhu, Xiaomei (Virginia Tech, 2006-04-21). Stochastic programming is an optimization technique that incorporates random variables as parameters. Because it better reflects the uncertain real world than its traditional deterministic counterpart, stochastic programming has drawn increasing attention among decision-makers, and its applications span many fields including financial engineering, health care, communication systems, and supply chain management. On the flip side, stochastic programs are usually very difficult to solve, which is further compounded by the fact that in many of the aforementioned applications, we also have discrete decisions, thereby rendering these problems even more challenging. In this dissertation, we study the class of two-stage stochastic mixed-integer programs (SMIP), which, as its name suggests, lies at the confluence of two formidable classes of problems. We design a novel algorithm for this class of problems, and also explore specialized approaches for two related real-world applications. Although a number of algorithms have been developed to solve two-stage SMIPs, most of them deal with problems containing purely integer or continuous variables in either or both of the two stages, and frequently require the technology and/or recourse matrices to be deterministic. As a ground-breaking effort, in this work, we address the challenging class of two-stage SMIPs that involve 0-1 mixed-integer variables in both stages. The only earlier work on solving such problems (Carøe and Schultz, 1999) requires the optimization of several non-smooth Lagrangian dual problems using subgradient methods in the bounding process, which turns out to be computationally very expensive. We begin by proposing a decomposition-based branch-and-bound (DBAB) algorithm for solving two-stage stochastic programs having 0-1 mixed-integer variables in both stages. Since the second-stage problems contain binary variables, their value functions are in general nonconvex and discontinuous; hence, the classical Benders' decomposition approach (or the L-shaped method) for solving two-stage stochastic programs, which requires convex subproblem value functions, cannot be directly applied. This motivates us to relax the second-stage problems and accompany this relaxation with a convexification process. To make this process computationally efficient, we propose to construct a certain partial convex hull representation of the two-stage solution space, using the relaxed second-stage constraints and the restrictions confining the first-stage variables to lie within some hyperrectangle. This partial convex hull is sequentially generated using a convexification scheme, such as the Reformulation-Linearization Technique (RLT), which yields valid inequalities that are functions of the first-stage variables and, of noteworthy importance, are reusable in the subsequent subproblems by updating the values of the first-stage variables. Meanwhile, since the first stage contains continuous variables, whenever we tentatively fix these variables at some given feasible values, the resulting constraints may not be facial with respect to the associated bounding constraints that are used to construct the partial convex hull.
As a result, the constructed Benders' subproblems define lower bounds for the second-stage value functions, and likewise, the resulting Benders' master problem provides a lower bound for the original stochastic program defined over the same hyperrectangle. Another difficulty resulting from continuous first-stage variables is that when the given first-stage solution is not extremal with respect to its bounds, the second-stage solution obtained for a Benders' subproblem defined with respect to a partial convex hull representation in the two-stage space may not satisfy the model's binary restrictions. We thus need to be able to detect whether or not a Benders' subproblem is solved by a given fractional second-stage solution, and we design a novel procedure to perform this check within the overall algorithmic scheme. A key property established, which ensures global convergence, is that these lower bounds become exact if the given first-stage solution is a vertex of the defining hyperrectangle, or if the second-stage solution satisfies the binary restrictions. Based on these algorithmic constructs, we design a branch-and-bound procedure in which the branching process performs a hyperrectangular partitioning of the projected space of the first-stage variables, and lower bounds for the nodal problems are computed by applying the proposed modified Benders' decomposition method (a schematic of such a node-processing loop is sketched below). We prove that, when using the least-lower-bound node-selection rule, this algorithm converges to a global optimal solution. We also show that the derived RLT cuts are not only reusable in subsequent Benders iterations at the same node, but are also inheritable by the subproblems of the children nodes. Likewise, the Benders' cuts derived for a given sub-hyperrectangle can be inherited by the lower-bounding master programs solved for its children nodes. Using these cut-inheritance properties results in significant savings in the overall computational effort. Numerical examples and computational results are presented to demonstrate the efficacy of this approach. The deterministic equivalents of our test problems range in size from 386 continuous variables, 386 binary variables, and 386 constraints up to 1,795 continuous variables, 1,539 binary variables, and 1,028 constraints. The results reveal an average savings in computational effort by a factor of 9.5 in comparison with using a commercial mixed-integer programming package (CPLEX 8.1) on the deterministic equivalent formulation. We then explore an important application of SMIP to enhance the traditional airline fleet assignment models (FAM). Given a flight schedule network, the fleet assignment problem solved by airline companies is concerned with assigning aircraft to flight legs so as to maximize profit with respect to captured path- or itinerary-based demand. Because certain related crew scheduling regulations require early information regarding the type of aircraft serving each flight leg, the current practice adopted by airlines is to solve the fleet assignment problem using estimated demand data 10-12 weeks in advance of departure. Given this level of uncertainty, deterministic models at such an early stage are inadequate for obtaining a good match of aircraft capacity with passenger demands, and revisions to the initial fleet assignment become naturally pertinent when the observed demand differs considerably from the assigned aircraft capacities.
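Before turning to the fleet assignment application in detail, the node-processing logic described above can be summarized in schematic form. The Python sketch below is a generic least-lower-bound spatial branch-and-bound loop over a hyperrectangle; the `lower_bound`, `incumbent_value`, and `split` callables are placeholders for the Benders'/RLT machinery and are assumptions of this sketch, not the dissertation's DBAB implementation.

```python
import heapq

def least_lower_bound_bnb(box, lower_bound, incumbent_value, split,
                          tol=1e-6, max_nodes=10_000):
    """Generic spatial branch-and-bound over a hyperrectangle ('box') of
    first-stage variables, using the least-lower-bound node-selection rule.

    `lower_bound(box)`, `incumbent_value(box)`, and `split(box)` are
    caller-supplied placeholders (assumptions of this sketch) standing in for
    the node's bounding problem, a feasible evaluation, and the
    hyperrectangular partitioning step, respectively.
    """
    best_val, best_box = incumbent_value(box), box
    heap = [(lower_bound(box), 0, box)]          # (bound, tie-breaker, node)
    counter, nodes = 1, 0
    while heap and nodes < max_nodes:
        bound, _, node = heapq.heappop(heap)     # least lower bound first
        if bound >= best_val - tol:              # bound-ordered heap: nothing left can improve
            break
        val = incumbent_value(node)              # feasible (upper-bounding) value
        if val < best_val:
            best_val, best_box = val, node
        for child in split(node):                # hyperrectangular partitioning
            cb = lower_bound(child)
            if cb < best_val - tol:              # prune children that cannot help
                heapq.heappush(heap, (cb, counter, child))
                counter += 1
        nodes += 1
    return best_val, best_box

if __name__ == "__main__":
    # Toy illustration: minimize (x - 0.3)^2 over the interval [0, 1].
    lb = lambda b: 0.0 if b[0] <= 0.3 <= b[1] else min(abs(b[0] - 0.3), abs(b[1] - 0.3)) ** 2
    val = lambda b: ((b[0] + b[1]) / 2 - 0.3) ** 2          # evaluate at the midpoint
    split = lambda b: [(b[0], (b[0] + b[1]) / 2), ((b[0] + b[1]) / 2, b[1])]
    print(least_lower_bound_bnb((0.0, 1.0), lb, val, split))
```

In the actual DBAB scheme, the lower bound at a node comes from the RLT-tightened Benders' master problem over that node's hyperrectangle, and the RLT and Benders' cuts generated there are inherited by the children nodes rather than regenerated from scratch.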
From this viewpoint, the initial decision should embrace various market scenarios so that it incorporates a sufficient look-ahead feature and provides enough flexibility for the subsequent re-fleeting processes to accommodate the inevitable demand fluctuations. With this motivation, we propose a two-stage stochastic programming approach in which the first stage is concerned with the initial fleet assignment decisions and, unlike the traditional deterministic methodology, focuses on making only a family-level assignment to each flight leg. The second stage subsequently performs the detailed assignment of fleet types within the allotted family to each leg under each of multiple potential scenarios that address corresponding path- or itinerary-based demands; a simplified sketch of this family-then-type structure appears below. In this fashion, the initial decision of which aircraft family should serve each flight leg accomplishes the purpose of facilitating the necessary crew scheduling decisions, while judiciously examining the outcome of future re-fleeting actions based on different possible demand scenarios. Hence, when the actual re-fleeting process is enacted several weeks later, this anticipatory initial family-level assignment will hopefully provide an improved overall fleet-type re-allocation that better matches demand. This two-stage stochastic model is complemented with a secondary model that performs adjustments within each family, if necessary, to provide consistent fleet-type assignment information for accompanying decision processes, such as yield management. We also propose several enhanced fleet assignment models, including a robust optimization model that controls decision variation among scenarios and a stochastic programming model that considers the recapture effect of spilled demand. In addition to the above modeling concepts and framework, we also contribute by developing effective solution approaches for the proposed model, which is a large-scale two-stage stochastic 0-1 mixed-integer program. Because the most pertinent information needed from the initial fleet assignment is at the family level, and the type-level assignment is subject to change at the re-fleeting stage according to future demand realizations, our solution approach focuses on assigning aircraft families to the different legs in the flight network at the first stage, while finding relaxed second-stage solutions under different demand scenarios. Based on a polyhedral study of a subsystem extracted from the original model, we derive certain higher-dimensional convex hull as well as partial convex hull representations for this subsystem. Accordingly, we propose two variants of the primary model, both of which relax the binary restrictions on the second-stage variables, but where the second variant also accommodates the partial convex hull representations, yielding a tighter, albeit larger, relaxation. For each variant, we design a suitable solution approach predicated on the Benders' decomposition methodology. Using realistic large-scale flight network test problems obtained from United Airlines, having 900 flight legs and 1,814 paths, the proposed stochastic modeling approach was demonstrated to increase daily expected profits by about 3% (which translates to about $160 million per year) in comparison with the traditional deterministic model in present usage, which considers only the expected demand.
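To make the family-then-type structure concrete, here is a deliberately simplified sketch of the two-stage stochastic fleet assignment model; the notation, the cost terms, and the omitted aircraft-count and flow-balance constraints are illustrative assumptions rather than the dissertation's formulation.

```latex
% Simplified two-stage stochastic fleet assignment sketch (assumed notation):
% F = aircraft families, T(f) = types within family f, L = flight legs,
% S = demand scenarios with probabilities p_s, P = paths, P(l) = paths over leg l,
% x_{fl} = 1 if family f is assigned to leg l (first stage),
% z^s_{tl} = 1 if type t serves leg l under scenario s (second stage),
% y^s_p = passengers carried on path p under scenario s.
\begin{align*}
\max\;& \sum_{s \in S} p_s \left[ \sum_{p \in P} \mathrm{fare}_p\, y^s_p
        \;-\; \sum_{\ell \in L}\sum_{t} c_{t\ell}\, z^s_{t\ell} \right]\\
\text{s.t.}\;& \sum_{f \in F} x_{f\ell} = 1 && \forall \ell
   && \text{(one family per leg)}\\
& \sum_{t \in T(f)} z^s_{t\ell} = x_{f\ell} && \forall f, \ell, s
   && \text{(type consistent with family)}\\
& \sum_{p \in P(\ell)} y^s_p \;\le\; \sum_{t} \mathrm{cap}_t\, z^s_{t\ell}
   && \forall \ell, s && \text{(seat capacity)}\\
& 0 \le y^s_p \le d^s_p, \qquad x_{f\ell},\, z^s_{t\ell} \in \{0,1\},
\end{align*}
% with scenario-wise aircraft-count and flow-balance constraints omitted.
```

The first-stage variables fix only the family serving each leg (enough to support crew scheduling), while the scenario-indexed second-stage variables choose the specific type within that family and allocate the realized path demand.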
Only 1.6% of the second-stage binary variables turn out to be fractional in the first variant, and this number is further reduced to 1.2% by using the tighter variant. Furthermore, when attempting to solve the deterministic equivalent formulation for these two variants using a commercial mixed-integer programming package (CPLEX 8.1), both runs were terminated after reaching a 25-hour CPU time limit. At termination, the software was still processing the initial LP relaxation at the root node for each of these runs, and no feasible basis had been found. Using the proposed algorithms, on the other hand, the solution times were significantly reduced to 5 and 19 hours for the two variants, respectively. Considering that fleet assignment models are solved around three months in advance of departure, this solution time is quite acceptable at this early planning stage, and the improved solution quality obtained by considering the stochasticity in the system is indeed highly desirable. Finally, we address another practical workforce planning problem encountered by a global financial firm that seeks to manage a multi-category workforce for functional areas located at different service centers, each having office-space and recruitment-capacity constraints. The workforce demand fluctuates over time due to market uncertainty and dynamic project requirements. To hedge against the demand fluctuations and the inherent uncertainty, we propose a two-stage stochastic programming model in which the first stage makes personnel recruiting and allocation decisions, while the second stage, based on the given personnel decisions and the realized workforce demand, decides on the project implementation assignments; a simplified sketch of this recourse structure appears at the end of this entry. The second stage of the proposed model contains binary variables that are used to compute, and also limit, the number of changes to the original plan. Since these variables concern only one quality aspect of the resulting workforce plan and do not affect feasibility, we replace them with certain conservative policies regarding workforce assignment change restrictions in order to obtain more manageable subproblems that contain purely continuous variables. Numerical experiments reveal that the stochastic programming approach results in significantly fewer alterations to the original workforce plan. When a commercial linear programming package (CPLEX 9.0) was used to solve the deterministic equivalent form directly, it failed to produce solutions for all but a few small-sized problems due to memory limitations, whereas the proposed Benders' decomposition-based solution approach consistently solved all the practical-sized test problems with reasonable effort. To summarize, this dissertation provides a significant advancement in algorithmic development for solving two-stage stochastic mixed-integer programs having 0-1 mixed-integer variables in both stages, as well as in applying this methodology to two important contemporary real-world problems. The framework of the proposed solution approaches is to formulate tighter relaxations via partial convex hull representations and to exploit the resulting structure using suitable decomposition methods. As decision robustness becomes increasingly relevant from an economic viewpoint, and as advances in computer technology give decision-makers the ability to explore a wide variety of scenarios, we hope that the proposed algorithms will have a notable positive impact on solving stochastic mixed-integer programs.
In particular, the proposed stochastic programming approaches to airline fleet assignment and workforce planning studied herein are well positioned to enhance the profitability and robustness of decisions made in the related industries, and we hope that similar improvements will be adopted by other industries where decisions must be made in light of data shrouded by uncertainty.
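Returning to the workforce planning model described in this entry, a minimal sketch of its two-stage recourse structure is given below; the index sets, penalty terms, and constraint forms are illustrative assumptions, and the change-limiting binary variables mentioned above are deliberately left out, in the spirit of the conservative continuous-recourse policies the dissertation adopts.

```latex
% Simplified two-stage workforce planning sketch (assumed notation):
% K = workforce categories, C = service centers, J = projects, S = scenarios,
% e_{kc} = existing staff, r_{kc} = recruits (first stage),
% w^s_{kcj} = staff of category k at center c assigned to project j in scenario s,
% u^s_{kj} = unmet demand for category k on project j in scenario s.
\begin{align*}
\min\;& \sum_{k,c} h_{kc}\, r_{kc}
       \;+\; \sum_{s \in S} p_s \sum_{k,j} \pi_{kj}\, u^s_{kj}\\
\text{s.t.}\;& \sum_{k} \left( e_{kc} + r_{kc} \right) \le \mathrm{Space}_c
   && \forall c && \text{(office space)}\\
& r_{kc} \le \mathrm{Rec}_{kc} && \forall k, c && \text{(recruitment capacity)}\\
& \sum_{j} w^s_{kcj} \le e_{kc} + r_{kc} && \forall k, c, s && \text{(staff availability)}\\
& \sum_{c} w^s_{kcj} + u^s_{kj} \ge d^s_{kj} && \forall k, j, s && \text{(scenario demand)}\\
& r,\, w^s,\, u^s \ge 0.
\end{align*}
```

The first stage commits to recruiting and allocation under office-space and recruitment-capacity limits, while each scenario's second stage assigns the available staff to projects and absorbs any shortfall through penalized unmet demand.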