Browsing by Author "Sarin, Subhash C."
- An agent based manufacturing scheduling module for Advanced Planning and Scheduling. Attri, Hitesh (Virginia Tech, 2005-01-04). A software-agent-based manufacturing scheduling module for Advanced Planning and Scheduling (APS) is presented. The problem considered is the scheduling of jobs with multiple operations, distinct operation processing times, arrival times, and due dates in a job shop environment. Sequence-dependent setups are also considered, along with the additional constraints of material and resource availability. The scheduling is to be considered in integration with production planning. The production plans can be changed dynamically, and the schedule is to be generated to reflect the appropriate changes. The design of a generic, domain-independent multi-agent framework, along with the algorithms used by the agents, is also discussed.
- An aggregate capital budgeting model using a product portfolio approach. Moolman, George Christiaan (Virginia Tech, 1994). A product portfolio approach is used in this dissertation to develop a model permitting capital budgeting to be modeled interactively with aggregate production planning, in light of market supply and demand functions. Primary emphasis is on the maximization of profit, but other goals are also addressed. These are maximization of the rate of return, maximization of market share, and minimization of the cost of excess capacity. A linear mixed integer programming model is developed for each of these objectives. Then, a single goal programming model that combines all four objectives is formulated. Costs are not allocated to products. Accordingly, the notion of cash flows per product (or per project) is not used. Instead, cost is incurred as a result of the demand that a product portfolio places on resources. All costs are considered to be incurred in the acquisition and utilization (in the form of activities) of resources. Four distinct levels of activities are considered: unit, batch, product sustaining, and facility sustaining. The demand for each resource is aggregated over all levels of variability and over all the products in the product portfolio. The direct cash outflow or inflow as a result of changing resource capacity is continuously traded off against the eventual cost or benefit of changing the capacity (in the form of changed revenues and as a function of both time and market supply and demand). Capital structure and capital investment decisions are considered simultaneously for a given set of assumptions. Different sources of funds are utilized for different costs of capital. Lending and borrowing are simultaneously incorporated without the solutions becoming inconsistent due to incorrect or inappropriate discount factors. This is mainly attributable to the fact that the organization, as a single entity that manufactures a product portfolio, demands capital, and invests excess funds. The net present value of the organization (not of projects or products) is maximized. Also, the output of each project is modeled specifically. This alleviates the practical problem of fractional acceptance of projects. Variable market supply and demand functions are also included and modeled explicitly. Finally, it is shown that the developed model contains several elements of aggregate production planning. The main conclusions from this research are: 1) Better capital budgeting results can be obtained if costs are not allocated to projects (or products) when resources are shared among different projects or products; 2) Financing and investment decisions can be made interactively (with the developed model) without the solutions becoming inconsistent due to unknown discount rates; 3) Resource acquisition and resource consumption should be modeled explicitly in capital budgeting; and 4) The model yields an improvement over existing capital budgeting techniques for a given set of assumptions. Some recommendations are presented for further research to extend these conclusions.
- Algorithmic Approaches for Solving the Euclidean Distance Location and Location-Allocation Problems. Al-Loughani, Intesar Mansour (Virginia Tech, 1997-07-08). This dissertation is concerned with the development of algorithmic approaches for solving the minisum location and location-allocation problems in which the Euclidean metric is used to measure distances. To overcome the nondifferentiability difficulty associated with the Euclidean norm function, specialized solution procedures are developed for both the location and the location-allocation problems. For the multifacility location problem (EMFLP), two equivalent convex differentiable reformulations are proposed. The first of these is formulated directly in the primal space, and relationships between its Karush-Kuhn-Tucker (KKT) conditions and the necessary and sufficient optimality conditions for EMFLP are established in order to explore the use of standard convex differentiable nonlinear programming algorithms that are guaranteed to converge to KKT solutions. The second equivalent differentiable formulation is derived via a Lagrangian dual approach based on the optimum of a linear function over a unit ball (circle). For this dual approach, which recovers Francis and Cabot's (1972) dual problem, we also characterize the recovery of primal location decisions, hence settling an issue that has remained open since 1972. In another approach for solving EMFLP, conjugate or deflected subgradient based algorithms along with suitable line-search strategies are proposed. The subgradient deflection method considered is the Average Direction Strategy (ADS) imbedded within the Variable Target Value Method (VTVM). The generation of two types of subgradients that are employed in conjunction with ADS are investigated. The first type is a simple valid subgradient that assigns zero components corresponding to the nondifferentiable terms in the objective function. The second type expends more effort to derive a low-norm member of the subdifferential in order to enhance the prospect of obtaining a descent direction. Furthermore, a Newton-based line-search is also designed and implemented in order to enhance the convergence behavior of the developed algorithm. Various combinations of the above strategies are composed and evaluated on a set of test problems. Computational results for all the proposed algorithmic approaches are presented, using a set of test problems that include some standard problems from the literature. These results exhibit the relative advantages of employing the new proposed procedures. Finally, we study the capacitated Euclidean distance location-allocation problem. There exists no global optimization algorithm that has been developed and tested for this class of problems, aside from a total enumeration approach. We develop a branch-and-bound algorithm that implicitly/partially enumerates the vertices of the feasible region of the transportation constraints in order to determine a global optimum for this nonconvex problem. For deriving lower bounds on node subproblems, a specialized variant of the Reformulation-Linearization Technique (RLT) is suitably designed which transforms the representation of this nonconvex problem from the original defining space into a higher dimensional space associated with a lower bounding (largely linear) convex program.
The maximum of the RLT relaxation-based lower bound, which is obtained via a deflected subgradient strategy applied to a Lagrangian dual formulation of this problem, and another readily computed lower bound in the projected location space is considered at each node of the branch-and-bound tree for fathoming purposes. In addition, certain cut-set inequalities in the allocation space, and objective function based cuts in the location space, are generated to further tighten the lower bounding relaxation. Computational experience is provided on a set of randomly generated test problems to investigate both the RLT-based and the projected location-space lower bounding schemes. The results indicate that the proposed global optimization approach for this class of problems offers a promising, viable solution procedure. In fact, for two instances available in the literature, we report significantly improved solutions. The dissertation concludes with recommendations for further research for this challenging class of problems. Data for the collection of test problems is provided in the Appendix to facilitate further testing in this area.
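As a rough illustration of the subgradient idea described in this abstract (a valid subgradient that assigns zero components to the nondifferentiable terms of the weighted Euclidean distance sum), the following minimal Python sketch applies a normalized, diminishing-step subgradient method to a single-facility minisum instance. The demand points, weights, and step rule are hypothetical, and the sketch omits the dissertation's ADS/VTVM deflection and Newton-based line search.

```python
import numpy as np

def subgradient(x, points, weights, eps=1e-12):
    """Valid subgradient of f(x) = sum_j w_j * ||x - a_j||.
    Terms with x == a_j are nondifferentiable and contribute a zero component."""
    g = np.zeros_like(x)
    for a, w in zip(points, weights):
        d = np.linalg.norm(x - a)
        if d > eps:                      # differentiable term
            g += w * (x - a) / d
        # else: nondifferentiable term -> add the zero subgradient component
    return g

def minisum_location(points, weights, iters=500):
    """Plain subgradient method with a normalized, diminishing step size."""
    x = np.average(points, axis=0, weights=weights)   # start at weighted centroid
    best_x, best_f = x.copy(), float("inf")
    for k in range(1, iters + 1):
        f = sum(w * np.linalg.norm(x - a) for a, w in zip(points, weights))
        if f < best_f:
            best_f, best_x = f, x.copy()
        g = subgradient(x, points, weights)
        if np.linalg.norm(g) < 1e-9:
            break
        x = x - (1.0 / k) * g / np.linalg.norm(g)
    return best_x, best_f

# Hypothetical demand points and weights, for illustration only.
pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 4.0]])
w = np.array([1.0, 2.0, 1.0, 1.5])
x_star, f_star = minisum_location(pts, w)
print(x_star, round(f_star, 3))
```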
- Analysis and Simulation of Switchgrass Harvest Systems for Large-scale Biofuel Production. McCullough, Devita (Virginia Tech, 2012-08-15). In the United States, the Energy Independence and Security Act of 2007 mandates the annual production of 136 billion liters of renewable fuel by 2022 (US Congress, 2007). As the nation moves towards energy independence, it is critical to address the current challenges associated with large-scale biofuel production. The biomass logistics network considered consists of three core operations: farmgate operations, highway-hauling operations, and receiving facility operations. To date, decision-making has been limited in post-production management (harvesting, in-field hauling, and storage) in farmgate operations. In this thesis, we study the impacts in the logistics network resulting from the selection of one of four harvest scenarios. A simulation model was developed, which simulated the harvest and filling of a Satellite Storage Location (SSL), using conventional hay harvest equipment, specifically, a round baler. The model evaluated the impacts of four harvest scenarios (ranging from short, October-December, to extended, July-March), on baler equipment requirements, baler utilization, and the storage capacity requirements of round bales, across a harvest production region. The production region selected for this study encompassed a 32-km radius surrounding a hypothetical bio-crude plant in Gretna, VA, and considered 141 optimally selected SSLs. The production region was divided into 6 sub-regions (i.e., tours). The total production region consisted of 15,438 ha and 682 fields. The fields ranged in size from 6 to 156 ha. Of the four scenarios examined in the analysis, each displayed similar trends across the six tours. Variations in the baler requirements that were observed among the tours resulted from variability in field size distribution, field to baler allocations, and total production area. The available work hours were found to have a significant impact on the resource requirements to fulfill harvest operations, and resource requirements were greatly reduced when harvest operations were extended throughout the 9-month harvest season. Beginning harvest in July and extending harvest through March resulted in reductions in round balers ranging from 50-63%, as compared to the short harvest scenario, on a sub-regional basis. On a regional basis, beginning harvest in July and extending harvest through March resulted in baler reductions up to 58.2%, as compared to the short harvest scenario. For a 9-month harvest, harvesting approximately 50% of total switchgrass harvest in July-September, as compared to harvesting approximately 50% in October-December, resulted in reductions in round balers ranging from 33.3-43.5%. An extended (9-month) harvest resulted in the lowest annual baler requirements, and on average lower baler utilization rates. The reduced harvest scenarios, when compared to the extended harvest scenarios, resulted in a significant increase in the number of annual balers required for harvest operations. However, among the reduced harvest scenarios (i.e., Scenarios 3 and 4), the number of annual balers required for harvest operations showed significantly less variation than between the extended harvest scenarios (i.e., Scenarios 1 and 2). As a result of the increased utilization of the balers in the system, short harvest scenarios resulted in the highest average baler utilization rates.
Storage capacity requirements were however found to be greater for short harvest scenarios. For the reduced harvest scenario, employing an October-December harvest window, approximately 50% of harvest was completed by the end of October, and 100% of total harvest was completed by the third month of harvest (i.e. December).
- Analytic Evaluation of the Expectation and Variance of Different Performance Measures of a Schedule under Processing Time Variability. Nagarajan, Balaji (Virginia Tech, 2003-09-19). The realm of manufacturing is replete with instances of uncertainties in job processing times, machine statuses (up or down), demand fluctuations, due dates of jobs and job priorities. These uncertainties stem from the inability to gather accurate information about the various parameters (e.g., processing times, product demand) or to gain complete control over the different manufacturing processes that are involved. Hence, it becomes imperative on the part of a production manager to take into account the impact of uncertainty on the performance of the system at hand. This uncertainty, or variability, is of considerable importance in the scheduling of production tasks. A scheduling problem primarily involves allocating the jobs and determining their start times for processing on a single machine or multiple machines (resources) with the objective of optimizing a performance measure of interest. If the problem parameters of interest (e.g., processing times, due dates, release dates) are deterministic, the scheduling problem is relatively easier to solve than the case in which the information about these parameters is uncertain. From a practical point of view, the knowledge of these parameters is, more often than not, uncertain, and it becomes necessary to develop a stochastic model of the scheduling system in order to analyze its performance. Investigation of the stochastic scheduling literature reveals that the preponderance of the work reported has dealt with optimizing the expected value of the performance measure. By focusing only on the expected value and ignoring the variance of the measure used, the scheduling problem becomes purely deterministic and the significant ramifications of schedule variability are essentially neglected. In many practical cases, a scheduler would prefer a stable schedule with minimum variance to a schedule that has a lower expected value and unknown (and possibly high) variance. Hence, it becomes apparent that schedule efficiency should be defined in terms of both the expectation and variance of the performance measure used. It could be easily perceived that the primary reasons for neglecting variance are the complications arising out of variance considerations and the difficulty of solving the underlying optimization problem. Moreover, research work to develop closed-form expressions or methodologies to determine the variance of the performance measures is very limited in the literature. However, conceivably, such an evaluation or analysis can only help a scheduler in making appropriate decisions in the face of an uncertain environment. Additionally, these expressions and methodologies can be incorporated in various scheduling algorithms to determine efficient schedules in terms of both the expectation and variance. In our research work, we develop such analytic expressions and methodologies to determine the expectation and variance of different performance measures of a schedule. The performance measures considered are both completion time and tardiness based measures. The scheduling environments considered in our analysis involve a single machine, parallel machines, flow shops and job shops. The processing times of the jobs are modeled as independent random variables with known probability density functions.
With the schedule given a priori, we develop closed-form expressions or devise methodologies to determine the expectation and variance of the performance measures of interest. We also describe in detail the approaches that we used for the various scheduling environments mentioned earlier. The developed expressions and methodologies were programmed in MATLAB R12 and illustrated with a few sample problems. It is our understanding that knowing the variance of the performance measure in addition to its expected value would aid in determining the appropriate schedule to use in practice. A scheduler would be in a better position to base his/her decisions knowing the variability of the schedules and, consequently, can strike a balance between the expected value and variance.
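To make the flavor of such closed-form expressions concrete, the sketch below computes the expectation and variance of the total completion time for a fixed single-machine sequence with independent processing times, using only linearity of expectation and independence of the processing times. It is a minimal illustration under those assumptions, not the thesis's MATLAB implementation, and the job data are hypothetical.

```python
import numpy as np

def total_completion_time_moments(means, variances):
    """For a fixed single-machine sequence with independent processing times,
    C_[k] = p_[1] + ... + p_[k], so the total completion time equals
    sum_k C_[k] = sum_i (n - i + 1) * p_[i], where i is the sequence position.
    Expectation and variance then follow from linearity and independence."""
    n = len(means)
    coeff = np.arange(n, 0, -1)              # job in position i is counted (n - i + 1) times
    expectation = float(coeff @ np.asarray(means))
    variance = float((coeff ** 2) @ np.asarray(variances))
    return expectation, variance

# Illustrative data (not from the thesis): three jobs in a given order.
mu  = [2.0, 4.0, 3.0]     # mean processing times
var = [0.5, 1.0, 0.25]    # processing-time variances
E, V = total_completion_time_moments(mu, var)
print(E, V)   # E = 3*2 + 2*4 + 1*3 = 17,  V = 9*0.5 + 4*1.0 + 1*0.25 = 8.75
```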
- The applicability of APT towards meeting control needs in discrete parts manufacturing. Bidani, Sandeep (Virginia Tech, 1989-09-05). For about ten years, Texas Instruments has been developing a software environment of integrated tools for designing, debugging and documenting process control solutions that run on programmable controllers. The product, the Applications Productivity Tool (APT), allows process and control engineers to design and program in a graphical environment that compiles into machine code (relay ladder logic). APT is primarily targeted at the batch manufacturing industry, in which engineers combine elements of both discrete and continuous control strategies. The objective of this research was to determine the applicability of APT in discrete parts manufacturing, using two applications of discrete manufacturing. One of these applications was a Fischertechnik model of a manufacturing system, configured to simulate the production of three distinct parts. The other application was the flexible manufacturing system being assembled in the Computer Integrated Manufacturing Laboratory (CIM Laboratory), which is equipped to produce models of a robot and a CNC milling machine.
- An application of artificial intelligence methods to scheduling parallel processors. Steffen, Mitchell S. (Virginia Tech, 1985-08-05). This research investigated applying Artificial Intelligence (AI) methods to develop a scheduling and sequencing system for parallel processors, subject to preference, sequencing, and buffer inventory constraints. Specifically, hierarchical planning and constraint-directed search were used to develop a prototype scheduling system for a case study problem. This research also investigated dividing the scheduling problem into sub-periods to allow parallel scheduling and efficient handling of time-dependent constraints. The prototype system uses problem constraints to define sub-period boundaries and to determine which processors and jobs to include in the sub-period problems. It then solves the sub-period schedules in sequence. The prototype system was tested using operational data from the case study and compared to schedules created by the case study scheduler. The prototype system produced schedules very similar to those of the human scheduler, and relaxed constraints only slightly more than the scheduler in searching for solutions. The success of the prototype system demonstrated: 1) the effectiveness of hierarchical planning and constraint-directed search as methods for developing scheduling systems for parallel processors; 2) that constraint satisfaction, as opposed to solving an objective function, is a useful alternative method for modeling scheduling problems; and 3) that dividing the scheduling problem into sub-period problems reduces the size of the search space encountered in parallel scheduling while allowing fulfillment of time-dependent constraints.
- Application of DMAIC to integrate Lean Manufacturing and Six Sigma. Stephen, Philip (Virginia Tech, 2004-06-11). The slow rate of corporate improvement is not due to a lack of knowledge of six sigma or lean. Rather, the fault lies in making the transition from theory to implementation. Managers need a step-by-step, unambiguous roadmap of improvement that leads to predictable results. This roadmap provides the self-confidence, punch, and power necessary for action and is the principal subject of this research. Unique to this research is the way the integration of lean and six sigma is achieved: by way of an integration matrix formed by lean implementation protocols and six sigma project phases. This integration matrix is made more prescriptive by an integrated leanness assessment tool, which will guide the user given their existing level of implementation and integration. Further guidance in each of the cells formed by the integration matrix is provided by way of phase methodologies and statistical/non-statistical tools. The output of this research is a software tool that could be used in facilities at any stage of lean implementation, including facilities with no existing lean implementation. The developed software tool has the capability to communicate among current and former project teams within any group, division, or facility in the organization. The developed software tool also has the capability to perform data analysis (e.g., Design of Experiments, Value Stream Mapping, Multi-Vari Analysis). By way of the integration matrix, the leanness assessment, and the data analysis capability, the developed software gives managers a powerful tool that will help in their quest to achieve lean six sigma.
- Application of genetic algorithm to mixed-model assembly line balancing. Evans, Jonathan D. (Virginia Tech, 1996-03-05). The demand for increased diversity, reduced cycle time, and reduced work-in-process has increased the popularity of mixed-model assembly lines. These lines combine the productivity of an assembly line and the flexibility of a job shop. The mixed-model assembly line allows setup time between models to be zero. Large mixed-model assembly lines require a timely, near-optimal balancing method. A well balanced line reduces worker idle time and simplifies the mixed-model assembly line sequencing problem. Prior attempts to solve the balancing problem have been inadequate. Heuristic techniques are too simple to find near-optimal solutions and yield only one solution. An exhaustive search requires too much processing time. Simulated Annealing works well, but yields only one solution per run, and the solutions may vary because of the random nature of the Simulated Annealing process. Multiple runs are required to get more than one solution, each run requiring some amount of time which depends on problem size. If only one run is performed, the solution achieved may be far from optimal. In addition, Simulated Annealing requires different parameters depending on the size of the problem. The Genetic Algorithm (GA) is a probabilistic heuristic search strategy. In most cases, it begins with a population of random solutions. Then the population is reproduced using crossover and mutation, with the fittest solutions having a higher probability of being parents. The idea is survival of the fittest: poor or unfit solutions do not reproduce and are replaced by better or fitter solutions. The final generation should yield multiple near-optimal solutions. The objective of this study is to investigate the Genetic Algorithm and its performance compared to Simulated Annealing for large mixed-model assembly lines. The results will show that the Genetic Algorithm performs comparably to Simulated Annealing. The Genetic Algorithm will be used to solve various mixed-model assembly line problems to discover the correct parameters to solve any mixed-model assembly line balancing problem.
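The following short Python sketch illustrates the GA mechanics described in this entry (random initial population, fitness-biased survival, one-point crossover, mutation) on a simplified balancing objective that minimizes workload imbalance across stations. It ignores precedence constraints and cycle-time limits, and the task times and station count are hypothetical, so it is only a schematic of the approach, not the thesis's algorithm.

```python
import random

def imbalance(assign, times, n_stations):
    """Fitness to minimize: max station load minus min station load."""
    loads = [0.0] * n_stations
    for task, station in enumerate(assign):
        loads[station] += times[task]
    return max(loads) - min(loads)

def ga_balance(times, n_stations, pop_size=40, gens=200, p_mut=0.1):
    n = len(times)
    pop = [[random.randrange(n_stations) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: imbalance(a, times, n_stations))
        survivors = pop[: pop_size // 2]          # fitter half reproduces
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, n)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                    # mutation: reassign a task
                if random.random() < p_mut:
                    child[i] = random.randrange(n_stations)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda a: imbalance(a, times, n_stations))
    return best, imbalance(best, times, n_stations)

# Hypothetical task times and station count, for illustration only.
task_times = [4, 6, 2, 5, 7, 3, 4, 6]
best, fit = ga_balance(task_times, n_stations=3)
print(best, fit)
```

The final population naturally contains several distinct near-best assignments, which mirrors the abstract's point that a GA can return multiple near-optimal solutions per run.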
- The application of simulated annealing to the mixed model, deterministic assembly line balancing problem. Edwards, Sherry L. (Virginia Tech, 1993-07-15). With the trend towards greater product customization and shorter delivery time, the use of mixed model assembly lines is increasing. A line balancing approach is needed that can address the complex nature of the mixed model line and produce near optimal solutions to problems of realistic size. Due to the combinatorial nature of the line balancing problem, exact solution techniques are limited to small problems. Heuristic methods, on the other hand, are often too simplistic to find good solutions. Furthermore, many of the existing techniques cannot be expanded to handle the mixed model problem. Simulated Annealing (SA) is a search methodology which has exhibited good results when applied to combinatorial optimization problems. In fact, researchers have found that SA is able to find near-optimal solutions while its processing time increases only as a polynomial function of problem size. However, none of the applications found in the literature fully explore the technique's ability to handle a highly-constrained problem such as line balancing.
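For comparison with the GA sketch above, this minimal sketch shows the core Simulated Annealing loop on the same simplified imbalance objective: a random single-task reassignment move, acceptance of worse moves with probability exp(-delta/T), and geometric cooling. The parameters and data are illustrative assumptions, not those used in the thesis, and the line-balancing constraints are again omitted.

```python
import math, random

def simulated_annealing(times, n_stations, T0=10.0, alpha=0.95, iters=5000):
    n = len(times)

    def cost(a):
        loads = [0.0] * n_stations
        for t, s in zip(times, a):
            loads[s] += t
        return max(loads) - min(loads)        # workload imbalance to minimize

    current = [random.randrange(n_stations) for _ in range(n)]
    best, T = list(current), T0
    for _ in range(iters):
        cand = list(current)
        cand[random.randrange(n)] = random.randrange(n_stations)   # move one task
        delta = cost(cand) - cost(current)
        # accept improvements always; accept worse moves with prob. exp(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            current = cand
            if cost(current) < cost(best):
                best = list(current)
        T = max(alpha * T, 1e-6)              # geometric cooling schedule
    return best, cost(best)

# Same illustrative task data as in the GA sketch above.
print(simulated_annealing([4, 6, 2, 5, 7, 3, 4, 6], n_stations=3))
```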
- Applying heuristic traffic assignment in natural disaster evacuation: a decision support system. Hwang, Kuo-Ping (Virginia Polytechnic Institute and State University, 1986). The goal of this research is to develop a heuristic traffic assignment method to simulate the traffic flow of a transportation network at a real-time speed. The existing assignment methods are reviewed and a heuristic path-recording assignment method is proposed. Using the new heuristic assignment method, trips are loaded onto the network in a probabilistic approach for the first iteration; paths are recorded, and path impedance is computed as the basis for further assignment iteration. The real-time traffic assignment model developed with the new assignment method is called HEUPRAE. The difference in link traffic between this new assignment and Dial's multipath assignment ranges from 10 to 25 percent. Saving in computer time is about 55 percent. The proposed heuristic path-recording assignment is believed to be an efficient and reliable method. Successful development of this heuristic assignment method helps solve those transportation problems which need assignment results at a real-time speed, and for which the assignment process lasts a couple of hours. Evacuation planning and operation are well suited to the application of this real-time heuristic assignment method. Evacuation planning and operations are major activities in emergency management. Evacuation planning instructs people where to go, which route to take, and the time needed to accomplish an evacuation. Evacuation operations help the execution of an evacuation plan in response to the changing nature of a disaster. The Integrated Evacuation Decision Support System (IEDSS) is a computer system which employs the evacuation planning model, MASSVAC2, and the evacuation operation model, HEUPRAE, to deal with evacuations. The IEDSS uses computer graphics to prepare input and interpret output. It helps a decision maker analyze the evacuation system, review evacuation plans, and issue an evacuation order at a proper time. Users of the IEDSS can work on evacuation problems in a friendly interactive visual environment. The application of the IEDSS to the hurricane and flood problems for the city of Virginia Beach shows how IEDSS is practically implemented. It proves the usefulness of the IEDSS in coping with disasters.
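The toy sketch below loosely illustrates the idea of probabilistic first-iteration loading followed by impedance-based reassignment over recorded paths, using a made-up two-path network and a BPR-type link impedance. It is not the HEUPRAE or MASSVAC2 code; the network data, the 50/50 smoothing rule, and the BPR coefficients are assumptions chosen only to make the sketch runnable.

```python
# Toy two-path network (hypothetical data): link -> (free-flow time, capacity).
links = {("A", "B"): (10.0, 400.0), ("B", "D"): (10.0, 400.0),
         ("A", "C"): (12.0, 600.0), ("C", "D"): (12.0, 600.0)}
paths = [[("A", "B"), ("B", "D")], [("A", "C"), ("C", "D")]]   # recorded A->D paths
demand = 1000.0                                                # trips from A to D

def bpr(t0, flow, cap):
    """BPR-type congested link travel time used as the link impedance."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def path_time(path, flows):
    return sum(bpr(links[e][0], flows[e], links[e][1]) for e in path)

# Iteration 1: probabilistic loading in proportion to free-flow attractiveness.
free = [sum(links[e][0] for e in p) for p in paths]
share = [(1.0 / t) / sum(1.0 / t2 for t2 in free) for t in free]
path_flow = [demand * s for s in share]

# Further iterations: recompute path impedance and shift flow toward cheaper paths.
for _ in range(20):
    flows = {e: 0.0 for e in links}
    for p, f in zip(paths, path_flow):
        for e in p:
            flows[e] += f
    times = [path_time(p, flows) for p in paths]
    target = [(1.0 / t) / sum(1.0 / t2 for t2 in times) for t in times]
    path_flow = [0.5 * pf + 0.5 * demand * tg for pf, tg in zip(path_flow, target)]

print([round(f, 1) for f in path_flow], [round(t, 2) for t in times])
```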
- Assembly Yield Model for Area Array Packages. Sharma, Sanjay (Virginia Tech, 2000-03-06). The traditional design of printed circuit board assembly focuses on finding a set of parameter values (that characterizes the process), such that the desired circuit performance specifications are met. It is usually assumed that this set of values can be accurately realized when the circuit or the assembly is built. Unfortunately, this assumption is not realistic for assemblies produced in mass scale. Fluctuations in manufacturing processes cause deviations in the actual values of the parameters. This variability in design parameters, in turn, causes defects in the functionality of the assemblies. The ratio of the acceptable assemblies to total assemblies produced constitutes the yield of the assembly process. Assembly yields of area array packages are heavily dependent on the design of the board as much as on package and process parameters. The economics of IC technology is such that the maximization of yield rather than the optimization of performance has become the topic of prime importance. The projected value of yield has always been a factor for consideration in the advancement of Integrated Chip technology. Due to considerable reductions in package size, minimum allowable tolerances, and tight parameter variations, electronic assemblies have to be simulated, characterized and tested before translating them to a production facility. Also, since the defect levels are measured in parts per million, it is impractical to build millions of assemblies for the purpose of identifying the best parameters. A mathematical model that relates design parameters and their variability to assembly yield can help in the effective estimation of the yield. This research work led to the development of a mathematical model that can incorporate variability in the package-, board- and assembly-related parameters, and to the construction of an effective methodology to predict the assembly yield of area array packages. The assembly yield predictions of the model are based on the characteristics of the input variables (whether they follow a normal, empirical or experimental distribution). By incorporating the tail portion of the parameter distribution (up to ±6 standard deviations on the normal distribution), a higher level of accuracy in assembly yield prediction is achieved. An estimation of the interaction of parameters is obtained in terms of the expected number of defective joints and/or components and a degree of variability around this expected value. As an implementation of the mathematical model, a computer program is developed. The software is user friendly and prompts the user for information on the input variables; it predicts the yield as the expected number of defective joints per million and the expected number of defective components (assemblies) per million. The software can also be used to predict the number of defects for a user-specified number of components (fewer or more than one million assemblies). The area array assembly yield model can be used to determine the impact of process parameter variations on assembly yields. The model can also be used to assess the manufacturability of a new design, represent the capability of an assembly line for benchmarking purposes, help modify designs for better yield, and define the minimum acceptable manufacturability standards and tolerances for components, boards and designs.
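The dissertation develops an analytical yield model; the sketch below instead uses a simple Monte Carlo simulation to illustrate the underlying idea of mapping parameter variability to defective joints per million and to assembly yield. The placement-offset parameter, its tolerance window, and the joint count per assembly are hypothetical values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def dpm_and_yield(mean, sigma, lower, upper, joints_per_assembly, n=100_000):
    """Monte Carlo illustration: sample a joint-level parameter, count joints
    outside the tolerance window, convert to defects per million and yield."""
    x = rng.normal(mean, sigma, size=(n, joints_per_assembly))
    joint_defect = (x < lower) | (x > upper)
    dpm_joints = joint_defect.mean() * 1e6        # defective joints per million joints
    assembly_defect = joint_defect.any(axis=1)    # an assembly fails if any joint fails
    assembly_yield = 1.0 - assembly_defect.mean()
    return dpm_joints, assembly_yield

# Hypothetical placement-offset parameter (mm) and tolerance, for illustration only.
dpm, y = dpm_and_yield(mean=0.0, sigma=0.02, lower=-0.06, upper=0.06,
                       joints_per_assembly=64)
print(round(dpm, 1), round(y, 4))
```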
- Asymptotic Worst-Case Analyses for the Open Bin Packing Problem. Ongkunaruk, Pornthipa (Virginia Tech, 2005-10-07). The open bin packing problem (OBPP) is a new variant of the well-known bin packing problem. In the OBPP, items are packed into bins so that the total content before the last item in each bin is strictly less than the bin capacity. The objective is to minimize the number of bins used. Applications of the OBPP can be found in the subway station systems in Hong Kong and Taipei and in scheduling in manufacturing industries. We show that the OBPP is NP-hard and, instead of solving the problem to optimality, propose two offline heuristic algorithms in which the information about the items is known in advance. First, we consider First Fit Decreasing (FFD), which is a good approximation algorithm for the bin packing problem. We prove that its asymptotic worst-case performance ratio is no more than 3/2. We observe that its performance for the OBPP is worse than that for the BPP. Consequently, we modify it so that the set of largest items becomes the set of last items in each bin. Then, we propose the Modified First Fit Decreasing (MFFD) as an alternative and prove that its asymptotic worst-case performance ratio is no more than 91/80. We conduct empirical tests to show their average-case performance. The results show that, in general, the FFD and MFFD algorithms use no more than 33% and 1% more bins than the optimal packing, respectively. In addition, the MFFD is asymptotically optimal when the item sizes are uniformly distributed on (0,1).
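A minimal sketch of First Fit Decreasing under the open-bin rule described in this entry (a bin may still accept an item as long as its content before that item is strictly below the capacity, so the last item may overflow) is shown below. The instance is made up, and the MFFD modification, which reserves the largest items to be the last item in each bin, is not implemented here.

```python
def ffd_open_bin_packing(sizes, capacity):
    """First Fit Decreasing for the open bin packing problem: place each item,
    largest first, into the first bin whose current content is strictly less
    than the capacity; otherwise open a new bin."""
    loads = []                      # current content of each open bin
    bins = []                       # items placed in each bin
    for s in sorted(sizes, reverse=True):
        for i, load in enumerate(loads):
            if load < capacity:     # open-bin rule: check content *before* the item
                loads[i] += s
                bins[i].append(s)
                break
        else:
            loads.append(s)
            bins.append([s])
    return bins

# Illustrative instance (not from the dissertation's test set).
print(ffd_open_bin_packing([0.7, 0.6, 0.5, 0.4, 0.4, 0.3, 0.2], capacity=1.0))
```

Note that the first bin may end up holding 0.7 and 0.6 even though their sum exceeds 1.0, because only the content before the last item (0.7) must stay strictly below the capacity.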
- Automated storage and retrieval system design report. Eaglesham, Mark A. (Virginia Tech, 1995-12-15). This report describes the design and operation of an Automated Storage and Retrieval System (AS/RS) to serve the Flexible Manufacturing and Assembly System (FMAS) in the Manufacturing Systems Laboratory at Virginia Tech. The system requirements of the AS/RS, justification of design choices, and the proposed modes of operating the system are described. The AS/RS was designed to automatically move material on pallets between the storage racks in the laboratory and the FMAS conveyor interface. The system was designed and built, and has been tested to perform the desired operating functions. The scope of this project was limited to designing and installing the hardware component of the AS/RS, and testing it to ensure that it will satisfy the system requirements of the FMAS. The educational objective of the project is to enable fully automated control of all cell activities via the FMAS Computer Network.
- Bayesian Optimization for Engineering Design and Quality Control of Manufacturing Systems. AlBahar, Areej Ahmad (Virginia Tech, 2022-04-14). Manufacturing systems are usually nonlinear, nonstationary, highly corrupted with outliers, and oftentimes constrained by physical laws. Modeling and approximation of their underlying response surface functions are extremely challenging. Bayesian optimization is a great statistical tool, based on Bayes rule, used to optimize and model these expensive-to-evaluate functions. Bayesian optimization comprises two important components, namely, a surrogate model, often the Gaussian process, and an acquisition function, often the expected improvement. The Gaussian process, known for its outstanding modeling and uncertainty quantification capabilities, is used to represent the underlying response surface function, while the expected improvement is used to select the next point to be evaluated by trading off exploitation and exploration. Although Bayesian optimization has been extensively used in optimizing unknown and expensive-to-evaluate functions and in hyperparameter tuning of deep learning models, modeling highly outlier-corrupted, nonstationary, and stress-induced response surface functions hinders the use of conventional Bayesian optimization models in manufacturing systems. To overcome these limitations, we propose a series of systematic methodologies to improve Bayesian optimization for engineering design and quality control of manufacturing systems. Specifically, the contributions of this dissertation can be summarized as follows. 1. A novel asymmetric robust kernel function, called AEN-RBF, is proposed to model highly outlier-corrupted functions. Two new hyperparameters are introduced to improve the flexibility and robustness of the Gaussian process model. 2. A nonstationary surrogate model that utilizes deep multi-layer Gaussian processes, called MGP-CBO, is developed to improve the modeling of complex anisotropic constrained nonstationary functions. 3. A Stress-Aware Optimal Actuator Placement framework is designed to model and optimize stress-induced nonlinear constrained functions. Through extensive evaluations, the proposed methodologies have shown outstanding and significant improvements when compared to state-of-the-art models. Although these proposed methodologies have been applied to certain manufacturing systems, they can be easily adapted to other broad ranges of problems.
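The sketch below shows a generic Bayesian optimization loop of the kind this dissertation builds on: a Gaussian process surrogate is refit after each evaluation, and the next point is chosen by maximizing expected improvement over random candidates. It uses a standard RBF kernel from scikit-learn and a made-up one-dimensional objective, not the proposed AEN-RBF kernel, the MGP-CBO surrogate, or the actuator-placement framework.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):                       # hypothetical expensive black-box function
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best):
    """EI for minimization: trades off exploitation (mu < best) and exploration (sigma)."""
    sigma = np.maximum(sigma, 1e-9)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

X = rng.uniform(-2, 2, size=(4, 1))     # small initial design
y = objective(X).ravel()

for _ in range(15):                     # BO loop: fit GP, maximize EI over candidates
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    cand = rng.uniform(-2, 2, size=(500, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print(X[np.argmin(y)], y.min())         # best design found so far
```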
- Bi-criteria Scheduling Problems on Parallel Machines. Prakash, Divya (Virginia Tech, 1995-07-20). Mathematical programming has not been used extensively for the solution of scheduling problems. Moreover, the study of bicriteria problems on single and parallel machines is an open field for research. This thesis is aimed at developing algorithms to solve bicriteria problems more efficiently, in a reasonable amount of time, and with little compromise on the optimality of the solutions obtained. Two classes of problems are considered. The first class consists of scheduling unit-duration tasks on parallel machines. Various combinations of primary and secondary criteria are considered, and optimal-seeking algorithms of polynomial time complexity are developed. The second class of problems assumes tasks with general processing times. An algorithm is developed for the primary criterion of total tardiness and the secondary criterion of total flow time. This algorithm is based on the solution of the underlying mathematical program and makes use of dominance relationships among the jobs and the fixing of variables. Experimental results are presented regarding its performance.
- Bi-objective multi-assignment capacitated location-allocation problem. Maach, Fouad (Virginia Tech, 2006-10-10). Location-assignment optimization problems correspond to a wide range of real situations, such as factory network design. However, most previous works seek to minimize a cost function. Traffic incidents routinely impact the performance and safety of the supply. These incidents cannot be totally avoided and must be accounted for. One way to account for them is to design a network on which multiple assignments are performed. Specifically, the problem we focus on deals with power supply, which has become an increasingly complex and crucial question. Many international companies have customers who are located all around the world, usually one customer per country. At the other end of the scale, power extraction or production is done at several sites that are spread over several continents and seas. A strong desire to become less energy-dependent has led many governments to increase the diversity of supply locations. For each kind of energy, many countries ideally expect to deal with 2 or 3 supply sites. As a decrease in power supply can have serious consequences for the economic performance of a whole country, companies prefer to balance the production rate equally among all sites, since the reliability of all the sites is considered to be very similar. Sharing the demand equally between the 2 or 3 sites assigned to a given area is the most common approach. Although the cost of the network is important, it is also crucial to balance the loading between the sites to guarantee that no site takes on more importance than the others for a given area. In case an accident happens at a site, or technical problems do not permit satisfying the demand assigned to it, the overall power supply of this site is still likely to be ensured by the one or two remaining available site(s). It is common to assign a cost per open power plant and another cost that depends on the distance between the factory or power extraction point and the customer. On the whole, companies concerned with the quality of service of power supply have to find a good trade-off between this factor and their overall operating cost. This situation also exists for companies that supply power at the national scale. The expected number of areas, as well as that of potential sites, can reach 100; however, the targeted size of the problem to be solved is 50. This thesis focuses on devising an efficient methodology to provide all the solutions of this bi-objective problem. The proposal is an investigation of closely related problems to identify the most relevant approaches to this atypical problem. This work permits us to present one exact method and an evolutionary algorithm that might provide a good answer to this problem.
- Branch-and-Price Method for Stochastic Generalized Assignment Problem, Hospital Staff Scheduling Problem and Stochastic Short-Term Personnel Planning Problem. Kim, Seon Ki (Virginia Tech, 2009-03-05). The work presented in this dissertation has been focused on exploiting the branch-and-price (BNP) method for the solution of various stochastic mixed integer programming problems (MIPs). In particular, we address the stochastic generalized assignment problem (SGAP), a hospital staff scheduling problem (HSSP), a stochastic hospital staff scheduling problem (SHSSP), and a stochastic short-term personnel planning problem (SSTPP). The BNP method has been developed in concert with the dual stabilization technique and other enhancements of this method for each of these problems. In view of an excessive number of scenarios that arise for these problems, we also implement the Monte Carlo method within the BNP scheme. The superiority of the BNP-based method over the branch-and-cut (BNC) method is demonstrated for all of these problems. The first problem that we address is the SGAP for which the processing time of a job on a machine is assumed to be stochastic. Even though the generalized assignment problem (GAP) has been solved using the BNP method, yet no study has been reported in the literature on the use of the BNP method for the solution of the SGAP. Our work has been motivated by the desire to fill this gap. We begin by showing that it is better to solve the SGAP as a stochastic program in contrast to solving it by using the expected values of the times required to process the jobs on the machines. Then, we show that the stochastic model of the SGAP is a complete recourse model — a useful property which permits the first stage decisions to produce feasible solutions for the recourse problems. We develop three BNP-based methods for the solution of the SGAP. The first of these is BNP-SGAP, which is a combination of branch-and-bound and column generation methods. The pricing problem of BNP-SGAP is separable with regard to each machine, and it is a multiple-constraint knapsack problem. The second method is BNP-SGAP implemented in concert with the dual stabilization technique (DST), and it is designated as BNPDST-SGAP. We have introduced a new DST by modifying the Boxstep method of Pigatti et al. [76]. We have shown that our method performs better than the method of Pigatti et al. [76] resulting in over two-fold savings in cpu times on average. The third method that we develop for the solution of the SGAP is BNPDST-SGAP implemented with an advanced start to obtain an initial feasible solution. We use a greedy heuristic to obtain this solution, and this heuristic is a modification of a similar method used for the knapsack problem. It relies on the information available at a node of the underlying branch-and-bound tree. We have shown that this procedure obtains an initial feasible solution, if it exists at that node. We designate this method as BNPDSTKP-SGAP. We have also developed a BNC method to solve the SGAP using CPLEX 9.0. We have compared the performances of the BNP and BNC methods on various problem instances obtained by varying the number of machines, the ratio of the number of machines to the number of jobs, the machine capacity, and the penalty cost per unit of extra resource required at each machine. Our results show that all BNP-based methods perform better than the BNC method, with the best performance obtained for BNPDSTKP-SGAP.
An issue with the use of the scenario-based methods that we have employed for the solution of the SGAP is that the number of scenarios generally grows exponentially in problem parameters, which gives rise to a large-size problem. To overcome the complexity caused by the presence of a large number of scenarios for the solution of the SGAP, we introduce the use of the Monte Carlo method (MCM) within the BNP scheme. We designate this method as BNPDSTKP-SGAP with MCM. It affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Replications of the subsets of scenarios are carried out until the objective function value satisfies a stopping criterion. We have established theoretical results for the use of the MCM. These pertain to determining unbiased estimates of: (i) lower and upper bounds of the "true" optimal objective function value, (ii) the "true" optimal solution, and (iii) the optimality gap. We have also provided the 100(1-α)% confidence interval on the optimality gap. Our experimental investigation has shown the efficacy of using this method. It obtains almost optimal solutions, with the objective function value lying within 5% of the "true" optimal objective function value, while giving almost ten-fold savings in cpu time. Our experimentation has also revealed that an increment in the number of scenarios in each replication makes a greater impact on the quality of the solution obtained than an increment in the number of replications. We have also observed the impact of a change in the variance of a processing time distribution on cpu time. As expected, the optimal objective function value increases with an increase in processing time variability. Also, by comparing the results with the expected value solution, it is observed that the greater the variability in the data, the better it is to use the stochastic program. The second problem that we study is the hospital staff scheduling problem. We address the following three versions of this problem: HSSP (General): Implementation of a schedule incorporating the four principal elements, namely, surgeons, operations, operating rooms, and operation times; HSSP (Priority): Inclusion of priority for some surgeons over the other surgeons regarding the use of the facility in HSSP (General); HSSP (Pre-arranged): Implementation of a completely pre-fixed schedule for some surgeons. The consideration of priority among the surgeons mimics reality. Our BNP method for the solution of these problems is similar to that for the SGAP except for the following: (i) a feasible solution at a node is obtained with no additional assignment, i.e., it consists of the assignments made in the preceding nodes of that node in the branch-and-bound tree; (ii) the columns with positive reduced cost are candidates for augmentation in the CGM; and (iii) a new branching variable selection strategy is introduced, which selects as the branching variable the fractional variable whose fixing forces the largest number of variables to either 0 or 1. The priority problem is separable in surgeons. The results of our experimentation have shown the efficacy of using the BNP-based method for the solution of each HSSP as it takes advantage of the inherent structure of each of these problems. We have also compared their performances with that of the BNC method developed using CPLEX.
For the formulations HSSP (General), HSSP (Priority), and HSSP (Pre-arranged), the BNP method gives better results for 22 out of 30, 29 out of 34, and 20 out of 32 experiments over the BNC method, respectively. Furthermore, while the BNC method fails to obtain an optimal solution for 15 experiments, the BNP method obtains optimal solutions for all 96 experiments conducted. Thus, the BNP method consistently outperforms the BNC method for all of these problems. The third problem that we have investigated in this study is the stochastic version of the HSSP, designated as the Stochastic HSSP (SHSSP), in which the operation times are assumed to be stochastic. We have introduced a formulation for this problem, designated as SHSSP2 (General), which allows for overlapping of schedules for surgeons and operating rooms, and also allows for an assignment of a surgeon to perform an operation that takes less than a pre-arranged operation time, all incurring appropriate penalty costs. A comparison of the solution of SHSSP2 (General) and its value with those obtained by using expected values (the corresponding problem is designated as Expected-SHSSP2 (General)) reveals that Expected-SHSSP2 (General) may end up with inferior and infeasible schedules. We show that the recourse model for SHSSP2 (General) is a relatively complete recourse model. Consequently, we use the Monte Carlo method (MCM) to reduce the complexity of solving SHSSP2 (General) by considering fewer scenarios. We employ the branch-and-cut (BNC) method in concert with the MCM for solving SHSSP2 (General). The solution obtained is evaluated using tolerance ratio, closeness to optimality, length of confidence interval, and cpu time. The MCM substantially reduces computational effort while producing almost optimal solutions and small confidence intervals. We have also considered a special case of SHSSP2 (General), which considers no overlapping schedules for surgeons and operating rooms and assigns exactly the same operation time for each assignment under each scenario, and we designate it as SHSSP2 (Special). With this, we consider another formulation that relies on the longest operation time among all scenarios for each assignment of a surgeon to an operation in order to avoid scheduling conflicts, and we designate this problem as SHSSP (Longest). We show SHSSP (Longest) to be equivalent to deterministic HSSP, designated as HSSP (Equivalent), and we further prove it to be equivalent to SHSSP (General) in terms of the optimal objective function value and the optimal assignments of operations to surgeons. The schedule produced by HSSP (Equivalent) does not allow any overlap among the operations performed in an operating room. That is, a new operation cannot be performed if a previous operation scheduled in that room takes longer than expected. However, the schedule generated by HSSP (Equivalent) may turn out to be a conservative one, and may end up with voids due to unused resources in case an operation in an operating room is completed earlier than the longest time allowed. Nevertheless, the schedule is still a feasible one. In such a case, the schedule can be left-shifted, if possible, because the scenarios are now revealed. Moreover, such voids could be used to perform other procedures (e.g., emergency operations) that have not been considered within the scope of the SHSSP addressed here. Besides, such a schedule can provide useful guidelines to plan for resources ahead of time.
The fourth problem that we have addressed in this dissertation is the stochastic short-term personnel planning problem, designated as Stochastic STPP (SSTPP). This problem arises due to the need for finding appropriate temporary contractors (workers) to perform requisite jobs. We incorporate uncertainty in processing time or amount of resource required by a contractor to perform a job. Contrary to the SGAP, the recourse model for this problem is not a relatively complete recourse model. As a result, we cannot employ a MCM method for the solution of this problem as it may give rise to an infeasible solution. The BNP method for the SSTPP employs the DST and the advanced start procedure developed for the SGAP, and due to extra constraints and presence of binary decision variables, we use the branching variable selection strategy developed for the HSSP models. Because of the distinctive properties of the SSTPP, we have introduced a new node selection strategy. We have compared the performances of the BNC-based and BNP-based methods based on the cpu time required. The BNP method outperforms the BNC method in 75% of the experiments conducted, and the BNP method is found to be quite stable with smaller variance in cpu times than those for the BNC method. It affords solution of difficult problems in smaller cpu times than those required for the BNC method.
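To illustrate the Monte Carlo bound-and-gap estimation idea referred to in this entry (unbiased lower-bound estimates from replicated sample problems, an upper bound from evaluating a candidate solution on a fresh sample, and a confidence interval on the optimality gap), the following sketch applies the same scheme to a toy newsvendor problem solved by enumeration. The toy problem, sample sizes, and cost data are assumptions chosen to keep the sketch self-contained; the dissertation's BNP/BNC machinery is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
c, p = 1.0, 2.5                       # toy newsvendor: unit cost and selling price

def sample_cost(x, demand):
    """Cost of first-stage decision x (order quantity) under a demand scenario."""
    return c * x - p * np.minimum(x, demand)

def solve_saa(demand_sample):
    """Solve the sample problem exactly by enumerating candidate order quantities."""
    grid = np.linspace(0, 200, 401)
    vals = [sample_cost(x, demand_sample).mean() for x in grid]
    k = int(np.argmin(vals))
    return grid[k], vals[k]           # candidate solution, sample optimal value

M, N, N_eval = 10, 50, 5000           # replications, scenarios per replication, evaluation sample
lower_reps, candidates = [], []
for _ in range(M):                    # each replication yields an unbiased lower-bound estimate
    d = rng.gamma(shape=4.0, scale=20.0, size=N)
    x_hat, v_hat = solve_saa(d)
    lower_reps.append(v_hat)
    candidates.append(x_hat)

x_best = candidates[int(np.argmin(lower_reps))]
d_eval = rng.gamma(shape=4.0, scale=20.0, size=N_eval)
upper = sample_cost(x_best, d_eval).mean()            # unbiased upper-bound estimate
lower = np.mean(lower_reps)
gap = upper - lower
se = np.sqrt(np.var(lower_reps, ddof=1) / M +
             np.var(sample_cost(x_best, d_eval), ddof=1) / N_eval)
print(f"estimated optimality gap {gap:.2f} +/- {1.96 * se:.2f} (approx. 95% CI)")
```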
- Capacity Investment, Flexibility, and Product Substitution/Complementarity under Demand Uncertainty. Suwandechochai, Rawee (Virginia Tech, 2005-12-15). We provide a comprehensive characterization of the relationship between optimal capacity and the degree of product substitution/complementarity under price/production postponement, considering different business practices (holdback versus clearance, negative price policies) and different demand models. Specifically, we consider a firm that produces two products, which can be substitutable or complementary. The demand of each product is a linear function of the prices of both products (with the relationship depending on the substitution/complementarity structure), and is subject to an additive stochastic shock. We consider two types of linear demand functions that are commonly used in the economics and operations management literature. The firm operates in a monopolistic setting and acts as a price-setter for both products. Overall, the firm needs to make three sets of decisions: capacity, production quantities, and prices. While the capacity investment decision has to be made ex ante, before demand curves are observed, price and/or quantity decisions can be postponed until after demand curves are observed. We consider two postponement strategies: price and quantity postponement, and price postponement only. We characterize the optimal pricing/production/investment decisions for each postponement strategy. Using these characterizations, we show that product substitution/complementarity is a key demand characteristic, which has a large impact on the optimal capacity. Our results show that how the optimal capacity behaves in the substitution/complementarity parameter is quite similar under both postponement strategies, and under holdback and clearance. However, this behavior depends highly on other underlying assumptions (i.e., whether or not negative prices are allowed) and on the demand model used.
- Cellular and functional production environments: design methodology and comparison. Sarper, Hüseyin (Virginia Polytechnic Institute and State University, 1988). A hybrid methodology was developed to fairly compare functional and cellular production environments with respect to the production of machined parts which constitute the indivisible components of some final products. The methodology provides a means of designing each production environment at the lowest possible cost and then comparing the two environments with respect to cost and non-cost performance measures. The results show that the long-held belief that the cellular manufacturing or group technology method of production may be superior to that of the traditional functional or job shop layout may not be correct. A detailed comparison using four problem sets with different job and machine mixes failed to indicate a clear case in which the cellular environment performed better than the functional environment. The methodology consists of two stages. Stage one has six hierarchical steps which systematically determine machine requirements and layout planning of each environment through mathematical modelling. External and internal operation constraints and inputs such as stochastic daily demand and operation times were considered. Stochastic programming was used in handling uncertain daily demand and operation times by specifying a desired minimum probability of meeting the demand for each job type in both environments. The MPSIII package was used in solving large mixed integer problems that resulted once nonlinear terms, due to the chance-constrained nature of the segments of the models, were linearized. Because of the large problem sizes, MPSIII input files had to be created using FORTRAN codes. In stage two, the SIMAN simulation language was used to determine the feasibility of stage one decisions and to obtain other system information. In simulation, some approximations were made to implement stage one decisions. For example, jobs received an average processing time in each operation class area rather than the exact operation time of the specific machine type to which the jobs were assigned in stage one. The effect of material handling distances and the use of a limited number of work-in-process carriers were considered. Although the methodology was mainly developed for the comparison of the two production environments, it is readily usable for the individual design of either production environment. In addition to the two main stages of development, this research also required the development of two other procedures: unitizing daily demands and modifying the previously available job/cell grouping methods.
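As a small illustration of the chance-constrained treatment of uncertain daily demand mentioned in this entry, the sketch below converts a probabilistic service-level requirement into its deterministic equivalent under a normality assumption and sizes the number of machines accordingly. The demand distribution, processing time, and shift length are hypothetical, and the actual models in the dissertation are mixed-integer programs solved with MPSIII rather than this single-constraint calculation.

```python
from math import ceil
from statistics import NormalDist

def machines_required(mu_demand, sigma_demand, time_per_unit, hours_per_day, alpha):
    """Deterministic equivalent of the chance constraint
    P(daily capacity >= daily demand) >= alpha, assuming normally distributed demand:
    required processing hours = (mu + z_alpha * sigma) * time_per_unit."""
    z = NormalDist().inv_cdf(alpha)                    # z-value for the service level
    hours_needed = (mu_demand + z * sigma_demand) * time_per_unit
    return ceil(hours_needed / hours_per_day)          # whole machines of this type

# Hypothetical job type: demand ~ N(120, 15^2) units/day, 0.3 h per unit, 8 h shifts.
print(machines_required(120, 15, 0.3, 8, alpha=0.95))
```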