Browsing by Author "Chen, Fengshan Frank"
Now showing 1 - 20 of 20
- Adaptive Scheduling and Tool Flow Control in Automated Manufacturing Systems. Chen, Jie (Virginia Tech, 2003-04-02). The modern manufacturing environment is characterized by diverse products driven by mass customization, short production lead times, and unstable customer demand. Today, the need for flexibility, quick responsiveness, and robustness to system uncertainties in production scheduling decisions has increased significantly. In traditional job shops, tooling is usually assumed to be a fixed resource. However, when the tooling resource is shared among different machines, greater product variety and routing flexibility can be realized with a smaller tool inventory. Such a strategy is usually enabled by an automatic tool-changing mechanism and a tool delivery system that reduce the time for tooling setup and hence allow parts to be processed in small batches. In this research, a dynamic scheduling problem under flexible tooling resource constraints is studied. An integrated approach is proposed to allow two levels of hierarchical, dynamic decision making for job scheduling and tool flow control in automated manufacturing systems. It decomposes the overall problem into a series of static sub-problems for each scheduling window, handles random disruptions by updating job ready times, completion times, and machine status on a rolling-horizon basis, and considers machine availability explicitly in generating schedules. Two types of manufacturing system models are used in simulation studies to test the effectiveness of the proposed dynamic scheduling approach. First, hypothetical models are generated using generic shop flow structures (e.g., flexible flow shops, job shops, and single-stage systems) and configurations. They are tested to provide empirical evidence about how well the proposed approach performs for general automated manufacturing systems in which parts have alternative routings. Second, a model based on a real industrial flexible manufacturing system was used to test the effectiveness of the proposed approach when machine types, part routings, tooling, and other production parameters closely mimic real flexible manufacturing operations. The study results show that the proposed scheduling approach significantly outperforms other dispatching heuristics, including Cost Over Time (COVERT), Apparent Tardiness Cost (ATC), and Bottleneck Dynamics (BD), on due-date-related performance measures under both types of manufacturing system models. It is also found that the performance difference between the proposed scheduling approach and the other heuristics tends to become more significant as the number of machines increases. The more operation steps a system has, the better the proposed method performs relative to the other heuristics. This research also investigates under what conditions (e.g., the number of machines, the number of operation steps, and shop load conditions) the proposed approach works best, and how its performance changes when these conditions change. When the tooling resource is shared, parts can be routed to machines that do not hold all the required tools, which may result in higher routing flexibility. However, research to date on the sharing of tooling resources often places more emphasis on the real-time control and manipulation of tools and pays less attention to the loading of machines and the initial tool allocation at the planning stage.
In this research, a machine-loading model with shared tools is proposed to maximize routing flexibility while maintaining a minimum of resident tools. The performance of the proposed loading heuristic is compared to that of a random loading method using hypothetically generated single-stage system models. The study results indicate that better system performance can be obtained by taking the resident tooling ratio into account when assigning part types and allocating tools to machines at the initial planning stage.
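Among the benchmark dispatching heuristics named in the entry above, the Apparent Tardiness Cost (ATC) rule is a representative due-date-oriented priority index. The sketch below shows one common form of that index in Python; the job data and the look-ahead parameter k are illustrative assumptions, not values taken from the dissertation.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class Job:
    name: str
    proc_time: float     # p_j, processing time
    due_date: float      # d_j
    weight: float = 1.0  # w_j, tardiness weight

def atc_index(job, now, p_bar, k=2.0):
    """Apparent Tardiness Cost priority: (w/p) * exp(-max(slack, 0) / (k * p_bar))."""
    slack = max(job.due_date - job.proc_time - now, 0.0)
    return (job.weight / job.proc_time) * exp(-slack / (k * p_bar))

def pick_next(queue, now):
    """Dispatch the waiting job with the highest ATC index."""
    p_bar = sum(j.proc_time for j in queue) / len(queue)
    return max(queue, key=lambda j: atc_index(j, now, p_bar))

# three jobs waiting at time t = 0 (made-up data)
queue = [Job("A", 4.0, 10.0), Job("B", 2.0, 6.0), Job("C", 5.0, 20.0)]
print(pick_next(queue, now=0.0).name)
```

In a rolling-horizon scheme of the kind described in the entry, an index like this would be re-evaluated each time a machine frees up, using the updated job ready times and machine statuses.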
- An agent based manufacturing scheduling module for Advanced Planning and Scheduling. Attri, Hitesh (Virginia Tech, 2005-01-04). A software-agent-based manufacturing scheduling module for Advanced Planning and Scheduling (APS) is presented. The problem considered is the scheduling of jobs with multiple operations, distinct operation processing times, arrival times, and due dates in a job shop environment. Sequence-dependent setups are considered, along with the additional constraints of material and resource availability. Scheduling is considered in integration with production planning: production plans can change dynamically, and the schedule is generated to reflect those changes. The design of a generic, domain-independent multi-agent framework, along with the algorithms used by the agents, is also discussed.
- Analytic Evaluation of the Expectation and Variance of Different Performance Measures of a Schedule under Processing Time Variability. Nagarajan, Balaji (Virginia Tech, 2003-09-19). The realm of manufacturing is replete with instances of uncertainty in job processing times, machine statuses (up or down), demand fluctuations, due dates of jobs, and job priorities. These uncertainties stem from the inability to gather accurate information about the various parameters (e.g., processing times, product demand) or to gain complete control over the different manufacturing processes involved. Hence, it becomes imperative for a production manager to take into account the impact of uncertainty on the performance of the system at hand. This uncertainty, or variability, is of considerable importance in the scheduling of production tasks. A scheduling problem is primarily one of allocating jobs and determining their start times for processing on a single machine or multiple machines (resources) with the objective of optimizing a performance measure of interest. If the problem parameters of interest (e.g., processing times, due dates, release dates) are deterministic, the scheduling problem is relatively easier to solve than when the information about these parameters is uncertain. From a practical point of view, knowledge of these parameters is, more often than not, uncertain, and it becomes necessary to develop a stochastic model of the scheduling system in order to analyze its performance. Investigation of the stochastic scheduling literature reveals that the preponderance of the work reported has dealt with optimizing the expected value of the performance measure. By focusing only on the expected value and ignoring the variance of the measure used, the scheduling problem becomes purely deterministic and the significant ramifications of schedule variability are essentially neglected. In many practical cases, a scheduler would prefer a stable schedule with minimum variance to a schedule that has a lower expected value and unknown (and possibly high) variance. Hence, it becomes apparent that schedule efficiency should be defined in terms of both the expectation and the variance of the performance measure used. The primary reasons for neglecting variance are the complications arising from variance considerations and the difficulty of solving the underlying optimization problem. Moreover, research to develop closed-form expressions or methodologies for determining the variance of performance measures is very limited in the literature. Conceivably, such an evaluation or analysis can only help a scheduler in making appropriate decisions in the face of an uncertain environment. Additionally, these expressions and methodologies can be incorporated in various scheduling algorithms to determine efficient schedules in terms of both expectation and variance. In this research, we develop such analytic expressions and methodologies to determine the expectation and variance of different performance measures of a schedule. The performance measures considered are both completion-time- and tardiness-based measures. The scheduling environments considered in our analysis involve a single machine, parallel machines, flow shops, and job shops. The processing times of the jobs are modeled as independent random variables with known probability density functions.
With the schedule given a priori, we develop closed-form expressions or devise methodologies to determine the expectation and variance of the performance measures of interest. We also describe in detail the approaches used for the various scheduling environments mentioned earlier. The developed expressions and methodologies were programmed in MATLAB R12 and illustrated with a few sample problems. It is our understanding that knowing the variance of the performance measure, in addition to its expected value, would aid in determining the appropriate schedule to use in practice. A scheduler who knows the variability of the candidate schedules is in a better position to base his or her decisions and, consequently, can strike a balance between the expected value and the variance.
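For the simplest environment mentioned above, a single machine with a fixed job sequence and independent processing times, the completion time of the j-th job is just the sum of the first j processing times, so its expectation and variance are running sums of the jobs' means and variances. A minimal sketch with made-up processing-time parameters follows; the dissertation's tardiness-based measures and multi-machine environments require considerably more than this.

```python
# Expectation and variance of completion times on a single machine with a
# fixed job sequence and independent processing times: C_j = p_1 + ... + p_j,
# so E[C_j] and Var[C_j] are running sums of the means and variances.
from itertools import accumulate

# (mean, variance) of each job's processing time, in processing order --
# illustrative numbers, not data from the dissertation
jobs = [(4.0, 1.0), (2.0, 0.25), (5.0, 2.0)]

exp_completion = list(accumulate(m for m, _ in jobs))
var_completion = list(accumulate(v for _, v in jobs))

for j, (ec, vc) in enumerate(zip(exp_completion, var_completion), start=1):
    print(f"job {j}: E[C] = {ec:.2f}, Var[C] = {vc:.2f}")
```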
- Application of DMAIC to integrate Lean Manufacturing and Six Sigma. Stephen, Philip (Virginia Tech, 2004-06-11). The slow rate of corporate improvement is not due to a lack of knowledge of Six Sigma or lean. Rather, the fault lies in making the transition from theory to implementation. Managers need a step-by-step, unambiguous roadmap of improvement that leads to predictable results. This roadmap provides the self-confidence, punch, and power necessary for action and is the principal subject of this research. Unique to this research is the way the integration of lean and Six Sigma is achieved: by way of an integration matrix formed by lean implementation protocols and Six Sigma project phases. This integration matrix is made more prescriptive by an integrated leanness assessment tool, which guides users given their existing level of implementation and integration. Further guidance in each of the cells formed by the integration matrix is provided by way of phase methodologies and statistical and non-statistical tools. The output of this research is a software tool that can be used in facilities at any stage of lean implementation, including facilities with no existing lean implementation. The developed software tool has the capability to communicate among current and former project teams within any group, division, or facility in the organization. It also has the capability to perform data analysis (e.g., Design of Experiments, Value Stream Mapping, and Multi-Vari Analysis). By way of the integration matrix, the leanness assessment, and the data analysis capability, the developed software tool gives managers a powerful aid in their quest to achieve lean Six Sigma.
- Component-based Intelligent Control Architecture for Reconfigurable Manufacturing Systems. Su, Jiancheng (Virginia Tech, 2007-11-13). The present dynamic manufacturing environment is characterized by a greater variety of products, shorter product life cycles, and the rapid introduction of new technologies. Recently, a new manufacturing paradigm, Reconfigurable Manufacturing Systems (RMS), has emerged to address these challenging issues. RMSs are able to adapt themselves to new business conditions in a timely and economical manner through a modular design of the hardware/software system. Although a great deal of research has been conducted in areas related to RMS, very few studies on system-level control for RMS have been reported in the literature. The rigidity of current manufacturing systems stems mainly from the monolithic design of their control systems. New developments in Information Technology (IT) bring opportunities to overcome the inflexibility that has shadowed control systems for years. Component-based software development gained popularity in the 1990s; however, well-known drawbacks, such as complexity and poor real-time features, counteract its advantages in developing reconfigurable control systems. The emerging Extensible Markup Language (XML) and Web Services, which are based on non-proprietary formats, can eliminate the interoperability problems that traditional software technologies cannot resolve. Another IT development that affects the manufacturing sector is the advent of agent technology. The characteristics of agent-based systems include an autonomous, cooperative, and extendible nature that can be advantageous in different shop-floor activities. This dissertation presents an innovative control architecture, entitled Component-based Intelligent Control Architecture (CICA), designed for system-level control of RMS. Software components and open-standard integration technologies together provide a reconfigurable software structure, whereas the agent-based paradigm adds reconfigurability to the control logic of CICA. Since an agent-based system cannot guarantee the best global performance, agents in the reference architecture are used as exception handlers. Some widely neglected problems associated with agent-based systems, such as communication load and local interest conflicts, are also studied. The experimental results reveal the advantage of the new agent-based decision-making system over existing methodologies. The proposed control system provides the reconfigurability that is lacking in current manufacturing control systems, and based on the experimental tests performed, the CICA architecture shows promise for bringing flexibility to manufacturing systems.
- Cost Modeling Based on Support Vector Regression for Complex Products During the Early Design Phases. Huang, Guorong (Virginia Tech, 2007-08-09). The purpose of a cost model is to provide designers and decision-makers with accurate cost information to assess and compare multiple alternatives for obtaining the optimal solution and controlling cost. The cost models developed in the design phases are the most important and the most difficult to develop. It is therefore necessary to identify appropriate cost drivers and employ appropriate modeling techniques to accurately estimate cost for guiding designers. The objective of this study is to provide higher predictive accuracy of cost estimation for guiding designers in the early design phases of complex products. After a generic cost estimation model is presented and existing methods for identifying cost drivers and different cost modeling techniques are reviewed, the dissertation first proposes new methodologies to identify and select cost drivers: the Causal-Associated (CA) method and the Tabu-Stepwise selection approach. The CA method increases understanding and explanation of the cost analysis and helps avoid missing cost drivers. The Tabu-Stepwise selection approach is used to select significant cost drivers and eliminate irrelevant cost drivers in nonlinear situations. A case study is created to illustrate their procedure and benefits, and the test data show that they can improve predictive capability. Second, this dissertation introduces Tabu-SVR, a nonparametric approach based on support vector regression (SVR) for cost estimation of complex products in the early design phases. Tabu-SVR determines the parameters of SVR via a tabu search algorithm improved by the author. For verification and validation of Tabu-SVR's performance, five common basic cost characteristics are summarized: accumulation, linear function, power function, step function, and exponential function. Based on these five characteristics and the Flight Optimization Systems (FLOPS) cost module (engine part), seven test data sets are generated to test Tabu-SVR and to compare it with traditional methods (parametric modeling, neural networks, and case-based reasoning). The empirical results show that Tabu-SVR significantly improves performance compared to standard SVR. The radial basis function (RBF) kernel, which is much more robust, often performs better than linear and polynomial kernel functions. Compared with other traditional cost estimating approaches, Tabu-SVR with the RBF kernel has strong predictive capability and is able to capture nonlinearities and discontinuities along with interactions among cost drivers. The third part of this dissertation focuses on semiparametric cost estimating approaches. Extensive studies are conducted on three semiparametric algorithms based on SVR, using three data sets produced by combining the aforementioned five common basic cost characteristics. The experiments show that Semiparametric Algorithm 1 is the best approach in most situations, with better cost-estimating accuracy than both the pure nonparametric and the pure parametric approach. Model complexity influences the estimating accuracy of Semiparametric Algorithms 2 and 3: if inexact function forms are used as the parametric component of a semiparametric algorithm, they often bring no improvement in cost-estimating accuracy over the pure nonparametric approach and can even worsen performance.
The last part of this dissertation introduces two existing methods for sensitivity analysis to improve the explanatory capability of the SVR-based cost estimating approach. These methods are able to show the contribution of cost drivers, determine the effect of cost drivers, establish the profiles of cost drivers, and conduct monotonic analysis. They can ultimately help designers conduct trade-off studies and answer “what-if” questions.
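For readers unfamiliar with the underlying regression technique, the sketch below fits a plain RBF-kernel SVR to synthetic cost-driver data with scikit-learn, using an ordinary grid search over (C, epsilon, gamma) where the dissertation uses a tabu search. The data, the drivers, and the parameter grid are illustrative assumptions only; this is not a reproduction of Tabu-SVR.

```python
# Plain RBF-kernel SVR on synthetic cost-driver data. A grid search stands in
# for the dissertation's tabu-search parameter tuning, so this is only a rough
# stand-in for the approach, not the method itself.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))                            # three hypothetical cost drivers
y = 50 + 4 * X[:, 0] + X[:, 1] ** 1.5 + rng.normal(0, 2, 200)    # synthetic cost response

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "epsilon": [0.1, 1.0], "gamma": ["scale", 0.1]},
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("predicted cost for a new design:", grid.predict([[5.0, 3.0, 7.0]])[0])
```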
- Hardware-based Parallel Simulation of Flexible Manufacturing Systems. Xu, Dong (Virginia Tech, 2001-07-16). This research explores a hardware-based parallel simulation mechanism that can dramatically improve the speed of simulating flexible manufacturing systems (FMS) by applying appropriate enabling hardware technologies. Hardware-based parallel simulation refers to running a simulation on a multi-microprocessor integrated circuit board, called the simulator, which is designed for the purpose of simulating a specific FMS. The board is composed of a collection of micro-emulators capable of mimicking the operation of equipment in an FMS, such as machining centers, transporters, and load/unload stations. To design possible architectures for the board, a mapping technique is applied that makes use of the physical layout information of an FMS. Under this mapping method, the simulation model is decomposed into a cluster of micro-emulators on the board, where each workstation is represented by one micro-emulator. Three potential architectures for the proposed simulator are studied: a bus-based architecture, a shared-memory-based architecture, and a parallel-I/O-port-based architecture. To provide a suitable parallel computing platform, a prototype simulator based on the combination of the shared-memory and parallel-I/O-port architectures is physically built. Besides the development of the hardware simulator, a time-scaling simulation method is also developed for execution on the proposed simulator. The method uses the on-board digital clock to synchronize the parallel simulation being performed on different microprocessors. The advantage of the time-scaling technique is that the sequence of simulation events is naturally ordered consistently with the real events. In this way, none of the entangled waiting required by conservative parallel simulation methods is needed, which reduces the synchronization overhead and the danger of deadlock. Experiments on the prototype simulator show that the time-scaling simulation method, combined with the unique hardware features of the FMS-specific simulator, achieves a large speedup compared to conventional software-based simulation methods.
- Holonic-based control system for automated material handling systems. Babiceanu, Radu Florin (Virginia Tech, 2005-07-12). In real-world manufacturing environments, finding the right job sequences and their associated schedules when resource, precedence, and timing constraints are imposed is a difficult task. For most practical problems, classical scheduling easily leads to an exponential growth in the number of possible schedules. Moreover, a decision time period of hours or even minutes is too long; good solutions are often needed in real time. The problem becomes even more complicated if changes, such as new orders or resource breakdowns, occur within the manufacturing system. One approach to overcoming the challenges of classical scheduling problems is the use of distributed schemes such as agent-based or holonic control architectures. This dissertation presents an innovative control architecture that uses the holonic concept and is capable of delivering good solutions when applied in dynamic environments. The general holonic control framework presented in this research has specific characteristics not found in others reported so far. Using a modular approach, it takes into account all categories of hardware and software resources of a manufacturing system. Due to its modularity, the holonic control framework can be used for assigning and scheduling different task types, separately or simultaneously. Thus, it can be used not only for assigning and scheduling transport tasks, but also for finding feasible solutions to the assignment and scheduling of processing tasks, or for better utilizing the auxiliary equipment and devices in a manufacturing system. In the holonic system, under real-time constraints, a feasible schedule for the material handling resources emerges from the combination of the individual holons' schedules. Internal evaluation algorithms and coordination mechanisms between the entities in the architecture form the basis for the resulting schedules. The experimental results show a percentage difference of under seven percent between the makespan values obtained using the holonic scheduling approach and the optimal values. Since current control systems used in industry lack the ability to adapt to dynamic manufacturing environments, the holonic architecture designed and the tests performed in this research could be part of the effort to build the foundations for the control systems of next-generation manufacturing systems.
- Impact of Alternative Flow Control Policies on Value Stream Delivery Robustness Under Demand Instability: a System Dynamics Modeling and Simulation Approach. Sousa, George (Virginia Tech, 2004-09-17). This research explores the effect of proposed management policies and related structures on the dynamics of value streams, particularly under demand instability. It relies on methods from the systems thinking and modeling literature and was designed to fulfill three main objectives. Objective 1: Provide insight into the causes of problematic behavior in traditional value streams. Objective 2: Identify modes of demand behavior suitable for pull-based systems operation. Objective 3: Propose and test alternative value stream management policies and structures. Achieving objectives 1 and 3 required both a hypothetical case and a real case. The hypothetical case was designed to describe the problem and improvement alternatives in generic terms, whereas the real case served to contextualize the main generic modeling elements in a real-world situation, serving as an illustrative example. The research approach was based on system dynamics modeling and simulation methodologies that reflect the scientific method. Three alternative policies were created and tested. Policy 1: a decision rule for altering the number of kanbans in circulation at the protective decoupling inventory during production cycles. Policy 2: a decision rule for defining the amount of demand to include in value stream schedules. Policy 3: a decision rule for setting a purposefully unbalanced downstream production capacity. The results suggest a benefit from the combined use of Policies 2 and 3 in the face of sudden demand peaks. Policy 1 is expected to provide minor benefits but also to significantly increase the risk of upstream instability, and therefore its use is not recommended. This study provides a causality perspective on the structure of value streams and gives enterprise engineers new insights into the state of the art in value stream design.
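To give a flavor of the kind of stock-and-flow structure such policies act on, the sketch below simulates a single production stage whose order rate chases demand plus an inventory correction, with a fixed capacity ceiling standing in loosely for a Policy-3-style unbalanced capacity. Every parameter, the time step, and the step change in demand are illustrative assumptions; the dissertation's system dynamics models are far more detailed than this.

```python
# A tiny stock-and-flow ("system dynamics") simulation of one production stage:
# production chases demand plus an inventory correction, subject to a capacity
# ceiling. All numbers are made up for illustration only.
dt = 0.25                 # simulation step, weeks
target_inventory = 100.0  # desired finished-goods stock
adjust_time = 2.0         # weeks to close an inventory gap
capacity = 130.0          # hypothetical downstream capacity ceiling

inventory, demand = 100.0, 100.0
for step in range(int(12 / dt)):
    t = step * dt
    if t >= 4.0:          # sudden demand peak at week 4
        demand = 120.0
    correction = (target_inventory - inventory) / adjust_time
    production = min(max(demand + correction, 0.0), capacity)
    inventory += (production - demand) * dt
    if step % 8 == 0:
        print(f"t={t:4.1f}  production={production:6.1f}  inventory={inventory:6.1f}")
```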
- Implementation of a Production Architecture For a Post-2000 Market: Demonstration of a Microfactory Concept. Neal, John Allen III (Virginia Tech, 2001-12-10). The development of a "Next Generation Manufacturing System" is currently an active area of research worldwide. The research described in this dissertation addresses one sub-element within this area: the demonstration of a decentralized, automated production architecture. The goal of the work is to increase the ability of a manufacturing enterprise to respond to rapid technological and market change in the post-2000 global economy. The research comprises three objectives: definition of a decentralized organizational structure of autonomous production activities, implementation of the defined organization in a real-world manufacturing environment, and a comparison of historical (centralized-architecture) performance data with decentralized performance data. To accomplish these objectives, the proposed production architecture is implemented at a real-world manufacturing site, and performance data are acquired and tested against a stated hypothesis. The research entails the modification of a selected electronics module assembly activity in the following ways: 1) comprehensive automation of assembly processes; 2) simplification of production practice through a minimization of operator interaction and a reduction of assembly transaction points requiring operator intervention; and 3) restructuring of organizational functions resulting in decentralization and operational autonomy. The null hypothesis was successfully rejected, showing that the implementation of automation, simplification, and decentralization resulted in an enhancement of production performance (i.e., a reduction in throughput time, labor cost, overhead cost, and total product cost) without degrading production quality. A test of the null hypothesis based on the data indicates a statistically significant (i.e., p less than or equal to 0.05) reduction in throughput time, labor cost, overhead cost, and total product cost, while no statistically significant difference between the before and after production quality data was found. A possible interpretation of these results is that the implementation of automation, simplification, and decentralization did reduce labor cost, overhead cost, and total product cost and did not degrade production quality.
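The before/after comparisons described above are standard two-sample hypothesis tests. A minimal sketch of one such test (Welch's two-sample t-test on made-up throughput-time samples) follows; the numbers are purely illustrative and are not the dissertation's data.

```python
# Welch's two-sample t-test comparing throughput times before and after the
# architecture change. The samples below are hypothetical.
from scipy import stats

before = [42.1, 39.8, 45.0, 41.3, 44.2, 40.7, 43.5, 42.9]   # hours (made up)
after  = [35.2, 33.9, 36.8, 34.1, 35.7, 33.2, 36.0, 34.6]

t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at 0.05" if p_value <= 0.05 else "not significant")
```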
- Inter-Enterprise Cost-Time Profiling. Rivera, Leonardo (Virginia Tech, 2006-08-03). Measuring the use of resources in a production process has been a subject of great scrutiny for more than a hundred years. Traditionally, costing systems and cost accounting systems have performed this function in manufacturing corporations. In recent years, Lean Manufacturing has become a powerful and popular force for change, and its premier tool for process visualization and understanding, Value Stream Mapping, focuses primarily on the time dimension of processes. However, the interaction of cost and time is clearly important. This is felt in everyday occurrences, such as paying interest on credit cards, mortgages, and other types of loans. It is intuitive that the longer a certain amount of money is held, the more it costs, and that holding a larger amount of money for one day obviously costs more than holding a smaller amount. Therefore, cost and time both determine the real cost of the use of money. This simple perception, however, has not been applied equally to the measurement of manufacturing processes, which usually concentrate on either cost or time, but seldom on both at the same time and on their interaction. The Westinghouse Corporation formalized the concepts of the Cost-Time Profile in 1993, based on work done there over several decades. Simply put, the Cost-Time Profile measures how much money is invested in the manufacturing process of a product and for how long, creating a chart that presents the accumulated cost at every point in time (Cost-Time Profile), measuring the area under this curve (Cost-Time Investment), and then using this quantification to measure the bottom-line impact. This research has accomplished two main things: the detailed consideration of the Cost-Time Profile (CTP) and the issues and factors that affect it, and the extension of the concepts to the new reality of Extended Enterprises. In a logical sequence, the basic concepts of the CTP are defined and presented, followed by their extension to inter-enterprise environments. Successive sections present how to build a CTP and an Inter-Enterprise Cost-Time Profile (IE-CTP), and discuss the factors that should be taken into account to bring the IE-CTP to practical application, such as the effect of batching; the interaction with existing accounting systems; the consideration of direct cost, overhead, and profit; and the relationships between companies in supply networks needed to build IE-CTPs. The issue of how to improve the Cost-Time Investment (CTI) and the CTP is then addressed: schedule optimization models are developed; generic improvement scenarios and lean implementation scenarios are discussed; simulation studies are presented for cases in which this tool has advantages over deterministic tools; and an IE-CTP-specific software tool is presented. After learning how to improve the CTP and CTI, a discussion of how to use and implement them is presented, and finally the summary and conclusions close this research report, identifying the contributions presented and leaving open avenues for future research.
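The Cost-Time Investment described above is the area under the accumulated-cost curve from the start of the process until the product ships. A minimal sketch, treating the profile as a step function built from a handful of hypothetical cost events, follows; the event data and time horizon are illustrative assumptions.

```python
# Cost-Time Investment as the area under an accumulated-cost profile.
# Each cost is incurred at a point in time and then "carried" until shipment.
events = [            # (time in days, cost added at that time) -- made-up data
    (0, 200.0),       # raw material purchased
    (3, 150.0),       # first operation (labor + overhead)
    (7, 300.0),       # second operation
    (12, 50.0),       # packing
]
ship_time = 15.0

def cost_time_investment(events, end_time):
    """Area under the step-wise cumulative cost profile, in dollar-days."""
    area, cumulative = 0.0, 0.0
    times = [t for t, _ in events] + [end_time]
    for (t, cost), t_next in zip(events, times[1:]):
        cumulative += cost
        area += cumulative * (t_next - t)
    return area

print(f"total cost = {sum(c for _, c in events):.0f} $")
print(f"CTI (area) = {cost_time_investment(events, ship_time):.0f} $-days")
```

Shortening how long money is tied up, or deferring a cost until later in the process, shrinks this area even when the total cost is unchanged, which is the intuition the profile is meant to capture.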
- Measuring Leanness of Manufacturing Systems and Identifying Leanness Target by Considering Agility. Wan, Hung-da (Virginia Tech, 2006-07-12). The implementation of lean manufacturing concepts has shown significant impacts on various industries. Numerous tools and techniques have been developed to tackle specific problems in order to eliminate waste and carry out lean concepts. With the focus on "how to make a system leaner," little effort has been made on determining "how lean the system is." Lean assessment surveys evaluate the current status of a system qualitatively against predefined lean indicators. Lean metrics have been developed to quantify the performance of improvement initiatives, but each metric focuses on only one specific area. Value Stream Maps demonstrate the current and future states graphically, with an emphasis on time-based performance only. A truly quantitative and synthesized measure of overall leanness has not been established. In some circumstances, being lean may not be the only goal for manufacturers. In order to compete in a rapidly changing marketplace, manufacturing systems should also be agile, responding quickly to uncertain demands. Nevertheless, being extremely agile may increase the cost of regular operations and reduce the leanness of the system, and being extremely lean may reduce flexibility and lower the agility level. Therefore, a manufacturing system should be agile enough to handle demand uncertainty and, at the same time, lean enough to deliver goods at competitive prices and lead times. In order to achieve the appropriate leanness level, a leanness measure is needed that addresses not only "how lean the system is" but also "how lean it should be." In this research, a methodology is proposed to quantitatively measure the leanness level of manufacturing systems using the Data Envelopment Analysis (DEA) technique. The production process of each work piece is defined as a Decision Making Unit (DMU) that transforms inputs of Cost and Time into an output, Value. Using a Slacks-Based Measure (SBM) model, the DEA-Leanness Measure is developed to quantify the leanness level of each DMU by comparing the DMU against the frontier of leanness. A Cost-Time-Value analysis is developed to create virtual DMUs that push the frontier towards ideal leanness so that an effective benchmark can be established. The DEA-Leanness Measure provides a unit-invariant leanness score valued between 0 and 1, which indicates both "how lean the system is" and "how much leaner the system can be." With the help of the Cost-Time Profiling technique, directions for potential improvement can be identified by comparing the profiles of DMUs with different leanness scores. The leanness measure can also be weighted among the Cost, Time, and Value variables; the weighted DEA-Leanness Measure provides a way to evaluate the impacts of improvement initiatives with an emphasis on the company's strategic focus. Performing the DEA-Leanness measurement requires detailed cost and time data, so a Web-Based Kanban is developed to facilitate automated data collection and real-time performance analysis. For circumstances in which detailed data are not readily available but a Value Stream Map (VSM) has been constructed, applications of the DEA-Leanness Measure based on existing VSMs are explored. Besides pursuing leanness, satisfying a customer's demand pattern requires a certain level of agility.
Based on the DEA-Leanness Measure, appropriate leanness targets can be identified for manufacturing systems by considering a sufficient agility level. The Online-Delay and Offline-Delay Targets are determined to represent the minimum acceptable delays, considering inevitable waste within and beyond a manufacturing system. Combining the two targets, a Lean-Agile Performance Index can then be derived to evaluate whether the system has achieved an appropriate level of leanness with sufficient agility for meeting the customers' demand. Hypothetical cases mimicking real manufacturing systems are developed to verify the proposed methodologies. An Excel-based DEA-Leanness Solver and a Web-Kanban System have been developed to solve the mathematical models and to substantiate potential applications of the leanness measure in the real world. Finally, future research directions are suggested to further enhance the results of this research.
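The DEA-Leanness Measure above is obtained by solving a Slacks-Based Measure linear program for every DMU; that model is not reproduced here. As a rough illustration of the kind of unit-invariant, 0-to-1 score it yields, the sketch below computes a much cruder frontier-ratio proxy that compares each DMU's cost and time with the best observed values and its value with the best observed value. The data and the simple averaging scheme are illustrative assumptions and should not be read as the SBM model itself.

```python
# A crude cost-time-value "leanness proxy" (NOT the SBM DEA model): each DMU is
# scored against the best observed cost, time, and value. Data are hypothetical.
dmus = {   # name: (cost $, time hr, value $)
    "P1": (120.0, 8.0, 150.0),
    "P2": (100.0, 5.0, 140.0),
    "P3": (160.0, 12.0, 155.0),
}
best_cost = min(c for c, _, _ in dmus.values())
best_time = min(t for _, t, _ in dmus.values())
best_value = max(v for _, _, v in dmus.values())

for name, (cost, time, value) in dmus.items():
    score = ((best_cost / cost) + (best_time / time) + (value / best_value)) / 3
    print(f"{name}: leanness proxy = {score:.3f}")
```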
- Modeling, Analysis and Solution Approaches for Some Optimization Problems: High Multiplicity Asymmetric Traveling Salesman, Primary Pharmaceutical Manufacturing Scheduling, and Lot Streaming in an Assembly System. Yao, Liming (Virginia Tech, 2008-05-27). This dissertation is devoted to the modeling, analysis, and development of solution approaches for some optimization-related problems encountered in industrial and manufacturing settings. We begin by introducing a special type of traveling salesman problem called the "High Multiplicity Asymmetric Traveling Salesman Problem" (HMATSP). We propose a new formulation for this problem, which embraces a flow-based subtour elimination structure, and establish its validity. The model is then incorporated as a substructure in our formulation for a lot-sizing problem involving parallel machines and sequence-dependent setup costs, also known as the "Chesapeake Problem." Computational results are presented to demonstrate the efficacy of our modeling approach for both the generic HMATSP and its application within the context of the Chesapeake Problem. Next, we investigate an integrated lot-sizing and scheduling problem that is encountered in the primary manufacturing facility of pharmaceutical manufacturing. This problem entails determining the production lot sizes of multiple products and the sequence in which to process the products on machines that can process lots (batches) of a fixed size (due to the limited capacity of containers), in the presence of sequence-dependent setup times and costs. We approach this problem via a two-stage optimization procedure: the lot-sizing decision is considered at stage 1, followed by the sequencing of production lots at stage 2. Our aim for the stage 1 problem is to allocate batches of products to time periods in order to minimize the sum of the inventory and backordering costs, subject to the available capacity in each period. The consideration of batches of final products, in addition to those of the intermediate products that make up a final product, further complicates the lot-sizing problem. The objective for the stage 2 problem is to minimize sequence-dependent setup costs. We present a novel unifying model and a column-generation-based optimization approach for this class of lot-sizing and sequencing problems. Computational experience is first provided using randomly generated data sets to test the performance of several variants of our proposed approach. The efficacy of the best of these variants is further demonstrated by applying it to real-life data collected in collaboration with a pharmaceutical manufacturing company. Then, we address a single-lot, lot streaming problem for a two-stage assembly system. This assembly system differs from the traditional flow shop configuration: it consists of m parallel subassembly machines at stage 1, each devoted to the production of a component, and a single assembly machine at stage 2, which assembles products after the components (one from each subassembly machine at the first stage) have been completed. Lot-detached setups are encountered on the machines at both stages. Given a fixed number of transfer batches (or sublots) from each of the subassembly machines at stage 1 to the assembly machine at stage 2, our problem is to find sublot sizes so as to minimize the makespan.
We develop optimality conditions to determine sublot sizes for the general problem, and present polynomial-time algorithms to determine optimal sublot sizes for the assembly system with two and three subassembly machines at stage 1. Finally, we extend the above single-lot, lot streaming problem for the two-stage assembly system to multiple lots, still with the objective of minimizing the makespan. Due to the presence of multiple lots, we must address the sequencing of the lots along with lot-splitting, which adds complexity to the problem. Some results derived for the single-lot version of this problem have been successfully generalized to this case. We develop a branch-and-bound-based methodology for this problem, which relies on effective lower bounds and dominance properties that are also derived. Finally, we present the results of computational experimentation to demonstrate the effectiveness of our branch-and-bound-based methodology. Because of the tightness of our upper and lower bounds, a vast majority of the problems can be solved to optimality at the root node itself, while for the others, the average gap between the upper and lower bounds computed at node zero is within 0.0001%. For a majority of these problems, our dominance properties then effectively truncate the branch-and-bound tree, and an optimal solution is obtained within 500 seconds.
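For a fixed choice of sublot sizes, the makespan of the two-stage assembly system described above can be evaluated directly: assembly of sublot i can start only after every subassembly machine has delivered its i-th sublot and the previous assembly sublot has finished, with the lot-detached setups performed beforehand. The evaluator below is a hedged sketch of that calculation with made-up data; the dissertation's contribution is the optimization of the sublot sizes themselves, which this sketch does not attempt.

```python
# Makespan evaluation for given sublot sizes in a two-stage assembly system:
# m parallel subassembly machines feed one assembly machine; setups are
# lot-detached (they can be done before any sublot arrives). Illustrative only.
def assembly_makespan(sublots, sub_times, asm_time, sub_setups, asm_setup):
    """sublots: sublot sizes (units); sub_times: per-unit time on each
    subassembly machine; asm_time: per-unit time on the assembly machine."""
    m = len(sub_times)
    done = [sub_setups[k] for k in range(m)]   # finish time on each stage-1 machine
    ready = []                                 # time each sublot is fully delivered
    for size in sublots:
        done = [done[k] + size * sub_times[k] for k in range(m)]
        ready.append(max(done))
    t = asm_setup                              # assembly setup done off-line
    for size, r in zip(sublots, ready):
        t = max(t, r) + size * asm_time
    return t

# two subassembly machines, a 60-unit lot split into three equal sublots (made-up data)
print(assembly_makespan([20, 20, 20], sub_times=[1.0, 1.2],
                        asm_time=0.8, sub_setups=[5.0, 3.0], asm_setup=10.0))
```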
- Modeling, Analysis, and Design of Responsive Manufacturing Systems Using Classical Control Theory. Fong, Nga Hin Benjamin (Virginia Tech, 2005-04-15). The manufacturing systems operating within today's global enterprises are invariably dynamic and complicated. Lean manufacturing works well where demand is relatively stable and predictable and product diversity is low. However, much higher agility is needed where customer demand is volatile and product variety is high. Frequent changes of product designs require quicker response times in ramping up to volume. To stay competitive in twenty-first-century global industrialization, companies must possess a new operations design strategy for responsive manufacturing systems that react to unpredictable market changes and launch new products in a cost-effective and efficient way. The objective of this research is to develop an alternative method to model, analyze, and design responsive manufacturing systems using classical control theory. This approach permits industrial engineers to study and better predict the transient behavior of responsive manufacturing systems in terms of production lead time, WIP overshoot, system responsiveness, and lean finished inventory. We provide a one-to-one correspondence for translating manufacturing terminology from System Dynamics (SD) models into block diagram representations and transfer functions, so that the transient characteristics of responsive manufacturing systems can be determined analytically. This analytical formulation is not offered by discrete-event simulation or the system dynamics approach. We further introduce the Root Locus design technique, which investigates the sensitivity of the closed-loop pole locations, as they relate to the manufacturing world, on the complex s-plane. This complex-plane analysis offers new management strategies to better predict and control the dynamic responses of responsive manufacturing systems in terms of inventory build-up (i.e., leanness) and lead time. We define classical control theory terms and interpret their meanings according to the closed-loop pole locations to assist production management in utilizing the Root Locus design tool. By applying this graphical approach, we give a new design method that determines the responsive manufacturing parameter values without the iterative trial-and-error simulation replications found in discrete-event simulation or the system dynamics approach.
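As a small taste of the transfer-function view described above, the sketch below treats the production completion rate as a first-order lag of the order rate, G(s) = 1/(tau*s + 1), and examines its step response to a demand increase using SciPy. The lead-time constant tau and the 95% settling criterion are illustrative assumptions, not the dissertation's models.

```python
# Step response of a first-order production lag G(s) = 1 / (tau*s + 1):
# how quickly the completion rate follows a step increase in demand.
import numpy as np
from scipy import signal

tau = 4.0                                   # production lead time, weeks (hypothetical)
G = signal.TransferFunction([1.0], [tau, 1.0])

t, y = signal.step(G, T=np.linspace(0, 30, 301))
settle = t[np.argmax(y >= 0.95)]            # first time the output reaches 95% of the step
print(f"output reaches 95% of the demand step after about {settle:.1f} weeks")
```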
- A Multi-Agent System and Auction Mechanism for Production Planning over Multiple Facilities in an Advanced Planning and Scheduling System. Goel, Amol (Virginia Tech, 2004-10-13). One of the major planning problems faced by medium and large manufacturing enterprises is the distribution of production over various (production) facilities. The need for cross-facility capacity management is most evident in high-tech industries with capital-intensive equipment and short technology life cycles. Solutions proposed in the literature have been based on the Lagrangian decomposition method, which separates the overall multiple-product problem into a number of single-product problems. We believe that multi-agent systems, given their distributed problem-solving approach, can be used to solve this problem, in its entirety, more effectively. According to other researchers in this field, auction-theoretic mechanisms are a good way to solve complex production planning problems. This research study develops a multi-agent system and a negotiation protocol based on a combinatorial auction framework to solve the given multi-facility planning problem. The output of this research is a software library that can be used as a multi-agent system model of the multi-product, multi-facility capacity allocation problem. The negotiation protocol for the agents is based on an iterative combinatorial auction framework that can be used for making allocation decisions in this environment in real time. A simulator based on this library is created to validate the multi-agent model as well as the auction-theoretic framework for different scenarios in the problem domain. The planning software library is created using open-source standards so that it can be seamlessly integrated with the scheduling library being developed as part of the Advanced Planning and Scheduling (APS) system project, or with any other software suite that requires this functionality. The research contribution of this study is a new multi-agent architecture for an Advanced Planning and Scheduling (APS) system, as well as a novel iterative combinatorial auction mechanism that can be used as an agent negotiation protocol within this architecture. The theoretical concepts introduced by this research are implemented in the MultiPlanner production planning tool, which can be used for generating master production plans for manufacturing enterprises. The validation carried out on both the iterative combinatorial framework and the agent-based production planning methodology demonstrates that the proposed solution strategies can be used for integrated decision making in the multi-product, multi-facility production planning domain. The software tool developed as part of this research is also a robust, platform-independent tool that manufacturing enterprises can use to make relevant production planning decisions.
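At the heart of any combinatorial auction like the one described above is a winner-determination step: choosing a set of non-conflicting bundle bids with maximum total value. The brute-force sketch below illustrates that step on a tiny made-up instance; the bidders, capacity slots, and one-shot enumeration are illustrative assumptions and are not the dissertation's iterative mechanism.

```python
# Brute-force winner determination for a tiny combinatorial auction:
# each bid requests a bundle of capacity slots; no slot may be awarded twice.
from itertools import combinations

bids = [  # (bidder, set of capacity slots requested, bid value) -- made-up data
    ("orderA", {"fab-w1", "fab-w2"}, 9.0),
    ("orderB", {"fab-w2", "asm-w2"}, 7.0),
    ("orderC", {"asm-w1"}, 4.0),
    ("orderD", {"fab-w1", "asm-w1"}, 6.0),
]

best_value, best_set = 0.0, ()
for r in range(1, len(bids) + 1):
    for combo in combinations(bids, r):
        slots = [s for _, bundle, _ in combo for s in bundle]
        if len(slots) == len(set(slots)):            # bundles do not overlap
            value = sum(v for _, _, v in combo)
            if value > best_value:
                best_value, best_set = value, combo

print("winners:", [b[0] for b in best_set], "total value:", best_value)
```

Real winner determination is NP-hard, which is one reason iterative auction protocols of the kind the dissertation proposes are attractive for making these decisions in (near) real time.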
- New Strategic and Dynamic Variation Reduction Techniques for Assembly Lines. Musa, Rami (Virginia Tech, 2007-03-29). Variation is inevitable in any process, so it has to be dealt with effectively and economically. Variation reduction in assembly lines can be achieved both strategically and dynamically, and implementing both kinds of techniques is expected to lead to a further reduction in the number of failed final assemblies. The dissertation is divided into three major parts. In the first part, we propose to reduce variation for assemblies by developing efficient inspection plans based on (1) historical data for existing products, or simulated data for newly developed products; (2) Monte Carlo simulation; and (3) optimization search techniques. The cost function to be minimized is the total of inspection, rework, scrap, and failure costs. The novelty of the proposed approach is threefold. First, the use of CAD data to develop inspection plans for newly launched products has not been introduced in the literature before. Second, the frequency of inspection is considered as the main decision variable, instead of whether or not to inspect a quality characteristic of a subassembly. Third, we use a realistic reaction plan (rework-scrap-keep) that mimics reality in the sense that not all out-of-tolerance items should be scrapped or reworked. At a certain stage, real-time inspection data for a batch of subassemblies may be available. In the second part of this dissertation, we propose utilizing these data in near real time to dynamically reduce variation by assigning the inspected subassembly parts to one another. The proposed mathematical models are hard to solve using traditional optimization techniques, so we propose using heuristics. Finally, we propose exploring opportunities to reduce the aforementioned cost function by integrating the inspection planning model with the Dynamic Throughput Maximization (DTM) model. This hybrid model adds one decision variable to the inspection planning: whether to implement DTM (assemble the inspected subassemblies selectively) or to assemble the inspected items arbitrarily. We expect this hybrid implementation to substantially reduce the failure cost when assembling the final assemblies in some cases, and we solve a numerical example that supports our findings.
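The second part described above assigns inspected subassemblies to one another so that their dimensional deviations cancel. Below is a minimal sketch of one simple matching heuristic for two mating part families: sort one family's deviations ascending, the other's descending, and pair them off. The measurement data are hypothetical, and the dissertation's own models and heuristics are more general than this.

```python
# Selective assembly by opposite-order matching: large positive deviations in
# one family are paired with large negative deviations in the other, so the
# assembled stack-ups stay close to nominal. Data are made up.
housings = [0.12, -0.05, 0.03, 0.20, -0.14]   # deviations from nominal (mm)
cores    = [-0.10, 0.08, 0.02, -0.18, 0.15]

pairs = list(zip(sorted(housings), sorted(cores, reverse=True)))
for h, c in pairs:
    print(f"housing {h:+.2f}  +  core {c:+.2f}  ->  stack-up {h + c:+.2f}")
print("worst stack-up:", max(abs(h + c) for h, c in pairs))
```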
- Process Modeling, Performance Analysis and Configuration Simulation in Integrated Supply Chain Network Design. Dong, Ming (Virginia Tech, 2001-07-27). Supply chain management has recently been introduced to address the integration of organizational functions ranging from the ordering and receipt of raw materials, through the manufacturing processes, to the distribution and delivery of products to the customer. Its application demonstrates that this idea enables organizations to achieve higher-quality products, better customer service, and lower inventory cost. In order to achieve high performance, supply chain functions must operate in an integrated and coordinated manner. Several challenging problems are associated with integrated supply chain design: (1) how to model and coordinate supply chain business processes, specifically supply chain workflows; (2) how to analyze the performance of an integrated supply chain network so that optimization techniques can be employed to improve customer service and reduce inventory cost; and (3) how to evaluate dynamic supply chain networks and obtain a comprehensive understanding of decision-making issues related to supply network configurations. These problems are among the most representative in supply chain research and applications. There are three major objectives for this research. The first objective is to develop viable modeling methodologies and analysis algorithms for supply chain business processes so that the logical properties of supply chain process models can be analyzed and verified. This problem has not been studied in the integrated supply chain literature to date. To facilitate the modeling and verification analysis of supply chain workflows, an object-oriented, Petri-net-based modular modeling and analysis approach is presented. The proposed structured process-modeling algorithm provides an effective way to design structured supply chain business processes. The second objective is to develop a network of inventory-queue models for the performance analysis and optimization of an integrated supply network with inventory control at all sites. An inventory-queue is a queueing model that incorporates an inventory replenishment policy for the output store. This dissertation extends previous work on the supply network model with base-stock control and service requirements. Instead of a one-for-one base-stock policy, a batch-ordering policy and lot-sizing problems are considered. To determine the replenishment lead times of items at the stores, a fixed-batch, target-level production authorization mechanism is employed to explicitly obtain performance measures of the supply chain queueing model. The validity of the proposed model is illustrated by comparing the results from the analytical performance evaluation model with those obtained from the simulation study. The third objective is to develop simulation models for understanding decision-making issues of supply chain network configuration in an integrated environment. Simulation studies investigate multi-echelon distribution systems with installation-stock and echelon-stock reorder policies. The results show that, depending on the structure of the multi-echelon distribution system, either the echelon-stock or the installation-stock policy may be advantageous. This dissertation also presents a new transshipment policy, called the "alternate transshipment policy," to improve supply chain performance.
In an integrated supply chain network that considers both the distribution function and the manufacturing function, the impacts of component commonality on network performance are also evaluated. The results of analysis-of-variance and Tukey's tests reveal that there is a significant difference in performance measures, such as delivery time and order fill rates, when comparing an integrated supply chain with higher component commonality to an integrated supply chain with lower component commonality. Several supply chain network examples are employed to substantiate the effectiveness of the proposed methodologies and algorithms.
- Reconfigurable Hardware-Based Simulation Modeling of Flexible Manufacturing Systems. Tang, Wei (Virginia Tech, 2005-11-18). This dissertation research explores a reconfigurable hardware-based parallel simulation mechanism that can dramatically improve the speed of simulating the operations of flexible manufacturing systems (FMS). Here, reconfigurable hardware-based simulation refers to running a simulation on a reconfigurable hardware platform realized by a Field Programmable Gate Array (FPGA). The hardware model, also called the simulator, is specifically designed to mimic a small desktop FMS. It is composed of several micro-emulators, which are capable of mimicking the operations of equipment in an FMS, such as machine centers, transporters, and load/unload stations. To design possible architectures for the simulator, a mapping technique is applied using the physical layout information of an FMS. Under this mapping method, the simulation model is decomposed into a cluster of micro-emulators on the board, where each machine center is represented by one micro-emulator. To exploit the advantage of massive parallelism, a star network architecture is proposed, with the robot sitting at the center. As a pilot effort, a prototype simulator has been successfully built. A new simulation modeling technology named synchronous real-time simulation (SRS) is proposed. Instead of running conventional programs on a microprocessor, this technology adopts several concepts from the electronics domain, such as using electronic signals to mimic the behavior of entities and using specifically designed circuits to mimic system resources. In addition, a time-scaling simulation method is employed: an on-board global clock synchronizes all activities performed on the different emulators, and in this way tremendous synchronization overhead can be avoided. Experiments on the prototype simulator demonstrate the validity of the new modeling technology and show that a tremendous speedup compared to conventional software-based simulation methods can be achieved.
- A Sequence-Pair and Mixed Integer Programming Based Methodology for the Facility Layout Problem. Liu, Qi (Virginia Tech, 2004-11-22). The facility layout problem (FLP) is one of the most important and challenging problems in both the operations research and industrial engineering research domains. In FLP research, the continuous-representation-based FLP can consider all possible all-rectangular-department solutions. Given this flexibility, the continuous representation has become the representation of choice in FLP research. Much of this research is based on a methodology of mixed integer programming (MIP) models. However, these MIP-FLP models can solve only problems with a limited number of departments to optimality, due to the large number of binary variables used in the models to prevent departments from overlapping. Our research centers around the sequence-pair representation, a concept that originated in the Very Large Scale Integration (VLSI) design literature. We show that an exhaustive search of the sequence-pair solution space will find the optimal layout of the MIP-FLP and that every sequence-pair solution is binary-feasible in the MIP-FLP. Based on this fact, we propose a methodology that combines the sequence-pair representation and the MIP-FLP model to efficiently solve large continuous-representation-based FLPs: our heuristic approach searches the sequence-pair solution space and uses the sequence-pair representation to simplify and solve the MIP-FLP model. Based on this methodology, we systematically study different aspects of the FLP throughout this dissertation. As the first contribution of this dissertation, we present a genetic-algorithm-based heuristic, SEQUENCE, that combines the sequence-pair representation and the most recent MIP-FLP model to solve the all-rectangular-department, continuous-representation-based FLP. Numerical experiments based on different-sized test problems from both the literature and industrial applications are provided, and the solutions are compared with both the optimal solutions and the solutions from other heuristics to show the effectiveness and efficiency of our heuristic. For eleven data sets from the literature, we provide solutions better than those previously found. For the FLP with fixed departments, many sequence-pairs become infeasible with respect to the fixed-department location and dimension restrictions. As our second contribution, to address this difficulty, we present a repair operator that filters out the sequence-pairs that are infeasible with respect to the fixed departments. This repair operator is integrated into SEQUENCE to solve the FLP with fixed departments more efficiently. The effectiveness of combining SEQUENCE and the repair operator for solving the FLP with fixed departments is illustrated through a series of numerical experiments in which the SEQUENCE solutions are compared with other heuristics' solutions. The third contribution of this dissertation is to formulate and solve the FLP with an existing aisle structure (FLPAL). In many industrial layout designs, the existing aisle structure must be taken into account, yet very little research has been conducted in this area. We extend our research to further address the FLPAL. We first present an MIP model for the FLPAL (MIP-FLPAL) and run numerical experiments to test its performance. These experiments illustrate that the MIP-FLPAL can only solve FLPAL problems of very limited size.
Therefore, we present a genetic-algorithm-based heuristic, SEQUENCE-AL, that combines the sequence-pair representation and the MIP-FLPAL to solve larger-sized FLPAL problems. Different-sized data sets are solved by SEQUENCE-AL, and the solutions are compared with both the optimal solutions and other heuristics' solutions to show the effectiveness of SEQUENCE-AL. The fourth contribution of this dissertation is to formulate and solve the FLP with non-rectangular-shaped departments. Most FLP research focuses on layout design with all-rectangular-shaped departments, while in industry there are many FLP applications with non-rectangular-shaped departments. We extend our research to solve the FLP with non-rectangular-shaped departments. We first formulate the FLP with non-rectangular-shaped departments (FLPNR) as an MIP model (MIP-FLPNR), in which each non-rectangular department is partitioned into rectangular-shaped sub-departments and the sub-departments from the same department are connected according to the department's orientation. The effect of different factors on the performance of the MIP-FLPNR is explored through a series of numerical tests, which also show that the MIP-FLPNR can only solve limited-sized FLPNR problems. To solve larger-sized FLPNR problems, we present a genetic-algorithm-based heuristic, SEQUENCE-NR, along with two repair operators based on the mathematical properties of the MIP-FLPNR. A series of numerical tests are conducted to compare the SEQUENCE-NR solutions with both the optimal solutions and another heuristic's solutions to illustrate the effectiveness of SEQUENCE-NR. As the first systematic research study of a methodology that combines the sequence-pair representation and the MIP-based FLP, this dissertation addresses different types of continuous-representation-based facility layout design problems: from block layout design with and without fixed departments to re-layout design with an existing aisle structure, and from layout design with all-rectangular-shaped departments to layout design with arbitrary non-rectangular-shaped departments. For each type of layout design problem, numerical experiments are conducted to illustrate the effectiveness of our specifically designed family of sequence-pair and MIP-based heuristics. As a result, better solutions than those previously found are provided for some widely used data sets from the literature, and some new data sets based on both the literature and industrial applications are proposed for the first time. Furthermore, future research that continues to combine the sequence-pair representation and the MIP-FLP model to solve the FLP is discussed, indicating the richness of this research domain.
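The sequence-pair representation discussed above encodes the relative positions of departments with two permutations. Under one common convention borrowed from the VLSI literature, if department a precedes b in both sequences, a is placed to the left of b; if a precedes b in the first sequence but follows it in the second, a is placed above b. The decoder below is a minimal sketch of that convention on a made-up five-department instance; it is not the SEQUENCE heuristic or its MIP model.

```python
# Decode a sequence-pair (two permutations of the departments) into pairwise
# "left-of" / "above" relations under the convention stated in the lead-in.
def decode_sequence_pair(gamma_plus, gamma_minus):
    pos_p = {d: i for i, d in enumerate(gamma_plus)}
    pos_m = {d: i for i, d in enumerate(gamma_minus)}
    relations = {}
    depts = list(gamma_plus)
    for i, a in enumerate(depts):
        for b in depts[i + 1:]:
            if (pos_p[a] < pos_p[b]) == (pos_m[a] < pos_m[b]):
                left, right = (a, b) if pos_p[a] < pos_p[b] else (b, a)
                relations[(left, right)] = "left-of"
            else:
                above, below = (a, b) if pos_p[a] < pos_p[b] else (b, a)
                relations[(above, below)] = "above"
    return relations

# hypothetical five-department instance
for pair, rel in decode_sequence_pair("ACBDE", "CADEB").items():
    print(pair, rel)
```

These pairwise relations are exactly the non-overlap decisions that the MIP-FLP otherwise encodes with binary variables, which is why fixing a sequence-pair makes the remaining model much easier to solve.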
- Solving Single and Multiple Plant Sourcing Problems with a Multidimensional Knapsack ModelCherbaka, Natalie Stanislaw (Virginia Tech, 2004-11-18)This research addresses sourcing decisions and how those decisions can affect the management of a company's assets. The study begins with a single-plant problem, in which one facility chooses, from a list of parts, which parts to bring in-house. The selection is based on maximizing the value of the selected parts while remaining within the plant's capacity. This problem is defined as the insourcing problem and modeled as a multidimensional knapsack problem (MKP). The insourcing model is extended to address outsourcing and multiple plants. This multi-plant model, also modeled as an MKP, enables the movement of parts from one plant to another and the consideration of a company-wide objective function (as opposed to the single-plant objective function of the insourcing model). The sourcing problem possesses characteristics that distinguish it from the standard MKP. One such characteristic is what we define as multiple attributes. To understand the multiple-attribute characteristic, we compare the various dimensions in the multidimensional knapsack problem. An MKP is classified as having either a single attribute (SA) or multiple attributes (MA). Mathematically, the problems of either classification can be modeled in the same way, with only a different interpretation of the knapsack constraints. However, experimentation indicates that the MA-MKP is more difficult to solve than the SA-MKP. For small problems, with 100 variables and 5 constraints, the ratio of the CPU time required to find the optimal solution for MA-MKP problems to that for SA-MKP problems is 32:1. To determine effective methods for addressing the MA-MKP, standard mixed integer programming techniques are tested. This testing shows that the exact approaches do not dramatically reduce the solution time to the level of the SA problems. However, a simple heuristic that performs very well on the MA-MKP is presented. The heuristic utilizes variations on the benefit-to-cost ratio and strongest surrogate constraints (a generic greedy sketch follows this abstract). The results from experimentation on MA-MKP problem sets, generated using the methods for standard MKP test data sets in the literature, indicate that the heuristic performs well and improves with larger problems. The average gap between the heuristic solution and the optimal solution is 1.39% for 200-part problems and is reduced to 0.69% when the size of the problem is increased to 298 parts. Although the MA characteristic reflects the sourcing problem, the actual data used in the experimentation is generated with techniques presented in the literature for standard MKP test problems. Therefore, to more accurately represent the sourcing problem, industry data from a manufacturing facility is studied to identify further sourcing-problem characteristics. As a result, industry-motivated data sets are generated that reflect the characteristics of industry data, yet maintain the structure of literature data sets to allow for easy comparison. It is found that both industry and industry-motivated data sets, although possessing the MA characteristic, are much easier to solve than SA problems. Indicators of difficulty appear to be the constraint tightness and a measure of the matrix sparsity.
The sparsity is a significant factor because industry data tends to be very sparse, while data sets generated in the literature are completely dense. Another interesting result from the industry-motivated data sets for the single-plant problem is the tendency for a facility to prefer currently produced parts over insourcing new parts from outside the facility. It is not uncommon for a company to have more than one facility with a particular capability. Therefore, the sourcing model is extended to include multiple facilities. With multiple facilities, all the parts are effectively pooled into one list, and each part is then assigned to one of the facilities or outsourced externally. The multi-facility model is similar to the single-facility model, with the addition of assignment constraints enforcing that each part can be assigned to only one facility (a sketch of this formulation follows this abstract). Experimentation is performed for the two-, three-, and four-facility models. The problem gets easier to solve as the number of facilities increases: with a greater number of facilities, it is likely that for each part one of the facilities will dominate as the best option, so other solutions can quickly be eliminated and the problem solved more quickly. The two-facility problem is the most difficult; however, the heuristic performs well, with an average gap of 0.06% between the heuristic and optimal solutions. We conclude with a summary of our experiences modeling and solving the sourcing problem for a sheet metal fabrication facility. The model solved for this problem had over 1857 parts and 19 machines, which translates to over 70,000 variables and 38 constraints. Although extremely large compared to problems solved in the literature, this problem was solvable because of the unique structure of industry data. Our work with the facility saved the parent organization up to $4.16M per year and provided a tool that encourages a systematic and quantitative process for evaluating decisions related to sheet metal fabrication capacity.
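To make the benefit-to-cost idea concrete, the following is a minimal greedy sketch for a generic MKP of the kind described above: items are ranked by value divided by a surrogate-weighted sum of their resource requirements and added while capacity remains. It illustrates the general technique only; the surrogate weights used here (total usage relative to capacity) and all names are our assumptions, not the dissertation's exact heuristic.

```python
# Minimal greedy sketch for a multidimensional knapsack problem (MKP):
#   maximize  sum_j values[j] * x[j]
#   s.t.      sum_j a[i][j] * x[j] <= b[i]  for every constraint i,  x[j] in {0,1}.

def greedy_mkp(values, a, b):
    """values[j]: benefit of item j; a[i][j]: use of resource i by item j;
    b[i]: capacity of resource i. Returns (selected item indices, objective)."""
    m, n = len(b), len(values)
    # Surrogate weight per constraint: total usage over capacity
    # (tighter constraints receive larger weights) -- an illustrative choice.
    mu = [sum(a[i]) / b[i] if b[i] > 0 else 0.0 for i in range(m)]

    def ratio(j):
        cost = sum(mu[i] * a[i][j] for i in range(m))
        return values[j] / cost if cost > 0 else float('inf')

    order = sorted(range(n), key=ratio, reverse=True)
    used = [0.0] * m
    chosen = []
    for j in order:                      # add items in ratio order while feasible
        if all(used[i] + a[i][j] <= b[i] for i in range(m)):
            chosen.append(j)
            for i in range(m):
                used[i] += a[i][j]
    return chosen, sum(values[j] for j in chosen)

# Tiny usage example: 4 candidate parts, 2 capacity constraints.
values = [10, 7, 6, 3]
a = [[4, 3, 2, 1],    # machine-hours per part
     [5, 2, 3, 1]]    # labor-hours per part
b = [6, 7]            # available machine-hours and labor-hours
print(greedy_mkp(values, a, b))
```

On this tiny example the heuristic selects parts 3, 1, and 2 for a total value of 16, leaving the most resource-hungry part (part 0) out.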
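The multi-facility extension can be written, in our own notation, as a knapsack-style assignment model along the following lines; this is a sketch of the structure described above, not the dissertation's exact formulation.

\[
\max \sum_{j}\sum_{k} v_{jk}\, x_{jk}
\quad \text{s.t.} \quad
\sum_{j} a_{ij}\, x_{jk} \le b_{ik} \ \ \forall\, i,k, \qquad
\sum_{k} x_{jk} \le 1 \ \ \forall\, j, \qquad
x_{jk} \in \{0,1\},
\]

where $x_{jk} = 1$ if part $j$ is assigned to facility $k$, $v_{jk}$ is the value of producing part $j$ at facility $k$, $a_{ij}$ is part $j$'s requirement of resource $i$, and $b_{ik}$ is facility $k$'s capacity of resource $i$. A part with $x_{jk} = 0$ for all $k$ is outsourced, and $\sum_k x_{jk} \le 1$ is the assignment constraint mentioned above.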