Browsing by Author "Ellis, Kimberly P."
Now showing 1 - 20 of 57
- Air Traffic Control Resource Management Strategies and the Small Aircraft Transportation System: A System Dynamics Perspective. Galvin, James J. (Virginia Tech, 2002-12-02). The National Aeronautics and Space Administration (NASA) is leading a research effort to develop a Small Aircraft Transportation System (SATS) that will expand air transportation capabilities to hundreds of underutilized airports in the United States. Most of the research effort addresses the technological development of the small aircraft as well as the systems to manage airspace usage and surface activities at airports. The Federal Aviation Administration (FAA) will also play a major role in the successful implementation of SATS; however, the administration is reluctant to embrace the unproven concept. The purpose of the research presented in this dissertation is to determine whether the FAA can pursue a resource management strategy that will support the current radar-based Air Traffic Control (ATC) system as well as the Global Positioning System (GPS)-based ATC system required by SATS. The research centered on the use of the System Dynamics modeling methodology to determine the future behavior of the principal components of the ATC system over time. The research included a model of the ATC system consisting of people, facilities, equipment, airports, aircraft, the FAA budget, and the Airport and Airways Trust Fund. The model generated system performance behavior used to evaluate three scenarios. The first scenario depicted the base-case behavior of the system if the FAA continued its current resource management practices. The second scenario depicted the behavior of the system if the FAA emphasized development of GPS-based ATC systems. The third scenario depicted a combined resource management strategy that supplemented radar systems with GPS systems. The findings of the research were that the FAA must pursue a resource management strategy that primarily funds a radar-based ATC system and directs lesser funding toward a GPS-based supplemental ATC system. The most significant contribution of this research was the insight and understanding gained of how several resource management strategies and the presence of SATS aircraft may impact the future US Air Traffic Control system.
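The System Dynamics methodology named in this abstract represents a system as stocks and flows integrated over time. The Python sketch below shows the mechanics on a single hypothetical loop (an ATC workforce stock with hiring and attrition flows); all parameters are invented for illustration, and the dissertation's full model spans many more stocks, including facilities, equipment, the FAA budget, and the trust fund.

```python
DT = 0.25      # years per integration step
HORIZON = 20   # years simulated

controllers = 15000.0    # stock: ATC controller workforce (hypothetical)
attrition_rate = 0.05    # fraction of workforce leaving per year
target = 17000.0         # staffing goal implied by traffic growth
hiring_delay = 2.0       # years to recruit and train a controller

history = []
for _ in range(int(HORIZON / DT)):
    hiring = max(target - controllers, 0.0) / hiring_delay   # inflow
    attrition = controllers * attrition_rate                 # outflow
    controllers += (hiring - attrition) * DT                 # Euler step
    history.append(controllers)

print(f"Workforce after {HORIZON} years: {history[-1]:.0f}")
```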
- Analysis and Improvement of Cross-dock Operations in Less-than-Truckload Freight Transportation Industry. Tong, Xiangshang (Virginia Tech, 2009-08-11). The less-than-truckload (LTL) transportation industry is highly competitive with low profit margins. Carriers in this industry strive to reduce costs and improve customer service to remain profitable. LTL carriers rely on a network of hubs and service centers to transfer freight. A hub is typically a cross-docking terminal in which shipments from inbound trailers are unloaded, reassigned, and consolidated onto outbound trailers going to the correct destinations. Freight handling in a hub is labor intensive, and workers must quickly transfer freight during a short time period to support customer service levels. Reducing shipment transfer time in hubs offers the opportunity to reduce labor costs, improve customer service, and increase competitive advantages for carriers. This research focuses on improving the efficiency of hub operations in order to decrease handling costs and increase service levels for LTL carriers. Specifically, the following two decision problems are investigated: (1) assigning trailers to dock doors to minimize the total time required to transfer shipments from inbound trailers to destination trailers, and (2) sequencing the unloading and loading of freight to minimize the time required by dock employees. The trailer-to-door assignment problem is modeled as a Quadratic Assignment Problem (QAP). Both semi-permanent and dynamic layouts for the trailer-to-door assignment problem are evaluated. Improvement-based heuristics, including pair-wise exchange, simulated annealing, and genetic algorithms, are explored to solve the trailer-to-door assignment problem. The freight sequencing problem is modeled as a Rural Postman Problem (RPP). A Balance and Connect Algorithm (BCA) and an Assign First and Route Second Algorithm (AFRSA) are investigated and compared to Balanced Trailer-at-a-Time (BTAAT), Balanced Trailer-at-a-Time with Offloading (BTAATWO), and Nearest Neighbor (NN). The heuristics are evaluated using data from two LTL carriers. For these data sets, both the total travel distance and the transfer time of hub operations are reduced using a dynamic layout with efficient freight sequencing approaches, such as the Balance and Connect Algorithm (BCA), the Balanced Trailer-at-a-Time with Offloading (BTAATWO), and the Nearest Neighbor (NN). Specifically, with a dynamic layout, the BCA reduces travel distance by 10% to 27% over BTAAT and reduces the transfer time by 17% to 68% over BTAAT. A simulation study is also conducted for hub operations in a dynamic and stochastic environment. The solutions from the door assignment and freight sequencing approaches are evaluated in a simulation model to determine their effectiveness in this environment. The simulation results further demonstrate that the performance measures of hub operations are improved using a dynamic layout and efficient freight sequencing approaches. The main contributions of this research are the integer programming models developed for the freight sequencing problem (FSP), based on the Rural Postman Problem (RPP). This is the first known application of the RPP to the FSP. Efficient heuristics are developed for the FSP for a single worker and for multiple workers. These heuristics are analyzed and compared to previous research using industry data.
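Since the trailer-to-door assignment is modeled as a QAP, the simplest of the improvement heuristics listed above, pair-wise exchange, can be sketched as follows. The flow matrix (freight moved between trailer pairs) and door distances here are random toy data, not the carriers' data sets.

```python
import itertools
import random

def qap_cost(assign, flow, dist):
    """Total handling effort: freight flow between trailers i and j weighted
    by the distance between their assigned doors (assign[i] is i's door)."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def pairwise_exchange(assign, flow, dist):
    """Swap door assignments of trailer pairs, keeping any improving swap,
    until no swap helps (a local optimum for the QAP)."""
    best = qap_cost(assign, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            cost = qap_cost(assign, flow, dist)
            if cost < best:
                best, improved = cost, True
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo swap
    return assign, best

random.seed(1)
n = 6  # toy instance: 6 trailers, 6 doors along a straight dock
flow = [[random.randint(0, 9) * (i != j) for j in range(n)] for i in range(n)]
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
print(pairwise_exchange(list(range(n)), flow, dist))
```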
- Analysis of the Effect of Ordering Policies for a Manufacturing Cell Transitioning to Lean Production. Hafner, Alan D. (Virginia Tech, 2003-06-11). Over the past two decades, Lean Production has begun to replace traditional manufacturing techniques around the world, mainly due to the success of the Toyota Motor Company. One key to Toyota's success that many American companies have not been able to emulate is the transformation of their suppliers to the lean philosophy. This lack of supplier transformation in America is due to a variety of reasons including differences in supplier proximity, supplier relationships, supplier performance levels, and the ordering policies used for supplied parts. The focus of this research is analyzing the impact of ordering policies for supplied parts of a manufacturing cell utilizing Lean Production techniques. This thesis presents a simulation analysis of a multi-stage, lean manufacturing cell that produces a family of products. The analysis investigates how the ordering policy for supplied parts affects the performance of the cell under conditions of demand variability and imperfect supplier performance. The ordering policies evaluated are a periodic-review inventory control policy (s, S) and two kanban policies. The performance of the cell is measured by the flowtime of the product through the cell, the on-time-delivery to their customer, the number of products shipped each week, the amount of work-in-process inventory in the cell, the approximate percentage of time the cell was stocked out, and the average supplied part inventory levels for the cell. Using this simulation model, an experimental analysis is conducted using an augmented central composite design. Then, a multivariate analysis is performed on the results of the experiments. The results obtained from this study suggest that the preferred ordering policy for supplied parts is the (s, S) inventory policy for most levels of the other three factors and most of the performance measures. This policy, however, results in increased levels of supplied part inventory, which is the primary reason for the high performance for most response variables. This increased inventory is in direct conflict with the emphasis on inventory and waste reduction, one of the key principles of Lean Production. Furthermore, the inflated kanban policy tends to perform well at high levels of supplier on-time delivery and low levels of customer demand variability. These results are consistent with the proper conditions under which to implement Lean Production: good supplier performance and level customer demand. Thus, while the (s, S) inventory policy may be advantageous as a company begins transitioning to Lean Production, the inflated kanban policy may be preferable once the company has established good supplier performance and level customer demand.
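The (s, S) policy compared in this thesis has simple mechanics: at each periodic review, if the inventory position (on hand plus on order) has fallen to the reorder point s or below, order enough to raise it to S. A minimal simulation sketch follows; the demand distribution, lead time, and parameter values are illustrative assumptions, not the thesis data.

```python
import random

def simulate_sS(s, S, periods=52, lead_time=1):
    """Periodic-review (s, S) policy for one supplied part, tracking
    stockouts and average on-hand inventory under random demand."""
    random.seed(7)
    on_hand, pipeline = S, []          # start fully stocked
    stockouts, total_inv = 0, 0.0
    for _ in range(periods):
        pipeline = [(eta - 1, qty) for eta, qty in pipeline]
        on_hand += sum(qty for eta, qty in pipeline if eta <= 0)  # receipts
        pipeline = [(eta, qty) for eta, qty in pipeline if eta > 0]
        demand = random.randint(2, 8)                 # illustrative demand
        stockouts += max(demand - on_hand, 0)
        on_hand = max(on_hand - demand, 0)
        position = on_hand + sum(qty for _, qty in pipeline)
        if position <= s:
            pipeline.append((lead_time, S - position))  # order up to S
        total_inv += on_hand
    return stockouts, total_inv / periods

print(simulate_sS(s=10, S=30))  # (stockout units, average on-hand inventory)
```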
- Analysis of Worker Assignment Policies on Production Line Performance Utilizing a Multi-skilled Workforce. McDonald, Thomas N. (Virginia Tech, 2004-02-27). Lean production prescribes training workers on all tasks within the cell to adapt to changes in customer demand. Multi-skilling of workers can be achieved by cross-training, which can be improved and reinforced by implementing job rotation. Lean production also prescribes using job rotation to improve worker flexibility and worker satisfaction, and to increase workers' knowledge of how their work affects the rest of the cell. Currently, there is minimal research on how to assign multi-skilled workers to tasks within a lean production cell while considering multi-skilling and job rotation. In this research, a new mathematical model was developed that assigns workers to tasks, while ensuring job rotation, and determines the levels of skill, and thus training, necessary to meet customer demand, quality requirements, and training objectives. The model is solved using sequential goal programming to incorporate three objectives: overproduction, cost of poor quality, and cost of training. The results of the model include an assignment of workers to tasks, a determination of the training necessary for the workers, and a job rotation schedule. To evaluate the results on a cost basis, the costs associated with overproduction, defects, and training were used to calculate the net present cost for one year. The solutions from the model were further analyzed using a simulation model of the cell to determine the impact of job rotation and multi-skilling levels on production line performance. The measures of performance include average flowtime, work-in-process (WIP) level, and monthly shipments (number produced). Using the model, the impact of alternative levels of multi-skilling and job rotation on the performance of cellular manufacturing systems is investigated. Understanding the effects of multi-skilling and job rotation can aid both production managers and human resources managers in determining which workers need training and how often workers should be rotated to improve the performance of the cell. The lean production literature prescribes training workers on all tasks within a cell and developing a rotation schedule to reinforce the cross-training. Four levels of multi-skilling and three levels of job rotation frequency are evaluated for both a hypothetical cell and a case application in a relatively mature actual production cell. The results of this investigation provide insight on how multi-skilling and job rotation frequency influence production line performance and provide guidance on training policies. The results show that there is an interaction effect between multi-skilling and job rotation for flowtime and work-in-process in both the hypothetical cell and the case application, and for monthly shipments in the case application. Therefore, the effect of job rotation on performance measures is not the same at all levels of multi-skilling, indicating that inferences about the effect of changing multi-skilling, for example, should not be made without considering the job rotation level. The results also indicate that the net present cost is heavily influenced by the cost of poor quality. The results for the case application indicated that the maturity level of the cell influences the benefits derived from increased multi-skilling and affects several key characteristics of the cell.
As a cell becomes more mature, it is expected that the quality levels increase and that the skill levels on tasks normally performed increase. Because workers in the case application already have a high skill level on some tasks, the return on training is not as significant. Additionally, the mature cell has relatively high quality levels from the beginning and any improvements in quality would be in small increments rather than in large breakthroughs. The primary contribution of this research is the development of a sequential goal programming worker assignment model that addresses overproduction, poor quality, cross-training, and job rotation in order to meet the prescription in the lean production literature of only producing to customer demand while utilizing multi-skilled workers. Further contributions are analysis of how multi-skilling level and job rotation frequency impact the performance of the cell. Lastly, a contribution is the application of optimization and simulation methods for comprehensively analyzing the impact of worker assignment on performance measures.
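Sequential (lexicographic) goal programming, as used in this worker-assignment model, optimizes the goals one priority level at a time, locking in each achieved level as a constraint before moving to the next. The sketch below shows only those mechanics on a generic two-variable linear program using scipy; the dissertation's actual model has integer worker-task assignment variables and goals for overproduction, cost of poor quality, and cost of training.

```python
from scipy.optimize import linprog

def sequential_goal_program(goal_vectors, A_ub, b_ub, bounds):
    """Optimize goal vectors in priority order; after each stage, constrain
    later stages so the achieved goal level cannot degrade."""
    A, b = [row[:] for row in A_ub], list(b_ub)
    x = None
    for c in goal_vectors:                    # ordered by priority
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
        x = res.x
        A.append(list(c))                     # lock in this goal's optimum
        b.append(res.fun + 1e-9)              # (small tolerance for LP noise)
    return x

# Two decision variables, two prioritized goals, one shared resource limit.
goals = [[1.0, 0.0],      # priority 1: minimize x0
         [0.0, -1.0]]     # priority 2: maximize x1
print(sequential_goal_program(goals, A_ub=[[1.0, 1.0]], b_ub=[10.0],
                              bounds=[(0, None), (0, None)]))  # ~[0, 10]
```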
- Application of HTML/VRML to Manufacturing Systems Engineering. Krishnamurthy, Kasthuri Rangan (Virginia Tech, 2000-12-14). Manufacturing systems are complex entities comprised of people, processes, products, information systems and data, and material processing, handling, and storage systems. Because of this complexity, systems must be modeled using a variety of views and modeling formalisms. In order to design and analyze manufacturing systems, the multiple views and models often need to be considered simultaneously. However, no single tool or computing environment currently exists that allows this to be done in an efficient and intelligible manner. New tools such as HTML and VRML present a promising approach for tackling these problems. They make possible environments where the different models can coexist and where mapping/linking between the models can be achieved. This research is concerned with developing a hybrid HTML/VRML environment for manufacturing systems modeling and analysis. An experiment was performed to compare this hybrid HTML/VRML modeling environment to the traditional database environment in order to answer typical design/analysis questions associated with manufacturing systems, and to establish the potential advantages of this approach. Analysis of the results obtained from the experiment indicated that the HTML/VRML approach might result in a better understanding of a manufacturing system than the traditional database approach.
- Capacity Investment, Flexibility, and Product Substitution/Complementarity under Demand Uncertainty. Suwandechochai, Rawee (Virginia Tech, 2005-12-15). We provide a comprehensive characterization of the relationship between optimal capacity and the degree of product substitution/complementarity under price/production postponement, considering different business practices (holdback versus clearance, negative price policies) and different demand models. Specifically, we consider a firm that produces two products, which can be substitutable or complementary. The demand for each product is a linear function of the prices of both products (with the relationship depending on the substitution/complementarity structure) and is subject to an additive stochastic shock. We consider two types of linear demand functions that are commonly used in the economics and operations management literature. The firm operates in a monopolistic setting and acts as a price-setter for both products. Overall, the firm needs to make three sets of decisions: capacity, production quantities, and prices. While the capacity investment decision has to be made ex ante, before the demand curves are observed, price and/or quantity decisions can be postponed until after the demand curves are observed. We consider two postponement strategies: price and quantity postponement, and price postponement only. We characterize the optimal pricing/production/investment decisions for each postponement strategy. Using these characterizations, we show that product substitution/complementarity is a key demand characteristic with a large impact on the optimal capacity. Our results show that the behavior of the optimal capacity in the substitution/complementarity parameter is quite similar under both postponement strategies, and under holdback and clearance. However, this behavior depends highly on other underlying assumptions (i.e., whether or not negative prices are allowed) and on the demand model used.
- Collection-and-Delivery-Points: A Variation on a Location-Routing Problem. Savage, Laura Elizabeth (Virginia Tech, 2019-09-20). Missed deliveries are a major issue for package carriers and a source of great hassle for the customers. Either the carrier attempts to redeliver the package, incurring the additional expense of visiting the same house up to three times, or leaves it on the doorstep, vulnerable to package thieves. In this dissertation, a system of collection-and-delivery-points (CDPs) has been proposed to improve customer service and reduce carrier costs. A CDP is a place, either in an existing business or a new location, where the carrier drops any missed deliveries and the customers can pick up the packages at their convenience. To examine the viability of a CDP system in North America, a variation on a location-routing problem (LRP) was created: a mixed-integer programming model called the CDP-LRP. Unlike standard LRPs, the CDP-LRP takes into account both the delivery truck route distance and the direct customer travel to the CDPs. Also, the facilities being placed are not located at the beginnings and ends of the truck routes but are stops along the routes. After testing, it became clear that, because of the size and complexity of the problem, the CDP-LRP cannot be solved exactly in a reasonable amount of time. Heuristics developed for the standard LRP cannot be applied to the CDP-LRP because of the differences between the models. Therefore, three heuristics were created to approximate the solution to the CDP-LRP, each with two different embedded modified vehicle routing problem (VRP) algorithms, the Clarke-Wright and the Sweep, modified to handle the additional restrictions caused by the CDPs. The first, Replacement, is an improvement heuristic in which each closed CDP is tested as a replacement for each open CDP, and the move that creates the most savings is implemented. The second, OpenAll, begins with every CDP open and closes them one at a time, while the third, OpenOne, does the reverse and begins with only one open CDP, then opens the others one by one. In each case, a penalty is applied if the customer travel distance is too long. Each heuristic was tested for each possible number of open CDPs, and the least expensive was chosen as the best solution. Each heuristic and VRP algorithm combination was tested using three delivery failure rates and different data sets: three small data sets pulled from the VRP literature, and randomly generated clustered and uniformly distributed data sets with three different numbers of customers. OpenAll and OpenOne produced better solutions than Replacement in most instances, and the Sweep algorithm outperformed the Clarke-Wright in both solution quality and time in almost every test. To judge the quality of the heuristic solutions, the results were compared to the results of a simple locate-first, route-second sequential algorithm that represents the way the decision would commonly be made in industry today. The CDPs were located using a simple facility location model, then the delivery routes were created with the Sweep algorithm. These results were mixed: for the uniformly distributed data sets, if the customer travel penalty threshold and customer density are low enough, the heuristics outperform the sequential algorithm.
For the clustered data sets, the sequential algorithm produces solutions as good as or slightly better than the heuristics, because the location of the potential CDPs inside the clusters means that the penalty has less impact, and the addition of more open CDPs has less effect on the delivery route distances. The heuristic solutions were also compared to a second value, the route costs incurred by the carrier in the current system of redeliveries (calculated by placing additional customers in the routes and running the Sweep algorithm), to judge the potential savings that could be realized by implementing a CDP system in North America. Though in some circumstances the current system is less expensive, depending on the geographic distribution of the customers and the delivery failure rate, in other circumstances the cost savings to the carrier could be as high as 27.1%. Though the decision of whether or not to set up a CDP system in an area would need to be made on a case-by-case basis, the results of this study suggest that such a system could be successful in North America.
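The Sweep construction embedded in these heuristics is itself simple: order customers by polar angle around the depot and start a new route whenever the next customer would exceed vehicle capacity. A bare-bones sketch follows, with toy coordinates and unit demands; the dissertation's modified version adds CDP stops and the customer travel penalty checks, which this omits.

```python
import math

def sweep_routes(depot, customers, demands, capacity):
    """Classical Sweep construction: sort customers by polar angle around
    the depot and cut a new route whenever capacity would be exceeded."""
    order = sorted(range(len(customers)),
                   key=lambda i: math.atan2(customers[i][1] - depot[1],
                                            customers[i][0] - depot[0]))
    routes, current, load = [], [], 0
    for i in order:
        if load + demands[i] > capacity and current:
            routes.append(current)
            current, load = [], 0
        current.append(i)
        load += demands[i]
    if current:
        routes.append(current)
    return routes

customers = [(2, 1), (1, 2), (-1, 2), (-2, -1), (1, -2), (2, -1)]
print(sweep_routes((0, 0), customers, [1] * 6, capacity=2))
```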
- Comparison of Scheduling Algorithms for a Multi-Product Batch-Chemical Plant with a Generalized Serial Network. Tra, Niem-Trung L. (Virginia Tech, 2000-01-24). Despite recent advances in computer power and the development of better algorithms, theoretical scheduling methodologies developed for batch-chemical production are seldom applied in industry (Musier & Evans 1989; Grossmann et al. 1992). Scheduling decisions may have significant impact on overall company profitability by defining how capital is utilized, the operating costs required, and the ability to meet due dates. The purpose of this research is to compare different production scheduling methods by applying them to a real-world multi-stage, multi-product, batch-chemical production line. This research addresses the problem that theoretical algorithms are seldom applied in industry and allows for performance analysis of several theoretical algorithms. The research presented in this thesis focuses on the development and comparison of several scheduling algorithms. The two objectives of this research are to: (1) modify different heuristic production scheduling algorithms to minimize tardiness for a multi-product batch plant involving multiple processing stages with several out-of-phase parallel machines in each stage; and (2) compare the robustness and performance of these production schedules using a stochastic discrete event simulation of a real-world production line. The following three scheduling algorithms are compared: (1) a modified Musier and Evans scheduling algorithm (1989); (2) a modified Ku and Karimi Sequence Building Algorithm (1991); and (3) a greedy heuristic based on an earliest-due-date (EDD) policy. Musier and Evans' heuristic improvement method (1989) is applied to all three algorithms. The computation times required to determine the total tardiness of each schedule are compared. Finally, all the schedules are tested for robustness and performance in a stochastic setting with the use of a discrete event simulation (DES) model. Mignon, Honkomp, and Reklaitis' evaluation techniques (1995) and Multiple Comparisons with the Best are used to help determine the best algorithm.
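Of the three algorithms compared, the EDD-based greedy is the easiest to state: sequence jobs in nondecreasing due-date order and accumulate tardiness. A single-machine sketch follows; the thesis version handles multiple stages with out-of-phase parallel machines, which this toy omits.

```python
def edd_total_tardiness(jobs):
    """Earliest-due-date sequence on a single machine and its total
    tardiness, where each job has a processing time and a due date."""
    schedule = sorted(jobs, key=lambda j: j["due"])
    clock = tardiness = 0
    for job in schedule:
        clock += job["proc"]
        tardiness += max(clock - job["due"], 0)
    return [j["id"] for j in schedule], tardiness

jobs = [{"id": "A", "proc": 4, "due": 5},
        {"id": "B", "proc": 2, "due": 4},
        {"id": "C", "proc": 3, "due": 11}]
print(edd_total_tardiness(jobs))   # (['B', 'A', 'C'], 1)
```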
- Correlation of the Elastic Properties of Stretch Film on Unit Load Containment. Bisha, James Victor (Virginia Tech, 2012-05-24). The purpose of this research was to correlate the applied material properties of stretch film with its elastic properties measured in a laboratory setting. There are currently no tools available for a packaging engineer to make a scientific decision on how one stretch film performs against another without applying the film. The system for stretch wrap comparison is mostly based on trial and error, which can lead to a significant loss of product when testing a new film or shipping a new product for the first time. If the properties of applied stretch film could be predicted using a tensile test method, many different films could be compared at once without actually applying the film, saving time and money and reducing risk. The current method for evaluating the tensile properties of stretch film advises the user to apply a hysteresis test to a standard sample size and calculate several standard engineering values. This test does not represent how the material is actually used. Therefore, a new tensile testing method was developed that considers the film gauge (thickness) and its prestretch. The results of this testing method allowed for the calculation of the material stiffness (Bisha Stiffness) and were used to predict its performance in unit load containment. Applied stretch film is currently compared by measuring containment force, which current standards define as the amount of force required to pull a 15.2 cm diameter plate, located 25.4 cm down from the top and 45.7 cm over from the side of a standard 121.9 cm wide unit load, 10.1 cm out from the load face. Given this definition, increasing the amount of force required to pull the plate out can be achieved by manipulating two different stretch film properties: increasing the stiffness of the film or increasing the tension of the film across the face of the unit load during the application process. Therefore, for this research, the traditional definition of containment force was broken down into two components. Applied film stiffness was defined as the amount of force required to pull the film a given distance off the unit load. Containment force was defined as the amount of force that an applied film exerts on the corner of the unit load. The applied stretch film was evaluated using two different methods. The first method used the standard 10.1 cm pull plate (the same plate as in ASTM D 4649) to measure the force required to pull the film out at different increments from the center of the face of the unit load. This measured force was transformed into a material stiffness and film tension (which were subsequently resolved into containment force). The second, newly developed, method involved wrapping a bar under the film, on the corner of the unit load, and pulling out on the bar with a tensile testing machine. This method allowed for the direct measurement of the containment force and material stiffness. The results indicated that while some statistically significant differences were found for certain films, the material stiffness and containment force were relatively consistent and comparable using either method. The use of the Bisha Stiffness to predict the applied stiffness and containment force yielded a statistically significant correlation but with a very low coefficient of determination.
These results suggest that while film thickness and prestretch are key variables for predicting applied stiffness and containment force, more research should be conducted to study other variables that may allow for a better prediction. The high variability of the predictions was caused by differences in film morphology between the two modes of elongation (tensile testing vs. application). This study was the first to attempt to define and correlate the tensile properties of stretch film and the applied properties of stretch film. From this research, many terms have been clarified, myths have been dispelled, formulas have been properly derived and applied to the data collected, and a clear path forward has been laid out for future researchers to be able to predict applied stiffness and containment force from the elastic properties of stretch film.
- Cost Modeling Based on Support Vector Regression for Complex Products During the Early Design Phases. Huang, Guorong (Virginia Tech, 2007-08-09). The purpose of a cost model is to provide designers and decision-makers with accurate cost information to assess and compare multiple alternatives for obtaining the optimal solution and controlling cost. The cost models developed in the design phases are the most important and the most difficult to develop. Therefore, it is necessary to identify appropriate cost drivers and employ appropriate modeling techniques to accurately estimate cost for directing designers. The objective of this study is to provide higher predictive accuracy of cost estimation for directing designers in the early design phases of complex products. After a generic cost estimation model is presented and the existing methods for identification of cost drivers and different cost modeling techniques are reviewed, the dissertation first proposes new methodologies to identify and select cost drivers: the Causal-Associated (CA) method and the Tabu-Stepwise selection approach. The CA method increases understanding and explanation of the cost analysis and helps avoid missing cost drivers. The Tabu-Stepwise selection approach is used to select significant cost drivers and eliminate irrelevant cost drivers in nonlinear situations. A case study is created to illustrate their procedures and benefits, and the test data show they can improve predictive capacity. Second, this dissertation introduces Tabu-SVR, a nonparametric approach based on support vector regression (SVR) for cost estimation of complex products in the early design phases. Tabu-SVR determines the parameters of SVR via a tabu search algorithm improved by the author. For verification and validation of the performance of Tabu-SVR, five common basic cost characteristics are summarized: accumulation, linear function, power function, step function, and exponential function. Based on these five characteristics and the Flight Optimization Systems (FLOPS) cost module (engine part), seven test data sets are generated to test Tabu-SVR and compare it with traditional methods (parametric modeling, neural networks, and case-based reasoning). The empirical results show Tabu-SVR significantly improves performance compared to standard SVR. The radial basis function (RBF) kernel, which is much more robust, often performs better than linear and polynomial kernel functions. Compared with other traditional cost estimating approaches, Tabu-SVR with the RBF kernel has strong predictive capability and is able to capture nonlinearities and discontinuities along with interactions among cost drivers. The third part of this dissertation focuses on semiparametric cost estimating approaches. Extensive studies are conducted on three semiparametric algorithms based on SVR. Three data sets are produced by combining the aforementioned five common basic cost characteristics. The experiments show Semiparametric Algorithm 1 is the best approach under most situations, with better cost estimating accuracy than the pure nonparametric approach and the pure parametric approach. Model complexity influences the estimating accuracy for Semiparametric Algorithms 2 and 3. If inexact function forms are used as the parametric component of a semiparametric algorithm, they often do not improve cost estimating accuracy over the pure nonparametric approach and can even worsen performance.
The last part of this dissertation introduces two existing methods for sensitivity analysis to improve the explanation capability of the cost estimating approaches based on SVR. These methods are able to show the contribution of cost drivers, determine the effect of cost drivers, establish the profiles of cost drivers, and conduct monotonic analysis. They can ultimately help designers conduct trade-off studies and answer “what-if” questions.
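The nonparametric core of Tabu-SVR, an RBF-kernel SVR whose hyperparameters are tuned by search, can be sketched with scikit-learn as below. The synthetic cost curve combines a power function and a step function, two of the five basic cost characteristics summarized in the abstract, and a plain grid search stands in for the author's improved tabu search over the SVR parameters.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=(200, 1))          # one synthetic cost driver
y = (3 * X[:, 0] ** 0.8                        # power-function component
     + 5 * (X[:, 0] > 6)                       # step-function component
     + rng.normal(0, 0.3, 200))                # noise

# Grid search over (C, epsilon, gamma); Tabu-SVR searches this space
# with a tabu search instead of exhaustive enumeration.
search = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "epsilon": [0.01, 0.1], "gamma": [0.1, 1.0]},
    cv=5, scoring="neg_mean_absolute_error")
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```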
- Critical Success Factors for Sustaining Kaizen Event Outcomes. Glover, Wiljeana Jackson (Virginia Tech, 2010-04-05). A Kaizen event is a focused and structured improvement project, using a dedicated cross-functional team to improve a targeted work area, with specific goals, in an accelerated timeframe. Kaizen events have been widely reported to produce positive change in business results and human resource outcomes. However, it can be difficult for many organizations to sustain or improve upon the results of a Kaizen event after it concludes. Furthermore, the sustainability of Kaizen event outcomes has received limited research attention to date. This research is based on a field study of 65 events across eight manufacturing organizations that used survey data collected at the time of the event and approximately nine to eighteen months after the event. The research model was developed from Kaizen event practitioner resources, Kaizen event literature, and related process improvement sustainability and organizational change literature. The model hypothesized that Kaizen Event Characteristics, Work Area Characteristics, and Post-Event Characteristics were related to Kaizen event Sustainability Outcomes. Furthermore, the model hypothesized that Post-Event Characteristics would mediate the relationship between Kaizen Event and Work Area Characteristics and the Sustainability Outcomes. The study hypotheses were analyzed through multiple regression models, and generalized estimating equations were used to account for potential nesting effects (events within organizations). The factors that were most strongly related to each Sustainability Outcome were identified. The Work Area Characteristics "learning and stewardship" and "experimentation and continuous improvement" and the Post-Event Characteristics "performance review" and "accepting changes" were significant direct or indirect predictors of multiple Sustainability Outcomes, and these findings were generally supported by the literature. There were also some unanticipated findings, particularly regarding the modeling of the Sustainability Outcomes "result sustainability" and "goal sustainability", which appear to illustrate potential issues regarding how organizations define and track the performance of Kaizen events over time and present areas for future research. Overall, this study advances academic knowledge regarding Kaizen event outcome sustainability. The findings also present guidelines so that practitioners may better influence the longer-term impact of Kaizen events on their organizations. The research findings may also extend to other improvement activities, thus presenting additional areas for future work.
- A Data Clustering Approach to Support Modular Product Family Design. Sahin, Asli (Virginia Tech, 2007-09-21). Product Platform Planning is an emerging philosophy that calls for the planned development of families of related products. It is markedly different from the traditional product development process and relatively new in engineering design. Product families and platforms can offer a multitude of benefits when applied successfully, such as economies of scale from producing larger volumes of the same modules, lower design costs from not having to redesign similar subsystems, and many other advantages arising from the sharing of modules. While advances in this area are promising, significant challenges remain in designing product families and platforms. This is particularly true for defining the platform components, platform architecture, and significantly different platform and product variants in a systematic manner. The lack of precise definitions for platform design assets, in terms of relevant customer requirements, distinct differentiations, engineering functions, components, component interfaces, and the relations among them, poses a major obstacle for companies seeking to take full advantage of the potential benefits of a product platform strategy. The main purpose of this research is to address the above-mentioned challenges during the design and development of modular platform-based product families. It focuses on providing answers to a fundamental question, namely, how can a decision support approach, from product module definition to the determination of platform alternatives and product variants, be integrated into product family design? The method presented in this work emphasizes the incorporation of critical design requirements and specifications for the design of distinctive product modules to create platform concepts and product variants using a data clustering approach. A case application developed in collaboration with a tire manufacturer is used to verify that this research approach is suitable for reducing the complexity of design results by determining design commonalities across multiple design characteristics. The method was found helpful for determining and integrating critical design information (i.e., component dimensions, material properties, modularization driving factors, and functional relations) systematically into the design of product families and platforms. It supported decision-makers in defining distinctive product modules within the families and in determining multiple platform concepts and derivative product variants.
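The clustering step at the heart of this approach can be illustrated in a few lines of scikit-learn: standardize the design characteristics of candidate components, cluster them, and read the clusters as candidate shared modules. The component names and attribute values below are invented for illustration, not the tire manufacturer's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Rows: candidate components. Columns (hypothetical design characteristics):
# dimension (mm), material stiffness, modularization-driver score,
# functional-relation count.
components = ["sidewall_A", "sidewall_B", "tread_1", "tread_2", "bead_X"]
X = np.array([[410, 2.1, 0.8, 3],
              [415, 2.0, 0.9, 3],
              [620, 5.5, 0.3, 7],
              [605, 5.2, 0.4, 6],
              [150, 9.0, 0.7, 2]], dtype=float)

# Standardize so no single characteristic dominates the distance metric.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for name, lab in zip(components, labels):
    print(f"{name}: candidate module group {lab}")
```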
- Data Exchange for Artificial Intelligence Incubation in Manufacturing Industrial Internet. Zeng, Yingyan (Virginia Tech, 2024-08-21). Industrial Cyber-physical Systems (ICPSs) connect industrial equipment and manufacturing processes via ubiquitous sensors, actuators, and computing units, forming the Manufacturing Industrial Internet (MII). With the data generated from MII, Artificial Intelligence (AI) greatly advances data-driven decision making for manufacturing efficiency, quality improvement, and cost reduction. However, data of poor quality pose significant challenges to the incubation (i.e., training, validation, and deployment) of AI models. In the offline training phase, training data of poor quality result in inaccurate AI models. In the online training and deployment phases, high-volume but information-poor data lead to discrepancies in AI modeling performance across phases, high communication and computation workloads, and high costs in data acquisition and storage. In the incubation of AI models for multiple manufacturing stages or systems, exchanging and sharing datasets can significantly improve the efficiency of data collection for a single manufacturing enterprise and improve the quality of training datasets. However, inaccurate estimation of the value of datasets can make dataset exchange ineffective and hamper the scaling up of AI systems. High-quality and high-value data not only enhance modeling performance during AI incubation but also contribute to effective data exchange for potential synergistic intelligence in MII. Therefore, it is important to assess and ensure data quality in terms of its value for AI models. In this dissertation, our ultimate goal is to establish a data exchange paradigm that provides high-quality and high-value data for AI incubation in MII. To achieve this goal, three research tasks are proposed for different phases of AI incubation: (1) a prediction-oriented data generation method to actively generate highly informative data in the offline training phase for high prediction performance (Chapter 2); (2) an ensemble active learning by contextual bandits framework for the acquisition and evaluation of passively collected online data for continuous improvement and resilient modeling performance during the online training and deployment phases (Chapter 3); and (3) a context-aware, performance-oriented, and privacy-preserving dataset-sharing framework to efficiently share and exchange small-but-high-quality datasets between trusted stakeholders to allow their on-demand usage (Chapter 4). All the proposed methodologies have been evaluated and validated through simulation studies and applications to real manufacturing case studies. In Chapter 5, the contributions of the work are summarized and future research directions are proposed.
- Data Filtering and Modeling for Smart Manufacturing Network. Li, Yifu (Virginia Tech, 2020-08-13). A smart manufacturing network connects machines via sensing, communication, and actuation networks. The data generated from the networks are used in data-driven modeling and decision-making to improve quality, productivity, and flexibility while reducing cost. This dissertation focuses on improving the data-driven modeling of the quality-process relationship in smart manufacturing networks. The quality-process variable relationships are important to understand for guiding quality improvement by optimizing the process variables. However, several challenges emerge. First, the big data sets generated from the manufacturing network may be information-poor for modeling, which may lead to high data transmission and computational loads and redundant data storage. Second, the data generated from connected machines often contain inexplicit similarities due to similar product designs and manufacturing processes. Modeling such inexplicit similarities remains challenging. Third, it is unclear how to select representative data sets for modeling in a manufacturing network setting while considering inexplicit similarities. In this dissertation, a data filtering method is proposed to select a relatively small and informative data subset. Multi-task learning is combined with latent variable decomposition to model multiple connected manufacturing processes that are similar-but-non-identical. A data filtering and modeling framework is also proposed to adaptively filter the manufacturing data for manufacturing network modeling. The proposed methodologies have been validated through simulation and applications to real manufacturing case studies.
- Decision Support System to Predict the Manufacturing Yield of Printed Circuit Board Assembly Lines. Helo, Felipe (Virginia Tech, 1999-12-01). This research focuses on developing a model to predict the yield of a printed circuit board manufactured on a given assembly line. Based on an extensive literature review as well as discussions with industrial partners, it was determined that there is no tool available for assisting engineers in determining reliable estimates of their production capabilities as they introduce new board designs onto their current production lines. Motivated by this need, a more in-depth study of manufacturing yield as well as the electronic assembly process was undertaken. The relevant literature was divided into three main fields: process modeling, board design, and PCB testing. The model presented in this research combines elements from process modeling and board design into a single yield model. An optimization model was formulated to determine the fault probabilities that minimize the difference between actual yield values and predicted yield values. This model determines fault probabilities (per component type) based on past production yields for the different board designs assembled. These probabilities are then used to estimate the yields of future board designs. Two different yield models were tested and their assumptions regarding the nature of the faults were validated. The model that assumes independence between faults provided better yield predictions. A preliminary case study was performed to compare the performance of the presented model with that of previous models using data available from the literature. The proposed yield model predicts yield within 3% of the actual yield value, outperforming previous regression models that predicted yield within 10% and artificial neural network models that predicted yield within 5%. A second case study was performed using data gathered from actual production lines. The proposed yield model continued to provide very good yield predictions: the average difference with respect to the actual yields in this case study ranged between 1.25% and 2.27% for the lines studied. Through sensitivity analysis, it was determined that certain component types have a considerably higher effect on yield than others. Once the proposed yield model is implemented, design suggestions can be made to account for manufacturability issues during the design process.
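The better-performing of the two yield models, the one assuming independent faults, reduces to a product over component types: a board passes only if no component instance produces a fault. A sketch of that calculation follows, with invented fault probabilities; the research estimates them from past production yields via optimization.

```python
def predicted_yield(fault_probs, board):
    """Independence-assumption yield model: with per-component-type fault
    probability p_k and n_k instances on the board,
    yield = product over k of (1 - p_k) ** n_k."""
    y = 1.0
    for comp_type, count in board.items():
        y *= (1.0 - fault_probs[comp_type]) ** count
    return y

# Hypothetical fault probabilities per placement, and a toy board design.
fault_probs = {"resistor": 0.0001, "QFP": 0.002, "BGA": 0.004}
board = {"resistor": 120, "QFP": 4, "BGA": 2}
print(f"predicted yield: {predicted_yield(fault_probs, board):.3f}")
```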
- Design and Reconfiguration of Manufacturing Systems in Agile Manufacturing Environments. Daghestani, Shamil F. (Virginia Tech, 1998-12-01). Agile manufacturing has become a topic of great interest over the past several years. The entire domain of modeling and analyzing different types of agile manufacturing environments and systems, however, remains largely unexplored. The objective of this research is to provide fundamental insight into how manufacturing systems should be designed and reconfigured over time in order to cope with different agile manufacturing environments. To achieve this objective, three approaches are developed and integrated into one simulation-based model. The first approach is used to model different agile manufacturing environments. The second approach is used to define various ways in which manufacturing systems can be designed and reconfigured (i.e., design/reconfiguration strategies). The third comprises the cost and objective functions used to measure system performance when different design/reconfiguration strategies are used in different agile manufacturing environments. Based upon the assumptions adopted in this thesis, the experimental work performed suggests that despite the fact that agility incurs high costs, agile manufacturing systems are indeed necessary for certain manufacturing environments in which product life cycles are short yet demand per product type is high. Therefore, it is important in certain manufacturing environments to focus on reconfiguration in short periods of time, even at the expense of higher reconfiguration costs.
- Design of Cellular Manufacturing Systems for Dynamic and Uncertain Production Requirements with Presence of Routing Flexibility. Mungwattana, Anan (Virginia Tech, 2000-09-01). Shorter product life-cycles, unpredictable demand, and customized products have forced manufacturing firms to operate more efficiently and effectively in order to adapt to changing requirements. Traditional manufacturing systems, such as job shops and flow lines, cannot handle such environments. Cellular manufacturing, which incorporates the flexibility of job shops and the high production rate of flow lines, has been seen as a promising alternative for such cases. Although cellular manufacturing provides great benefits, the design of cellular manufacturing systems is complex for real-life problems. Existing design methods employ simplifying assumptions which often deteriorate the validity of the models used for obtaining solutions. Two simplifying assumptions used in existing design methods are as follows. First, product mix and demand do not change over the planning horizon. Second, each operation can be performed by only one machine type, i.e., routing flexibility of parts is not considered. This research aimed to develop a model and a solution approach for designing cellular manufacturing systems that address these shortcomings by assuming dynamic and stochastic production requirements and employing routing flexibility. A mathematical model and an optimal solution procedure were developed for the design of cellular manufacturing systems in a dynamic and stochastic production environment with routing flexibility. Optimization techniques for solving such problems usually require a substantial amount of time and memory space; therefore, a simulated annealing based heuristic was developed to obtain good solutions within reasonable amounts of time. The heuristic was evaluated in two ways. First, different cellular manufacturing design problems were generated and solved using the heuristic. Then, solutions obtained from the heuristic were compared with lower bounds of solutions obtained from the optimal solution procedure. The lower bounds were used instead of optimal solutions because of the computational time required to obtain optimal solutions. The results show that the heuristic performs well under various circumstances, but routing flexibility has a major impact on the performance of the heuristic. The heuristic appears to perform well regardless of problem size. Second, known solutions of two cellular manufacturing design problems from the literature were used for comparison with those from the heuristic. The heuristic slightly outperforms one design approach, but substantially outperforms the other.
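The simulated annealing heuristic follows the standard skeleton: propose a neighboring design, always accept improvements, and accept worse designs with probability exp(-delta/T) under a cooling temperature. The sketch below shows that skeleton on a toy cell-formation instance (assign 8 machines to 2 cells to minimize intercell part moves); the dissertation's solution encoding and cost function are far richer.

```python
import math
import random

def simulated_annealing(x0, cost, neighbor, temp=100.0, cooling=0.995,
                        iters=2000):
    """Generic SA: accept a worse neighbor with probability exp(-delta/T)."""
    x, fx = x0, cost(x0)
    best, fbest = x[:], fx
    for _ in range(iters):
        y = neighbor(x)
        fy = cost(y)
        if fy < fx or random.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
        temp *= cooling
    return best, fbest

random.seed(3)
moves = [[random.randint(0, 5) for _ in range(8)] for _ in range(8)]

def cost(assign):
    # Intercell traffic: part moves between machines in different cells.
    return sum(moves[i][j] for i in range(8) for j in range(8)
               if assign[i] != assign[j])

def neighbor(assign):
    y = assign[:]
    y[random.randrange(8)] ^= 1   # move one machine to the other cell
    return y

print(simulated_annealing([i % 2 for i in range(8)], cost, neighbor))
```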
- Designing Order Picking Systems for Distribution Centers. Parikh, Pratik J. (Virginia Tech, 2006-09-01). This research addresses decisions involved in the design of an order picking system in a distribution center. A distribution center (DC) in a logistics system is responsible for obtaining materials from different suppliers and assembling (or sorting) them to fulfill a number of different customer orders. Order picking, which is a key activity in a DC, refers to the operation through which items are retrieved from storage locations to fulfill customer orders. Several decisions are involved when designing an order picking system (OPS). These include the identification of the picking-area layout, configuration of the storage system, and determination of the storage policy, picking method, picking strategy, material handling system, pick-assist technology, etc. For a given set of these parameters, the best design depends on the objective function being optimized (e.g., maximizing throughput, minimizing cost). The overall goal of this research is to develop a set of analytical models for OPS design. The idea is to help an OPS designer identify the best-performing alternatives out of a large number of possible alternatives. Such models will complement experience-based or simulation-based approaches, with the goal of improving the efficiency and efficacy of the design process. In this dissertation we focus on the following two key OPS design issues: configuration of the storage system and selection between batch and zone order picking strategies. Several factors that affect these decisions are identified in this dissertation, a common factor amongst them being picker blocking. We first develop models to estimate picker blocking (Contribution 1) and use the picker blocking estimates in addressing the two OPS design issues, presented as Contributions 2 and 3. In Contribution 1 we develop analytical models using discrete-time Markov chains to estimate pick-face blocking in wide-aisle OPSs. Pick-face blocking refers to the blocking experienced by a picker at a pick-face when another picker is already picking at that pick-face. We observe that for the case when pickers may pick only one item at a pick-face, similar to in-the-aisle blocking, pick-face blocking first increases with an increase in pick-density and then decreases. Moreover, pick-face blocking increases with an increase in the number of pickers and the pick-to-walk time ratio, while it decreases with an increase in the number of pick-faces. For the case when pickers may pick multiple items at a pick-face, pick-face blocking increases monotonically with an increase in the pick-density. These blocking estimates are used in addressing the two OPS design issues, which are presented as Contributions 2 and 3. In Contribution 2 we address the issue of configuring the storage system for order picking. A storage system, typically comprised of racks, is used to store pallet-loads of various stock keeping units (SKUs); a SKU is a unique identifier of the products or items stored in a DC. The design question we address is related to identifying the optimal height (i.e., number of storage levels), and thus length, of a one-pallet-deep storage system. We develop a cost-based optimization model in which the number of storage levels is the decision variable and satisfying system throughput is the constraint. The objective of the model is to minimize the system cost, which is comprised of the cost of labor and space.
To estimate the cost of labor we first develop a travel-time model for a person-aboard storage/retrieval (S/R) machine performing Tchebyshev travel as it moves within the aisle. Using this travel-time model we estimate the throughput of each picker, which helps us estimate the number of pickers required to satisfy the system throughput for a given number of storage levels. The cost of space is also modeled to complete the total cost model. Results from an experimental study suggest that a low (in height) and long (in length) storage system tends to be optimal for situations with a relatively low number of storage locations and a relatively high throughput requirement; this contrasts with the common industry perception that higher is better. The primary reason for this contrast is that industry practice does not consider picker blocking and the vertical travel of the S/R machine. On the other hand, results from the same optimization model suggest that a manual OPS should, in almost all situations, employ a high (in height) and short (in length) storage system, a result that is consistent with industry practice. This consistency is expected because picker blocking and vertical travel, ignored in industry, are not factors in a manual OPS. In Contribution 3 we address the issue of selecting between batch and zone picking strategies. A picking strategy defines the manner in which the pickers navigate the picking aisles of a storage area to pick the required items. Our aim is to help the designer identify the least expensive picking strategy that meets the system throughput requirements. Consequently, we develop a cost model to estimate the system cost of a picking system that employs either a batch or a zone picking strategy. System cost includes the cost of pickers, equipment, imbalance, the sorting system, and packers. Although all elements are modeled, we highlight the development of models to estimate the imbalance cost and the cost of the sorting system. Imbalance cost refers to the cost of fulfilling the left-over items (in customer orders) due to workload imbalance amongst pickers. To estimate the imbalance cost we develop order batching models; solving these helps identify the number of unfulfilled items. We also develop a comprehensive cost model to estimate the cost of an automated sorting system. To demonstrate the use of our models we present an illustrative example that compares a sort-while-pick batch picking system with a simultaneous zone picking system. To summarize, the overall goal of our research is to develop a set of analytical models to help the designer design order picking systems in a distribution center. In this research we focused on two key design issues and addressed them through analytical approaches. Our future research will focus on addressing other design issues and incorporating them into a decision support system.
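The Tchebyshev travel assumption in the S/R machine travel-time model means the machine moves horizontally and vertically at the same time, so a trip takes as long as its slower axis. A tiny sketch with hypothetical speeds:

```python
def tchebyshev_travel_time(x_m, y_m, vx, vy):
    """One-way travel time of a person-aboard S/R machine that moves in
    both axes simultaneously: the trip lasts as long as the slower axis."""
    return max(x_m / vx, y_m / vy)

# A pick location 30 m down the aisle and 8 m up the rack, with assumed
# speeds of 2.5 m/s horizontal and 0.5 m/s vertical:
print(tchebyshev_travel_time(30, 8, vx=2.5, vy=0.5))  # 16.0 s, vertical-bound
```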
- Discrete Approximations, Relaxations, and Applications in Quadratically Constrained Quadratic Programming. Beach, Benjamin Josiah (Virginia Tech, 2022-05-02). We present work on theory and applications for Mixed Integer Quadratically Constrained Quadratic Programs (MIQCQPs). We introduce new mixed integer programming (MIP)-based relaxation and approximation schemes for general Quadratically Constrained Quadratic Programs (QCQPs), and also study practical applications of QCQPs and mixed-integer QCQPs. We first address a challenging tank blending and scheduling problem regarding operations for a chemical plant. We model the problem as a discrete-time nonconvex MIQCP, then approximate this model as a MILP using a discretization-based approach. We combine a rolling horizon approach with the discretization of individual chemical property specifications to deal with long scheduling horizons, time-varying quality specifications, and multiple suppliers with discrete arrival times. Next, we study optimization methods applied to minimizing forces for poses and movements of chained Stewart platforms (SPs). These SPs are parallel mechanisms that are stiffer and, on average, more precise than their serial counterparts, at the cost of a smaller range of motion. The robot will be used in concert with several other types of robots to perform complex assembly missions in space. We develop algorithms and optimization models that can efficiently decide on favorable poses and movements that reduce force loads on the robot, hence reducing wear on the machine and allowing for a larger workspace and a greater overall payload capacity. In the third work, we present a technique for producing valid dual bounds for nonconvex quadratic optimization problems. The approach leverages an elegant piecewise linear approximation for univariate quadratic functions and formulates this approximation using mixed-integer programming (MIP). Combining this with a diagonal perturbation technique to convert a nonseparable quadratic function into a separable one, we present a mixed-integer convex quadratic relaxation for nonconvex quadratic optimization problems. We study the strength (or sharpness) of our formulation and the tightness of its approximation. We computationally demonstrate that our model outperforms existing MIP relaxations, and on hard instances can compete with state-of-the-art solvers. Finally, we study piecewise linear relaxations for solving quadratically constrained quadratic programs (QCQPs). We introduce new relaxation methods based on univariate reformulations of nonconvex variable products, leveraging the relaxation from the third work to model each univariate quadratic term. We also extend the NMDT approach (Castro, 2015) to leverage discretization for both variables in a bilinear term, squaring the resulting precision for the same number of binary variables. We then present various results related to the relative strength of the formulations.
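The building block of the piecewise linear technique in the third work is the univariate approximation of x^2 on a breakpoint grid. A sketch of that idea follows: each segment replaces the quadratic with its chord, which overestimates a convex function; the dissertation's actual MIP formulation, with binaries selecting the active segment, is not reproduced here.

```python
def chords(breaks):
    """Consecutive breakpoint pairs; on [a, b] the chord through (a, a^2)
    and (b, b^2) has slope a + b, giving a piecewise linear overestimator
    of the convex function x^2."""
    return list(zip(breaks[:-1], breaks[1:]))

def pwl_eval(x, segments):
    for a, b in segments:
        if a <= x <= b:
            return a * a + (a + b) * (x - a)   # chord value at x
    raise ValueError("x outside breakpoint range")

breaks = [i / 2 for i in range(-4, 5)]   # -2.0, -1.5, ..., 2.0
segs = chords(breaks)
x = 0.7
print(pwl_eval(x, segs), x * x)  # 0.55 vs 0.49: chord >= x^2; the gap
                                 # shrinks as the breakpoint grid is refined
```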
- Distribution Planning for Rail and Truck Freight Transportation Systems. Feng, Yazhe (Virginia Tech, 2012-06-21). Rail and truck freight transportation systems provide vital logistics services today. Rail systems are generally used to transport heavy and bulky commodities over long distances, while trucks tend to provide fast and flexible service for small and high-value products. In this dissertation, we study two different distribution planning problems that arise in rail and truck transportation systems. In the railroad industry, shipments are often grouped together to form a block to reduce the impact of reclassification at train yards. We consider the time and capacity constrained routing (TCCR) problem, which assigns shipments to blocks and train-runs to minimize overall transportation costs, while considering train capacities and shipment due dates. Two mathematical formulations are developed: an arc-based formulation and a path-based formulation. To solve the problem efficiently, two solution approaches are proposed. The sequential algorithm assigns shipments in order of priority while considering the remaining train capacities and due dates. The bump-shipment algorithm initially schedules shipments simultaneously and then reschedules the shipments that exceed the train capacity. The algorithms are evaluated using a data set from a major U.S. railroad with approximately 500,000 shipments. Industry-sized problems are solved within a few minutes of computational time by both the sequential and bump-shipment algorithms, and transportation costs are reduced by 6% compared to the currently used trip plans. For truck transportation systems, trailer fleet planning (TFP) is an important issue for improving services and reducing costs. In this problem, we consider the quantities and types of trailers to purchase, rent, or relocate among depots to meet time-varying demands. Mixed-integer programming models are developed for both homogeneous and heterogeneous TFP problems. The objective is to minimize the total fleet investment costs and the distribution costs across multiple depots and multiple time periods. For the homogeneous TFP problem, a two-phase solution approach is proposed. Phase I concentrates on distribution costs and determines the suggested fleet size. A sweep-based routing heuristic is applied to generate candidate routes of good quality. Then a reduced mathematical model selects routes for meeting customer demands and determines the preferred fleet size. Phase II provides trailer purchase, relocation, and rental decisions based on the results of Phase I and relevant cost information. This decomposition approach removes the interactions between depots and periods, which greatly reduces the complexity of the integrated optimization model. For the heterogeneous TFP problem, trailers with different capacities, costs, and features are considered. The two-phase approach developed for the homogeneous TFP is modified. A rolling horizon scheme is applied in Phase I to consider the trailer allocations in previous periods when determining the fleet composition for the current period. Additionally, the sweep-based routing heuristic is extended to capture the characteristics of continuous delivery practice, where trailers are allowed to refill products at satellite facilities. This heuristic generates routes for each trailer type so that the customer-trailer restrictions are accommodated.
The numerical studies, conducted using a data set with three depots and more than 400 customers, demonstrate the effectiveness of the two-phase approaches. Compared to the integrated optimization models, the two-phase approaches obtain quality solutions within a reasonable computational time and demonstrate robust performance as the problem sizes increase. Based on these results, a leading industrial gas provider is currently integrating the proposed solution approaches as part of their worldwide distribution planning software.
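The sequential algorithm for the TCCR problem in the first part of this work admits a compact sketch: take shipments in priority order and give each one the cheapest train-run that still has capacity and meets the due date. The fields below are hypothetical simplifications of the railroad data set, and the real algorithm works over blocks and multi-leg trip plans rather than single runs.

```python
def sequential_assign(shipments, runs):
    """Assign shipments in priority order to the cheapest feasible run;
    shipments with no feasible run are left for a later repair pass."""
    plan = {}
    for s in sorted(shipments, key=lambda s: s["priority"]):
        feasible = [r for r in runs
                    if r["capacity"] >= s["cars"] and r["arrival"] <= s["due"]]
        if not feasible:
            plan[s["id"]] = None          # bumped: no capacity/due-date fit
            continue
        r = min(feasible, key=lambda r: r["cost"])
        r["capacity"] -= s["cars"]        # consume remaining train capacity
        plan[s["id"]] = r["id"]
    return plan

runs = [{"id": "R1", "capacity": 10, "arrival": 3, "cost": 5},
        {"id": "R2", "capacity": 6, "arrival": 2, "cost": 8}]
shipments = [{"id": "S1", "priority": 1, "cars": 8, "due": 3},
             {"id": "S2", "priority": 2, "cars": 4, "due": 2}]
print(sequential_assign(shipments, runs))  # {'S1': 'R1', 'S2': 'R2'}
```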
- «
- 1 (current)
- 2
- 3
- »