Browsing by Author "Sumichrast, Robert T."
Now showing 1 - 20 of 30
- Application of genetic algorithm to mixed-model assembly line balancing. Evans, Jonathan D. (Virginia Tech, 1996-03-05). The demand for increased diversity, reduced cycle time, and reduced work-in-process has increased the popularity of mixed-model assembly lines. These lines combine the productivity of an assembly line with the flexibility of a job shop. The mixed-model assembly line allows setup time between models to be zero. Large mixed-model assembly lines require a timely, near-optimal solution method. A well-balanced line reduces worker idle time and simplifies the mixed-model assembly line sequencing problem. Prior attempts to solve the balancing problem have been inadequate. Heuristic techniques are too simple to find near-optimal solutions and yield only one solution. An exhaustive search requires too much processing time. Simulated Annealing works well, but yields only one solution per run, and the solutions may vary because of the random nature of the Simulated Annealing process. Multiple runs are required to get more than one solution, each run requiring an amount of time that depends on problem size. If only one run is performed, the solution achieved may be far from optimal. In addition, Simulated Annealing requires different parameters depending on the size of the problem. The Genetic Algorithm (GA) is a probabilistic heuristic search strategy. In most cases, it begins with a population of random solutions. The population is then reproduced using crossover and mutation, with the fittest solutions having a higher probability of being parents. The idea is survival of the fittest: poor or unfit solutions do not reproduce and are replaced by better, fitter solutions. The final generation should yield multiple near-optimal solutions. The objective of this study is to investigate the Genetic Algorithm and its performance compared to Simulated Annealing for large mixed-model assembly lines. The results show that the Genetic Algorithm performs comparably to Simulated Annealing. The Genetic Algorithm is used to solve various mixed-model assembly line problems to discover the correct parameters for solving any mixed-model assembly line balancing problem.
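As a concrete illustration of the GA loop described in this abstract (random initial population, fitness-biased parent selection, crossover, mutation), here is a minimal Python sketch on a toy balancing instance. The task times, station count, and GA parameters are invented, and precedence constraints are omitted for brevity; this sketches the general technique, not Evans's implementation.

```python
import random

TASK_TIMES = [4, 7, 3, 5, 6, 2, 8, 4]   # hypothetical task durations
N_STATIONS = 3

def fitness(chrom):
    # Lower is better: cycle time is set by the busiest station.
    loads = [0.0] * N_STATIONS
    for task, station in enumerate(chrom):
        loads[station] += TASK_TIMES[task]
    return max(loads)

def tournament(pop, k=3):
    # Fitter candidates are more likely to become parents.
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    return [random.randrange(N_STATIONS) if random.random() < rate else g
            for g in chrom]

pop = [[random.randrange(N_STATIONS) for _ in TASK_TIMES] for _ in range(30)]
for _ in range(200):
    elite = min(pop, key=fitness)        # elitism: keep the incumbent best
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop) - 1)] + [elite]
best = min(pop, key=fitness)
print(best, fitness(best))
```

Because the whole final population survives, it typically contains several distinct near-optimal assignments, which is the multiple-solutions advantage the abstract claims over a single Simulated Annealing run.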
- The application of simulated annealing to the mixed model, deterministic assembly line balancing problem. Edwards, Sherry L. (Virginia Tech, 1993-07-15). With the trend towards greater product customization and shorter delivery times, the use of mixed model assembly lines is increasing. A line balancing approach is needed that can address the complex nature of the mixed model line and produce near optimal solutions to problems of realistic size. Due to the combinatorial nature of the line balancing problem, exact solution techniques are limited to small problems. Heuristic methods, on the other hand, are often too simplistic to find good solutions. Furthermore, many of the existing techniques cannot be expanded to handle the mixed model problem. Simulated Annealing (SA) is a search methodology which has exhibited good results when applied to combinatorial optimization problems. In fact, researchers have found that SA is able to find near-optimal solutions while its processing time increases only as a polynomial function of problem size. However, none of the applications found in the literature fully explore the technique's ability to handle a highly constrained problem such as line balancing.
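For comparison with the GA sketch above, the core accept/reject loop of Simulated Annealing looks like the following minimal sketch on the same kind of toy instance. The temperature schedule, neighborhood move, and instance data are invented, not taken from the thesis.

```python
import math
import random

TASK_TIMES = [4, 7, 3, 5, 6, 2, 8, 4]   # hypothetical task durations
N_STATIONS = 3

def cost(sol):
    # Cycle time of the busiest station; lower is better.
    loads = [0.0] * N_STATIONS
    for task, st in enumerate(sol):
        loads[st] += TASK_TIMES[task]
    return max(loads)

sol = [random.randrange(N_STATIONS) for _ in TASK_TIMES]
temp = 10.0
while temp > 0.01:
    cand = sol[:]
    cand[random.randrange(len(cand))] = random.randrange(N_STATIONS)
    delta = cost(cand) - cost(sol)
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        sol = cand                       # accept downhill moves, sometimes uphill
    temp *= 0.995                        # geometric cooling schedule
print(sol, cost(sol))
```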
- Availability analysis of opportunistic age replacement policies. Degbotse, Alfred Tsatsu (Virginia Tech, 1996-09-15). This research develops the availability function for a two-component series system in which a component is replaced because it fails or because it reaches a prescribed age. Each component replacement also provides an opportunity for the replacement of the other component; this maintenance policy is called an opportunistic replacement strategy. The system functions only if both components are functioning, and fails if either component fails. Component i is replaced if it fails before attaining age Ti since it was last replaced or maintained, and is preventively maintained if it has not failed by age Ti. This type of replacement plan is called an age replacement policy. When component i is being replaced or preventively maintained, if the age of the other component j exceeds τj, then both components are replaced at the same time. This type of replacement is called opportunistic replacement of component j, and τj is called the opportunistic replacement time for component j. The time-dependent and long-run availability measures for the system are developed. A nested renewal theory approach is used to develop the system availability function. The nesting is defined by considering the replacement of a specific one of the components as an elementary renewal event and the simultaneous replacement of both components as the macroscopic renewal event. More specifically, the simultaneous replacement of both components serves as a regeneration point for the entire system and defines a renewal process. The intervals between system regeneration points are called "major intervals". The age replacement time Ti and opportunistic replacement time τi are treated as decision parameters during the model development. The probability distribution of the major interval is developed, and the Laplace transform of the system availability is derived. Four replacement models are obtained as specific cases of the main opportunistic age replacement policy: a failure replacement policy, an opportunistic failure model, a partial opportunistic age replacement policy, and an opportunistic age replacement policy. The long-run availability measure for the failure replacement model is proven to be the same measure as that developed by Barlow and Proschan, which validates the modeling approach.
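The policy mechanics can be pictured with a toy Monte Carlo estimate of long-run availability (the dissertation instead derives it analytically via nested renewal theory). All parameter values, the Weibull lifetimes, and the fixed service time below are assumptions, and component ages are frozen during service for simplicity.

```python
import random

T   = {0: 50.0, 1: 70.0}                 # assumed age-replacement ages Ti
TAU = {0: 30.0, 1: 40.0}                 # assumed opportunistic thresholds tau_i
SCALE, SHAPE = {0: 60.0, 1: 90.0}, 2.0   # assumed Weibull lifetime parameters
DOWNTIME = 1.0                           # assumed fixed service time per event

def draw_life(i):
    return random.weibullvariate(SCALE[i], SHAPE)

def simulate(horizon=500_000.0):
    clock = down = 0.0
    age  = [0.0, 0.0]
    life = [draw_life(0), draw_life(1)]
    while clock < horizon:
        # Next event: whichever unit first fails or reaches its age limit Ti.
        waits = [min(life[i], T[i]) - age[i] for i in (0, 1)]
        i = 0 if waits[0] <= waits[1] else 1
        age[0] += waits[i]; age[1] += waits[i]
        clock += waits[i] + DOWNTIME
        down += DOWNTIME                 # series system: down during any service
        to_replace = {i}
        if age[1 - i] >= TAU[1 - i]:     # opportunistic replacement of the other unit
            to_replace.add(1 - i)
        for k in to_replace:
            age[k], life[k] = 0.0, draw_life(k)
    return 1.0 - down / clock

print("estimated long-run availability:", round(simulate(), 4))
```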
- A case study: creating and sustaining competitive advantage through an information technology application in the lodging industry. Cho, Wonae (Virginia Tech, 1996-09-26). The use of information technology (IT) is becoming an essential component of the commercial sector. While a large number of companies have adopted IT applications to achieve competitive advantage, and a number of studies have examined competitive advantage through IT applications, it is not clear what the impact of an IT application on competitive advantage is. The purpose of this study was to examine the impact of an IT application on competitive advantage and how to create and sustain competitive advantage through an IT application. For that purpose, this study adopted Sethi and King's (1994) instrument, while the resource-based view of the firm (RBV) was the theoretical underpinning for the investigation of how to create and sustain competitive advantage through an IT application. In other words, this study examined how an IT application affects the seven dimensions that are attributes of competitive advantage through an IT application, and how a firm's resources and capabilities, which are measured in three dimensions, moderate the impact of an IT application on competitive advantage. The three dimensions were identified from the review of literature concerning the theory of RBV.
- Component availability for an age replacement preventive maintenance policy. Murdock, William P. (Virginia Tech, 1995). This research develops the availability function for a continuously demanded component maintained under an age replacement preventive maintenance policy. The availability function, A(t), is defined as the probability that the component functions at time t. The component has two states: operating and failed. Under this policy, the component is repaired or replaced at the time of failure; otherwise, if the component survives T time units, a preventive maintenance service is performed. T is known as the age replacement period. The component is considered as good as new after either service action is completed. A renewal theory approach is used to develop A(t). Past research has concerned infinite time horizons, letting analysis proceed with limiting values; this research considers a finite component economic life. The lifetime, failure service time, and preventive maintenance service time probability distributions are distinct and independent. Laplace transforms are used to simplify model development. The age replacement period, T, is treated as a parameter during model development, and the partial Laplace transform is developed to deal with truncated random time periods. A general model is developed in which the resulting availability function depends on both continuous time and T, and an exact expression for the Laplace transform of A(t, T) is derived. Two specific cases are considered. In the first case, the lifetime, repair, and preventive maintenance times follow distinct exponential distributions; this case is used to validate model performance, with tests performed for t→0, t→∞, and times in between these extremes. The second case models the lifetime as a Weibull distribution with exponential failure repair and preventive maintenance times; results validate model performance in this case as well. Exact infinite series for the partial and ordinary Laplace transforms of the Weibull distribution and survivor function are presented. Research results show that the age replacement period that is optimal over an infinite time horizon does not maximize average availability for all finite values of component economic life. This result is critical in life-cycle maintenance planning.
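For the exponential validation case, the limiting (infinite-horizon) availability has a simple renewal-reward form, sketched below with invented rates. The research itself develops the full time-dependent A(t, T) over a finite horizon, which this sketch does not reproduce.

```python
# Long-run availability under age replacement with an exponential lifetime:
# A = E[uptime per cycle] / E[cycle length], by the renewal-reward theorem.
import math

lam = 1 / 100.0          # assumed failure rate (mean life 100 time units)
T = 80.0                 # assumed age replacement period
mr, mp = 5.0, 1.0        # assumed mean failure-repair and PM service times

up = (1 - math.exp(-lam * T)) / lam           # E[min(X, T)], uptime per cycle
p_fail = 1 - math.exp(-lam * T)               # probability of failure before T
cycle = up + p_fail * mr + (1 - p_fail) * mp  # uptime plus expected service time
print("long-run availability:", round(up / cycle, 4))
```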
- Computer integrated machining parameter selection in a job shop using expert systems and algorithms. Gopalakrishnan, B. (Virginia Polytechnic Institute and State University, 1988). The research for this dissertation focuses on the selection of machining parameters for a job shop using expert systems and algorithms. The machining processes are analyzed in detail, and rule-based expert systems are developed for the analysis of process plans based on operation and work-material compatibility, the selection of machines, cutting tools, cutting fluids, and tool angles. Database design is examined for this problem. Algorithms are developed to evaluate the selection of machines and cutting tools based on cost considerations. An algorithm for optimizing cutting conditions in turning operations has been developed. Data frameworks and evaluation procedures are developed for other machining operations involving different types of machines and tools.
- Computer Network Routing with a Fuzzy Neural Network. Brande, Julia K. Jr. (Virginia Tech, 1997-11-07). The growing use of computer networks requires improvements in network technologies and management techniques so users will receive high-quality service. As more individuals transmit data through a computer network, the quality of service received by the users begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing. A more effective method for routing data through a computer network can address the new problems being encountered in today's growing networks. Effective routing algorithms use various techniques to determine the most appropriate route for transmitting data. Determining the best route through a wide area network (WAN) requires the routing algorithm to obtain information concerning all of the nodes, links, and devices present on the network. The most relevant routing information involves various measures that are often obtained in an imprecise or inaccurate manner, suggesting that fuzzy reasoning is a natural method to employ in an improved routing scheme. The neural network is deemed a suitable accompaniment because it maintains the ability to learn in dynamic situations. Once the neural network is initially designed, any alterations in the routing environment can easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is essential in today's rapidly growing and changing computer networks. These two techniques, fuzzy reasoning and neural networks, combine to provide a very effective routing algorithm for computer networks. Computer simulation is employed to show that the new fuzzy routing algorithm outperforms the Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the computer network migrates from a stable network to a more variable one. The advantages of applying this fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
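One way to picture the fuzzy half of such a scheme: grade each link's imprecise metrics with membership functions, combine them with a fuzzy AND, and feed the resulting crisp cost to an ordinary shortest-path search. The network, metrics, and membership breakpoints below are invented, and the neural (learning) component is omitted; this is a sketch of the idea, not the dissertation's algorithm.

```python
import heapq

def mu_low(x, lo, hi):
    # Degree (0..1) to which metric x counts as "low"; linear membership.
    if x <= lo: return 1.0
    if x >= hi: return 0.0
    return (hi - x) / (hi - lo)

def link_cost(delay_ms, util):
    # Fuzzy AND (min) of "low delay" and "low utilization", inverted so
    # better links get lower cost for the shortest-path search.
    goodness = min(mu_low(delay_ms, 5, 50), mu_low(util, 0.2, 0.9))
    return 1.0 - goodness + 1e-3

# Hypothetical network: node -> [(neighbor, delay_ms, utilization)]
NET = {
    'A': [('B', 10, 0.3), ('C', 25, 0.1)],
    'B': [('D', 15, 0.8)],
    'C': [('D', 12, 0.4)],
    'D': [],
}

def route(src, dst):
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:                            # Dijkstra over fuzzy-derived costs
        d, u = heapq.heappop(pq)
        if u == dst: break
        if d > dist.get(u, float('inf')): continue
        for v, delay, util in NET[u]:
            nd = d + link_cost(delay, util)
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node); node = prev[node]
    return [src] + path[::-1]

print(route('A', 'D'))   # prefers the lightly loaded path via C
```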
- Cost-based shop control using artificial neural networks. Wiegmann, Lars (Virginia Tech, 1992). The production control system of a shop consists of three stages: due-date prediction, order release, and job dispatching. The literature has dealt thoroughly with the third stage, but there is a paucity of study on either of the first two stages or on the interaction between stages. This dissertation focuses on the first stage of production control, due-date prediction, by examining methodologies for improved prediction that go beyond either practitioner or published approaches. In particular, artificial neural networks and regression nonlinear in its variables are considered. In addition, interactive effects with the third stage, shop-floor dispatching, are taken into consideration. The dissertation conducts three basic studies. The first examines neural networks and regression nonlinear in its variables as alternatives to conventional due-date prediction. The second proposes a new cost-based criterion and prediction methodology that explicitly includes costs of earliness and tardiness directly in the forecast; these costs may differ in form and/or degree from each other. The third explores the benefit of tying together the first and third stages of production control. The studies are conducted by statistically analyzing data generated from simulated shops. Results of the first study show that both neural networks and regression nonlinear in its variables are significantly preferred to approaches advanced to date in the literature and in practice. Moreover, the second study finds that the consequences of not using the cost-based criterion can be profound, particularly if a firm's cost function is asymmetric about the due date. Finally, the integrative, interactive methodology developed in the third study is significantly superior to current non-integrative and non-interactive approaches. In particular, interactive neural network prediction excels in the presence of asymmetric cost functions, whereas regression nonlinear in its variables is preferable under symmetric costs.
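The cost-based idea can be made concrete with a small sketch: when earliness and tardiness are penalized linearly but asymmetrically, the expected-cost-minimizing due date is a quantile of the predicted flowtime distribution (a newsvendor-style result). The gamma flowtimes and cost ratio below are invented, and the dissertation's actual criterion may differ in form.

```python
import numpy as np

# Simulated flowtimes standing in for a shop's predicted completion times.
flowtimes = np.random.default_rng(1).gamma(4.0, 5.0, 10_000)

# With linear earliness cost ce and tardiness cost ct, expected cost
# ce*E[(d - X)+] + ct*E[(X - d)+] is minimized at F(d) = ct / (ce + ct).
ce, ct = 1.0, 4.0                      # assume tardiness is 4x costlier
due = np.quantile(flowtimes, ct / (ce + ct))
print("cost-based due date:", round(float(due), 2))
```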
- Critical success factors of lodging yield management systems: an empirical study. Griffin, Robert K. (Virginia Tech, 1994-08-01). The primary objective of this research effort was to examine the relationships between successful lodging yield management systems and controllable independent variables in the form of critical success factors (CSFs). The identification of variables consequential to system success is considered an important step towards improving system design, implementation, and operation. Twenty-three system success constructs, 27 potential CSFs, and three confounding variables were identified through an extensive literature review, discussions with system vendors, developers, and users, and data analysis. Eleven different lodging yield management systems (LYMSs) were identified, and three of them were sampled. The dependent variables were converted into a single weighted regression factor score using a principal components model. The respondent's position, size of property, and type of property were found to be confounding variables. The dependent and independent variables were correlated to identify the CSFs. Every independent variable was identified as a CSF for at least one of the three systems, and the strength of the correlations was generally high. System, user, and task factors were found to be highly correlated with system success, while support and environmental factors were moderately to weakly correlated with system success.
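Collapsing many success constructs into one principal-components score can be sketched in a few lines of numpy. The ratings matrix below is invented, and the study's weighted regression factor score may be computed somewhat differently.

```python
import numpy as np

# Toy data: rows = respondents, columns = four success constructs.
X = np.array([[4, 5, 3, 4],
              [2, 3, 2, 3],
              [5, 5, 4, 5],
              [3, 2, 3, 2],
              [4, 4, 5, 4]], dtype=float)

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each construct
cov = np.cov(Z, rowvar=False)
vals, vecs = np.linalg.eigh(cov)           # eigh returns ascending eigenvalues
w = vecs[:, -1]                            # loadings of the first component
score = Z @ w                              # one composite score per respondent
print(np.round(score, 3))                  # note: the sign of w is arbitrary
```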
- A Decision Support System for the Electrical Power Districting Problem. Bergey, Paul K. (Virginia Tech, 2000-04-21). Due to a variety of political, economic, and technological factors, many national electricity industries around the globe are transforming from non-competitive monopolies with centralized systems to decentralized operations with competitive business units. This process, commonly referred to as deregulation (or liberalization), is driven by the belief that a monopolistic industry fails to achieve economic efficiency for consumers over the long run. Deregulation has occurred in a number of industries such as aviation, natural gas, transportation, and telecommunications. The most recent movement involving the deregulation of the electricity marketplace is expected to yield consumer benefit as well. To facilitate deregulation of the electricity marketplace, competitive business units must be established to manage various functions and services independently. In addition, these business units must be given physical property rights for certain parts of the transmission and distribution network in order to provide reliable service and make effective business decisions. However, partitioning a physical power grid into economically viable districts involves many considerations. We refer to this complex problem as the electrical power districting problem. This research is intended to identify the necessary and fundamental characteristics to appropriately model and solve an electrical power districting problem. Specifically, the objectives of this research are five-fold. First, to identify the issues relevant to electrical power districting problems. Second, to investigate the similarities and differences of electrical power districting problems with other districting problems published in the research literature. Third, to develop and recommend an appropriate solution methodology for electrical power districting problems. Fourth, to demonstrate the effectiveness of the proposed solution method for a specific case of electric power districting in the Republic of Ghana, with data provided by the World Bank. Finally, to develop a decision support system for the decision makers at the World Bank for solving Ghana's electrical power districting problem.
- Decision support systems design: a nursing scheduling application. Ceccucci, Wendy A. (Virginia Tech, 1994-01-28). The systems development life cycle (SDLC) has been the traditional method of decision support systems design. However, in the last decade several methodologies have been introduced to address the limitations arising in the use of the traditional method. These approaches include Courban's iterative design, Keen's adaptive design, prototyping, and a number of mixed methodologies incorporating prototyping into the SDLC. Each of the previously established design methodologies has a number of differing characteristics that make it more suitable for certain environments. However, in some environments the current methodologies present certain limitations or unnecessary expenditures. These limitations suggest the need for an alternative methodology. This dissertation develops a new methodology, priority design, to meet this need. To determine which methodology would be most effective in a given situation, an analysis of the operating environment must be performed, addressing such issues as project complexity, project uncertainty, and limited user involvement. This dissertation develops a set of guidelines to assist in this analysis; for clarity, the guidelines are applied to three well-documented case studies. As an application of the priority design methodology, a decision support system for nurse scheduling is developed. The development of a useful DSS for nurse scheduling requires that projected staff requirements and issues of both coverage and differential assignment of personnel be addressed.
- Determinants of Successful Acquisition Management: A Process Perspective in the Lodging Industry. Kim, Kyung-Hwan (Virginia Tech, 1998-08-20). The objective of this study was to uncover the critical success factors that have significant value-added impacts on corporate acquisitions in the lodging industry. Specifically, this study attempted to systematically discover evidence about the determinants of a successful pre-acquisition management process and the determinants of successful post-acquisition integration, as well as to identify appropriate evaluation criteria for determining the post-acquisition performance of an acquisition deal. In addition, this study sought to identify the important acquisition objectives of hotel acquirers. This study employed an integrated and holistic viewpoint that considers the most critical corporate acquisition issues simultaneously and in a multi-dimensional framework. As the research methodology, a Delphi technique, a non-face-to-face communication method, was employed and proved effective throughout the study. The key question guiding this research is: what are the critical factors in the overall acquisition process that contribute to successful acquisitions? The findings of this study indicate that the most important acquisition objective for acquirers in the lodging industry is to accelerate the growth of their firms. Further, the most important critical success factor for hotel acquirers before the deal is completed is identification of the trend of the target firm's cash flow from operations, and reliable and valid information about the target is the most significant dimension in the pre-acquisition management phase. The study results suggest that the most significant key success factor in the post-acquisition integration stage for the lodging industry is to plan and establish a post-acquisition strategy as early as possible, even before the deal is done, while the development of an effective post-acquisition transition strategy immediately after the deal is closed is the most crucial dimension in the post-acquisition integration phase. One of the most significant findings of this study was that hotel executives gave relatively higher importance to pre-acquisition management strategy than to the post-acquisition integration process. In terms of post-acquisition performance evaluation criteria, measures from a value-based management (VBM) approach received the highest rank in evaluating the economic gains of corporate acquisitions in the lodging industry. The study results can help improve hospitality academics' and practitioners' understanding of the important M&A phenomena leading to significant changes in the industry's competitive landscape.
- An evaluation process for material handling systems within FMS. Riel, Philippe F. (Virginia Polytechnic Institute and State University, 1989). The problem of evaluating new manufacturing technologies, in particular flexible manufacturing systems (FMS), is a complex one, as its interdisciplinary nature involves multiple variables. These variables are qualitative as well as quantitative, strategic as well as technological, intangible as well as tangible. This dissertation deals with the overall evaluation process, in particular the evaluation of material handling systems within FMS. Automated guided vehicle systems (AGVS) are studied from a technical viewpoint, as they relate to strategic and economic considerations. Two main evaluation frameworks are developed. One integrates multiattribute decision models, namely the analytic hierarchy process (AHP) and the displaced ideal model (DIM); the other integrates analytical techniques with simulation modeling. As a by-product, flexibility indices are also developed for AGVS and linked to the fundamental aspects of the evaluation of new technologies. This research also shows how analytical techniques can be combined with simulation modeling to form a more extensive evaluation process that includes opportunity costs as well as the usual tangible costs. Finally, a technical analysis of FMS/AGVS is performed on some typical cell configurations using the flexibility indices developed in this research.
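The AHP step of such a framework can be sketched briefly: priority weights are the principal eigenvector of a pairwise-comparison matrix, with a consistency index as a sanity check. The criteria and judgment values below are invented for illustration.

```python
import numpy as np

# Hypothetical pairwise comparisons over three criteria (flexibility, cost,
# throughput) on Saaty's 1-9 scale; A[i, j] = importance of i relative to j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                 # principal eigenvalue
w = np.abs(vecs[:, k].real)
w /= w.sum()                             # normalized priority weights
ci = (vals.real[k] - 3) / (3 - 1)        # consistency index (near 0 is good)
print("weights:", np.round(w, 3), " CI:", round(ci, 4))
```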
- Expert systems for financial analysis of university auxiliary enterprises. McCart, Christina D. (Virginia Tech, 1991). An essential task of university administration is to monitor the financial position of its auxiliary enterprises. This is an ill-defined and complex task which often requires more administrative time and information than is available. To perform this task adequately, a large amount of expertise is required to: (1) determine what constitutes reasonable performance, (2) define unacceptable levels of performance, and (3) suggest courses of action which will alleviate an unacceptable situation. Thorough analysis requires a substantial amount of an expert's time. The purpose of this research is to explore the opportunities for enhancing the financial analysis of auxiliary enterprises through the use of expert systems. The research includes: (1) a comprehensive review of analytical techniques that can be used in financial position analysis, (2) a determination of the applicability of such techniques to auxiliary enterprises, and (3) an assessment of their amenability to expert system development. As part of this research, an expert system prototype was developed which addresses several of the above issues for one auxiliary enterprise at Virginia Polytechnic Institute and State University. It integrates the knowledge of an expert with both accounting data from the VPI & SU accounting system and other types of data from the auxiliary enterprise operation. The system provides a comprehensive, systematic analysis of the financial position of the Tailor Shop at VPI & SU, in much less time than would be required by an expert. The research concludes that building such a system is possible and that it can provide significant benefits to a user. However, financial position analysis requires a substantial amount of data and numerical calculation, both of which demand large amounts of computer memory and computation. Therefore, designing an expert system to perform this task efficiently requires a package or language that uses computer memory and CPU efficiently.
- An exploration of the robustness of traditional regression analysis versus analysis using backpropagation networks. Markham, Ina Samanta (Virginia Tech, 1992). Research linking neural networks and statistics has been at two ends of a spectrum: either highly theoretical or application specific. This research attempts to bridge the gap by exploring the robustness of regression analysis and backpropagation networks in conducting data analysis. Robustness is viewed as the degree to which a technique is insensitive to abnormalities in data sets, such as violations of assumptions. The central focus of regression analysis is the establishment of an equation that describes the relationship between the variables in a data set. This relationship is used primarily for the prediction of one variable based on the known values of the other variables. Certain assumptions have to be made regarding the data in order to obtain a tractable solution, and the failure of one or more of these assumptions results in poor prediction. The assumptions underlying linear regression that are used to characterize data sets in this research concern: (a) sample size and error variance; (b) outliers, skewness, and kurtosis; (c) multicollinearity; and (d) nonlinearity and underspecification. Using this characterization, the robustness of each technique is studied under what is, in effect, the relaxation of assumptions one at a time. The comparison between regression and backpropagation is made using the root mean square difference between the predicted output from each technique and the actual output.
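The comparison metric is easy to illustrate: fit two regression models to data whose true relationship is nonlinear and compare root-mean-square errors. The data are invented and the backpropagation side is omitted; this sketch shows only the underspecification effect and the RMS yardstick.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 200)
y = 1 + 2 * x + 0.8 * x**2 + rng.normal(0, 1, 200)   # true model is nonlinear

def fit_rmse(X, y):
    # Ordinary least squares, then root-mean-square error of the fit.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sqrt(np.mean((X @ beta - y) ** 2))

X_lin = np.column_stack([np.ones_like(x), x])          # underspecified model
X_nl  = np.column_stack([np.ones_like(x), x, x**2])    # nonlinear in variables
print("linear RMSE:              ", round(fit_rmse(X_lin, y), 3))
print("nonlinear-in-variables RMSE:", round(fit_rmse(X_nl, y), 3))
```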
- An Exploratory Study of the Strategic Value of Information Technology: A Theoretical Application of the Co-Alignment Model. Jung, Hyung-il (Virginia Tech, 2004-09-22). Despite the impact of Information Technology (IT) in today's service economy, its nature and role remain elusive or ambiguous, to say the least. This ambiguity has made it difficult to measure the value of IT. To clarify the ambiguity, this study, focusing on the strategic dimension of IT application in the web of organizational activities, proposes a conceptual model that relates IT application to Knowledge Management and then to Strategy. Incorporating the Co-alignment model as a theoretical binding agent, the role of IT is defined as a facilitator of organizational knowledge management, which is regarded as the core of strategic management. The proposed conceptual model is further developed into a structural model for empirical testing. The goodness of fit of the model is assessed through Structural Equation Modeling (SEM) along with first-order and second-order confirmatory factor analyses (CFA), using the survey responses of unit managers of multi-unit restaurant companies in the U.S. and Korea. Since the mail survey was conducted in two different nations, relevant multicultural issues are also addressed to justify the use of combined samples for the study. The results of the statistical analyses indicate that IT application can be incorporated successfully into the domain of strategic management of restaurant companies as the facilitator of Knowledge Management activities. The hypothesized links between IT application and financial performance remained unresolved due to invalid data. However, this study contributes to identifying the dynamics of IT application in the process of strategic management incorporating the principle of the Co-alignment model.
- A fuzzy set paradigm for conceptual system design evaluation. Verma, Dinesh (Virginia Tech, 1994). A structured and disciplined system engineering process is essential for the efficient and effective development of products and systems which are both responsive to customer needs and globally competitive. Rigor and discipline during the later life-cycle phases of design and development (preliminary and detailed) cannot compensate for an ill-conceived system concept and for premature commitments made during the conceptual design phase. This significance notwithstanding, the nascent stage of system design has been largely ignored by the research and development community. This research is unique. It focuses on conceptual system design and formalizes analysis and evaluation activities during this important life-cycle phase. The primary goal of developing a conceptual design analysis and evaluation methodology has been achieved, including complete integration with the system engineering process. Rather than being a constraint, this integration led to a better definition of conceptual design activity and the coordinated progression of synthesis, analysis, and evaluation. Concepts from fuzzy set theory and the calculus of fuzzy arithmetic were adapted to address and manipulate imprecision and subjectivity. A number of design decision aids were developed to reduce the gap between commitment and project specific knowledge, to facilitate design convergence, and to help realize a preferred system design concept.
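A small sketch of the kind of fuzzy arithmetic involved: imprecise conceptual-design ratings encoded as triangular fuzzy numbers, weighted and summed, then defuzzified by centroid. The scores and weights are invented illustrations, not the methodology's actual decision aids.

```python
# A triangular fuzzy number is a (lo, mode, hi) triple. Addition and
# multiplication by a positive crisp weight act componentwise.
def tfn_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def tfn_scale(a, k):
    return tuple(k * x for x in a)

def defuzzify(a):
    return sum(a) / 3.0          # centroid of a triangular number

# Two design criteria scored fuzzily on a 1-9 scale, with crisp weights.
score = tfn_add(tfn_scale((3, 5, 7), 0.6), tfn_scale((5, 7, 9), 0.4))
print("fuzzy score:", score, " crisp value:", round(defuzzify(score), 2))
```

The fuzzy result preserves the spread of the underlying subjective judgments until the final comparison, which is the point of deferring premature commitment at the concept stage.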
- Industry Based Fundamental Analysis: Using Neural Networks and a Dual-Layered Genetic Algorithm Approach. Stivason, Charles T. (Virginia Tech, 1998-11-16). This research tests the ability of artificial learning methodologies to map market returns better than logistic regression. The learning methodologies used are neural networks and dual-layered genetic algorithms. These methodologies are used to develop a trading strategy to generate excess returns, and the excess returns are compared to test the trading strategy's effectiveness. Market-adjusted and size-adjusted excess returns are calculated. Using a trading-strategy-based approach, the logistic regression models generated greater returns than the neural network and dual-layered genetic algorithm models. It appears that the noise in the financial markets prevents the artificial learning methodologies from properly mapping the market returns. The results confirm prior findings that fundamental analysis can be used to generate excess returns.
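Market-adjusted excess return, one of the two benchmarks mentioned, reduces to a simple buy-and-hold comparison; the monthly returns below are invented.

```python
# Buy-and-hold abnormal return: compound the strategy's returns and the
# market benchmark's returns, then take the difference.
strategy = [0.02, -0.01, 0.04, 0.03]   # hypothetical monthly returns
market   = [0.01,  0.00, 0.02, 0.01]

def cumulate(rets):
    total = 1.0
    for r in rets:
        total *= 1 + r
    return total - 1

excess = cumulate(strategy) - cumulate(market)
print("market-adjusted excess return:", round(excess, 4))
```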
- A knowledge-based simulation optimization system with machine learning. Crouch, Ingrid W. M. (Virginia Tech, 1992-05-06). A knowledge-based system is formulated to guide the search strategy selection process in simulation optimization. This system includes a framework for machine learning which enhances the knowledge base and thereby improves the ability of the system to guide optimizations. Response surfaces (i.e., the response of a simulation model to all possible input combinations) are first classified based on estimates of various surface characteristics. Then heuristics are applied to choose the most appropriate search strategy. As the search is carried out and more information about the surface becomes available, the knowledge-based system reclassifies the response surface and, if appropriate, selects a different search strategy. Periodically the system’s Learner is invoked to upgrade the knowledge base. Specifically, judgments are made to improve the heuristic knowledge (rules) in the knowledge base (i.e., rules are added, modified, or combined). The Learner makes these judgments using information from two sources. The first source is past experience: all the information generated during previous simulation optimizations. The second source is results of experiments that the Learner performs to test hypotheses regarding rules in the knowledge base. The great benefits of simulation optimization (coupled with the high cost) have highlighted the need for efficient algorithms to guide the selection of search strategies. Earlier work in simulation optimization has led to the development of different search strategies for finding optimal-response-producing input levels. These strategies include response surface methodology, simulated annealing, random search, genetic algorithms, and single-factor search. Depending on the characteristics of the response surface (e.g., presence or absence of local optima, number of inputs, variance), some strategies can be more efficient and effective than others at finding an optimal solution. If the response surface were perfectly characterized, the most appropriate search strategy could, ideally, be immediately selected. However, characterization of the surface itself requires simulation runs. The knowledge-based system formulated here provides an effective approach to guiding search strategy selection in simulation optimization.
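The rule-based selection step might look like the following toy sketch, where estimated surface characteristics map to a strategy and the choice is revisited as estimates improve. The thresholds and rules are invented, not those in the system's knowledge base.

```python
# Minimal heuristic strategy selection: surface characteristics in,
# search strategy out. Re-invoked whenever the surface is reclassified.
def choose_strategy(surface):
    if surface["n_inputs"] > 10:
        return "genetic algorithm"
    if surface["local_optima"] and surface["variance"] == "high":
        return "simulated annealing"
    if not surface["local_optima"]:
        return "response surface methodology"
    return "random search"

# Initial estimates from a handful of exploratory simulation runs.
estimate = {"n_inputs": 4, "local_optima": True, "variance": "high"}
print(choose_strategy(estimate))

# Later, with more runs, the surface is reclassified and the rule fires anew.
estimate["local_optima"] = False
print(choose_strategy(estimate))
```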
- Material Cutting Plan Generation Using Multi-Expert and Evolutionary Approaches. Hung, Chang-Yu (Virginia Tech, 2000-07-05). Firms specializing in the construction of large commercial buildings and factories must often design and build steel structural components as a part of each project. Such firms must purchase large steel plates, cut them into pieces, and then weld the pieces into H-beams and other construction components. The details of the order and the production operation are specified in the "cutting plan." This dissertation focuses on solving this cutting plan generation problem with the goal of minimizing cost. Two solution approaches are proposed: a multi-expert system and an evolutionary algorithm. The expert system extends the field by relying on the knowledge of multiple experts; furthermore, unlike traditional rule-based expert systems, this expert system (XS) uses procedural rules to capture and represent the experts' knowledge. The second solution method, called CPGEA, is an evolutionary algorithm based on Falkenauer's grouping genetic algorithm. A series of experiments is designed and performed to investigate the efficiency and effectiveness of the proposed approaches. Two types of data are used in the experiments: historical data, real data provided by a construction company for which manually developed and implemented solutions are available, and simulated data generated to test the solution methods more fully. Experiments are performed to optimize CPGEA parameters as well as to compare the approaches to each other, to known solutions, and to theoretical bounds developed in this dissertation. Both approaches show excellent results in solving historical cases, with an average cost 1% above the lower bound of the optimal solution. However, as revealed by experiments with simulated problems, performance decreases in cases where the optimal solution includes multiple identical plates. The performance of the XS is affected by this problem characteristic more than that of CPGEA. While CPGEA is more robust in effectively solving a range of problems, the XS requires substantially less processing time. Both approaches can be useful in different practical situations.
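As a baseline for the one-dimensional core of the cutting problem, here is a first-fit-decreasing sketch with the standard material lower bound. Piece and plate lengths are invented, and this is neither the XS nor CPGEA from the dissertation, just the simple heuristic such methods are measured against.

```python
import math

def first_fit_decreasing(pieces, plate_len):
    # Pack required piece lengths onto stock plates: largest pieces first,
    # each placed on the first plate with enough remaining length.
    plates = []
    for p in sorted(pieces, reverse=True):
        for plate in plates:
            if sum(plate) + p <= plate_len:
                plate.append(p)
                break
        else:
            plates.append([p])
    return plates

pieces = [6.0, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0, 2.0]   # hypothetical lengths (m)
plan = first_fit_decreasing(pieces, 12.0)
print(len(plan), "plates used; material lower bound =",
      math.ceil(sum(pieces) / 12.0))
```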