Browsing by Author "Moore, Laurence J."
- Cellular manufacturing: applicability and system design. Leu, Yow-yuh (Virginia Tech, 1991-08-07). As competition has intensified, many American manufacturers have sought alternatives to rejuvenate their production systems. Cellular manufacturing systems have received considerable interest from both academics and practitioners. This research examines three major issues in cellular manufacturing that have not been adequately addressed: applicability, structural design, and operational design. Applicability, in this study, is concerned with discerning the circumstances in which cellular manufacturing is the system of choice. The methodology employed is simulation, and two experimental studies are conducted. The objective of Experiment I, a 2 x 3 x 3 factorial design, is to investigate the role of setup time and move time on system performance and to gain insight into why and how one layout could outperform another. The results of Experiment I suggest that move time is a significant factor for job shops and that workload variation needs to be reduced if the performance of cellular manufacturing is to be improved. Experiment II evaluates the impact of setup time reduction and operational standardization on the performance of cellular manufacturing. The results of Experiment II suggest that cellular manufacturing is preferred if the following conditions exist: (1) well-balanced workload, (2) standardized products, (3) standardized operations, and (4) setup times independent of processing times.
- Computer Network Routing with a Fuzzy Neural Network. Brande, Julia K. Jr. (Virginia Tech, 1997-11-07). The growing usage of computer networks is requiring improvements in network technologies and management techniques so users will receive high quality service. As more individuals transmit data through a computer network, the quality of service received by the users begins to degrade. A major aspect of computer networks that is vital to quality of service is data routing. A more effective method for routing data through a computer network can assist with the new problems being encountered with today's growing networks. Effective routing algorithms use various techniques to determine the most appropriate route for transmitting data. Determining the best route through a wide area network (WAN) requires the routing algorithm to obtain information concerning all of the nodes, links, and devices present on the network. The most relevant routing information involves various measures that are often obtained in an imprecise or inaccurate manner, thus suggesting that fuzzy reasoning is a natural method to employ in an improved routing scheme. The neural network is deemed a suitable accompaniment because it maintains the ability to learn in dynamic situations. Once the neural network is initially designed, any alterations in the computer routing environment can easily be learned by this adaptive artificial intelligence method. The capability to learn and adapt is essential in today's rapidly growing and changing computer networks. These techniques, fuzzy reasoning and neural networks, when combined provide a very effective routing algorithm for computer networks. Computer simulation is employed to show that the new fuzzy routing algorithm outperforms the Shortest Path First (SPF) algorithm in most computer network situations. The benefits increase as the computer network migrates from a stable network to a more variable one. The advantages of applying this fuzzy routing algorithm are apparent when considering the dynamic nature of modern computer networks.
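The comparison here is between a conventional shortest-path metric and a fuzzy, learning-based one. The toy Python sketch below is not the dissertation's algorithm; it only illustrates the flavor of the contrast by running Dijkstra's algorithm over an invented four-node network twice, once with crisp delay costs (SPF-style) and once with a composite cost built from simple fuzzy-style memberships for delay and link utilization. All link data and membership shapes are assumptions made for the example.

```python
import heapq

# Toy network: links carry (delay_ms, utilization in [0, 1]); values are invented.
links = {
    ("A", "B"): (10, 0.9), ("B", "D"): (10, 0.9),
    ("A", "C"): (15, 0.2), ("C", "D"): (15, 0.3),
}

def crisp_cost(delay, util):
    return delay                      # SPF-style metric: delay only

def fuzzy_cost(delay, util, w=0.5):
    # Simple fuzzy-style memberships (assumed shapes, not from the dissertation):
    # "delay is high" and "link is congested", blended into one penalty.
    mu_delay = min(delay / 30.0, 1.0)
    mu_congested = util
    return w * mu_delay + (1 - w) * mu_congested

def shortest_path(cost_fn, source, target):
    graph = {}
    for (u, v), (delay, util) in links.items():
        graph.setdefault(u, []).append((v, cost_fn(delay, util)))
        graph.setdefault(v, []).append((u, cost_fn(delay, util)))   # undirected
    dist, frontier = {}, [(0.0, source, [source])]
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node in dist:
            continue
        dist[node] = d
        if node == target:
            return d, path
        for nxt, c in graph.get(node, []):
            if nxt not in dist:
                heapq.heappush(frontier, (d + c, nxt, path + [nxt]))
    return float("inf"), []

print("SPF route:  ", shortest_path(crisp_cost, "A", "D"))
print("Fuzzy route:", shortest_path(fuzzy_cost, "A", "D"))
```

With these invented numbers the crisp metric prefers the short but congested A-B-D path, while the fuzzy-style cost steers traffic onto the lightly loaded A-C-D path, which is the kind of behavior the abstract attributes to the fuzzy routing scheme.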
- A computer-based DSS for funds management in a large state university environment. Tyagi, Rajesh (Virginia Polytechnic Institute and State University, 1986). The comprehensive computerized decision support system developed in this research employs two techniques, computer modeling and goal programming, to assist top university financial officers in assessing the current status of funds sources and uses. The purpose of the DSS is to aid in reaching decisions concerning proposed projects and to allocate funds from sources to uses on an aggregate basis according to a rational set of prescribed procedures. The computer model provides fast and easy access to the database and permits the administrator to update the database as new information is received. Goal programming is used for modeling the allocation process since it provides a framework for the inclusion of multiple goals that may be conflicting and incommensurable. The goal programming model allocates funds from sources to uses based on a priority structure associated with the goals. The DSS, which runs interactively, performs a number of tasks that include selecting model parameters, formulating goals and the priority structure, and solving the GP model. It also provides on-line access to the database so that it may be updated as necessary. In addition, the DSS generates reports regarding funds allocation and goal achievements to allow analysis of the model results. The decision support system also provides a framework for experimentation with various goal and priority structures, thus facilitating what-if analyses. The user can also perform a sensitivity analysis by observing the effect of assigning different relative importance to a goal or set of goals.
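Goal programming underlies this DSS and several of the other systems listed on this page. As a minimal sketch of the weighted goal programming idea, assuming invented targets, weights, and a single hard budget constraint (the actual DSS models are far richer), the example below allocates a fixed pool of funds to two uses while penalizing deviations from target allocations.

```python
from scipy.optimize import linprog

# Hypothetical data: allocate a 100-unit fund pool to two uses with target
# allocations of 70 and 50 (the targets overshoot the pool, so the goals conflict).
total, targets, weights = 100.0, [70.0, 50.0], [2.0, 1.0]

# Decision vector: [x1, x2, d1_minus, d1_plus, d2_minus, d2_plus]
# Goal constraints: x_i + d_i_minus - d_i_plus = target_i
# Hard constraint:  x1 + x2 = total
c = [0, 0, weights[0], weights[0], weights[1], weights[1]]  # penalize deviations only
A_eq = [
    [1, 0, 1, -1, 0, 0],
    [0, 1, 0, 0, 1, -1],
    [1, 1, 0, 0, 0, 0],
]
b_eq = [targets[0], targets[1], total]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # all variables default to >= 0
x1, x2 = res.x[:2]
print(f"allocation: use 1 = {x1:.1f}, use 2 = {x2:.1f}, weighted deviation = {res.fun:.1f}")
```

Preemptive priority structures of the kind described in these dissertations can be approximated by making the weights differ by orders of magnitude or by solving the goals sequentially.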
- A decision support system for tuition and fee policy analysis. Greenwood, Allen G. (Virginia Polytechnic Institute and State University, 1984). Tuition and fees are a major source of income for colleges and universities and a major portion of the cost of a student's education. The university administration's task of making sound and effective tuition and fee policy decisions is becoming both more critical and more complex. This is a result of the increased reliance on student-generated tuition-and-fee income, the declining college-age student population, reductions in state and Federal funds, and escalating costs of operation. The comprehensive computerized decision support system (DSS) developed in this research enhances the administration's planning, decision-making, and policy-setting processes. It integrates data and reports with modeling and analysis in order to provide a systematic means for analyzing tuition and fee problems, at a detailed and sophisticated level, without the user having to be an expert in management science techniques or computers. The DSS, with its embedded multi-year goal programming (GP) model, allocates the university's revenue requirements to charges for individual student categories based on a set of user-defined objectives, constraints, and priorities. The system translates the mathematical programming model into a valuable decision-making aid by making it directly and readily accessible to the administration. The arduous tasks of model formulation and solution, the calculation of the model's parameter values, and the generation of a series of reports to document the results are performed by the system; the user is responsible for defining the problem framework, selecting the goals, setting the targets, establishing the priority structure, and assessing the solution. The DSS architecture is defined in terms of three highly integrated subsystems - dialog, data, and models - that provide the following functions: user/system interface, program integration, process control, data storage and handling, mathematical, statistical, and financial computations, as well as display, memory aid, and report generation. The software was developed using four programming languages/systems: EXEC 2, FORTRAN, IFPS, and LINDO. While the system was developed, tested, and implemented at Virginia Polytechnic Institute and State University, the concepts developed in this research are general enough to be applied to any public institution of higher education.
- An evaluation of scheduling policies in a dual resource constrained assembly shop. Russell, Roberta S. (Virginia Polytechnic Institute and State University, 1983). Research in job shop scheduling has concentrated on sequencing simple, single component jobs that require no coordination of multiple parts for assembly. However, since most jobs in reality involve some assembly work, scheduling multiple component jobs through an assembly shop, where both serial and parallel operations take place, represents a more realistic and practical problem. The scheduling environment for multiple component jobs in terms of routing, sequencing, and the pacing of common components may be quite complex and, as such, requires special scheduling considerations. The purpose of this research is to evaluate scheduling policies for the production of assembled products in a job shop environment, termed an "assembly shop". The specific scheduling policies examined include due-date assignment procedures, labor assignment procedures, and item sequencing rules. The sensitivity of these policies to product structure is also addressed.
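The assembly-shop experiments described above involve due-date assignment, labor assignment, and sequencing rules acting together. The single-machine sketch below, with invented job data and externally assigned due dates, only illustrates the basic trade-off between two classic sequencing rules that such studies build on: SPT tends to reduce mean flow time, while EDD tends to reduce tardiness.

```python
# Toy single-machine illustration (invented data). Due dates are given directly
# here; in the dissertation they come from assignment procedures such as a
# multiple of total work content.
proc = {"A": 6, "B": 2, "C": 9, "D": 4}          # processing times
due  = {"A": 7, "B": 16, "C": 11, "D": 9}        # assigned due dates

def evaluate(order):
    t, flows, tards = 0, [], []
    for j in order:
        t += proc[j]
        flows.append(t)                           # all jobs released at time 0
        tards.append(max(0, t - due[j]))
    return sum(flows) / len(flows), max(tards), sum(tards)

spt = sorted(proc, key=proc.get)                  # shortest processing time first
edd = sorted(due, key=due.get)                    # earliest due date first
for name, order in [("SPT", spt), ("EDD", edd)]:
    mean_flow, max_tard, tot_tard = evaluate(order)
    print(f"{name}: order={order}  mean flow={mean_flow:.2f}  "
          f"max tardiness={max_tard}  total tardiness={tot_tard}")
```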
- An examination of how buyers subjectively perceive and evaluate product bundles. Yadav, Manjit S. (Virginia Tech, 1990). This dissertation examines how buyers evaluate a bundle of items and how perceptions of savings are formed in the context of a bundle offer. Two conceptual models were developed and tested: 1) a model of the bundle's acquisition value, and 2) a model of the bundle's transaction value. Based on behavioral decision theory and recent developments in pricing research, the model of acquisition value focuses on the role of both price and non-price information. It is proposed that buyers use an anchoring and adjustment process to evaluate a bundle of items, evaluating the most important item first and then making incremental adjustments based on the evaluation of other items. The model of transaction value is based on the premise that buyers combine perceived savings on the individual items and perceived additional savings on the bundle to form their overall perception of savings in a bundle offer. Two laboratory experiments were conducted using student subjects to test the proposed hypotheses. Experiment 1 tested the anchoring and adjustment hypothesis, while Experiment 2 investigated the model of transaction value. A 3 (bundle context) x 2 (anchor context) between-subjects design was employed in the first experiment. The experimental factor "bundle context" provided an opportunity to create evaluative scenarios in which subjects evaluated either only individual items or bundles with two or three items; "anchor context" manipulated the most important item in the bundles to be either excellent or poor. A computer-assisted data collection procedure was employed to obtain unobtrusive measures of the order in which subjects examined items in a bundle. Results of the first experiment provided evidence consistent with the proposed anchoring and adjustment process: 1) subjects examined bundle items perceived as more important prior to those items that were perceived as less important, and 2) the overall evaluation of a bundle was a weighted average of the bundle items' evaluations. However, the hypothesis that the anchor item's evaluation may influence the evaluation of other bundle items was supported only for one of the four non-anchor items. The second experiment manipulated savings on items and additional savings on a bundle in a 3 x 3 between-subjects design. Subjects examined an advertisement featuring two luggage items and then responded to items in a questionnaire. The hypothesis that buyers combine perceived savings on items and perceived additional savings on the bundle to form perceptions of overall savings in a bundle offer was supported. As hypothesized, the relative influence of perceived additional savings on the bundle was greater than the influence of perceived savings on the individual items. Although no hypotheses about interaction effects were proposed, there was evidence that perceived savings on items and perceived additional savings on the bundle interact. Tests of the model using LISREL yielded further evidence supporting the proposed transaction value model.
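The two models described above lend themselves to a small numerical illustration. The sketch below uses invented ratings, weights, and savings figures; it is not the estimated model from the dissertation, only a demonstration that an anchoring-and-adjustment process over items behaves like an importance-weighted average, and that overall perceived savings can be formed as a weighted combination of item-level and bundle-level savings.

```python
# 1) Anchoring and adjustment: evaluate the most important item first, then
#    adjust the overall evaluation with the remaining items, so the result
#    behaves like a weighted average dominated by the anchor.
items = [("laptop", 9.0, 0.6), ("bag", 5.0, 0.25), ("mouse", 7.0, 0.15)]  # (name, rating, weight)
items.sort(key=lambda x: x[2], reverse=True)        # anchor = most important item
evaluation = items[0][1]                            # start from the anchor's rating
weight_so_far = items[0][2]
for name, rating, w in items[1:]:
    weight_so_far += w
    evaluation += (w / weight_so_far) * (rating - evaluation)  # incremental adjustment
print(f"bundle evaluation: {evaluation:.2f}")

# 2) Transaction value: overall perceived savings combine item-level savings and
#    the additional savings on the bundle, the latter weighted more heavily.
item_savings, bundle_savings = 0.10, 0.20           # proportions of price (invented)
w_items, w_bundle = 0.35, 0.65
perceived_savings = w_items * item_savings + w_bundle * bundle_savings
print(f"perceived overall savings: {perceived_savings:.2%}")
```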
- Expert systems for financial analysis of university auxiliary enterprises. McCart, Christina D. (Virginia Tech, 1991). An essential task of university administration is to monitor the financial position of its auxiliary enterprises. This is an ill-defined and complex task which often requires more administrative time and information than is available. In order to perform this task in an adequate manner, a large amount of expertise is required to: (1) determine what constitutes reasonable performance, (2) define unacceptable levels of performance, and (3) suggest courses of action which will alleviate an unacceptable situation. Thorough analysis requires a substantial amount of an expert's time. The purpose of this research is to explore the opportunities for the enhancement of the financial analysis of auxiliary enterprises through the use of expert systems. The research has included: (1) a comprehensive review of analytical techniques that can be used in financial position analysis, (2) a determination of the applicability of such techniques to auxiliary enterprises, and (3) an assessment of their amenability to expert system development. As a part of the above described research, an expert system prototype was developed which addresses several of the above issues for one auxiliary enterprise at Virginia Polytechnic Institute and State University. It integrates the knowledge of an expert with both accounting data from the VPI & SU accounting system and other types of data from the auxiliary enterprise operation. The system provides a comprehensive, systematic analysis of the financial position of the Tailor Shop at VPI & SU. This analysis is performed in much less time than would be required by an expert. As a result of the research conducted, it has been concluded that building such a system is possible and that it can provide significant benefits to a user. However, financial position analysis requires a substantial amount of data and numerical calculations, both of which require large amounts of computer memory and computation. Therefore, designing an expert system to efficiently perform this task requires the use of a package or a language that efficiently utilizes computer memory and CPU.
- An exploration of the robustness of traditional regression analysis versus analysis using backpropagation networks. Markham, Ina Samanta (Virginia Tech, 1992). Research linking neural networks and statistics has been at two ends of a spectrum: either highly theoretical or application specific. This research attempts to bridge the gap on the spectrum by exploring the robustness of regression analysis and backpropagation networks in conducting data analysis. Robustness is viewed as the degree to which a technique is insensitive to abnormalities in data sets, such as violations of assumptions. The central focus of regression analysis is the establishment of an equation that describes the relationship between the variables in a data set. This relationship is used primarily for the prediction of one variable based on the known values of the other variables. Certain assumptions have to be made regarding the data in order to obtain a tractable solution, and the failure of one or more of these assumptions results in poor prediction. The data sets in this research are characterized in terms of the assumptions underlying linear regression: (a) sample size and error variance, (b) outliers, skewness, and kurtosis, (c) multicollinearity, and (d) nonlinearity and underspecification. By using this characterization, the robustness of each technique is studied under what is, in effect, the relaxation of assumptions one at a time. The comparison between regression and backpropagation is made using the root mean square difference between the predicted output from each technique and the actual output.
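A minimal sketch of the comparison being described, assuming invented data and an arbitrary small network architecture: ordinary least squares and a one-hidden-layer backpropagation network are fit to the same mildly nonlinear series, and the root mean square difference between predictions and actual values is reported for each, mirroring the comparison metric named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(200, 1))
y = 1.5 * x[:, 0] + 0.8 * x[:, 0] ** 2 + rng.normal(0, 0.3, size=200)  # mildly nonlinear

# Ordinary least squares, linear in x (so the quadratic term is "underspecified").
X = np.column_stack([np.ones(len(x)), x[:, 0]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rmse_ols = np.sqrt(np.mean((X @ beta - y) ** 2))

# One-hidden-layer backpropagation network trained by plain batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (1, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    pred = (h @ W2 + b2)[:, 0]
    err = pred - y                           # backward pass (gradient of 0.5 * MSE)
    gW2 = h.T @ err[:, None] / len(x)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse_net = np.sqrt(np.mean(((np.tanh(x @ W1 + b1) @ W2 + b2)[:, 0] - y) ** 2))
print(f"RMSE  OLS: {rmse_ols:.3f}   backprop net: {rmse_net:.3f}")
```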
- On developing an expert system: a knowledge base for GP formulation and analysis. Aggarwal, Ajay K. (Virginia Tech, 1991-01-10). An expert system approach to help OR-naive users formulate and solve goal programs is proposed. The approach is demonstrated for single product blending problems using VP-Expert as the developmental tool. Results of a study using undergraduate and graduate business students to test the expert system's effectiveness are provided. The expert system determines the problem type using a taxonomy based upon problem context. Each problem type possesses distinct characteristics. Characteristics of twenty-four different problem types are discussed. Formulation of constraints using problem characteristics is demonstrated. The expert system uses constraint information to assist users in goal selection. Goal structures are constructed using a pairwise comparison technique. Solution values, recommendations based upon sensitivity analysis, and trade-offs between conflicting goals are provided to the user. A feedback loop permitting model changes and reiteration of the solution and recommendation steps is provided.
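The abstract mentions constructing goal structures from pairwise comparisons. One simple, hypothetical way to turn such judgments into a priority ordering (the expert system's actual elicitation procedure is not reproduced here) is sketched below.

```python
# prefer[i][j] = 1 means goal i was judged more important than goal j.
# Goals, judgments, and the "count the wins" ranking rule are all invented
# for illustration.
goals = ["meet demand", "limit cost", "limit overtime"]
prefer = [
    [0, 1, 1],   # "meet demand"  beats the other two
    [0, 0, 1],   # "limit cost"   beats "limit overtime"
    [0, 0, 0],
]
wins = {g: sum(row) for g, row in zip(goals, prefer)}
priority = sorted(goals, key=wins.get, reverse=True)
for level, g in enumerate(priority, start=1):
    print(f"priority {level}: {g}")
```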
- Optimal design, procurement and support of multiple repairable equipment and logistic systems. Moore, Thomas P. (Virginia Polytechnic Institute and State University, 1986). A concept for the mathematical modeling of multiple repairable equipment and logistic systems (MREAL systems) is developed. These systems consist of multiple populations of repairable equipment, and their associated design, procurement, maintenance, and supply support. MREAL systems present management and design problems which parallel the management and design of multiple, consumable item inventory systems. However, the MREAL system is more complex since it has a repair component. The MREAL system concept is described in a classification hierarchy which attempts to categorize the components of such systems. A specific mathematical model (MREAL1) is developed for a subset of these components. Included in MREAL1 are representations of the equipment reliability and maintainability design problem, the maintenance capacity problem, the retirement age problem, and the population size problem, for each of the multiple populations. MREAL1 models the steady state stochastic behavior of the equipment repair facilities using an approximation which is based upon the finite source, multiple server queuing system. System performance measures included in MREAL1 are: the expected MREAL total system life cycle cost (including a shortage cost penalty); the steady state expected number of shortages; the probability of catastrophic failure in each equipment population; and two budget based measures of effectiveness. Two optimization methods are described for a test problem developed for MREAL1. The first method computes values of the objective function and the constraints for a specified subset of the solution space. The best feasible solution found is recorded. This method can also examine all possible solutions, or can be used in a manual search. The second optimization method performs an exhaustive enumeration of the combinatorial programming portion of MREAL1, which represents equipment design. For each enumerated design combination, an attempt is made to find the optimal solution to the remaining nonlinear discrete programming problem. A sequential unconstrained minimization technique is used which is based on an augmented Lagrangian penalty function adapted to the integer nature of MREAL1. The unconstrained minimization is performed by a combination of Rosenbrock's search technique, the steepest descent method, and Fibonacci line searches, adapted to the integer nature of the search. Since the model contains many discrete local minima, the sequential unconstrained minimization is repeated from different starting solutions, based upon a heuristic selection procedure. A gradient projection method provides the termination criteria for each unconstrained minimization.
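MREAL1's repair-facility approximation is based upon the finite source, multiple server queuing system (the classic machine-repairman model). The sketch below computes the steady-state distribution of that queue from its birth-death recursion for invented parameter values, along with a few of the performance measures such a model can feed into a life cycle cost calculation.

```python
# Finite-source, multiple-server queue (machine repairman model); parameters invented.
N   = 10     # population of repairable units
c   = 2      # parallel repair channels
lam = 0.1    # failure rate per operating unit
mu  = 0.5    # repair rate per busy channel

# Birth-death recursion: failures occur at rate (N - n) * lam and repairs complete
# at rate min(n, c) * mu when n units are in the repair system.
p = [1.0]
for n in range(N):
    p.append(p[-1] * (N - n) * lam / (min(n + 1, c) * mu))
total = sum(p)
p = [x / total for x in p]                       # normalize to probabilities

expected_in_repair = sum(n * pn for n, pn in enumerate(p))
expected_waiting   = sum(max(n - c, 0) * pn for n, pn in enumerate(p))
print(f"P(all repair channels busy) = {sum(p[c:]):.3f}")
print(f"E[units in repair system]   = {expected_in_repair:.2f}")
print(f"E[units waiting for repair] = {expected_waiting:.2f}")
```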
- Overlaying the just-in-time with Kanban system on an American production environment. Philipoom, Patrick Robert (Virginia Polytechnic Institute and State University, 1986). During the past several years, the publicized successes of Japanese production management techniques have created an interest in the potential of these techniques for application in an American manufacturing environment. One such Japanese technique that has been the focus of much attention from American manufacturers and production managers is the "just-in-time (JIT)" technique implemented with "Kanbans."¹ However, the applications of the JIT technique in Japan that have been reported have been for large scale assembly line operations that, in general, encompass the unique physical and philosophical characteristics typical of Japanese production systems. The factors that contribute to the success of the JIT system in Japan are frequently not exhibited in manufacturing systems in the United States, especially in American systems that combine assembly and shop-type operations and encompass a high degree of system variability. As such, it is questionable whether the JIT technique can be successfully adapted to American manufacturing systems that do not display the characteristics of Japanese production operations. Nevertheless, a number of American manufacturing companies, in hope of achieving at least some of the Japanese success in inventory control, quality control and production scheduling, have begun implementing the JIT technique in their own unique production environment. The purpose of this dissertation is to investigate implementing JIT in a non-Japanese production environment and to show how JIT can be adapted so that it can have a broader range of applicability, especially under the particular set of conditions that are very likely to exist in many American production environments. ¹Toyota uses a system of cards, called Kanbans, to control inventory and schedule production in their automotive assembly plants.
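The kanban mechanic referred to in the footnote can be illustrated with a very small pull-loop simulation. The sketch below is not a model of any system studied in the dissertation; it only shows how a fixed card count caps work-in-process between two stages, with invented demand and production probabilities.

```python
import random

random.seed(1)
kanbans = 3            # cards circulating between stage 1 and stage 2 (invented)
buffer = kanbans       # start with a full buffer of completed stage-1 parts
served = missed = 0
for period in range(1000):
    # Stage 2 pulls one part per period when demand occurs (invented 70% chance).
    if random.random() < 0.7:
        if buffer > 0:
            buffer -= 1
            served += 1
        else:
            missed += 1          # demand arrives but no part is available
    # Stage 1 may replenish one part per period, but only if a kanban is free
    # (i.e., the buffer is below the card count) and production succeeds (80%).
    if buffer < kanbans and random.random() < 0.8:
        buffer += 1

print(f"service level: {served / (served + missed):.2%}, final buffer: {buffer}")
```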
- Potential impacts of various capital gains tax structures on forest investments. Rapera, Corazon L. (Virginia Tech, 1990). The objective of the study was to determine how various capital gains tax structures affect decisions to invest in new forest investments. These effects were measured by changes in the after-tax present values of bare land under each tax structure. The three capital gains tax structures modeled were: the current federal income tax law without basis indexing, the current federal income tax law with basis indexing, and the accrued income tax with indexing. Other things equal, the direction of the effects of the capital gains tax structures and the other factors in the model on present values of bare land was the same for White pine Christmas trees and Douglas fir timber. Highest present values occurred with basis indexing and lowest present values were with the accrued income tax structure, in all possible combinations of the above variables. Higher present values with basis indexing were due to tax savings. Tax saving from basis indexing per dollar of cost basis increases, reaches a maximum, then decreases as the payoff period lengthens, at a given inflation rate, with all other things equal. The payoff period that maximizes tax savings per dollar of cost basis decreases as real interest rates increase. When the capital gains tax rate is 34% and the inflation rate is 5%, and when real interest rates range from 3% to 9%, the payoff period with maximum tax savings ranges from 20 to 10 years. Since most forest investments have rotations longer than 20 years, this result implies that basis indexing will probably not affect decisions about new forest investments very much. It will also not affect the timing of gains realization for capital assets (not necessarily forestry assets) that have already been held for longer than 20 years. Two equity criteria were considered in the study. The first criterion requires the tax to be neutral with respect to allocation of land to different uses. The second criterion requires capital gains recipients to pay, at investment maturity and with other things equal, taxes equal to the sum of annual taxes on increases in asset value (accrued income) accumulated with interest. The study showed that, without inflation, the realized income tax (the current federal income tax) is neutral with respect to allocation of land to uses with different rotations because the tax reduces the bid prices for land uses with different rotations by equal percentages, other things equal. However, with inflation, the results suggest that basis indexing is needed in order to maintain the tax's neutrality with respect to allocation of land to uses with different rotations. Under the second criterion, a forestry example was compared with a bank account, both with equal value growth rates. It showed that taxes paid on realized capital gains at investment maturity are lower than the sum of annual taxes on accrued income accumulated with interest, given the same tax rate. Thus, the current federal income tax, which taxes capital gains upon realization, does not meet the second equity criterion. Based on this criterion, the tax favors assets that yield capital gains over assets with annual incomes. In order to meet the second equity criterion, realized capital gains should pay taxes at the ERITAX rate. The ERITAX rate, when applied to realized capital gains, gives tax revenues equal to accrued income taxes accumulated with interest to investment maturity.
However, when the annual accrued income tax rate is high, or when the rotation is long, or when the timber value growth rate is low relative to the interest rate, the ERITAX rate can exceed 100% of the capital gains, thus driving some bare land values below values in alternative uses. This result is consistent with the finding that the accrued income tax is non-neutral with respect to allocation of land to different uses and is biased against land uses with long payoff periods, given the same establishment costs. Thus, when the second equity criterion is met, the tax becomes biased against land uses with long rotations. These results indicate that none of the taxes modeled can meet the two equity criteria simultaneously. Even so, among forest investments, the current federal income tax with basis indexing is the most desirable because it is least likely to distort allocation of land to forestry.
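The basis-indexing effect described above (tax savings per dollar of cost basis that rise and then fall as the payoff period lengthens) can be reproduced with a one-rotation present-value calculation. The sketch below uses the 34% tax rate and 5% inflation rate mentioned in the abstract but otherwise invented figures; it illustrates the mechanism only and is not the dissertation's model.

```python
# One rotation, one harvest; all dollar figures and the value growth rate are invented.
cost_basis = 1000.0          # establishment cost at year 0
value_growth = 0.12          # nominal rate at which timber value grows
inflation = 0.05
real_rate = 0.04
tax_rate = 0.34
nominal_rate = (1 + real_rate) * (1 + inflation) - 1

def pv_bare_land(T, index_basis):
    revenue = cost_basis * (1 + value_growth) ** T
    basis = cost_basis * ((1 + inflation) ** T if index_basis else 1.0)
    tax = tax_rate * max(revenue - basis, 0.0)           # capital gains tax at harvest
    return (revenue - tax) / (1 + nominal_rate) ** T - cost_basis

for T in (10, 20, 30, 40):
    no_index = pv_bare_land(T, index_basis=False)
    indexed = pv_bare_land(T, index_basis=True)
    saving = indexed - no_index          # present value of the indexing tax saving
    print(f"T={T:2d}  PV w/o indexing={no_index:8.1f}  "
          f"with indexing={indexed:8.1f}  indexing saving={saving:6.1f}")
```

With these rates the indexing saving peaks between 10 and 20 years and declines for longer rotations, consistent with the payoff-period pattern reported in the abstract.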
- A Spatial Decision Support System for Planning Broadband, Fixed Wireless Telecommunication Networks. Scheibe, Kevin Paul (Virginia Tech, 2004-08-05). Over the last two decades, wireless technology has become ubiquitous in the United States and other developed countries. Consumer devices such as AM/FM radios, cordless and cellular telephones, pagers, satellite televisions, garage door openers, and television channel changers are just some of the applications of wireless technology. More recently, wireless computer networking has seen increasing employment. A few reasons for this move toward wireless networking are improved electronic transmitters and receivers, reduced costs, simplified installation, and enhanced network expandability. The objective of the study is to generate understanding of the planning inherent in a broadband, fixed wireless telecommunication network and to incorporate that knowledge into a spatial decision support system (SDSS). Intermediate steps toward this goal include solutions to both fixed wireless point-to-multipoint (PMP) and fixed wireless mesh networks, which are developed and incorporated into the SDSS. This study explores the use of the SDSS for broadband fixed wireless connectivity to solve the wireless network planning problem. The spatial component of the DSS is a Geographic Information System (GIS), which displays visibility for specific tower locations. The SDSS proposed here incorporates cost, revenue, and performance capabilities of a wireless technology applied to a given area. It encompasses cost and range capabilities of wireless equipment, the customers' propensity to pay, the market penetration of a given service offering, the topology of the area in which the wireless service is proffered, and signal obstructions due to local geography. This research is both quantitative and qualitative in nature. Quantitatively, the wireless network planning problem may be formulated as an integer programming problem (IP). The line-of-sight restriction imposed by several extant wireless technologies necessitates the incorporation of a GIS and the development of an SDSS to facilitate the symbiosis of the mathematics and geography. The qualitative aspect of this research involves the consideration of planning guidelines for the general wireless planning problem. Methodologically, this requires a synthesis of the literature and insights gathered from using the SDSS above in a what-if mode.
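The tower-selection side of such a planning problem can be caricatured as a covering problem: choose candidate tower sites so that every demand point has line of sight to at least one selected tower, at minimum cost. The brute-force sketch below uses an invented visibility table (in the SDSS this information would come from GIS viewshed analysis) and enumerates subsets; realistic instances would be solved with an integer programming formulation, as the abstract describes.

```python
from itertools import combinations

# Candidate towers: (cost, set of demand points with line of sight); all invented.
towers = {"T1": (50, {"a", "b"}),
          "T2": (40, {"b", "c"}),
          "T3": (70, {"a", "c", "d"}),
          "T4": (30, {"d"})}
demand = {"a", "b", "c", "d"}

best_cost, best_set = float("inf"), None
for r in range(1, len(towers) + 1):
    for subset in combinations(towers, r):
        covered = set().union(*(towers[t][1] for t in subset))
        cost = sum(towers[t][0] for t in subset)
        if covered >= demand and cost < best_cost:   # feasible and cheaper
            best_cost, best_set = cost, subset

print(f"cheapest covering set: {best_set}  cost: {best_cost}")
```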
- The use of correlated simulation experiments in response surface optimization. Donohue, Joan M. (Virginia Polytechnic Institute and State University, 1988). Response surface methodology (RSM) provides a useful framework for the optimization of stochastic simulation models. The sequential experimentation and model fitting procedures of RSM enable prediction of the response and location of the optimum operating conditions. In a simulation environment, the experimentation phase of RSM involves selecting the input variable levels for each simulation run and assigning pseudorandom number streams to the stochastic model components. Through an appropriate assignment of random number streams to simulation runs, correlation among the simulated responses can be induced, thereby effecting reductions in the variances of certain model coefficients. Three methods of correlation induction are considered in this research: (i) no correlation induction, achieved through the use of independent streams, (ii) positive correlation induction, achieved through the use of common streams, and (iii) a combination of positive and negative correlation induction, achieved through the use of the assignment rule blocking strategy. The performance of the correlation induction strategies is evaluated in terms of two mean squared error design criteria: MSE of response and MSE of slope. The MSE of slope criterion is useful in the early stages of RSM, when the experimental objective is location of the region containing the optimum. The MSE of response criterion is useful in the latter stages of RSM, when the experimental objective is prediction of the optimum response. The correlation induction strategies are evaluated under two experimental situations: fitting a first order model while protecting against quadratic curvature in the response surface, and fitting a second order model while protecting against cubic curvature. In the case of fitting a first order model, two-level factorial designs are used to evaluate the correlation induction strategies, and in the second order case, four design classes are considered: central composite designs, Box-Behnken designs, three-level factorial designs, and small composite designs. The findings of this research indicate that the assignment rule blocking strategy generally performs the best of the three strategies under both MSE criteria, and the performance of this strategy improves as the magnitudes of the induced correlations increase. The independent streams strategy is a poor choice when the design criterion is MSE of slope, and the common streams strategy is a poor choice when the design criterion is MSE of response. The central composite and Box-Behnken designs were found to perform the best of the four second order design classes. The three-level factorial designs performed poorly under the MSE of response criterion and the small composite designs performed poorly under the MSE of slope criterion.
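The effect of the stream-assignment choice can be seen in a very small experiment. The sketch below, with an invented response function, estimates the difference between two simulated configurations using either independent streams or common streams; the positive correlation induced by common streams sharply reduces the variance of the estimated difference, which is the basic mechanism the dissertation exploits and extends with the assignment rule blocking strategy.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(mean_service, u):
    # Invented response: average of exponential draws generated from the same
    # uniforms, so it is a monotone transform of the stream that drives it.
    return np.mean(-mean_service * np.log(u))

def estimate_difference(common_streams, reps=500, n=50):
    diffs = []
    for _ in range(reps):
        u1 = rng.random(n)
        u2 = u1 if common_streams else rng.random(n)   # common vs independent streams
        diffs.append(response(1.2, u1) - response(1.0, u2))
    return np.mean(diffs), np.var(diffs, ddof=1)

for label, common in [("independent streams", False), ("common streams", True)]:
    mean_d, var_d = estimate_difference(common)
    print(f"{label:20s} estimated difference = {mean_d:.3f}, variance = {var_d:.5f}")
```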
- The use of neural networks in the combining of time series forecasts with differential penalty costs. Kohers, Gerald (Virginia Tech, 1993-10-15). The need for accurate forecasting and its potential benefits are well established in the literature. Virtually all individuals and organizations have at one time or another made decisions based on forecasts of future events. This widespread need for accurate predictions has resulted in considerable growth in the science of forecasting. To a large degree, practitioners are heavily dependent on academicians for generating new and improved forecasting techniques. In response to an increasingly dynamic environment, diverse and complex forecasting methods have been proposed to more accurately predict future events. These methods, which focus on the different characteristics of historical data, have ranged in complexity from simplistic to very sophisticated mathematical computations requiring a high level of expertise. By combining individual techniques to form composite forecasts in order to improve on the forecasting accuracy, researchers have taken advantage of the various strengths of these techniques. A number of combining methods have proven to yield better forecasts than individual methods, with the complexity of the various combining methods ranging from a simple average to quite complex weighting schemes. The focus of this study is to examine the usefulness of neural networks in composite forecasting. Emphasis is placed on the effectiveness of two neural networks (i.e., a backpropagation neural network and a modular neural network) relative to three traditional composite models (i.e., a simple average, a constrained mathematical programming model, and an unconstrained mathematical programming model) in the presence of four penalty cost functions for forecasting errors. Specifically, the overall objective of this study is to compare the short-term predictive ability of each of the five composite forecasting techniques on various first-order autoregressive models, taking into account penalty cost functions representing four different situations. The results of this research suggest that in the vast majority of scenarios examined in this study, the neural network model clearly outperformed the other composite models.
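A minimal illustration of composite forecasting on a first-order autoregressive series: two simple individual forecasts are combined by a simple average and by least-squares weights estimated on a training window. The neural-network combiners and the penalty cost functions studied in the dissertation are not reproduced here; the data and component forecasts are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n, phi = 400, 0.7
y = np.zeros(n)
for t in range(1, n):                              # simulate an AR(1) series
    y[t] = phi * y[t - 1] + rng.normal()

naive = y[3:-1]                                    # forecast y[t] with y[t-1]
movavg = (y[1:-3] + y[2:-2] + y[3:-1]) / 3         # 3-period moving average
actual = y[4:]

split = 300                                        # estimate weights, then evaluate
F_train = np.column_stack([naive[:split], movavg[:split]])
w, *_ = np.linalg.lstsq(F_train, actual[:split], rcond=None)

def rmse(forecast):
    return np.sqrt(np.mean((forecast[split:] - actual[split:]) ** 2))

simple_avg = (naive + movavg) / 2
weighted = naive * w[0] + movavg * w[1]
print(f"simple average RMSE:    {rmse(simple_avg):.3f}")
print(f"estimated-weight RMSE:  {rmse(weighted):.3f}")
print(f"estimated weights:      {w.round(3)}")
```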