Browsing by Author "Matheson, Lance A."
Now showing 1 - 20 of 24
- An Agent-Based Distributed Decision Support System Framework for Mediated Negotiation. LoPinto, Frank Anthony (Virginia Tech, 2004-04-26). Implementing an e-market for limited supply perishable asset (LiSPA) products is a problem at the intersection of online purchasing and distributed decision support systems (DistDSS). In this dissertation, we introduce and define LiSPA products, provide real-world examples, develop a framework for a distributed system to implement an e-market for LiSPA products, and provide proof-of-concept for the two major components of the framework. The DistDSS framework requires customers to instantiate agents that learn their preferences and evaluate products on their behalf. Accurately eliciting and modeling customer preferences in a quick and easy manner is a major hurdle for implementing this agent-based system; a methodology for this problem is developed using conjoint analysis and neural networks. The framework also contains a model component, addressed in this work, that mediates customer negotiation: it uses the agent-based preference models mentioned above and employs a linear programming model to maximize overall satisfaction of the total market.
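The mediator's allocation step can be illustrated in miniature. The dissertation employs a linear program to maximize total market satisfaction; as a toy stand-in, this sketch brute-forces the same objective on a tiny instance, assigning one scarce unit per customer using hypothetical agent-learned satisfaction scores (all names and numbers are invented for illustration):

```python
from itertools import permutations

# Hypothetical agent-learned satisfaction scores: rows = customers,
# columns = scarce perishable units (e.g., event tickets).
scores = [
    [7, 2, 5],
    [3, 8, 1],
    [6, 4, 9],
]

def best_assignment(scores):
    """Exhaustively assign one unit per customer, maximizing total satisfaction."""
    n = len(scores)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):
        total = sum(scores[i][perm[i]] for i in range(n))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

assignment, total = best_assignment(scores)
```

A real mediator would replace the exhaustive search with an LP or assignment solver, since enumeration is infeasible beyond a handful of customers.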
- Behavioral Logistics and Fatigue Management in Vehicle Routing and Scheduling Problems. Bowden, Zachary E. (Virginia Tech, 2016-05-03). The vehicle routing problem (VRP) is a classic optimization problem that aims to determine the optimal set of routes for a fleet of vehicles to meet the demands of a set of customers. The VRP has been studied for many decades, and as such there are many variants and extensions of the original problem. The research presented here focuses on two different types of vehicle routing and scheduling planning problems: car shipping and fatigue-aware scheduling. In addition to modeling and solving the car shipping problem, this research presents a novel way for drivers to describe their route preferences in a decision support system. This work also introduces the first fatigue-aware vehicle scheduling problem, called the Truck Driver Scheduling Problem with Fatigue Management (TDSPFM). The TDSPFM is utilized to produce schedules that keep drivers more alert than schedules that merely comply with existing commercial vehicle regulations. Finally, this work analyzes the effect of the starting alertness level on driver alertness for the remainder of the work week and examines a critical shortcoming in existing regulations.
- Building a knowledge-based simulation optimization system with discovery learning. Siochi, Fernando C. (Virginia Tech, 1995). Simulation optimization is a developing research area whereby a set of input conditions is sought that produces a desirable output (or outputs) from a simulation model. Although many approaches to simulation optimization have been developed, the research area is by no means mature. This research makes three contributions in the area of simulation optimization. The first is fundamental in that it examines simulation outputs, called "response surfaces," and notes their behavior. In particular, both point and region estimates are studied for different response surfaces, and conclusions are developed that indicate when and where simulation-optimization techniques such as Response Surface Methodology should be applied. The second contribution provides assistance in selecting a region in which to begin a simulation-optimization search. The new method is based upon best-first search, an approach from artificial intelligence. Two examples of the method are given. The final contribution expands upon the ideas of Crouch for building a "Learner" that improves simulation heuristics over time. The particular case of parameter-modification learning is developed and illustrated by example. The dissertation concludes with limitations and suggestions for future work.
- Computational Studies in Multi-Criteria Scheduling and Optimization. Martin, Megan Wydick (Virginia Tech, 2017-08-11). Multi-criteria scheduling provides the opportunity to create mathematical optimization models that are applicable to a diverse set of problem domains in the business world. This research addresses two different employee scheduling applications with multi-criteria objectives that present decision makers with trade-offs between global optimality and the level of disruption to current operating resources. Additionally, it investigates a scheduling problem from the product-testing domain and proposes a heuristic solution technique that is shown to produce very high-quality solutions in short amounts of time. Chapter 2 addresses a grant administration workload-to-staff assignment problem that occurs in the Office of Research and Sponsored Programs at land-grant universities. We identify the optimal workload assignment plan, which differs considerably from the current state and would require multiple reassignments to reach. To move toward it, we demonstrate a technique for identifying the n best reassignments from the current state, i.e., those providing the greatest progress toward the utopian solution. Solving this problem over several values of n and plotting the results allows the decision maker to visualize the reassignments and the progress achieved toward the utopian balanced-workload solution. Chapter 3 identifies a weekly schedule that seeks the most cost-effective set of coach-to-program assignments in a gymnastics facility. We identify the optimal assignment plan using an integer linear programming model. The optimal assignment plan differs greatly from the status quo; therefore, we adapt the approach of Chapter 2 and use a multiple-objective optimization technique to identify the n best staff reassignments.
Again, the decision maker can visualize the trade-off between the number of reassignments and the resulting progress toward the utopian staffing-cost solution and make an informed decision about the best number of reassignments. Chapter 4 focuses on product test scheduling in the presence of in-process and at-completion inspection constraints. Such testing arises in the manufacture of products that must perform reliably in extreme environmental conditions. Each product receives a certification at the successful completion of a predetermined series of tests. Operational efficiency is enhanced by determining the optimal order and start times of tests so as to minimize the makespan while ensuring that technicians are available when needed to complete in-process and at-completion inspections. We first formulate a mixed-integer linear programming (MILP) model to identify the optimal solution to this problem using IBM ILOG CPLEX Interactive Optimizer 12.7. We also present a genetic algorithm (GA) solution that is implemented and solved in Microsoft Excel. Computational results are presented demonstrating the relative merits of the MILP and GA solution approaches across a number of scenarios.
- Concurrent optimization in designing for logistics support. Hatch, Melanie L. (Virginia Tech, 1994-05-15). The military community has considerable experience in procuring and managing large systems. These systems are often expected to perform their intended function over a period of several years, and as a result they require an extensive support structure consisting of personnel, equipment, and spare assets. For this reason, logistics management has always been an important field within the military and is gaining recognition within private industry as well. The evolutionary process that starts with the identification of a need and continues through design, production, and retirement is known as a product's life cycle. Studies have shown that the decisions made initially, during the design of the product, determine 80% of the total system costs. Several efforts have been initiated to improve the product design process and emphasize the life-cycle approach, including Concurrent Engineering, Logistics Support Analysis (LSA), and Quality Function Deployment (QFD). These efforts necessitate an overhaul of the decision-making methods used in the product design process. Consequently, within the military community and private industry, the time-honored sequential, hierarchical approach to design decisions is being replaced with concurrent decision-making, since the sequential process of the hierarchical method can lead to suboptimal designs that significantly increase manufacturing and follow-on support costs.
- Control Strategies and Parameter Compensation for Permanent Magnet Synchronous Motor Drives. Monajemy, Ramin (Virginia Tech, 2000-06-27). Variable speed motor drives are being rapidly deployed for a vast range of applications in order to increase efficiency and to allow for a higher level of control over the system. One of the important areas within the field of variable speed motor drives is the system's operational boundary. Presently, the operational boundaries of variable speed motor drives are set based on the operational boundaries of single speed motors, i.e. by limiting current and power to rated values. This results in under-utilization of the system, and places the motor at risk of excessive power losses. The constant power loss (CPL) concept is introduced in this dissertation as the correct basis for setting and analyzing the operational boundary of variable speed motor drives. The control and dynamics of the permanent magnet synchronous motor (PMSM) drive operating with CPL are proposed and analyzed. An innovative implementation scheme of the proposed method is developed. It is shown that application of the CPL control system to existing systems results in faster dynamics and higher utilization of the system. The performance of a motor drive with different control strategies is analyzed and compared based on the CPL concept. Such knowledge allows for choosing the control strategy that optimizes a motor drive for a particular application. Derivations for maximum speed, maximum current requirements, maximum torque and other performance indices are presented based on the CPL concept. High performance drives require linearity in torque control for the full range of operating speed. An analysis of concurrent flux weakening and linear torque control for PMSM is presented, and implementation strategies are developed for this purpose. Implementation strategies that compensate for the variation of machine parameters are also introduced.
A new normalization technique is introduced that significantly simplifies the analysis and simulation of a PMSM drive's performance. The concepts presented in this dissertation can be applied to all other types of machines used in high performance applications. Experimental work in support of the key claims of this dissertation is provided.
- Cost-based shop control using artificial neural networks. Wiegmann, Lars (Virginia Tech, 1992). The production control system of a shop consists of three stages: due-date prediction, order release, and job dispatching. The literature has dealt thoroughly with the third stage, but there is a paucity of study on either of the first two stages or on interaction between the stages. This dissertation focuses on the first stage of production control, due-date prediction, by examining methodologies for improved prediction that go beyond either practitioner or published approaches. In particular, artificial neural networks and regression nonlinear in its variables are considered. In addition, interactive effects with the third stage, shop-floor dispatching, are taken into consideration. The dissertation conducts three basic studies. The first examines neural networks and regression nonlinear in its variables as alternatives to conventional due-date prediction. The second proposes a new cost-based criterion and prediction methodology that explicitly includes costs of earliness and tardiness directly in the forecast; these costs may differ in form and/or degree from each other. And third, the benefit of tying together the first and third stages of production control is explored. The studies are conducted by statistically analyzing data generated from simulated shops. Results of the first study conclude that both neural networks and regression nonlinear in its variables are preferred significantly to approaches advanced to date in the literature and in practice. Moreover, in the second study, it is found that the consequences of not using the cost-based criterion can be profound, particularly if a firm's cost function is asymmetric about the due date. Finally, it is discovered that the integrative, interactive methodology developed in the third study is significantly superior to the current non-integrative and non-interactive approaches.
In particular, interactive neural network prediction is found to excel in the presence of asymmetric cost functions, whereas regression nonlinear in its variables is preferable under symmetric costs.
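The cost-based criterion has a well-known consequence when earliness and tardiness costs are linear: the expected-cost-minimizing due date is the c_tardy / (c_early + c_tardy) quantile of the predicted flowtime distribution, so asymmetric costs shift the quoted date away from the mean. A minimal sketch of that idea, using simulated flowtimes and hypothetical costs (not the dissertation's data or models):

```python
import random

random.seed(42)
# Hypothetical predicted flowtimes for an order (e.g., from simulation replications).
flowtimes = sorted(random.gauss(100, 15) for _ in range(10_000))

def cost_based_due_date(flowtimes, c_early, c_tardy):
    """With linear earliness/tardiness costs, expected cost is minimized at the
    c_tardy / (c_early + c_tardy) quantile of the flowtime distribution."""
    q = c_tardy / (c_early + c_tardy)
    idx = min(int(q * len(flowtimes)), len(flowtimes) - 1)
    return flowtimes[idx]

symmetric = cost_based_due_date(flowtimes, c_early=1, c_tardy=1)    # ~median
tardy_averse = cost_based_due_date(flowtimes, c_early=1, c_tardy=4) # later quote
```

When tardiness is four times as costly as earliness, the quoted due date moves to the 80th percentile of the flowtime distribution rather than the median.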
- A Decision Support System for the Electrical Power Districting Problem. Bergey, Paul K. (Virginia Tech, 2000-04-21). Due to a variety of political, economic, and technological factors, many national electricity industries around the globe are transforming from non-competitive monopolies with centralized systems to decentralized operations with competitive business units. This process, commonly referred to as deregulation (or liberalization), is driven by the belief that a monopolistic industry fails to achieve economic efficiency for consumers over the long run. Deregulation has occurred in a number of industries such as aviation, natural gas, transportation, and telecommunications. The most recent movement, the deregulation of the electricity marketplace, is expected to yield consumer benefit as well. To facilitate deregulation of the electricity marketplace, competitive business units must be established to manage various functions and services independently. In addition, these business units must be given physical property rights for certain parts of the transmission and distribution network in order to provide reliable service and make effective business decisions. However, partitioning a physical power grid into economically viable districts involves many considerations. We refer to this complex problem as the electrical power districting problem. This research is intended to identify the necessary and fundamental characteristics for appropriately modeling and solving an electrical power districting problem. Specifically, the objectives of this research are five-fold. First, to identify the issues relevant to electrical power districting problems. Second, to investigate the similarities and differences of electrical power districting problems with other districting problems published in the research literature. Third, to develop and recommend an appropriate solution methodology for electrical power districting problems.
Fourth, to demonstrate the effectiveness of the proposed solution method for a specific case of electric power districting in the Republic of Ghana, with data provided by the World Bank. Finally, to develop a decision support system for the decision makers at the World Bank for solving Ghana's electrical power districting problem.
- Design and Application of Genetic Algorithms for the Multiple Traveling Salesperson Assignment Problem. Carter, Arthur E. (Virginia Tech, 2003-04-21). The multiple traveling salesmen problem (MTSP) is an extension of the traveling salesman problem (TSP) with many production and scheduling applications. The TSP has been well studied, including methods of solving the problem with genetic algorithms (GAs). The MTSP has also been studied and solved with GAs in the form of the vehicle-scheduling problem. This work presents a new modeling methodology for setting up the MTSP to be solved using a GA. The advantages of the new model are compared to existing models both mathematically and experimentally. The model is also used to formulate and solve a multi-line production problem in a spreadsheet environment, and it proves to be an effective way to model the MTSP for solution with GAs. The MTSP formulation is then used to model, and solve with a GA, the case in which one salesman makes many short tours to visit all the cities instead of one continuous trip. While this problem uses only one salesman, it can be modeled as an MTSP and has many applications for people who must visit many cities on a number of short trips. The method effectively creates a schedule while honoring all required constraints.
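A GA for the MTSP typically encodes both the visiting order and how the cities are split among salespeople. The sketch below is a simplified, mutation-only evolutionary loop over such a two-part representation, with hypothetical coordinates and two salespeople; it stands in for, and is far simpler than, the modeling methodology developed in the dissertation:

```python
import random

random.seed(1)
# Hypothetical city coordinates; city 0 is the shared depot.
cities = [(0, 0), (2, 1), (5, 2), (1, 4), (6, 5), (3, 3), (7, 1), (4, 6)]
M = 2  # number of salespeople

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_cost(chrom):
    """Chromosome: (permutation of cities 1..n-1, segment sizes per salesperson)."""
    perm, sizes = chrom
    total, start = 0.0, 0
    for size in sizes:
        route = [0] + perm[start:start + size] + [0]   # each route starts/ends at depot
        total += sum(dist(cities[route[i]], cities[route[i + 1]])
                     for i in range(len(route) - 1))
        start += size
    return total

def random_chrom():
    perm = random.sample(range(1, len(cities)), len(cities) - 1)
    cut = random.randint(1, len(perm) - 1)
    return (perm, [cut, len(perm) - cut])

def mutate(chrom):
    perm, sizes = list(chrom[0]), list(chrom[1])
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]        # swap two cities
    if random.random() < 0.3:                  # occasionally move the split point
        cut = random.randint(1, len(perm) - 1)
        sizes = [cut, len(perm) - cut]
    return (perm, sizes)

# (mu + lambda)-style loop standing in for a full GA with crossover.
pop = [random_chrom() for _ in range(30)]
for _ in range(300):
    pop += [mutate(random.choice(pop)) for _ in range(30)]
    pop = sorted(pop, key=tour_cost)[:30]

best = pop[0]
```

A production GA would add crossover operators designed for this two-part encoding; the point here is only the representation and the fitness function.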
- The Effects of Roles and Personality Characteristics on Software Development Team Effectiveness. Stevens, K. Todd Jr. (Virginia Tech, 1998-03-20). The objective of this research is to show the utility of roles and personality characteristics in the evaluation and formation of software development teams. The goals of this research include demonstrating empirically that Belbin's team roles can be used to form and evaluate software teams, providing a partial validation of the analyses by using the Belbin roles to analyze teams from the software industry, and comparing the personality data collected for this research to data from two previous studies and to the general population. In the highly competitive software industry, improving the software development process can be critical to a company's success; more specifically, improving a team's productivity can save employers significant time and money. This investigation addresses the productivity of software development teams in a series of studies. First, controlled studies empirically show that Belbin's roles can be used in team formation to improve team performance. Second, additional studies, both qualitative and quantitative, demonstrate that Belbin's roles can be used as criteria in team evaluation and formation. Finally, teams from the software development industry are evaluated, providing a partial validation of the usefulness of Belbin's roles for software teams. The cumulative effect of these studies demonstrates that Belbin's roles can be used effectively in team formation and evaluation. Specifically, Belbin's roles for leadership and innovation are shown in empirical studies to be important in the formation of software teams, and all of the Belbin roles are used in the evaluation of teams in academia as well as in industry.
The results of this investigation should be used in team formation and evaluation, in academic settings as well as in the software development industry. For team evaluation, deficiencies uncovered in the Belbin roles should be remedied and positive aspects encouraged. In team formation, teams should contain the full complement of Belbin roles, particularly the leadership and innovation roles focused on in this investigation. It is clear from this investigation that Belbin's roles can be used effectively to improve software development teams.
- Electronic Data Interchange: An Inventory Perspective of Its Economic Viability and Recommendations for Information Technology Driven Implementation. O'Malley, John Richard Jr. (Virginia Tech, 2000-01-28). Electronic commerce (EC) in its various forms is perceived by many organizations as the way that business will be conducted in the future. Much of the current wave of interest in EC is driven by new, readily available technologies like the Internet and the World Wide Web. The excitement regarding Web commerce has led many to believe that EC is relatively new. In reality, EC in the form of Electronic Data Interchange (EDI) has existed for 30 years and accounts for far more business than Web commerce. It is the preferred, and often required, way of doing business with many large organizations such as the U.S. Federal Government, Ford, General Motors, and Wal-Mart. While EDI has existed for 30 years, it has not experienced the rapid adoption that Web commerce has seen in the last few years. Currently, less than 10 percent of U.S. businesses and less than 5 percent of world businesses utilize EDI. The adoption rates for other recent information technologies, such as the World Wide Web and e-mail, have been much higher over a much smaller time frame, which raises the question of why the diffusion of EDI has occurred so slowly. According to Kalakota and Whinston (1996), the problem lies not with EDI's technology but with its benefits. This conflicts with Emmelhainz (1990) and Sokol (1995), who point out the tremendous benefits to firms that adopt EDI. This dissertation researches the reasons for the low EDI adoption rate based on financial benefits. It then develops an economic model that computes the cost savings that result when an EDI system is implemented. Sensitivity analysis is performed to understand the economic mechanisms of EDI.
Based on the model developed here, recommendations are made for changing EDI to increase its market penetration. Finally, based on the recommendations, an alternative EDI system, JEEDI, is developed. The financial effectiveness of the JEEDI system over existing EDI systems is then demonstrated using the economic model developed here.
- Ensemble Learning Techniques for Structured and Unstructured Data. King, Michael Allen (Virginia Tech, 2015-04-01). This research provides an integrated approach to applying innovative ensemble learning techniques that have the potential to increase the overall accuracy of classification models. Actual structured and unstructured data sets from industry are utilized during the research process, analysis, and subsequent model evaluations. The first research section addresses the consumer demand forecasting and daily capacity management requirements of a nationally recognized alpine ski resort in the state of Utah, in the United States of America. A basic econometric model is developed, and the effectiveness of three classic predictive models is evaluated. These predictive models were subsequently used as input for four ensemble modeling techniques, and the ensemble learning techniques are shown to be effective. The second research section discusses the opportunities and challenges faced by a leading firm providing sponsored search marketing services. The goal of sponsored search marketing campaigns is to create advertising campaigns that better attract and motivate a target market to purchase. This research develops a method for classifying profitable campaigns and maximizing overall campaign portfolio profits. Four traditional classifiers are utilized, along with four ensemble learning techniques, to build classifier models that identify profitable pay-per-click campaigns. A MetaCost ensemble configuration, having the ability to incorporate unequal classification costs, produced the highest campaign portfolio profit. The third research section addresses the management challenges of online consumer reviews encountered by service industries and shows how these textual reviews can be used for service improvements. A service improvement framework is introduced that integrates traditional text mining techniques and second-order feature derivation with ensemble learning techniques.
The concept of GLOW and SMOKE words is introduced and is shown to be an objective text analytic source of service defects or service accolades.
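The MetaCost configuration mentioned above rests on a simple decision rule: given class-probability estimates and an unequal cost matrix, predict the class with the lowest conditional expected cost. A minimal sketch with a hypothetical campaign cost matrix (the numbers are invented, not from the dissertation):

```python
# Hypothetical cost matrix: cost[i][j] = cost of predicting class j when truth is i.
# Classes: 0 = unprofitable campaign, 1 = profitable campaign.
cost = [
    [0, 5],   # funding a losing campaign wastes some ad spend
    [20, 0],  # missing a profitable campaign forgoes much more revenue
]

def min_expected_cost_class(probs, cost):
    """Core of MetaCost-style decisions: choose the class whose
    conditional expected cost, under the model's probabilities, is lowest."""
    n = len(cost)
    expected = [sum(probs[i] * cost[i][j] for i in range(n)) for j in range(n)]
    return min(range(n), key=lambda j: expected[j])

# An ensemble that says "70% unprofitable" still flags the campaign as
# profitable here, because false negatives are four times as costly.
label = min_expected_cost_class([0.7, 0.3], cost)
```

MetaCost goes further by relabeling the training data with this rule and retraining an arbitrary base classifier, but the expected-cost decision above is the heart of it.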
- Examining Electronic Markets in Which Intelligent Agents Are Used for Comparison Shopping and Dynamic Pricing. Hertweck, Bryan M. (Virginia Tech, 2005-09-08). Electronic commerce markets are becoming increasingly popular forums for commerce. As those markets mature, buyers and sellers will both vigorously seek techniques to improve their performance. The Internet lends itself to the use of agents to work on behalf of buyers and sellers. Through simulation, this research examines different implementations of buyers' agents (shopbots) and sellers' agents (pricebots) so that buyers, sellers, and agent builders can capitalize on the evolution of e-commerce technologies. Internet markets bring price visibility to a level beyond what is observed in traditional brick-and-mortar markets. Additionally, an online seller is able to update prices quickly and cheaply. Due to these facts, there are many pricing strategies that sellers can implement via pricebot to react to their environments. The best strategy for a particular seller is dependent on characteristics of its marketplace. This research shows that the extent to which buyers are using shopbots is a critical driver of the success of pricing strategies. When measuring profitability, the interaction between shopbot usage and seller strategy is very strong - what works well at low shopbot usage levels may perform poorly at high levels. If a seller is evaluating strategies based on sales volume, the choice may change. Additionally, as markets evolve and competitors change strategies, the choice of most effective counterstrategies may evolve as well. Sellers need to clearly define their goals and thoroughly understand their marketplace before choosing a pricing strategy. Just as sellers have choices to make in implementing pricebots, buyers have decisions to make with shopbots. In addition to the factors described above, the types of shopbots in use can actually affect the relative performance of pricing strategies.
This research also shows that varying shopbot implementations (specifically involving the use of a price memory component) can affect the prices that buyers ultimately pay - an especially important consideration for high-volume buyers. Modern technology permits software agents to employ artificial intelligence. This work demonstrates the potential of neural networks as a tool for pricebots. As discussed above, a seller's best strategy option can change as the behavior of the competition changes. Simulation can be used to evaluate a multitude of scenarios and determine what strategies work best under what conditions. This research shows that a neural network can be effectively implemented to classify the behavior of competitors and point to the best counterstrategy.
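The interaction between shopbot usage and pricing strategy can be reproduced with a very small market simulation. This sketch (two hypothetical sellers with fixed prices and unit demand, invented for illustration and far simpler than the dissertation's simulations) shows the undercutting seller capturing a larger share of profit as more buyers shop via bot:

```python
import random

random.seed(7)

def simulate(shopbot_share, rounds=20_000):
    """Two hypothetical sellers: seller 0 posts just below seller 1's price.
    A fraction of buyers use a shopbot (buy the cheapest); the rest pick a
    seller at random. Returns seller 0's share of total profit (unit cost = 1)."""
    profits = [0.0, 0.0]
    prices = [1.9, 2.0]  # undercutter vs. rival
    for _ in range(rounds):
        if random.random() < shopbot_share:
            choice = prices.index(min(prices))   # shopbot buys the cheapest
        else:
            choice = random.randrange(2)         # browsing buyer picks at random
        profits[choice] += prices[choice] - 1.0
    return profits[0] / sum(profits)

low = simulate(0.1)   # few shopbot users: undercutting barely pays
high = simulate(0.9)  # many shopbot users: undercutting dominates
```

Richer versions of this loop (adaptive pricebots, buyer price memory, more sellers) are the kind of scenario space the dissertation explores.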
- A Framework for Development in Rural Arid and Semi-Arid Environments in Africa: The Somalia Case. Mitchell, John Talmadge (Virginia Tech, 2020-05-11). This study proposes a framework and a process for promoting the creation of sustainable jobs and businesses in the rural, arid and semi-arid agricultural conflict zones of Sub-Saharan Africa, focusing on Somalia's societal stabilization and conflict mitigation. This task requires developing risk-reducing measures for infrastructure and service delivery in rural, post-conflict zones. Literature reviews identified two economic growth theories rooted in sustainability concepts for localized, pro-poor development: Ecological Economics Theory (EET) and Endogenous Growth Theory (EGT) are the philosophical bases for establishing investment priorities. Additional research regarding Somali culture, key conflict factors, and potential business opportunities provides an understanding of the salient facts of Somalia's ongoing 27 years of war and of potential culturally acceptable development pathways. Informal sources, Somali and non-Somali, were consulted to further identify and verify potential avenues for economic growth, sustainability, and educational opportunities that could allow Somalia to emerge from the strife it has endured. Visits to Somalia and Somaliland confirmed that livestock, its products, and related requirements are key components of economic growth and job creation. Technologies with the potential to improve productive capacity and disrupt existing value chains were investigated via pilot testing and case studies. Initial framework elements were evaluated for job and business creation through unstructured and semi-structured interviews and a questionnaire administered to Somali officials and to Somali and non-Somali conflict-zone development practitioners. The pilot test used a small sample size, which is a limitation of this work.
Findings from the literature review, informal discussions, and the pilot test are synthesized into the framework presented in Chapter 5. The framework proposes the development of an innovative, disruptive, and scalable business model that facilitates the simultaneous implementation of renewable energy production and targeted education for the livestock and agroforestry industries of Somalia, improving job and business opportunities. The model proposes modifying used shipping containers into modular elements that satisfy infrastructural building needs and initiate skills practice, job growth, and business growth.
- A hierarchical analysis of factors affecting the adoption and marketing of timber bridges. Smith, Robert L. (Virginia Tech, 1994). Several aspects influencing the adoption of timber bridges were investigated. Initially, perceptions of timber as a bridge material were rated by highway officials in twenty-eight states. Timber was rated lowest in overall performance by each group (State Department of Transportation engineers, private consultants, and local highway officials) throughout the United States. The highest-rated bridge material was prestressed concrete, followed by reinforced concrete, steel, and timber. The most important factors in the bridge material decision included lifespan of the material, past performance, maintenance requirements, resistance to natural deterioration, initial cost, and life-cycle cost. Timber was compared to other bridge materials on eight preselected attributes and rated lowest on low maintenance, ease of design, long life, and high strength. Highway officials in four states (Mississippi, Virginia, Washington, and Wisconsin) were personally interviewed, and the Analytic Hierarchy Process (AHP) was used to characterize their choice of a bridge material. The most important bridge criteria were similar in each state; however, their effect on the overall decision differed by state. Prestressed and reinforced concrete were the materials of choice in all states. The results of this study indicate that, based on the six criteria measured, timber will seldom be the material of choice for highway bridges. Timber bridge manufacturers were surveyed to understand current marketing and management techniques in the promotion of timber bridges. Marketing efforts were most prevalent in the Midwest. Timber bridge sales represented, on average, less than 7% of total sales for responding companies. Wood-treating and glue-laminating firms represented over 75% of the timber bridge firms.
One-half of the responding timber bridge companies felt that timber bridge sales would increase by an average of 15% over the next five years. Barriers and incentives to timber bridge adoption were also investigated. The greatest incentives include year-round construction, resistance to deicing chemicals, quick construction, and aesthetic qualities. The major barriers appear to be short lifespan, maintenance requirements, decay, perceptions of low strength, and the view that "timber doesn't perform well under high weight and traffic volumes." The realistic size of the bridge market was estimated not to exceed 600 to 700 designed bridges a year, which would require 10 to 12 million board feet of lumber.
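The AHP step used in the interviews reduces pairwise judgments between criteria to a priority vector. A minimal sketch using the row geometric mean approximation and a hypothetical comparison matrix for three bridge-material criteria (the judgments are invented, not the study's):

```python
import math

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale:
# criteria ordered as lifespan, initial cost, maintenance. A[i][j] says how
# strongly criterion i is preferred over criterion j.
A = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

def ahp_priorities(A):
    """Approximate the AHP priority vector with the row geometric mean method."""
    gmeans = [math.prod(row) ** (1 / len(row)) for row in A]
    total = sum(gmeans)
    return [g / total for g in gmeans]

w = ahp_priorities(A)  # weights sum to 1; lifespan dominates in this example
```

The exact AHP uses the principal eigenvector of A; the geometric mean is a standard, close approximation for consistent matrices.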
- Improving Post-Disaster Recovery: Decision Support for Debris Disposal Operations. Fetter, Gary (Virginia Tech, 2010-03-31). Disaster debris cleanup operations are commonly organized into two phases. During the first phase, the objective is to clear debris from evacuation and other important pathways to ensure access to the disaster-affected area. Practically, Phase 1 activities largely consist of pushing fallen trees, vehicles, and other debris blocking streets and highways to the curb. These activities begin immediately once the disaster has passed, with the goal of completion usually within 24 to 72 hours. In Phase 2 of debris removal, which is the focus of this study, completion can take months or years. Activities in this phase include organizing and managing curbside debris collection, reduction, recycling, and disposal operations (FEMA 2007). This dissertation research investigates methods for improving post-disaster debris cleanup operations, one of the most important and costly aspects of the least researched area of disaster operations management (Altay and Green 2006). The first objective is to identify the unique nature of the disaster debris cleanup problem and the important decisions faced by disaster debris coordinators. The second goal is to present three research projects that develop methods for assisting disaster management coordinators with debris cleanup operations. In the first project, which is the topic of Chapter 3, a facility location model is developed for addressing the problem of opening temporary disposal and storage reduction facilities, which are needed to ensure efficient and effective cleanup operations. In the second project, which is the topic of Chapter 4, a multiple objective mixed-integer linear programming model is developed to address the problem of assigning debris cleanup resources across the disaster-affected area at the onset of debris cleanup operations.
The third project and the focus of Chapter 5 addresses the problem of equitably controlling ongoing cleanup operations in real-time. A self-balancing CUSUM statistical process control chart is developed to assist disaster management coordinators with equitably allocating cleanup resources as information becomes available in real-time. All of the models in this dissertation are evaluated using data from debris cleanup operations in Chesapeake, Virginia, completed after Hurricane Isabel in 2003.
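The self-balancing CUSUM chart is the dissertation's own contribution; as a hedged illustration of the underlying mechanism only, a standard two-sided tabular CUSUM signals when a monitored quantity (say, debris volume cleared per crew per day) drifts from its target. The function name and parameters below are illustrative assumptions, not the author's model.

```python
# Minimal two-sided tabular CUSUM sketch (textbook form); the dissertation's
# "self-balancing" variant extends this idea for allocating cleanup resources.

def cusum(samples, target, k=0.5, h=5.0):
    """Return indices where the chart signals a shift away from `target`.

    k: allowance (slack) per observation, h: decision interval; both are
    in standard-deviation units if the samples are standardized.
    """
    c_plus = c_minus = 0.0
    signals = []
    for i, x in enumerate(samples):
        # Accumulate deviations beyond the allowance, floored at zero.
        c_plus = max(0.0, c_plus + (x - target) - k)
        c_minus = max(0.0, c_minus + (target - x) - k)
        if c_plus > h or c_minus > h:
            signals.append(i)
            c_plus = c_minus = 0.0  # reset after each signal
    return signals
```

With in-control data the statistics hover near zero; a sustained shift accumulates quickly and triggers a signal, which is what lets a coordinator rebalance resources as data arrive in real time.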
- An integrated approach to software process assessmentHenry, Joel (Virginia Tech, 1993-02-05)This dissertation describes a methodology for assessing the software process (both development and maintenance) used by an organization. The assessment methodology integrates the principles of Total Quality Management with the work of the Software Engineering Institute. The integrated assessment methodology results in a well-understood, well-documented, quantitatively evaluated software process. The methodology comprises four steps: investigation, modeling, data collection, and analysis of both process content and process output. The integrated assessment methodology was implemented at a large commercial software organization over a two-year period. Implementation results are presented and significant conclusions are discussed. Four areas for further research are also presented.
- A Methodology for Characterization and Performance Analysis of Connection-Based Network Access TechnologiesNovak, David C. (Virginia Tech, 2001-04-18)Network administration has become more difficult as the number of Internet users has grown and customer usage patterns have changed over time. Rapidly increasing subscriber bases, data-intensive applications (such as streaming audio and video), heavy Web browsing, and large file downloads require significant resources and may tax existing network bandwidth. Reliability and quality of service are becoming serious issues for service providers across the country. Given the dynamic nature of the information technology (IT) sector, it is difficult to predict future network usage patterns, what types of applications may become available, or how those applications may be used over time. This research presents a methodology to facilitate capacity planning and to improve the evaluation of network performance for connection-based networks, using the Virginia Tech modem pool as a test bed. The central research question is whether innovative business strategies can be employed, in lieu of or in addition to traditional management practices such as adding capacity, to improve the performance of a dialup network. Examples of such strategies, or business rules, include limiting the duration of an online session or limiting the number of times a given customer can dial into the pool in a specified time period. A complete characterization of network traffic is conducted based on service-time and interarrival-time variables. A longitudinal analysis examines how traffic patterns have changed over time. Finally, a simulation model examines how imposing different business rules during peak periods of operation can reduce the blocking probability and improve the overall level of service. The potential contribution of this research appears significant given the lack of existing literature.
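The dissertation evaluates the modem pool with a simulation model; as a hedged analytic companion (not the author's model), the classical Erlang-B loss formula gives the blocking probability of a pool of c lines offered Poisson traffic of a erlangs, and shows why business rules that cap session duration, by reducing the offered load a, can lower blocking without adding capacity. Names below are illustrative.

```python
def erlang_b(servers, traffic_erlangs):
    """Blocking probability for an M/M/c/c loss system (Erlang B).

    Uses the numerically stable recurrence
        B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1)),
    where a = offered traffic in erlangs
            = arrival rate * mean session duration.
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b
```

For example, shortening the mean session length shrinks the offered load proportionally, so the same modem pool yields a lower blocking probability during peak periods.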
- Multidimensional Visualization of Process Monitoring and Quality Assurance Data in High-Volume Discrete ManufacturingTeets, Jay Marshall (Virginia Tech, 2007-01-19)Advances in microcomputing hardware and software over the last several years have produced personal computers with exceptional computational power and speed. As the costs associated with microcomputer hardware and software continue to decline, manufacturers have begun to implement numerous information technology components on the shop floor. Components such as microcomputer file servers and client workstations are replacing traditional (manual) methods of data collection and analysis, since they can serve as tools for real-time decision-making. Server-based and web-based shop floor data collection and monitoring applications can gather vast amounts of data in a relatively short period of time, and advances in telecommunications and computer interconnectivity allow this data to be accessed remotely and shared for additional analysis. Rarely, however, does the method by which a manager reviews production and quality data keep pace with the volume of data being collected and made available for analysis. Visualization techniques that let the decision maker view and manipulate vast amounts of data in real time may provide an alternative for operations managers and decision-makers. Such techniques can improve communication between the manager and the microcomputer through computer-generated, domain-specific visualizations. This study explores the use of visualization tools and techniques applied to manufacturing systems as an aid in managerial decision-making. Numerous visual representations that support process and quality monitoring have been developed and are presented for the evaluation of process and product quality characteristics.
These visual representations are based on quality assurance and process monitoring data from a high-volume, discrete product manufacturer with considerable investment in automated and intelligent processes and information technology components. A computer-based application was developed to display the visual representations, which were then presented to a sample group of evaluators who assessed how well the representations supported accurate and timely decisions about the processes being monitored. The study concludes with a summary of the results and directions for future research.
- A Parametric Simulation Model for Evaluating Cost Effectiveness of Remote Monitoring for Risk Reduction in Rural Water Supply Systems and Application to the Tazewell County, Virginia SystemWetzel, George L. (Virginia Tech, 2003-05-23)A simulation model analyzes the cost effectiveness of remote facility monitoring for risk reduction in rural water supply systems by performing a break-even analysis that compares operating costs under manual and remote monitoring. Water system operating cost includes the value of water lost (i.e., realized risk) through operating excursions, which are inversely related to mechanical reliability. Reliability is controlled by facility monitoring, which identifies excursions and enables operators to implement mitigating measures. Cost effectiveness here refers to how the cost relationship among operating alternatives changes with operating rate, a consequence of the trade-off between fixed and variable costs. Break-even analysis describes cost effectiveness by identifying the operating rate above which the more capital-intensive alternative results in lower operating cost. Evidence indicates that the increased monitoring frequency of remote monitoring can reduce water system operating cost by improving reliability, but whether remote monitoring is cost effective depends upon system-specific factors. The lack of a documented tool for evaluating this type of cost effectiveness led to the project objective: develop a model that performs break-even analysis by simulating water system operating costs as functions of system size (delivery rate). When the spreadsheet-based static deterministic parametric simulation model is run for the Tazewell County, Virginia water system using 1998 data, break-even is predicted at approximately fifty-five percent of annual capacity (116,338,000 gallons) with an operating cost of $1,043,400. The maximum annual operating-cost reduction achievable from a $317,600 investment provides payback in nine years.
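The break-even and payback figures in the abstract follow standard cost-accounting arithmetic. A minimal sketch, with illustrative function names and example numbers that are not taken from the dissertation's spreadsheet model:

```python
def break_even_rate(fixed_a, var_a, fixed_b, var_b):
    """Delivery rate q at which two alternatives cost the same:
    fixed_a + var_a*q == fixed_b + var_b*q.
    Above q, the higher-fixed-cost (more capital-intensive)
    alternative has the lower total operating cost.
    """
    return (fixed_a - fixed_b) / (var_b - var_a)

def payback_years(investment, annual_savings):
    """Simple (undiscounted) payback period in years."""
    return investment / annual_savings
```

By this logic, a $317,600 investment recovered through annual operating-cost savings of roughly one-ninth that amount (about $35,300 per year) pays back in about nine years, consistent with the abstract's figure.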