Browsing by Author "Liu, Y. A."
Now showing 1 - 20 of 25
- Accurate frequency estimation with phasor angles. Chen, Jian (Virginia Tech, 1994-04-04). A power system should always operate in a balanced and stable condition at its designed frequency. Any significant upset of this balance will produce a change in the power system frequency. It is the responsibility of the monitoring and protective devices to detect and restore the system to the equilibrium operating condition at the nominal frequency as soon as it is practical to do so. An accurate measurement of both frequency deviation and rate of change of frequency will greatly facilitate the restoration process. In this thesis, a recursive algorithm for precise measurement of frequency and rate of change of frequency is presented. The algorithm consists of three major steps. First, a rough frequency estimate for a data window is computed using a second-order least-error-squares approximation on the phasor angles of the input waveform. Then, a resampling based on the rough frequency estimate is carried out, followed by another second-order least-error-squares approximation to obtain the final results. The results of simulations using this approach are provided.
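The first pass of the estimator described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the thesis's actual algorithm: the nominal frequency, window handling, and the choice to evaluate the derivative at the window center are all assumptions.

```python
import numpy as np

def estimate_frequency(angles, t, f_nominal=60.0):
    """Rough frequency and rate-of-change-of-frequency estimate from
    phasor angles (illustrative sketch). Fits a second-order polynomial
    by least squares to the unwrapped phasor angles over a data window;
    the first derivative of the angle gives the frequency deviation and
    the second derivative gives the rate of change of frequency."""
    phi = np.unwrap(angles)                  # phasor angles, radians
    c2, c1, _ = np.polyfit(t, phi, 2)        # phi ~ c2*t^2 + c1*t + c0
    t_mid = t[len(t) // 2]                   # evaluate at window center
    dphi = 2.0 * c2 * t_mid + c1             # d(phi)/dt
    d2phi = 2.0 * c2                         # d^2(phi)/dt^2
    freq = f_nominal + dphi / (2.0 * np.pi)  # Hz
    rocof = d2phi / (2.0 * np.pi)            # Hz/s
    return freq, rocof
```

Per the abstract, the waveform would then be resampled at this rough estimate and the fit repeated to obtain the final result.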
- Application of Concurrent Development Practices to Petrochemical Equipment Design. Lomax, Franklin Delano (Virginia Tech, 2001-03-16). Principles of concurrent development are applied to the design of a small-scale device for converting natural gas or liquefied petroleum gas into hydrogen. The small hydrogen generator is intended for serial production for application in the production of industrial hydrogen, fueling stationary fuel cell power systems and refueling hydrogen-fueled fuel cell electric vehicles. The concurrent development process is contrasted with the traditional, linear development process for petrochemical systems and equipment, and the design is benchmarked against existing small hydrogen generators as well as industrial hydrogen production apparatus. A novel system and hardware design are described, and a single cycle of concurrent development is applied in the areas of catalyst development, thermodynamic optimization, and reactor modeling and design. The impact of applying concurrent development techniques is assessed through economic modeling, and directions for future development work are identified.
- Application of COSMO-SAC to Solid Solubility in Pure and Mixed Solvent Mixtures for Organic Pharmacological Compounds. Mullins, Paul Eric (Virginia Tech, 2007-01-24). In this work, we present two open-literature databases, the VT-2005 Sigma Profile Database and the VT-2006 Solute Sigma Profile Database, which together contain sigma profiles for 1,645 unique compounds. A sigma profile is a molecule-specific distribution of the surface-charge density, which enables the application of solvation-thermodynamic models to predict vapor-liquid equilibria (VLE), solid-liquid equilibria (SLE), and other properties. The VT-2005 Sigma Profile Database generally focuses on solvents and small molecules, while the VT-2006 Solute Sigma Profile Database primarily consists of larger, pharmaceutical-related solutes. We design both databases for use with the conductor-like screening model-segment activity coefficient (COSMO-SAC), a liquid-phase activity-coefficient model. The databases contain the information necessary to perform binary and multicomponent VLE and SLE predictions. We offer detailed tutorials and procedures for using our programs on our research group website (www.design.che.vt.edu), so that readers may apply them in their own research. We validate the VT-2005 Sigma Profile Database with pure-component vapor-pressure predictions, and validate the VT-2006 Solute Sigma Profile Database with solid-solubility predictions in pure solvents compared against literature data from multiple sources. Using both databases, we also explore the application of COSMO-SAC to solubility predictions in mixed solvents. This work also studies the effects of conformational isomerism on VLE and SLE property prediction. Finally, we compare COSMO-SAC solubility predictions with predictions by the non-random two-liquid, segment activity coefficient (NRTL-SAC) model. We find that UNIFAC is a more accurate method than COSMO-SAC for predicting VLE behavior in many of the systems studied, and that COSMO-SAC predicts solute mole fraction in pure solvents with an average root-mean-square error in log10(xsol) of 0.74, excluding outliers, which is greater than the RMS error of 0.43 obtained with the NRTL-SAC model.
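The error metric quoted here, a root-mean-square error in the base-10 logarithm of the predicted versus experimental solute mole fraction, can be computed as in this short sketch (the function name and interface are ours, for illustration only):

```python
import numpy as np

def rms_log_error(x_pred, x_exp):
    """RMS error in log10 of predicted vs. experimental solubility
    (solute mole fraction), the metric quoted in the abstract."""
    err = np.log10(np.asarray(x_pred)) - np.log10(np.asarray(x_exp))
    return float(np.sqrt(np.mean(err ** 2)))
```

For example, a prediction that is off by one order of magnitude contributes an error of 1.0 on this scale, so an average of 0.74 corresponds to predictions within roughly a factor of five of experiment.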
- Comparison of Multieffect Distillation and Extractive Distillation Systems for Corn-Based Ethanol Plants. Dion Ngute, Miles Ndika (Virginia Tech, 2012-01-13). Recent publications on ethanol production and purification show optimized energy and water consumptions as low as 22,000 Btu/gal ethanol and 1.54 gal water/gal ethanol, respectively, using multieffect distillation. Karuppiah et al. use column-rating and mathematical-optimization methods and shortcut design models to design, evaluate, and optimize the energy and water consumption. In this work, we compare shortcut design and rigorous simulation models for an ethanol purification distillation system, and we show that distillation systems based on shortcut design underestimate the true energy and water consumption of the distillation system. We then use ASPEN Plus to design a multieffect distillation system and an extractive distillation system using rigorous simulation, and compare the two for energy and water consumption. We show that the extractive distillation system has lower steam and cooling-water consumption, and consequently lower energy and water consumption, than multieffect distillation in corn-to-ethanol production and purification. We also show that the extractive distillation system is cheaper than the multieffect distillation system on a cost-per-gallon-of-ethanol basis. This work gives an energy consumption of 29,987 Btu/gal ethanol and a water consumption of 2.82 gal/gal ethanol for the multieffect distillation system, at a manufacturing cost of $3.03/gal ethanol. For the extractive distillation system, we calculate an energy consumption of 28,199 Btu/gal ethanol and a water consumption of 2.79 gal/gal ethanol, at a manufacturing cost of $2.88/gal ethanol.
- Computer Simulation and Optimization of the NOx Abatement System at the Radford Facility and Army Ammunition Plant. Sweeney, Andrew Jeffrey (Virginia Tech, 1999-03-21). This thesis discusses findings gained through work with the NOx abatement system at the Radford Facility and Army Ammunition Plant (RFAAP). Removal of harmful substances from flue-gas emissions has garnered increased priority in the chemical industry in recent decades, as governmental restrictions on these substances become more stringent and as national awareness concerning environmental quality and resource utilization continues to grow. These reasons make the study of NOx abatement an important and challenging endeavor. This work concerns itself specifically with the reduction of NOx in flue-gas emissions from stationary sources. First, we present an overview of current technology and approaches to controlling NOx from stationary sources. Next, we focus on one particular approach to NOx control within the context of a case study of the technology used at RFAAP. RFAAP employs a scrubber/absorber tower followed in series by a selective catalytic reduction (SCR) reaction vessel in its NOx abatement system. Our method of study is computer simulation within ASPEN Plus, a process-simulation software package for chemical plants. We develop three different models with which to characterize NOx abatement at RFAAP: a conversion model, an equilibrium model, and a kinetic model. The conversion-reaction model approximates the absorption and SCR reactions with constant percentage extent-of-reaction values. Though useful for initial investigation and mass-balance information, we find the conversion model's insensitivity to process changes unacceptable for in-depth study of NOx absorption and SCR. The equilibrium-reaction model works on the assumption that all the reactions reach chemical equilibrium. For the conditions studied here, we find the equilibrium model accurately simulates NOx absorption but fails in the case of SCR. Therefore, we introduce a kinetic-reaction model to handle the SCR. The SCR reactions prove to be highly rate-dependent, and the kinetic approach performs well. The final evolution of the ASPEN Plus simulation uses an equilibrium model for the absorption operation and a kinetic model for the SCR. We explore retrofit options using this combined model and propose process improvements. We end this work with observations on the entire project in the form of conclusions and recommendations for improving the operation of the NOx abatement system through process-parameter optimization and equipment-retrofit schemes. By leading the reader through the process by which we arrived at a successful and highly informative computer model for NOx absorption and SCR, we hope to educate the reader on the subtleties of NOx abatement by absorption and SCR. We attempt to break down the numerous complex processes to present a less daunting prospect to the engineer challenged with the application of current NOx removal technology. In addition, we introduce the reader to the power and usefulness of computer modeling in instances of such complexity. The model teaches us about the details of the process and helps us develop concrete information for its optimization. Ideally, the reader could use a similar approach in tackling related operations and not confine the usefulness of this thesis to NOx absorption and SCR. The audiences we think would benefit from exposure to this thesis are the following: • environmental engineers with a NOx problem; • process engineers interested in optimization tools; • design engineers exploring flue-gas treatment options; • combustion engineers desiring to learn about SCR; • chemists and mathematicians intrigued by the complexities of NOx absorption chemistry.
- An expert system for solvent-based separation process synthesis. Brunet, Jean-Christophe (Virginia Tech, 1992-04-18). Expert systems are increasingly used in chemical engineering. This work continues the development of an EXpert system for SEParation flowsheet synthesis named EXSEP. Written in Prolog, it can generate flowsheets for four multicomponent separations: distillation, absorption, stripping, and liquid-liquid extraction. For these separations, we describe a large collection of heuristics (or rules) used for flowsheet synthesis. EXSEP uses several of these heuristics and the Kremser equation to test the thermodynamic feasibility of separation tasks. EXSEP requires only basic input data, such as the expected component flow rates in each product and the component K-values. With those data, EXSEP searches for the sets of the number of theoretical stages, solvent flow rate, and component-recovery ratios that characterize a number of feasible and economical flowsheets. The use of the component assignment matrix (CAM), combined with Prolog list processing, makes EXSEP very fast (several seconds) at generating solutions. We test EXSEP with several examples of industrial separation processes and compare the results with the literature. We also compare EXSEP results with rigorous simulations using commercial CAD software (e.g., DESIGN II). In most cases, EXSEP gives very similar, and even better, flowsheets. However, EXSEP is limited to dilute solvent-based separations and cannot solve problems where the major feed component is also the solvent (e.g., a sour-water steam stripper). The implementation of EXSEP on the IBM PC makes it very "user-friendly". In the future, EXSEP should be expanded with additional modules, such as extractive and azeotropic distillation and bulk absorption. It should also include modules for separation-method and solvent selection, which are great challenges in flowsheet synthesis.
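The Kremser equation mentioned in this abstract relates the number of theoretical stages and the absorption factor to the fractional recovery of a solute; a minimal sketch, with absorption factor A = L/(K·V) and N theoretical stages (the function name is ours, and EXSEP's surrounding feasibility-test logic is not shown in the abstract):

```python
def kremser_recovery(A, N):
    """Fractional solute recovery in an absorber with N theoretical
    stages and absorption factor A = L/(K*V), per the Kremser equation:
    recovery = (A**(N+1) - A) / (A**(N+1) - 1)."""
    if abs(A - 1.0) < 1e-12:
        return N / (N + 1.0)  # limiting case as A -> 1
    return (A ** (N + 1) - A) / (A ** (N + 1) - 1.0)
```

For instance, a single stage with A = 2 recovers two-thirds of the solute, and recovery approaches 100% as N grows whenever A > 1, which is the kind of quick feasibility screen the abstract describes.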
- Fundamental Modeling of Solid-State Polymerization Process Systems for Polyesters and Polyamides. Lucas, Bruce (Virginia Tech, 2005-08-25). The dissertation describes and assembles the building blocks for sound and accurate models for solid-state polymerization process systems of condensation polymers, particularly poly(ethylene terephthalate) and nylon-6. The work centers on an approach for modeling commercial-scale, as opposed to laboratory-scale, systems. The focus is not solely on coupled polymerization and diffusion, but extends to crystallization, physical properties, and phase equilibrium, which all enhance the robustness of the complete model. There are three applications demonstrating the utility of the model for a variety of real, industrial plant operations. One of the validated simulation models is for commercial production of three different grades of solid-state PET. There are also validated simulation models for the industrial leaching and solid-state polymerization of nylon-6 covering a range of operating conditions. The results of these studies justify our mixing-cell modeling approach as well as the inclusion of all relevant fundamental concepts. The first several chapters discuss in detail the engineering fundamentals that we must consider for modeling these polymerization process systems. These include physical properties, phase equilibrium, crystallization, diffusion, polymerization, and additional modeling considerations. The last two chapters cover the modeling applications.
- Modeling, Simulation, and Optimization of Large-Scale Commercial Desalination Plants. Al-Shayji, Khawla Abdul Mohsen (Virginia Tech, 1998-04-17). This dissertation introduces desalination processes in general and multistage flash (MSF) and reverse osmosis (RO) in particular. It presents the fundamental and practical aspects of neural networks and provides an overview of their structures, topology, strengths, and limitations. This study includes neural network applications to prediction problems of large-scale commercial MSF and RO desalination plants, in conjunction with statistical techniques to identify the major independent variables and optimize process performance. In contrast to several recent studies, this work utilizes actual operating data (not simulated) from a large-scale commercial MSF desalination plant (48 million gallons per day capacity, MGPD) and an RO plant (15 MGPD) located in Kuwait and the Kingdom of Saudi Arabia, respectively. We apply NeuralWorks Professional II/Plus (NeuralWare, 1993) and SAS (SAS Institute Inc., 1996) software to accomplish this task. This dissertation demonstrates how to apply modular and equation-solving approaches for steady-state and dynamic simulations of large-scale commercial MSF desalination plants using ASPEN PLUS (Advanced System for Process Engineering PLUS) and SPEEDUP (Simulation Program for Evaluation and Evolutionary Design of Unsteady Processes), marketed by Aspen Technology, Cambridge, MA. This work illustrates the development of an optimal operating envelope for achieving stable operation of a commercial MSF desalination plant using the SPEEDUP model. We then discuss model linearization around nominal operating conditions and arrive at pairing schemes for manipulated and controlled variables by interaction analysis. Finally, this dissertation describes our experience in applying a commercial software package, DynaPLUS, for combined steady-state and dynamic simulations of a commercial MSF desalination plant.
This dissertation is unique and significant in that it reports the first comprehensive study of predictive modeling, simulation, and optimization of large-scale commercial desalination plants. It is the first detailed and comparative study of commercial desalination plants using both artificial intelligence and computer-aided design techniques. The resulting models are able to reproduce accurately the actual operating data and to predict the optimal operating conditions of commercial desalination plants.
- Multiscale Modeling of an Industrial Nylon-6 Leacher. Gaglione, Anthony (Virginia Tech, 2007-01-23). This thesis presents a multiscale model of an industrial nylon-6 leacher. We develop several models at various spatial scales and implement them together in a simple, efficient way to develop an overall leacher model. We solve dynamic transport differential equations using the finite-volume method and the method of lines in an in-house-developed FORTRAN program. We use the ODEPACK package of ordinary differential equation (ODE) solvers to solve our system of coupled ODEs. Our multiscale model performs transport, thermodynamic, physical-property, and mass-transfer calculations at the finite-volume scale. We introduce two additional scales: a mesoscale, in which we perform computational fluid dynamics (CFD) simulations, and a molecular scale. Our CFD simulations solve for the turbulent properties of fluid flowing over a packed bed. We incorporate the turbulent diffusivity of the fluid into our finite-volume leacher model. We perform molecular simulations and use the conductor-like screening model-segment activity coefficient (COSMO-SAC) model to generate solubility predictions of small, cyclic oligomers in water and ε-caprolactam. Additionally, we develop an extension of COSMO-SAC to model polymer species, which we refer to as Polymer-COSMO-SAC, and apply it to solve liquid-liquid equilibrium equations. We present a unique methodology for applying COSMO-based models to polymer species, which shows reasonable results for nylon-6. Because of the computational intensity of our Polymer-COSMO-SAC liquid-liquid equilibrium algorithm, we generate pre-computed tables of equilibrium predictions that we import into our leacher model. Our integration of multiscale models balances efficiency and feasibility with accuracy. We are able to use our multiscale models to estimate most necessary parameters, but we need to fit two mass-transfer-related parameters to industrial data. We validate our model against the plant data and find average absolute errors in the final mass percent of ε-caprolactam and cyclic dimer in the polymer chips of 25.0% and 54.7%, respectively. Several plant data sets are suspected outliers, and we believe an unforeseen equilibrium limitation may cause this discrepancy. If we remove these outlying data sets, we find average absolute errors of 7.5% and 19.3% for ε-caprolactam and cyclic dimer, respectively. We then use our validated model to perform application and sensitivity studies to gain critical insight into the leacher's operating conditions.
- Neural Networks in Bioprocessing and Chemical Engineering. Baughman, D. Richard (Virginia Tech, 1995-12-01). This dissertation introduces the fundamental principles and practical aspects of neural networks, focusing on their applications in bioprocessing and chemical engineering. This study introduces neural networks and provides an overview of their structures, strengths, and limitations, together with a survey of their potential and commercial applications (Chapter 1). In addition to covering both the fundamental and practical aspects of neural computing (Chapter 2), this dissertation demonstrates, by numerous illustrative examples, practice problems, and detailed case studies, how to develop, train, and apply neural networks in bioprocessing and chemical engineering. This study includes the neural network applications of interest to biotechnologists and chemical engineers in four main groups: (1) fault classification and feature categorization (Chapter 3); (2) prediction and optimization (Chapter 4); (3) process forecasting, modeling, and control of time-dependent systems (Chapter 5); and (4) preliminary design of complex processes using a hybrid combination of expert systems and neural networks (Chapter 6). This dissertation is also unique in that it includes the following ten detailed case studies of neural network applications in bioprocessing and chemical engineering: · Process fault-diagnosis of a chemical reactor. · Leonard-Kramer fault-classification problem. · Process fault-diagnosis for an unsteady-state continuous stirred-tank reactor system. · Classification of protein secondary-structure categories. · Quantitative prediction and regression analysis of complex chemical kinetics. · Software-based sensors for quantitative predictions of product compositions from fluorescent spectra in bioprocessing. · Quality control and optimization of an autoclave curing process for manufacturing composite materials. · Predictive modeling of an experimental batch fermentation process. · Supervisory control of the Tennessee Eastman plant-wide control problem. · Predictive modeling and optimal design of extractive bioseparation in aqueous two-phase systems. This dissertation also includes a glossary, which explains the terminology used in neural network applications in science and engineering.
- Optimization in electrical distribution systems: Discrete Ascent Optimal Programming. Dolloff, Paul A. (Virginia Tech, 1996-02-15). This dissertation presents a new algorithm for optimal power flow in distribution systems. The new algorithm, Discrete Ascent Optimal Programming (DAOP), will converge to the same solution as the Lagrange multiplier approach, as demonstrated by example. An intuitive discussion illustrating the path of convergence is presented along with a theorem concerning convergence. Because no partial derivatives, solutions of simultaneous equations, or matrix operations are required, the DAOP algorithm is simple to apply and program. DAOP is especially suited for programming with pointers. Advantages of the new algorithm include its simplicity, ease of incorporating inequality constraints, and the ability to predict the number of steps required to reach a solution. In addition to optimal power flow, the algorithm, heuristic in nature, can be applied to switch placement design, reconfiguration, and economic dispatch. The basic principles of the algorithm have been used to devise a phase balancing routine which has been implemented in the Distribution Engineering Workstation (DEWorkstation) software package sponsored by the Electric Power Research Institute (EPRI). The new algorithm presented in this dissertation works toward a solution by performing a series of calculations within a finite number of steps. At the start of the algorithm, the assumption is made that no power is flowing in the system. Each step adds a discrete unit of load to the system in such a fashion as to minimize loss. As progress toward the solution is made, more and more load is satisfied and the losses in the system continue to increase. The algorithm is terminated when all system load is satisfied. When the algorithm is finished, the sources which should supply each load have been identified along with the amount of power delivered by each source.
Discussion will show that the method will converge to a solution that is within the discrete step size of the optimum. The algorithm can be thought of as an ascent method because the cost (losses) continually increases as more and more load is satisfied. Hence, the name Discrete Ascent Optimal Programming (DAOP) has been given to the algorithm. The new algorithm uses the topology of the power system such that the entire system is not considered at each step. Therefore, DAOP is not an exhaustive state enumeration scheme. Only those portions of the system containing loads most closely connected (via least loss paths) to the sources are first considered. As loads become supplied during the course of the solution, other loads are considered and supplied until the system is fully loaded.
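The stepwise loading procedure described above can be sketched as a greedy loop. This is an illustrative reconstruction under simplifying assumptions (quadratic I²R path losses and a single equivalent resistance per load-source path), not the dissertation's implementation:

```python
def daop_allocate(demands, paths, unit=1.0):
    """Greedy DAOP-style allocation (illustrative sketch). `demands`
    maps load -> remaining demand; `paths` maps (load, source) -> path
    resistance. Starting from zero flow, each step supplies one discrete
    `unit` of remaining load along whichever (load, source) path adds
    the smallest incremental I^2*R loss, until all load is satisfied."""
    flow = {key: 0.0 for key in paths}  # flow on each (load, source) path
    remaining = dict(demands)
    while any(d > 1e-9 for d in remaining.values()):
        best, best_cost = None, None
        for (load, src), r in paths.items():
            if remaining[load] <= 1e-9:
                continue  # this load is already fully supplied
            step = min(unit, remaining[load])
            f = flow[(load, src)]
            cost = r * ((f + step) ** 2 - f ** 2)  # incremental loss
            if best_cost is None or cost < best_cost:
                best, best_cost = (load, src, step), cost
        load, src, step = best
        flow[(load, src)] += step
        remaining[load] -= step
    return flow
```

With two equal-resistance paths to one load, the loop alternates between them, splitting the load evenly, which mirrors the ascent behavior the dissertation describes: cost (loss) only grows as more load is satisfied, and termination occurs after a predictable number of discrete steps.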
- Predicting Phase Equilibria Using COSMO-Based Thermodynamic Models and the VT-2004 Sigma-Profile. Oldland, Richard Justin (Virginia Tech, 2004-11-16). Solvation-thermodynamics models based on computational quantum mechanics, such as the conductor-like screening model (COSMO), provide a good alternative to traditional group-contribution methods for predicting thermodynamic phase behavior. Two COSMO-based thermodynamic models are COSMO-RS (real solvents) and COSMO-SAC (segment activity coefficient). The main molecule-specific input for these models is the sigma profile, the probability distribution of a molecular surface segment having a specific charge density. Generating the sigma profiles represents the most time-consuming and computationally expensive aspect of using COSMO-based methods. A growing number of scientists and engineers are interested in COSMO-based thermodynamic models but are intimidated by the complexity of generating the sigma profiles. This thesis presents the first free, open-literature database of 1,513 self-consistent sigma profiles, together with two validation examples. Making these profiles available will enable interested scientists and engineers to use the quantum-mechanics-based COSMO methods without having to perform quantum-mechanics calculations themselves. This thesis summarizes the application experiences reported up to October 2004 to guide the use of the COSMO-based methods. Finally, this thesis also provides a FORTRAN program and a procedure to generate additional sigma profiles consistent with those presented here, as well as a FORTRAN program to generate binary phase-equilibrium predictions using the COSMO-SAC model.
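Conceptually, a sigma profile is an area-weighted histogram of the surface screening-charge densities computed by COSMO; a minimal sketch (the bin count, charge range, and function interface here are assumptions, and the VT-2004 database applies its own segment-averaging conventions not shown):

```python
import numpy as np

def sigma_profile(areas, charges, bins=51, sigma_range=(-0.025, 0.025)):
    """Area-weighted histogram of surface screening-charge densities
    (e/Angstrom^2) over a molecule's COSMO surface segments: the
    sigma profile used by COSMO-RS/COSMO-SAC (illustrative sketch)."""
    edges = np.linspace(sigma_range[0], sigma_range[1], bins + 1)
    p, _ = np.histogram(charges, bins=edges, weights=areas)
    return edges, p  # bin edges and area per charge-density bin
```

The resulting distribution sums to the molecule's total surface area, which is why sigma profiles can serve as the sole molecule-specific input to the activity-coefficient models the thesis describes.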
- Predictive Modeling of Large-Scale Integrated Refinery Reaction and Fractionation Systems from Plant Data: Fluid Catalytic Cracking (FCC) and Continuous Catalyst Regeneration (CCR) Catalytic Reforming Processes. Pashikanti, Kiran (Virginia Tech, 2011-08-31). This dissertation includes two accounts of rigorous petroleum refinery modeling using rigorous reaction and fractionation units. The models consider various process phenomena and have been used extensively during the course of a six-month study to understand and predict plant behavior. This work also includes extensive guides to allow users to develop similar models using commercial software tools. (1) Predictive Modeling of Large-Scale Integrated Refinery Reaction and Fractionation Systems from Plant Data: Fluid Catalytic Cracking (FCC) Process with Planning Applications: This work presents the methodology to develop, validate, and apply a predictive model for an integrated fluid catalytic cracking (FCC) process. We demonstrate the methodology using data from a commercial FCC plant in the Asia Pacific with a feed capacity of 800,000 tons per year. Our model accounts for the complex cracking kinetics in the riser-regenerator and associated gas-plant phenomena. We implement the methodology with Microsoft Excel spreadsheets and a commercial software tool, Aspen HYSYS/Petroleum Refining from Aspen Technology, Inc. The methodology is equally applicable to other commercial software tools. This model gives accurate predictions of key product yields and properties given feed qualities and operating conditions.
This work differentiates itself from previous work in this area through the following contributions: (1) detailed models of the entire FCC plant, including the overhead gas compressor, main fractionator, primary and sponge-oil absorbers, primary stripper, and debutanizer columns; (2) a process to infer the molecular composition required for the kinetic model from routinely collected bulk properties of the feedstock; (3) predictions of key liquid-product properties not published alongside previous related work (density, D-86 distillation curve, and flash point); (4) case studies showing industrially useful applications of the model; and (5) application of the model with an existing LP-based planning tool. (2) Predictive Modeling of Large-Scale Integrated Refinery Reaction and Fractionation Systems from Plant Data: Continuous Catalyst Regeneration (CCR) Reforming Process: This work presents a model for the rating and optimization of an integrated catalytic reforming process with UOP-style continuous catalyst regeneration (CCR). We validate this model using plant data from a commercial CCR reforming process handling a feed capacity of 1.4 million tons per year in the Asia Pacific. The model relies on routinely monitored data such as ASTM distillation curves, paraffin-naphthene-aromatic (PNA) analysis, and operating conditions. We account for the dehydrogenation, dehydrocyclization, isomerization, and hydrocracking reactions that typically occur with petroleum feedstock. In addition, this work accounts for the coke deposited on the catalyst and for the product recontacting sections.
This work differentiates itself from the reported studies in the literature through the following contributions: (1) detailed kinetic model that accounts for coke generation and catalyst deactivation; (2) complete implementation of a recontactor and primary product fractionation; (3) feed lumping from limited feed information; (4) detailed procedure for kinetic model calibration; (5) industrially relevant case studies that highlight the effects of changes in key process variables; and (6) application of the model to refinery-wide production planning.
- Predictive Modeling of Metal-Catalyzed Polyolefin Processes. Khare, Neeraj Prasad (Virginia Tech, 2003-11-14). This dissertation describes the essential modeling components and techniques for building comprehensive polymer process models for metal-catalyzed polyolefin processes. The significance of this work is that it presents a comprehensive approach to polymer process modeling applied to large-scale commercial processes. Most researchers focus only on polymerization mechanisms and reaction kinetics, and neglect physical properties and phase equilibrium; both play key roles in the accuracy and robustness of a model. This work presents the fundamental principles and practical guidelines used to develop and validate both steady-state and dynamic simulation models for two large-scale commercial processes that use Ziegler-Natta polymerization to produce high-density polyethylene (HDPE) and polypropylene (PP). It also provides a model for the solution polymerization of ethylene using a metallocene catalyst. Existing modeling efforts do not include physical properties or phase equilibrium in their calculations; these omissions undermine the accuracy and predictive power of the models. The earlier chapters of the dissertation discuss the fundamental concepts we consider in polymer process modeling, including physical and thermodynamic properties, phase equilibrium, and polymerization kinetics. The later chapters provide the modeling applications described above.
- Process Integration: Unifying Concepts, Industrial Applications and Software Implementation. Mann, James Gainey (Virginia Tech, 1999-10-15). This dissertation is a complete unifying approach to the fundamentals, industrial applications, and software implementation of an important branch of process-engineering principles and practice, called process integration. The latter refers to the system-oriented, thermodynamically based, and integrated approaches to the analysis, synthesis, and retrofit of process plants, focusing on integrating the use of materials and energy and minimizing the generation of emissions and wastes. This work extends process integration to include applications for industrial water reuse and wastewater minimization, and presents previous developments in a unified manner. The basic ideas of process integration are: (1) to consider first the big picture by looking at the entire manufacturing process as an integrated system; (2) to apply process-engineering principles to key process steps to establish a priori targets for the use of materials and energy, and for the generation of emissions and wastes; and (3) to finalize the details of the process design and retrofit later to support the integrated view, particularly in meeting the established targets. Pinch technology is a set of primarily graphical tools for analyzing a process plant's potential for energy conservation, emission reduction, and waste minimization. Here, we identify targets for the minimum consumption of heating and cooling utilities, mass-separating agents, freshwater consumption, wastewater generation, and effluent treatment, and propose economical grassroots designs and retrofit projects to meet these goals. An emerging alternative approach to pinch technology, especially when analyzing complex water-using operations and effluent-treatment systems, is mathematical optimization.
We solve nonlinear programming problems for simple water-using operations with readily available commercial software; more complex, nonconvex problems require sophisticated reformulation techniques to guarantee optimality and remain the subject of continuing academic and commercial development. This work develops the principles and practice of an environmentally significant extension of process integration, called water-pinch technology. The new technology enables practicing engineers to maximize water reuse, reduce wastewater generation, and minimize effluent treatment through a combination of pinch technology and mathematical optimization. This work applies the technology in an industrial water-reuse demonstration project in a petrochemical complex in Taiwan, increasing the average water reuse (and thus reducing the wastewater-treatment load) in the five manufacturing facilities from 18.6% to 37%. This dissertation presents complete conceptual and software developments to unify the known branches of process integration, such as heat and mass integration and wastewater minimization, and explores new frontiers of application that greatly simplify the tools of process integration for practicing engineers.
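The mathematical-optimization route to water reuse can be sketched as a tiny allocation problem. The example below uses two water-using operations with one contaminant and fixes outlet concentrations at their limits, which keeps the problem linear; all flows, loads and concentration limits are invented for illustration and are not data from the Taiwan case study:

```python
# Minimal water-allocation LP for the mathematical-optimization approach
# to water reuse described above.  Two operations, one contaminant; all
# numbers are invented for illustration.  Flows in t/h, concentrations in
# kg contaminant per t of water (0.1 kg/t = 100 ppm).
from scipy.optimize import linprog

L1, C1out = 2.0, 0.10               # op 1: 2 kg/h load, max outlet 100 ppm
L2, C2in, C2out = 5.0, 0.05, 0.20   # op 2: 5 kg/h load, 50 / 200 ppm limits

# Variables: x = [f1, f2, r]  (freshwater to ops 1 and 2, reuse op1 -> op2)
c = [1.0, 1.0, 0.0]                 # minimize total freshwater f1 + f2
A_ub = [
    [-1.0, 0.0, 0.0],               # op-1 mass balance: f1*C1out >= L1
    [-1.0, 0.0, 1.0],               # reuse cannot exceed op-1 effluent
    [0.0, -C2in, C1out - C2in],     # op-2 inlet concentration limit
    [0.0, -C2out, C1out - C2out],   # op-2 outlet concentration limit
]
b_ub = [-L1 / C1out, 0.0, 0.0, -L2]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
f1, f2, r = res.x
print(f"freshwater: {f1 + f2:.2f} t/h "
      f"(vs {L1 / C1out + L2 / C2out:.2f} t/h without reuse)")
```

With reuse the freshwater target drops from 45 t/h to about 36.7 t/h; fixing outlet concentrations is what makes the single-contaminant case linear, and it is relaxing that assumption for multiple contaminants that produces the nonconvex problems mentioned above.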
- Process Modeling of Next-Generation Liquid Fuel Production - Commercial Hydrocracking Process and Biodiesel ManufacturingChang, Ai-Fu (Virginia Tech, 2011-09-07)This dissertation includes two process modeling studies: (1) predictive modeling of large-scale integrated refinery reaction and fractionation systems from plant data -- a hydrocracking process; and (2) integrated process modeling and product design of biodiesel manufacturing. (1) Predictive Modeling of Large-Scale Integrated Refinery Reaction and Fractionation Systems from Plant Data -- Hydrocracking Processes: This work presents a workflow to develop, validate and apply a predictive model for rating and optimization of large-scale integrated refinery reaction and fractionation systems from plant data. We demonstrate the workflow with two commercial processes in the Asia-Pacific region: a medium-pressure hydrocracking unit with a feed capacity of one million tons per year and a high-pressure hydrocracking unit with a feed capacity of two million tons per year. This work details the data-acquisition procedure needed to ensure accurate mass balances, and the implementation of the workflow using Excel spreadsheets and a commercial software tool, Aspen HYSYS from Aspen Technology, Inc. The workflow includes special tools to facilitate an accurate transition from the lumped kinetic components used in reactor modeling to the boiling-point-based pseudo-components required in rigorous tray-by-tray distillation simulation. Two to three months of plant data are used to validate the models' predictive capability. The resulting models accurately predict unit performance, product yields, and fuel properties from the corresponding operating conditions.
(2) Integrated Process Modeling and Product Design of Biodiesel Manufacturing: This work first presents a comprehensive review of the published literature pertaining to integrated process modeling and product design of biodiesel manufacturing, and identifies deficient areas for further development. It then presents new modeling tools and a methodology for the integrated process modeling and product design of an entire biodiesel manufacturing train. We demonstrate the methodology by simulating an integrated process to predict reactor and separator performance, stream conditions, and product qualities with different feedstocks. The results show that the methodology is effective not only for the rating and optimization of an existing biodiesel manufacturing process, but also for the design of a new process to produce biodiesel with specified fuel properties.
- Real time simulations of EMTP resultsKong, Kang-Chuen (Virginia Tech, 1993-11-05)A mathematical model of a power system is created in the Electromagnetic Transients Program (EMTP). The EMTP is an off-line program that simulates the transients produced during a fault under different operating conditions and records the three-phase voltage data at two computer relay locations. The EMTP simulation results are later played back in real time on an MVME133A computer platform, and the digital signals are converted to analog signals through a D/A board. These analog signals are filtered and provide inputs to the Power Systems Simulator via the power amplifiers. The computer relays under test are connected to the Power Systems Simulator. This thesis describes the EMTP simulation setup as well as the playback system's configuration and algorithm, along with the operating procedure for real-time playback of an EMTP output file.
- Real-time implementation of high breakdown point estimators in electric power systems via system decompositionCheniae, Michael G. (Virginia Tech, 1994-09-06)This dissertation presents a new, highly robust algorithm for electric power system state estimation. A graph-theory-based system decomposition scheme is coupled with a high breakdown point estimator to allow reliable identification of multiple interacting bad data, even in cases of conforming errors. The algorithm is inherently resistant to bad measurements in positions of leverage, makes no a priori assumptions about the measurement-error probability distribution, and is applicable in a real-time environment. In addition to presenting a new state estimation algorithm, the dissertation explores the weaknesses of two prominent state determination methods and summarizes the comparative advantages of high breakdown point estimators. New theorems quantifying the previously unexamined effect of system sparsity on the exact fit point of some members of this estimator family are presented; these results serve as the catalyst for the overall state estimation algorithm. Numerous practical implementation issues are addressed, with efficient implementation techniques described at each step.
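A classic member of the high-breakdown family discussed in this abstract is the least-median-of-squares (LMS) estimator, which stays on the bulk of the data even when a large fraction of measurements, including those at leverage positions, are grossly wrong. The toy below fits a line by elemental resampling; it is a generic regression illustration, not the dissertation's power-system formulation:

```python
# Toy least-median-of-squares (LMS) fit, a classic high-breakdown
# estimator: minimize the MEDIAN of squared residuals rather than their
# sum, so gross errors -- even at leverage points -- cannot drag the fit.
# Generic illustration only, not the power-system state estimator itself.
import numpy as np

rng = np.random.default_rng(0)

def lms_line(x, y, n_trials=500):
    """Fit y = a*x + b by minimizing the median squared residual,
    using random two-point elemental fits (a simple resampling scheme)."""
    best, best_med = None, np.inf
    n = len(x)
    for _ in range(n_trials):
        i, j = rng.choice(n, size=2, replace=False)
        if x[i] == x[j]:
            continue                      # vertical pair, skip
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        med = np.median((y - (a * x + b)) ** 2)
        if med < best_med:
            best, best_med = (a, b), med
    return best

# 20 clean points on y = 2x + 1, plus 8 gross errors at a leverage position
xc = np.linspace(0.0, 10.0, 20)
x = np.concatenate([xc, np.full(8, 30.0)])
y = np.concatenate([2.0 * xc + 1.0, np.full(8, -50.0)])
a, b = lms_line(x, y)
print(a, b)   # near 2 and 1; ordinary least squares would be dragged far off
```

Because the median ignores the worst half of the residuals, the estimator tolerates up to nearly 50% contamination, which is the "high breakdown point" property the abstract refers to; the system decomposition in the dissertation is what makes such combinatorial estimators tractable in real time.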
- Reduced order power system models for transient stability studiesAnderson, Sharon Lee (Virginia Tech, 1993-12-05)As the load on the power system grows and new transmission facilities become increasingly difficult to build, utilities must look for ways to make the most of the existing transmission system. Adaptive relaying is one way to enhance the capability of the power system. An adaptive out-of-step relay is being installed on the Florida-Georgia interface. This relay determines whether swings on the power system will remain stable by performing a faster-than-real-time transient stability study. Because of the computing capacity such a study requires, it cannot be performed on the full power system; a reduced model must be used. In this thesis, various methods of obtaining reduced models for use in the relay are explored. The models are verified against a full system model using the Electric Power Research Institute's (EPRI) Extended Transient-Midterm Stability Package (ETMSP).
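One standard building block for such network reduction is Kron reduction, which eliminates buses with no injection by taking a Schur complement of the bus admittance matrix. The sketch below uses an invented three-bus example and is offered as a generic illustration, not as the specific reduction procedure examined in the thesis:

```python
# Kron reduction: eliminate buses with no injection from the bus
# admittance matrix via a Schur complement, leaving an electrically
# equivalent network among the retained buses.  Invented 3-bus example
# (susceptances only), not the ETMSP-based models used in the thesis.
import numpy as np

def kron_reduce(Y, keep, drop):
    """Return the Schur complement Y_kk - Y_kd Y_dd^-1 Y_dk."""
    Ykk = Y[np.ix_(keep, keep)]
    Ykd = Y[np.ix_(keep, drop)]
    Ydk = Y[np.ix_(drop, keep)]
    Ydd = Y[np.ix_(drop, drop)]
    return Ykk - Ykd @ np.linalg.solve(Ydd, Ydk)

# 3-bus admittance matrix (per unit); bus 2 has no load or generation,
# so it can be eliminated without changing what buses 0 and 1 see.
Y = np.array([[-15j,   5j,  10j],
              [  5j, -13j,   8j],
              [ 10j,   8j, -18j]])
Yred = kron_reduce(Y, keep=[0, 1], drop=[2])
print(Yred)   # 2x2 equivalent network seen from buses 0 and 1
```

In this example the reduced matrix describes a single equivalent branch between the two retained buses; in practice such network reduction is combined with generator coherency aggregation to produce the dynamic equivalents a stability study needs.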
- Research and Development of Simulation and Optimization Technology for Commercial Nylon-6 Manufacturing ProcessesSeavey, Kevin Christopher (Virginia Tech, 2003-04-14)This dissertation concerns the development of simulation and optimization technology for industrial, hydrolytic nylon-6 polymerizations. The significance of this work is that it is a comprehensive and fundamental analysis of nearly all of the pertinent aspects of simulation. It works through the major steps of developing process models, including simulation of the reaction kinetics, phase equilibrium, physical properties, and mass-transfer-limited devolatilization. Using this work, we can build accurate models for all major processing equipment involved in nylon-6 production. Contributions in this dissertation are of two types. The first type concerns the formalization of existing knowledge of nylon-6 polymerization mixtures, mainly for documentation and teaching purposes; this includes reaction kinetics and physical properties. The second type concerns original research contributions: models for phase equilibrium, diffusivities of water and caprolactam, and devolatilization in vacuum-finishing reactors. We have designed all of the models herein to be fundamental, yet accessible to the practicing engineer. All of the analysis was done using commercial software packages offered by Aspen Technology, Cambridge, MA. We chose these packages for two reasons: (1) they enable one to quickly build fundamental steady-state and dynamic models of polymer trains; and (2) they are the only packages commercially available for simulating polymer trains.
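The reaction-kinetics layer of such a model can be illustrated with just the first step of hydrolytic nylon-6 chemistry, the reversible ring opening of caprolactam by water, integrated as a batch ODE system. The rate constants and initial charge below are invented round numbers; the actual kinetic scheme in the dissertation is far more complete (polycondensation, polyaddition, cyclic dimer, end-group catalysis):

```python
# Toy batch model of the hydrolytic ring-opening step of nylon-6
# chemistry: caprolactam (CL) + water (W) <-> aminocaproic acid (ACA).
# Rate constants and initial charge are invented round numbers; the
# full industrial kinetic scheme has many more reactions.
from scipy.integrate import solve_ivp

k_f, k_r = 0.1, 0.02               # forward (L/mol/h) and reverse (1/h) constants

def rhs(t, c):
    cl, w, aca = c
    r = k_f * cl * w - k_r * aca   # net ring-opening rate, mol/L/h
    return [-r, -r, r]

c0 = [8.8, 0.5, 0.0]               # mol/L: molten caprolactam + small water charge
sol = solve_ivp(rhs, (0.0, 10.0), c0, rtol=1e-8)
cl, w, aca = sol.y[:, -1]
print(f"after 10 h: CL={cl:.3f}, W={w:.3f}, ACA={aca:.3f} mol/L")
```

Even this one-reaction toy shows the water-limited, equilibrium-limited character of the chemistry; a full process model couples many such rate equations to phase equilibrium and devolatilization in each piece of equipment in the train.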