Browsing by Author "Loganathan, G. V."
- Adhesive areal sampling of gravel bed streams. Fripp, Jon Brooks (Virginia Tech, 1991-05-05). The characteristics of a given stream or river are linked to the material that makes up its channel bed. Usually, a vertical stratification by particle size can be recognized. The presence of a coarser surface layer is considered to be one of the most important features of a gravel bed stream. Since this surface layer consists of a distinct population of material, it is necessary to be able to separate it from the underlying material and quantify it distinctly. This is done through surface sampling. Two of the most common adhesive areal sampling techniques, and the subject of the present work, are known as clay and wax sampling. If the material obtained in an areal sample is analyzed as a frequency distribution by weight, it has been shown that the size distribution is biased in favor of the larger particles when compared to the results of a bulk sample. The present research shows that this bias is dependent not only upon the sampling method used to remove the material but also upon the size distribution of the sample itself. Not only are the raw results of areal samples not comparable with volumetric samples, but they are also not comparable with other areal samples. Before any comparisons are made among areal samples, it is recommended that the size distribution of each areal sample first be converted into the size distribution that would have resulted from an equivalent volumetric sample. The features and limitations of the gravel simulation model that is used to obtain the necessary conversion formula are also the subject of the present work. In addition, the conversion of both matrix- and framework-supported gravel mixtures that have been areally sampled with either clay or wax is addressed. Finally, criteria for approximating the minimum depth required for a volumetric sample are presented.
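The conversion the abstract calls for can be illustrated with the classical voidless-cube correction, which weights each areal weight fraction by grain size to the power -1 before renormalizing. This is only a sketch under that fixed-exponent assumption; the research summarized above finds that the appropriate correction varies with sampling method and mixture, so the exponent is left as a parameter rather than hard-coded.

```python
# Sketch: converting an areal-by-weight grain-size distribution to an
# equivalent volumetric-by-weight distribution. The classical voidless-cube
# conversion weights each size fraction by D**-1; the exponent is a parameter
# here because the appropriate value depends on sampling method and mixture.

def areal_to_volumetric(fractions, mean_sizes_mm, exponent=-1.0):
    """fractions: weight fractions of an areal sample per size class.
    mean_sizes_mm: geometric mean grain size of each class (mm).
    Returns renormalized volumetric-equivalent weight fractions."""
    weighted = [f * d ** exponent for f, d in zip(fractions, mean_sizes_mm)]
    total = sum(weighted)
    return [w / total for w in weighted]

# Example: an areal sample biased toward the coarse classes.
areal = [0.1, 0.3, 0.6]    # fine, medium, coarse weight fractions
sizes = [2.0, 8.0, 32.0]   # class geometric means, mm
vol = areal_to_volumetric(areal, sizes)
print([round(v, 3) for v in vol])   # coarse fraction shrinks after conversion
```

The conversion deflates the coarse fractions, consistent with the abstract's point that raw areal samples overrepresent large particles relative to volumetric samples.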
- Algorithmic Approaches for Solving the Euclidean Distance Location and Location-Allocation Problems. Al-Loughani, Intesar Mansour (Virginia Tech, 1997-07-08). This dissertation is concerned with the development of algorithmic approaches for solving the minisum location and location-allocation problems in which the Euclidean metric is used to measure distances. To overcome the nondifferentiability difficulty associated with the Euclidean norm function, specialized solution procedures are developed for both the location and the location-allocation problems. For the multifacility location problem (EMFLP), two equivalent convex differentiable reformulations are proposed. The first of these is formulated directly in the primal space, and relationships between its Karush-Kuhn-Tucker (KKT) conditions and the necessary and sufficient optimality conditions for EMFLP are established in order to explore the use of standard convex differentiable nonlinear programming algorithms that are guaranteed to converge to KKT solutions. The second equivalent differentiable formulation is derived via a Lagrangian dual approach based on the optimum of a linear function over a unit ball (circle). For this dual approach, which recovers Francis and Cabot's (1972) dual problem, we also characterize the recovery of primal location decisions, hence settling an issue that has remained open since 1972. In another approach for solving EMFLP, conjugate or deflected subgradient based algorithms along with suitable line-search strategies are proposed. The subgradient deflection method considered is the Average Direction Strategy (ADS) imbedded within the Variable Target Value Method (VTVM). The generation of two types of subgradients that are employed in conjunction with ADS is investigated. The first type is a simple valid subgradient that assigns zero components corresponding to the nondifferentiable terms in the objective function.
The second type expends more effort to derive a low-norm member of the subdifferential in order to enhance the prospect of obtaining a descent direction. Furthermore, a Newton-based line-search is also designed and implemented in order to enhance the convergence behavior of the developed algorithm. Various combinations of the above strategies are composed and evaluated on a set of test problems. Computational results for all the proposed algorithmic approaches are presented, using a set of test problems that include some standard problems from the literature. These results exhibit the relative advantages of employing the new proposed procedures. Finally, we study the capacitated Euclidean distance location-allocation problem. There exists no global optimization algorithm that has been developed and tested for this class of problems, aside from a total enumeration approach. We develop a branch-and-bound algorithm that implicitly/partially enumerates the vertices of the feasible region of the transportation constraints in order to determine a global optimum for this nonconvex problem. For deriving lower bounds on node subproblems, a specialized variant of the Reformulation-Linearization Technique (RLT) is suitably designed which transforms the representation of this nonconvex problem from the original defining space into a higher dimensional space associated with a lower bounding (largely linear) convex program. The maximum of the RLT relaxation based lower bound that is obtained via a deflected subgradient strategy applied to a Lagrangian dual formulation of this problem, and another readily computed lower bound in the projected location space is considered at each node of the branch-and-bound tree for fathoming purposes. In addition, certain cut-set inequalities in the allocation space, and objective function based cuts in the location space are generated to further tighten the lower bounding relaxation. 
Computational experience is provided on a set of randomly generated test problems to investigate both the RLT-based and the projected location-space lower bounding schemes. The results indicate that the proposed global optimization approach for this class of problems offers a promising, viable solution procedure. In fact, for two instances available in the literature, we report significantly improved solutions. The dissertation concludes with recommendations for further research on this challenging class of problems. Data for the collection of test problems is provided in the Appendix to facilitate further testing in this area.
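For the single-facility special case of the Euclidean minisum problem, the classical fixed-point scheme is Weiszfeld's iteration; it runs into exactly the nondifferentiability at demand points that the dissertation's reformulations are designed to avoid, and the sketch below simply guards against it. This is the textbook single-facility analogue, not the multifacility (EMFLP) procedure developed in the dissertation.

```python
import math

# Sketch: Weiszfeld's fixed-point iteration for the single-facility Euclidean
# minisum problem, the classical analogue of the multifacility problem (EMFLP)
# treated above. The iterate is a distance-weighted average of the demand
# points; a guard handles the nondifferentiable case of landing on a point.

def weiszfeld(points, weights, iters=200, eps=1e-12):
    # Start from the weighted centroid.
    x = sum(w * p[0] for p, w in zip(points, weights)) / sum(weights)
    y = sum(w * p[1] for p, w in zip(points, weights)) / sum(weights)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < eps:          # iterate landed on a demand point
                return px, py
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y

# Four unit-weight demand points at the corners of a square: the optimal
# facility location is the center.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
x, y = weiszfeld(pts, [1, 1, 1, 1])
print(round(x, 4), round(y, 4))   # -> 1.0 1.0
```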
- Analysis of Roanoke Region Weather Patterns Under Global Teleconnections. LaRocque, Eric John (Virginia Tech, 2006-02-07). This work attempts to relate global teleconnections, through physical phenomena such as the El Nino-Southern Oscillation (ENSO), Arctic Oscillation (AO), North Atlantic Oscillation (NAO), and the Pacific North American (PNA) pattern, to synoptic-scale weather patterns and precipitation in the Roanoke, Virginia region. The first chapter describes the behavior of ENSO by implementing non-homogeneous and homogeneous Markov chain models on a monthly time series of the Troup Southern Oscillation Index (SOI), a sea level pressure based index. In the second chapter, an attempt is made to relate global teleconnections (through ENSO and AO) to a synoptic-scale, station-centered set of weather types in order to assess trends in precipitation. The final portion of this work describes the spatial variability of seasonal precipitation in southwestern Virginia in a context that incorporates global teleconnections (through AO, PNA, NAO, and ENSO) and frontogenesis. It was found that the Markov property can be used to describe and predict the monthly evolution of ENSO. Also evident is an increased probability of a wetter spring in the Roanoke region when El Nino combines with the negative phase of the AO during the previous winter. Meanwhile, Roanoke winters subsequent to a fall season described by this same El Nino-AO condition are predicted to receive more precipitation than average. This work additionally showed possible trends between frontal-precipitation events in the Roanoke region and global teleconnections.
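The homogeneous Markov chain fit mentioned above amounts to counting phase-to-phase transitions in the classified index series and normalizing each row. A minimal sketch, using an illustrative phase sequence rather than real SOI data:

```python
# Sketch: estimating a homogeneous Markov-chain transition matrix from a
# monthly index classified into discrete phases (0 = La Nina, 1 = neutral,
# 2 = El Nino). The phase sequence below is illustrative, not actual SOI data.

def transition_matrix(phases, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(phases, phases[1:]):   # count observed transitions
        counts[a][b] += 1
    P = []
    for row in counts:                     # normalize each row to probabilities
        s = sum(row)
        P.append([c / s if s else 0.0 for c in row])
    return P

phases = [1, 1, 2, 2, 1, 0, 0, 1, 1, 2, 1, 1]
P = transition_matrix(phases, 3)
print([round(p, 2) for p in P[1]])   # transition probabilities out of neutral
```

A non-homogeneous version would estimate a separate matrix per calendar month; the counting step is the same.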
- Application of the Analytic Hierarchy Process Optimization Algorithm in Best Management Practice Selection. Young, Kevin D. (Virginia Tech, 2006-05-30). The efficiency of a best management practice (BMP) is defined simply as a measure of how well the practice or series of practices removes targeted pollutants. While this concept is relatively simple, mathematical attempts to quantify BMP efficiency are numerous and complex. Intuitively, the pollutant removal capability of a BMP should be fundamental to the BMP selection process. However, as evidenced by the absence of removal efficiency as an influential criterion in many BMP selection procedures, it is typically not at the forefront of the BMP selection and design process. Additionally, of particular interest to any developer or municipal agency is the financial impact of implementing a BMP. Not only does the implementation cost exist, but there are long-term maintenance costs associated with almost any BMP. Much like pollutant removal efficiency, implementation and maintenance costs seem as though they should be integral considerations in the BMP selection process. However, selection flow charts and matrices employed by many localities neglect these considerations. Among the categories of criteria to consider in selecting a BMP for a particular site or objective are site-specific characteristics; local, state, and federal ordinances; and implementation and long-term maintenance costs. A consideration such as long-term maintenance cost may manifest itself in a very subjective fashion during the selection process. For example, a BMP's cost may be of very limited interest to the reviewing locality, whereas cost may be the dominant selection criterion in the eyes of a developer. By contrast, the pollutant removal efficiency of a BMP may be necessarily prioritized in the selection process because of the required adherence to governing legislation. These are merely two possible criteria influencing selection.
As more and more selection criteria are considered, the task of objectively and optimally selecting a BMP becomes increasingly complex. One mathematical approach for optimization in the face of multiple influential criteria is the Analytic Hierarchy Process. "The analytic hierarchy process (AHP) provides the objective mathematics to process the inescapably subjective and personal preferences of an individual or a group in making a decision" (Schmoldt, 2001, pg. 15). This paper details the development of two categories of comprehensive BMP selection matrices expressing long-term pollutant removal performance and annual maintenance and operations cost respectively. Additionally, the AHP is applied in multiple scenarios to demonstrate the optimized selection of a single BMP among multiple competing BMP alternatives. Pairwise rankings of competing BMP alternatives are founded on a detailed literature review of the most popular BMPs presently implemented throughout the United States.
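The AHP machinery referenced above reduces to deriving a priority vector from a pairwise comparison matrix (classically, the principal eigenvector) and checking Saaty's consistency ratio. A minimal sketch with three hypothetical criteria (cost, removal efficiency, site constraints) and illustrative judgments not taken from the thesis:

```python
# Sketch: AHP priority weights via power iteration on the principal
# eigenvector of a pairwise comparison matrix, plus Saaty's consistency
# ratio. Criteria and judgments are hypothetical, for illustration only.

def ahp_weights(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):                 # power iteration, normalized to sum 1
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n   # principal eigenvalue
    ci = (lam - n) / (n - 1)                        # consistency index
    return w, ci / 0.58                             # random index RI = 0.58 for n = 3

# Rows/columns: cost, removal efficiency, site constraints (hypothetical).
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, cr = ahp_weights(A)
print([round(x, 3) for x in w], round(cr, 3))
```

A consistency ratio below 0.1 is the usual threshold for accepting the pairwise judgments.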
- Assessing Drought Flows For Yield Estimation. Gillespie, Jason Carter (Virginia Tech, 2002-12-12). Determining safe yield of an existing water supply is a basic aspect of water supply planning. Where water is withdrawn from a river directly without any storage, the withdrawal is constrained by the worst drought flow in the river. There is no flexibility for operational adjustments other than implementing conservation measures. Where there is a storage reservoir, yields higher than the flow in the source stream can be maintained for a period of time by releasing the water in storage. The determination of safe yield in this situation requires elaborate computation. This thesis presents a synthesis of methods of drought flow analysis and yield estimation. The yield depends on both the magnitude of the deficit and its temporal distribution. A new Markov chain analysis for assessing frequencies of annual flows is proposed. The Markov chain results compare very well with the empirical data analysis. Another advantage of the Markov chain analysis is that both high and low flows are considered simultaneously; no separate analyses for the lower and upper tails of the distribution are necessary. The temporal distribution of drought flows is considered with the aid of the generalized bootstrap method, time series analysis, and cluster sequencing of worsening droughts called Waitt's procedure. The methods are applied to drought inflows for three different water supply reservoirs in Spotsylvania County, Virginia, and different yield estimates are obtained.
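Of the temporal methods listed, the bootstrap is the simplest to sketch: resampling blocks of consecutive annual flows preserves short-range persistence while generating alternative drought sequences. A minimal sketch with illustrative flow values; the thesis's "generalized" variant and Waitt's sequencing are not reproduced here.

```python
import random

# Sketch: a moving-block bootstrap of annual inflows. Blocks of consecutive
# years preserve year-to-year persistence; the worst running 3-year total of
# a synthetic trace serves as a crude drought-severity proxy. Flow values
# are illustrative, not reservoir data from the thesis.

def block_bootstrap(series, block_len, n_years, rng):
    out = []
    while len(out) < n_years:
        start = rng.randrange(len(series) - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n_years]

rng = random.Random(1)
flows = [820, 640, 910, 450, 500, 700, 880, 390, 610, 760]  # annual inflows
trace = block_bootstrap(flows, block_len=3, n_years=20, rng=rng)
worst = min(sum(trace[i:i + 3]) for i in range(len(trace) - 2))
print(len(trace), worst)
```

Repeating this over many synthetic traces yields an empirical distribution of worst-case multi-year deficits for yield analysis.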
- Assessing Urban Non-Point Source Pollutants at the Virginia Tech Extended Dry Detention Pond. Hodges, Kimberly Jean (Virginia Tech, 1997-05-23). With a growing concern for the environment and increasing urbanization of rural areas, understanding the characteristics of urban non-point source pollution has become a focus for water quality investigators. Once thought to be a small contributor to the pollution problem, urban non-point sources are now responsible for transporting over 50% of all pollutants into natural waterways. Assessing non-point source pollution is the key to future water quality improvements in natural receiving waters. The purpose of this research was to investigate the water quality of an urbanized watershed, to analyze current prediction methods, and to evaluate the effectiveness of an extended dry detention basin as a pollutant removal management practice on a 21.68-acre urban watershed on the Virginia Tech campus. This research included extensive stormwater monitoring and sampling to characterize the runoff and water quality from an urban watershed. The resulting analysis included comparing well-known desktop prediction methods with observed pollutant removal rates for the extended dry detention basin and with values reported in the literature. Finally, the study team calibrated the PSRM-QUAL model for watershed prediction of non-point source runoff and pollution. The results of the stormwater monitoring process show that water quality prediction methods are not very successful on a storm-by-storm basis, but can be fairly accurate over longer periods of time with little or no storm water quality sampling. The extended dry detention basin is a simple yet effective management practice for the removal of sediments and sediment-bound pollutants.
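The removal-efficiency calculation underlying such a monitoring study can be sketched with the summation-of-loads method: efficiency is one minus total outflow load over total inflow load across monitored storms. The storm volumes and event mean concentrations below are illustrative, not data from this study.

```python
# Sketch: summation-of-loads removal efficiency for a detention basin over
# several monitored storms. Each event contributes volume * event mean
# concentration (EMC) to the inflow and outflow loads. Numbers are illustrative.

def removal_efficiency(events):
    """events: list of (in_volume, in_EMC, out_volume, out_EMC)."""
    load_in = sum(v * c for v, c, _, _ in events)
    load_out = sum(v * c for _, _, v, c in events)
    return 1.0 - load_out / load_in

storms = [
    (1200, 180, 1100, 60),   # volume (m3), TSS EMC (mg/L), out volume, out EMC
    (800,  240, 760,  90),
    (1500, 150, 1350, 55),
]
print(round(removal_efficiency(storms), 3))   # -> 0.67
```

Aggregating loads across storms, rather than averaging per-storm efficiencies, keeps large events from being swamped by small ones.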
- An Assessment and Modeling of Copper Plumbing Pipe Failures due to Pinhole Leaks. Farooqi, Owais Ehtisham (Virginia Tech, 2006-05-19). Pinhole leaks in copper plumbing pipes are a significant concern for homeowners. The problem is spread across the nation and remains a threat to plumbing systems of all ages. Due to the absence of a single accepted mechanistic theory, no preventive measure is available to date. Most of the present mechanistic theories are based on analysis of failed pipe samples; however, an objective comparison with other pipes that did not fail is seldom made. The variability in hydraulic and water quality parameters has made the problem complex and unquantifiable in terms of plumbing susceptibility to pinhole leaks. The present work determines the spatial and temporal spread of pinhole leaks across the United States. Hotspot communities are identified based on repair histories and surveys. An assessment of variability in water quality is presented based on nationwide water quality data. A synthesis of causal factors is presented, and a scoring system for copper pitting is developed using goal programming. A probabilistic model is presented to evaluate the optimal replacement time for plumbing systems. Methodologies for mechanistic modeling based on corrosion thermodynamics and kinetics are presented.
- Bedload Transport in Gravel-Bed Streams under a Wide Range of Shields Stresses. Almedeij, Jaber H. (Virginia Tech, 2002-03-28). Bedload transport is a complicated phenomenon in gravel-bed streams. Several factors account for this complication, including the different hydrologic regime under which different stream types operate and the wide range of particle sizes of channel bed material. Based on the hydrologic regime, there are two common types of gravel-bed streams: perennial and ephemeral. In terms of channel bed material, a gravel bed may have either unimodal or bimodal sediment. This study examines more closely some aspects of bedload transport in gravel-bed streams and proposes explanations based on fluvial mechanics. First, a comparison between perennial and ephemeral gravel-bed streams is conducted. This comparison demonstrates that under a wide range of Shields stresses, the trends exhibited by the bedload transport data of the two stream types collapse into one continuous curve; thus a unified approach is warranted. Second, an empirical bedload transport relation that accounts for the variation in the make-up of the surface material within a wide range of Shields stresses is developed. The accuracy of the relation is tested using available bedload transport data from streams with unimodal sediment. The relation is also compared against other formulae available in the literature that are commonly used for predicting bedload transport in gravel-bed streams. Third, an approach is proposed for transforming the bimodal sediment into two independent unimodal fractions, one for sand and another for gravel. This transformation makes it possible to carry out two separate computations of bedload transport rate using the bedload relation developed in this study for unimodal sediment. The total bedload transport rate is estimated by adding together the two contributions.
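The Shields stress that organizes these data is the dimensionless bed shear stress, tau* = tau / ((rho_s - rho) g D). A minimal computation using the depth-slope product for boundary shear; the flow depth, slope, and grain size are illustrative.

```python
# Sketch: the Shields stress, the dimensionless bed shear stress under which
# the bedload data above collapse. tau* = tau / ((rho_s - rho) * g * D), with
# boundary shear from the depth-slope product, tau = rho * g * h * S.

def shields_stress(depth_m, slope, d_m, rho=1000.0, rho_s=2650.0, g=9.81):
    tau = rho * g * depth_m * slope           # boundary shear stress, Pa
    return tau / ((rho_s - rho) * g * d_m)

# A gravel-bed example: 0.8 m deep flow, slope 0.004, D50 = 40 mm.
print(round(shields_stress(0.8, 0.004, 0.040), 4))   # -> 0.0485
```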
- Calibration of Snowmaking Equipment for Efficient Use on Virginia's Smart Road. Shea, Edward (Virginia Tech, 1999-08-04). Virginia's Smart Road, to be completed by early 2000, is a test bed for numerous research activities including snow and ice control, remote sensor testing, snow removal management, safety and human factors, and vehicle dynamics. An all-weather testing system will feature 75 automated snowmaking towers. In order to provide timely and repeatable weather scenarios, equipment operators will need to fully understand the limitations and capabilities of the snowmaking system. The research presented herein addresses the hydraulic and hydrologic variables and the design methodology needed to implement efficient snowmaking at a transportation research facility. Design variables include nozzle configuration, water pressure and flowrate, compressed air pressure and flowrate, tower orientation, snow inducer concentration, water and compressed air temperature, and ambient weather conditions. Testing and data collection were performed at the Snow Economics, Inc. research and development site at Seven Springs Mountain Resort in Champion, PA. The results of this work will be used to guide the operators of the Smart Road on the most efficient use of the snowmaking equipment.
- Cavitation and Bubble Formation in Water Distribution Systems. Novak, Julia Ann (Virginia Tech, 2005-04-08). Gaseous cavitation is examined from a practical and theoretical standpoint. Classical cavitation experiments which disregard dissolved gas are not directly relevant to natural water systems and require a redefined cavitation inception number which considers dissolved gases. In a pressurized water distribution system, classical cavitation is only expected to occur at extreme negative pressure caused by water hammer or at certain valves. Classical theory does not describe some practical phenomena including noisy pipes, necessity of air release valves, faulty instrument readings due to bubbles, and reports of premature pipe failure; inclusion of gaseous cavitation phenomena can better explain these events. Gaseous cavitation can be expected to influence corrosion in water distribution pipes. Bubbles can form within the water distribution system by a mechanism known as gaseous cavitation. A small scale apparatus was constructed to track gaseous cavitation as it could occur in buildings. Four independent measurements including visual observation of bubbles, an inline turbidimeter, an ultrasonic flow meter, and an inline total dissolved gas probe were used to track the phenomenon. All four measurements confirmed that gaseous cavitation was occurring within the experimental distribution system, even at pressures up to 40 psi. Gaseous cavitation was more likely at higher initial dissolved gas content, higher temperature, higher velocity and lower pressure. Certain changes in pH, conductivity, and surfactant concentration also tended to increase the likelihood of cavitation. For example, compared to the control at pH 5.0 and 30 psig, the turbidity increased 295% at pH 9.9. The formation of bubbles reduced the pump's operating efficiency, and in the above example, the velocity was decreased by 17% at pH 9.9 versus pH 5.0.
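One way to express the redefined inception number the abstract argues for is to credit dissolved-gas tension alongside vapor pressure in the classical definition sigma = (p - p_v) / (0.5 rho v^2). The sketch below uses that form; the pressures, velocity, and gas tension are illustrative and the exact redefinition used in the thesis may differ.

```python
# Sketch: classical cavitation number versus a gas-adjusted version in which
# the dissolved-gas tension p_gas is added to the vapor pressure, lowering the
# margin against bubble formation. Values are illustrative.

def cavitation_number(p_abs, p_vapor, rho, v, p_gas=0.0):
    return (p_abs - p_vapor - p_gas) / (0.5 * rho * v * v)

# Water at 20 C (vapor pressure ~2.34 kPa), 2 m/s flow, 120 kPa absolute.
p, pv, rho, v = 120e3, 2.34e3, 998.0, 2.0
classical = cavitation_number(p, pv, rho, v)
gaseous = cavitation_number(p, pv, rho, v, p_gas=90e3)  # gas-supersaturated water
print(round(classical, 1), round(gaseous, 1))
```

The gas-adjusted number is far smaller at the same pressure and velocity, consistent with bubbles appearing at pressures well above the classical prediction.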
- Characterization of palmer drought index as a precursor for drought mitigation. Lohani, Vinod K. (Virginia Tech, 1995-08-15). Coping with droughts involves two phases. In the first phase, the drought susceptibility of a region should be assessed to develop proper additional sources of supply that can be exploited during the course of a drought. The second phase focuses on the issuance of drought warnings and the exercise of mitigation measures during a drought. This kind of information is extremely valuable to decision-making authorities. In this dissertation, three broad schemes, i) time series modeling, ii) Markov chain analysis, and iii) a dynamical systems approach, are put forward for computing the drought parameters necessary for understanding the scope of a drought. These parameters include drought occurrence probabilities, durations of the various drought severity classes, which describe a region's drought susceptibility, and first times of arrival of non-drought classes, which signify times of relief for a drought-affected region. These schemes also predict drought based on given current conditions. In the time series analysis, two classes of models, the fixed-parameter and the time-varying models, are formulated. To overcome the bimodal behavior of the Palmer Drought Severity Index (PDSI), due primarily to the backtracking scheme that resets the temporary index values as the PDSI values, the models are fitted to the Z index in addition to the PDSI for forecasting of the PDSI.
- Comparison of two hydrological models on a Virginia Piedmont watershed. Fu, Youtong (Virginia Tech, 1994-12-05). KINEROS and PSRM-QUAL, two distributed parameter, event-based hydrologic models, were applied to the Foster Creek Watershed, Louisa County, Virginia. The simulations were conducted using published data and a ten-year database from the watershed. Data management and analysis were supported through the use of PC-VirGIS, a DOS-based GIS package developed by the Information Support Systems Laboratory, Virginia Tech. The performance of the two models was judged against criteria established to compare the simulated and recorded peak discharge rates, total runoff volumes, and times to peak. Goodness-of-fit criteria were based on graphical comparison, relative error, model efficiency, linear regression, hypothesis testing, and variance. Based on these measures, the simulated results of both models were acceptable. KINEROS generally made better predictions of peak discharge rate and time to peak. Hydrograph shapes also generally matched the recorded sequence more closely. PSRM-QUAL simulated the total runoff volume slightly better than KINEROS. The sensitivity of KINEROS and PSRM-QUAL to the model input parameters was evaluated. For KINEROS, peak discharge rate and runoff volume were very sensitive to changes in rainfall amount, saturated hydraulic conductivity, and effective capillary drive. For PSRM-QUAL, peak discharge rate and total runoff volume were very sensitive to changes in SCS CN, initial abstraction coefficient, and rainfall amount.
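Among the goodness-of-fit criteria listed, the model efficiency (commonly the Nash-Sutcliffe efficiency) is easy to state: one minus the ratio of residual variance to the variance of the observations, so 1 is a perfect fit and values below 0 mean the model is worse than predicting the observed mean. The discharge values below are illustrative, not Foster Creek data.

```python
# Sketch: Nash-Sutcliffe model efficiency, a standard goodness-of-fit measure
# for comparing simulated and recorded hydrographs. Values are illustrative.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [12.0, 30.0, 55.0, 41.0, 18.0]   # recorded peak discharges (m3/s)
sim = [10.0, 33.0, 50.0, 44.0, 20.0]   # simulated peaks
print(round(nash_sutcliffe(obs, sim), 3))   # -> 0.958
```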
- A Comprehensive Decision Support System (CDSS) for Optimal Pipe Renewal using Trenchless Technologies. Khambhammettu, Prashanth (Virginia Tech, 2001-07-20). Water distribution system pipes span thousands of miles and form a significant part of the total infrastructure of the country. Rehabilitation of this underground infrastructure is one of the biggest challenges currently facing the water industry. Water main deterioration is twofold: the main itself loses strength over time and breaks; also, there is degradation of water quality and hydraulic capacity due to buildup of material within a main. The increasing repair and damage costs and degrading services demand that a deteriorating water main be replaced at an optimal time instead of continuing to repair it. In addition, expanding business districts, indirect costs, and obstructions such as protected areas, waterways, and roadways require examination of trenchless technologies for pipe installation. In this thesis a new threshold break rate criterion for the optimal replacement of pipes is provided. As opposed to the traditional present worth cost (PWC) criterion, the derived method uses the equivalent uniform annualized cost (EUAC). It is shown that the EUAC-based threshold break rate subsumes the PWC-based threshold break rate. In addition, practicing engineers need a user-friendly decision support system to aid in the optimal pipeline replacement process. They also need a task-by-task cost evaluation in a project. As a part of this thesis, a comprehensive decision support system that includes both a technology selection knowledge base and a cost evaluation spreadsheet program within a graphical user interface framework is developed. Numerical examples illustrating the theoretical derivations are also included.
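The EUAC idea can be illustrated with a simple cash-flow sketch: under the common assumption (after Shamir and Howard) that breaks grow exponentially with pipe age, annualize the present worth of repairs plus the replacement outlay for each candidate replacement year and pick the minimum. This is an illustrative model with made-up costs, not the threshold break rate derivation in the thesis.

```python
import math

# Sketch: choosing a replacement year by equivalent uniform annualized cost
# (EUAC). Breaks grow exponentially, N(t) = n0 * exp(growth * t); repairs in
# years 1..T and replacement at year T are discounted to present worth and
# annualized with the capital recovery factor. All numbers are illustrative.

def euac(T, n0=0.3, growth=0.1, c_break=5000.0, c_replace=120000.0, r=0.05):
    pwc = sum(c_break * n0 * math.exp(growth * t) / (1 + r) ** t
              for t in range(1, T + 1))
    pwc += c_replace / (1 + r) ** T
    crf = r * (1 + r) ** T / ((1 + r) ** T - 1)   # capital recovery factor
    return pwc * crf

best = min(range(1, 60), key=euac)   # replacement year with the lowest EUAC
print(best, round(euac(best)))
```

Replacing too early annualizes the capital outlay over few years; replacing too late accumulates growing repair costs, so the EUAC curve has an interior minimum.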
- Computational Tools for Improved Analysis and Assessment of Groundwater Remediation Sites. Joseph, Joshua Allen Jr. (Virginia Tech, 2008-04-24). Remediation of contaminated groundwater remains a high-priority national goal in the United States. Water is essential to life, and new sources of water are needed for an expanding population. Groundwater remediation remains a significant technical challenge despite decades of research into this field. New approaches are needed to address the most severely polluted aquifers, and cost-effective solutions are required to meet remediation objectives that protect human health and the environment. Source reduction combined with Monitored Natural Attenuation (MNA) is a remediation strategy whereby the source of contamination is aggressively treated or removed and the residual groundwater plume depletes due to natural processes in the subsurface. The USEPA requires long-term performance monitoring of groundwater at MNA sites over the remediation timeframe, which often takes decades to complete. Presently, computational tools are lacking to adequately integrate source remediation with economic models. Furthermore, no framework has been developed to highlight the tradeoff between the degree of remediation and the level of benefit within a cost structure. Using the Natural Attenuation Software (NAS) package developed at Virginia Tech, a set of formulae has been developed for calculating the time of remediation (TOR) for petroleum-contaminated aquifers (specifically tracking benzene and MTBE) through statistical techniques. With knowledge of the source area residual saturation, groundwater velocity, and contaminant plume source length, the time to remediate a site contaminated with either benzene or MTBE can be determined across a range of regulatory maximum contaminant levels. After developing formulae for TOR, an integrated and interactive decision tool for framing the decision analysis component of the remediation problem was developed.
While MNA can be a stand-alone groundwater remediation technology, significant benefits may be realized by layering a more traditional source zone remedial technique with MNA. Excavation and soil vapor extraction, when applied at the front end of a remedial action plan, can decrease the time to remediation and, while generally more expensive than an MNA-only approach, may accrue long-term economic advantages that would otherwise be foregone. The value of these research components can be realized within the engineering and science communities, as well as through government, business and industry, and communities where groundwater contamination and remediation are at issue. Together, these tools constitute the S·E·E·P·AGE paradigm, founded upon the concept of sound science for an environmental engineering, effectual economics, and public policy agenda. The TOR formulation simplifies the inputs necessary to determine the number of years that an MNA strategy will require before project closure, and thus reduces the specialized skills and training required to perform a numerical analysis that, for one set of conditions, could require many hours of simulation time. The economic decision tool, which utilizes a life cycle model to evaluate a set of feasible alternatives, highlights the tradeoffs between time and economics that can be realized over the lifetime of the remedial project.
- Control of salinity intrusion caused by sea level rise. Gudmundsson, Kristinn (Virginia Tech, 1991-05-24). The objectives of this research are to take early steps to assess the potential impacts of sea level rise on our nation's estuarine environments and water resources management. Specific engineering solutions to control salinity intrusion are studied. Structural measures such as construction of tidal barriers and tidal locks, as well as long-term stream flow augmentation, are investigated for their suitability. Quantification of the extent of the impacts is accomplished by means of computer model simulations. A laterally integrated, two-dimensional, time-dependent, finite difference numerical model is used to study time-varying tidal height, current, and salinity. For a selected estuary, parametric studies on scenarios of projected sea level rise, stream flow, channel roughness, change in cross-section profile, etc., are performed in order to gain an in-depth understanding of estuarine processes for cases such as the present condition versus future sea level rise, with or without control measures. The results of the parametric studies are summarized and engineering applications of individual control methods are discussed.
- Decision Support Tool for Optimal Replacement of Plumbing Systems. Lee, Juneseok (Virginia Tech, 2004-12-10). Pinhole corrosion leaks in home plumbing have emerged as a significant issue. In the major water distribution systems managed by municipalities and water utilities, costs are distributed among all subscribers. Home plumbing repair/replacement costs and possible water damage costs, however, must be addressed by the homeowner. There are also issues of home value, insurance rates, health consequences, and taste and odor problems. These issues have become major concerns for homeowners. Cradle-to-grave life cycle assessment is becoming an integral part of industrial manufacturing. In this thesis, comprehensive details pertaining to life cycle assessment are presented. Copper tubing for plumbing installations is mainly obtained from recycled copper, and the various stages of copper plumbing pipe manufacturing are explained. A comprehensive synthesis of various corrosion mechanisms is presented, with particular reference to copper plumbing pipe corrosion. A decision support tool for replacing copper plumbing pipes is presented. The deterioration process is grouped into early, normal, and late stages. Because the available data reflect the late-stage process, optimization, neural network, and curve-fitting models are developed to infer the early- and normal-stage behavior of the plumbing system. Utilizing the inferred leak rates, a non-homogeneous Poisson process model is developed to generate leak arrival times. An economically sustainable replacement criterion is adopted to determine the optimal replacement time.
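A non-homogeneous Poisson process like the one mentioned can be simulated by thinning (the Lewis-Shedler method): generate candidate arrivals from a homogeneous process at the maximum rate and accept each with probability lambda(t)/lambda_max. The exponentially increasing rate below is a hypothetical stand-in for the deterioration curve fitted in the thesis.

```python
import math
import random

# Sketch: simulating leak arrival times from a non-homogeneous Poisson
# process by thinning. The rate function (leaks per year at pipe age t)
# is illustrative, not the one fitted in the thesis.

def nhpp_arrivals(rate, rate_max, horizon, rng):
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(rate_max)          # candidate inter-arrival time
        if t > horizon:
            return arrivals
        if rng.random() < rate(t) / rate_max:   # accept with prob rate(t)/max
            arrivals.append(t)

rng = random.Random(42)
rate = lambda t: 0.05 * math.exp(0.1 * t)       # increasing leak rate with age
leaks = nhpp_arrivals(rate, rate_max=0.05 * math.exp(0.1 * 30),
                      horizon=30.0, rng=rng)
print(len(leaks))
```

Repeated simulated leak histories like this can then feed a replacement-time criterion that trades repair costs against replacement cost.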
- Development of a continuous, physically-based distributed parameter, nonpoint source modelBouraoui, Faycal (Virginia Tech, 1994-04-18)ANSWERS, an event-oriented, distributed parameter nonpoint source pollution model for simulating runoff and sediment transport, was modified into a continuous nonpoint source model that simulates runoff, erosion, transport of dissolved and sediment-bound nutrients, and nutrient transformations. The model was developed for use by nonpoint source pollution managers to study the long-term effectiveness of best management practices (BMPs) in reducing runoff, sediment, and nutrient losses from agricultural watersheds. Holtan's infiltration equation, used in the original version of ANSWERS, was replaced by the physically-based Green-Ampt infiltration equation. Soil evaporation and plant transpiration are modeled separately using the Ritchie equation. If soil moisture exceeds field capacity, the model computes percolation based on the degree of soil saturation. Nutrient losses include nitrate; sediment-bound and dissolved ammonium; sediment-bound TKN; and sediment-bound and dissolved phosphorus. A linear equilibrium is assumed between the dissolved and sediment-bound phases of ammonium and phosphorus, and nutrient loss is assumed to occur only from the upper centimeter of the soil profile. The model simulates transformations and interactions among four nitrogen pools: stable organic N, active organic N, nitrate, and ammonium. Transformations of nitrogen include mineralization (simulated as a combination of ammonification and nitrification), denitrification, and plant uptake of ammonium and nitrate. The model maintains a dynamic equilibrium between the stable and active organic N pools.
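The Green-Ampt equation adopted above is implicit in cumulative infiltration F, so it is typically solved iteratively. The sketch below shows one textbook fixed-point formulation under ponded conditions; it is a generic illustration, not the thesis's implementation, and the soil parameter values are illustrative.

```python
import math

def green_ampt_F(K, psi, dtheta, t, tol=1e-8):
    """Cumulative infiltration F (cm) after time t (h) under ponding.

    Solves the implicit Green-Ampt relation
        F = K*t + psi*dtheta * ln(1 + F/(psi*dtheta))
    by fixed-point iteration.  K is saturated conductivity (cm/h),
    psi the wetting-front suction head (cm), dtheta the moisture deficit.
    """
    s = psi * dtheta
    F = max(K * t, 1e-6)                 # initial guess
    while True:
        F_new = K * t + s * math.log(1 + F / s)
        if abs(F_new - F) < tol:
            return F_new
        F = F_new

def green_ampt_rate(K, psi, dtheta, F):
    """Infiltration capacity f = K * (1 + psi*dtheta / F)."""
    return K * (1 + psi * dtheta / F)
```

The iteration converges because the right-hand side is a contraction in F; the capacity `f` then decays toward K as the wetted depth grows, which is the physical behavior that motivated replacing Holtan's empirical equation.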
- Dynamic water quality modeling using cellular automataCastro, Antonio Paulo (Virginia Tech, 1996-08-16)Parallel computing has recently appeared as an alternative approach to increasing computing performance. In the world of engineering and scientific computing, the efficient use of parallel computers depends on the availability of methodologies capable of exploiting the new computing environment. The research presented here focused on a modeling approach, known as cellular automata (CA), which is characterized by a high degree of parallelism and is thus well suited to implementation on parallel processors. The inherent parallelism also exhibited by the random-walk particle method provided a suitable basis for the development of a CA water quality model, and the random-walk particle method was successfully represented using a CA-based approach. The CA approach requires the definition of transition rules, with each rule representing a water quality process; the basic processes of interest in this research were advection, dispersion, and first-order decay. Due to the discrete nature of CA, the rule for advection introduces considerable numerical dispersion. However, the magnitude of this numerical dispersion can be minimized by proper selection of model parameters, namely the cell size and the time step. The rule for dispersion is likewise affected by numerical dispersion, but, in contrast to advection, a procedure was developed that eliminates significant numerical dispersion associated with the dispersion rule. For first-order decay, a rule was derived that describes the decay process without the limitations of a similar approach previously reported in the literature. Because they are independent, the rules developed for advection, dispersion, and decay are well suited to implementation using a time-splitting approach.
Through validation of the CA methodology as an integrated water quality model, the methodology was shown to adequately simulate one- and two-dimensional, single- and multiple-constituent, steady-state and transient, and spatially invariant and variant systems. The CA results show good agreement with corresponding results from differential-equation-based models. The CA model was found to be simpler to understand and implement than traditional numerical models. The CA model was easily implemented on a MIMD distributed-memory parallel computer (Intel Paragon); however, poor performance was obtained.
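The time-split advection, dispersion, and decay rules described above have a close analogue in the random-walk particle method the thesis builds on. The sketch below is that analogue, not the thesis's cell-based CA formulation: each particle is advected deterministically, displaced randomly for dispersion, and removed probabilistically for first-order decay, one process per sub-step.

```python
import math
import random

def rw_step(positions, u, D, k, dt, rng=random):
    """One time-split step of a random-walk particle model.

    Each particle is advected by u*dt, given a dispersive displacement
    drawn from N(0, sqrt(2*D*dt)), and survives first-order decay with
    probability exp(-k*dt).
    """
    sigma = math.sqrt(2 * D * dt)
    out = []
    for x in positions:
        x += u * dt                           # advection
        x += rng.gauss(0.0, sigma)            # dispersion
        if rng.random() < math.exp(-k * dt):  # first-order decay
            out.append(x)
    return out
```

Because every particle is updated independently, the loop body parallelizes trivially, which is the property that makes both this method and its CA counterpart attractive on machines like the Paragon.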
- The Effect of Chlorine and Chloramines on the Viability and Activity of Nitrifying BacteriaZaklikowski, Anna Emilia (Virginia Tech, 2006-05-05)Nitrification is a significant concern for drinking water systems employing chloramines for secondary disinfection. Utilities have implemented a range of disinfection strategies with varying levels of effectiveness in preventing and controlling nitrification events, including optimizing the chlorine-to-ammonia ratio, maintaining a chloramine residual throughout the distribution system, controlling pH, and temporarily switching to free chlorination. Annual or semi-annual application of free chlorination is practiced by 23% of chloraminating systems as a preventative measure, even though it has the undesirable consequences of temporarily increasing disinfection byproducts, facilitating coliform detachment, and altering water taste and odor. Although temporary free chlorination and other nitrification control methods have been widely studied in the field and in pilot-scale systems, very little is known about the stress responses of nitrifying bacteria to different disinfection strategies and the role physiological state plays in resistance to disinfection. It is well known that many commonly studied bacteria, such as Escherichia coli, are better able to resist disinfection by free chlorine and chloramines under nutrient limitation through regulation of stress response genes that encode for DNA protection and for enzymes that mediate reactive oxygen species. We compared the genomes of E. coli and the ammonia-oxidizing bacterium Nitrosomonas europaea, and found that many of the known stress response mechanisms and genes present in E. coli are absent in N. europaea or are not controlled by the same growth-state-specific mechanisms. These genetic differences suggest a general susceptibility of N. europaea to disinfection by chlorine compounds.
Using an experimental approach, we tested the hypothesis that N. europaea does not develop increased resistance to free chlorine and monochloramine during starvation to the same degree as E. coli. In addition, N. europaea cells were challenged with sequential treatments of monochloramine and hypochlorous acid to mimic the disinfectant switch employed by drinking water utilities. Indicators of activity (specific nitrite generation rate) and viability (the LIVE/DEAD® BacLight™ membrane-integrity-based assay) were measured to determine the short-term effectiveness of disinfection and the recovery of cells over a twelve-day monitoring period. The results of the disinfectant challenge experiments reinforce the hypothesis, indicating that the response of N. europaea to either disinfectant does not significantly change during the transition from exponential phase to stationary phase. Exponentially growing N. europaea cells showed greater susceptibility to hypochlorous acid and monochloramine than stationary-phase E. coli cells, but had increased resistance compared with exponential-phase E. coli cells. Following incubation with monochloramine, N. europaea showed increased sensitivity to subsequent treatment with hypochlorous acid. Complete loss of ammonia-oxidation activity was observed in cells immediately following treatment with hypochlorous acid, monochloramine, or a combination of both disinfectants. Replenishing ammonia and nutrients did not invoke recovery of cells, as detected in activity measurements during the twelve-day monitoring period. The results provide evidence for the effectiveness of both free chlorine and chloramines in inhibiting growth and ammonia-oxidation activity in N. europaea. Furthermore, comparison of viability and activity measurements suggests that the membrane-integrity-based stain does not serve as a good indicator of activity.
These insights into the responses of pure culture nitrifying bacteria to free chlorine and monochloramine could prove useful in designing disinfection strategies effective in the control of nitrification.
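The abstract reports disinfection effectiveness qualitatively. For context, the classical quantitative framework for such dose-response behavior, not used in the abstract itself, is the Chick-Watson model, in which log-inactivation is proportional to disinfectant concentration raised to a power times contact time. A minimal sketch, with illustrative parameter values:

```python
import math

def chick_watson_survival(k, C, t, n=1.0):
    """Surviving fraction N/N0 under the Chick-Watson model:
        ln(N/N0) = -k * C**n * t
    k : inactivation rate constant (for n=1, units L/(mg*min))
    C : disinfectant residual (mg/L), assumed constant over contact time t (min)
    n : coefficient of dilution (n=1 gives classical Chick's law scaling)
    """
    return math.exp(-k * C ** n * t)
```

Fitting k separately for free chlorine and monochloramine against a given organism is how challenge data like those above are often reduced to comparable rate constants.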
- Effect of Spatial Scale on Hydrologic Modeling in a Headwater CatchmentFedak, Ryan Michael (Virginia Tech, 1999-01-29)In this study, two hydrologic models were applied to the mountainous Back Creek catchment, located in the headwaters of the Roanoke River in Southwest Virginia. The two models employed were HEC-1, an event-based lumped model, and TOPMODEL, a continuous semi-distributed model. These models were used to investigate (a) the issue of spatial scale in hydrologic modeling, and (b) two approaches to modeling, continuous versus event-based. Two HEC-1 models were developed, each with a different number of subareas. The hydrographs generated by each HEC-1 model for a number of large rainfall events were analyzed visually and statistically. No observable improvement resulted from increasing the number of subareas in the HEC-1 models from 20 to 81. TOPMODEL was applied to the same watershed using a series of different grid cell sizes. The first step in applying TOPMODEL to a watershed involves GIS analysis, which results in a raster grid of elevations used to calculate the topographic index, ln(a/tan b). The hydrographs generated by TOPMODEL with each grid cell size were compared in order to assess their sensitivity to grid cell size. An increase in grid cell size from 15 to 120 meters resulted in increased values of the watershed mean of the topographic index. However, hydrographs generated by TOPMODEL were completely unaffected by this increase in the topographic index. Analyses were also performed to determine the sensitivity of TOPMODEL hydrographs to several model parameters; the parameters with the greatest effect were m and ln(To). The modeling performances of the event-based HEC-1 and the continuous TOPMODEL were analyzed and compared visually and statistically for a number of large storms.
The limited number of storms used to compare HEC-1 and TOPMODEL makes it difficult to determine definitively which model simulates large storms better, though HEC-1 appears slightly superior in that regard. TOPMODEL was also executed as an event-based model for two single events, and the resulting hydrographs were compared to the HEC-1 and continuous TOPMODEL results. Both HEC-1 and TOPMODEL used as a continuous model simulate large storms better than TOPMODEL used as an event-based model.
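TOPMODEL's topographic index ln(a/tan b), central to the scale analysis above, combines upslope contributing area per unit contour length (a) with local slope (tan b). The sketch below computes it for a small DEM using a single steepest-descent (D8-style) flow direction; the edge-drainage and flat-cell handling are simplifying assumptions of this illustration, not the thesis's GIS procedure.

```python
import math

def topographic_index(dem, cellsize):
    """ln(a / tan b) for each cell of a DEM grid (D8 single flow direction).

    a     : upslope contributing area per unit contour length
            (accumulated area / cellsize)
    tan b : slope toward the steepest downslope neighbour.
    Edge cells drain off-grid; flat cells get a small minimum slope.
    """
    rows, cols = len(dem), len(dem[0])
    area = [[cellsize * cellsize] * cols for _ in range(rows)]
    slope = [[0.0] * cols for _ in range(rows)]
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    # route accumulated area from the highest cell to the lowest (D8)
    order = sorted(((dem[r][c], r, c)
                    for r in range(rows) for c in range(cols)), reverse=True)
    for z, r, c in order:
        best, target = 0.0, None
        for dr, dc in nbrs:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                s = (z - dem[rr][cc]) / (cellsize * math.hypot(dr, dc))
                if s > best:
                    best, target = s, (rr, cc)
        slope[r][c] = max(best, 1e-4)     # slope floor for flat cells
        if target:
            area[target[0]][target[1]] += area[r][c]
    return [[math.log((area[r][c] / cellsize) / slope[r][c])
             for c in range(cols)] for r in range(rows)]
```

Coarsening the grid inflates per-cell contributing areas faster than it steepens resolved slopes, which is consistent with the study's finding that larger cells raise the watershed mean of the index.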