Computational Science Laboratory
The mission of the Computational Science Laboratory (CSL) is to
develop innovative computational solutions for complex real-world problems, and to
foster a productive research and education environment emphasizing collaboration and innovation.
Browsing Computational Science Laboratory by Issue Date
Now showing items 1-20 of 131.
- Multirate timestepping methods for hyperbolic conservation laws
  Constantinescu, Emil M.; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006)
  This paper constructs multirate time discretizations for hyperbolic conservation laws that allow different time steps to be used in different parts of the spatial domain. The discretization is second-order accurate in time and preserves the conservation and stability properties under local CFL conditions. Multirate timestepping avoids the need to take small global time steps (restricted by the largest value of the Courant number on the grid) and therefore yields more efficient algorithms.
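  A minimal sketch of the multirate idea, assuming first-order upwind fluxes and a time-averaged interface coupling (the paper's scheme is second order; all names and the setup below are illustrative): cells with a locally large Courant number take m substeps while the rest of the domain takes one large step, and mass is conserved because both neighbors of every edge see the same time-integrated flux.

      import numpy as np

      def multirate_step(u, a, dx, dt, m, fast):
          """Fast cells take m substeps of dt/m; slow cells take one step of dt
          using the time-averaged edge fluxes, so total mass is conserved."""
          Fint = np.zeros(u.size + 1)          # time-integrated flux per cell edge
          w = u.copy()
          for _ in range(m):
              F = np.empty(u.size + 1)
              F[1:] = a * w                    # first-order upwind flux (a > 0)
              F[0] = F[-1]                     # periodic domain
              Fint += F * (dt / m)
              w[fast] -= (dt / m) / dx * (F[1:][fast] - F[:-1][fast])
          w[~fast] = u[~fast] - (Fint[1:][~fast] - Fint[:-1][~fast]) / dx
          return w

      N, m = 200, 4
      dx = 1.0 / N
      x = (np.arange(N) + 0.5) * dx
      a = np.where((x > 0.4) & (x < 0.6), 4.0, 1.0)  # locally large Courant number
      fast = a > 1.0
      u = np.exp(-200.0 * (x - 0.2) ** 2)
      dt = 0.9 * dx                                   # satisfies CFL only where a = 1
      for _ in range(100):
          u = multirate_step(u, a, dx, dt, m, fast)
      # u.sum() * dx is constant to round-off: the edge-flux bookkeeping telescopes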
- Technical note: Simulating chemical systems in Fortran90 and Matlab with the Kinetic PreProcessor KPP-2.1
  Sandu, Adrian; Sander, Rolf (Copernicus Publications, 2006-01-01)
  This paper presents the new version 2.1 of the Kinetic PreProcessor (KPP). Taking a set of chemical reactions and their rate coefficients as input, KPP generates Fortran90, Fortran77, Matlab, or C code for the temporal integration of the kinetic system. Efficiency is obtained by carefully exploiting the sparsity structures of the Jacobian and of the Hessian. A comprehensive suite of stiff numerical integrators is also provided. Moreover, KPP can be used to generate the tangent linear model, as well as the continuous and discrete adjoint models of the chemical system.
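  For flavor, a hand-written analogue of what KPP automates: the standard Robertson kinetic test problem with its analytic Jacobian handed to a stiff integrator (KPP generates such right-hand-side and Jacobian routines, with sparsity exploited, from a reaction listing; this sketch uses SciPy's Radau solver instead).

      import numpy as np
      from scipy.integrate import solve_ivp

      k1, k2, k3 = 0.04, 3.0e7, 1.0e4        # standard Robertson rate constants

      def f(t, y):
          a, b, c = y
          return [-k1 * a + k3 * b * c,
                  k1 * a - k3 * b * c - k2 * b * b,
                  k2 * b * b]

      def jac(t, y):                          # analytic Jacobian, the kind KPP emits
          a, b, c = y
          return [[-k1,  k3 * c,              k3 * b],
                  [ k1, -k3 * c - 2 * k2 * b, -k3 * b],
                  [0.0,  2 * k2 * b,          0.0]]

      sol = solve_ivp(f, (0.0, 1.0e5), [1.0, 0.0, 0.0], method="Radau",
                      jac=jac, rtol=1.0e-8, atol=1.0e-10)
      print(sol.y[:, -1])                     # species concentrations at t = 1e5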
- Ensemble-based chemical data assimilation II: Real observations
  Constantinescu, Emil M.; Sandu, Adrian; Chai, Tianfeng; Carmichael, Gregory R. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006-03-01)
  Data assimilation is the process of integrating observational data and model predictions to obtain an optimal representation of the state of the atmosphere. As more chemical observations in the troposphere become available, chemical data assimilation is expected to play an essential role in air quality forecasting, similar to the role it has in numerical weather prediction. Considerable progress has been made recently in the development of variational tools for chemical data assimilation. In this paper we assess the performance of the ensemble Kalman filter (EnKF) and compare it with a state-of-the-art 4D-Var approach. We analyze different aspects that affect the assimilation process, explore several ways to avoid filter divergence, and investigate the assimilation of emissions. Results with a real model and real observations show that EnKF is a promising approach for chemical data assimilation. The results also point to several issues that require further research.
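  A minimal sketch of the perturbed-observations EnKF analysis step assessed here (the dense-matrix formulation and all dimensions are illustrative; operational codes never form the sample covariance explicitly):

      import numpy as np

      def enkf_analysis(X, y, H, R, rng):
          """X: n x m state ensemble, y: p observations, H: p x n observation
          operator, R: p x p observation-error covariance."""
          m = X.shape[1]
          A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
          P = A @ A.T / (m - 1)                       # sample background covariance
          K = np.linalg.solve(H @ P @ H.T + R, H @ P).T   # Kalman gain
          # each member assimilates its own perturbed copy of the observations
          Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
          return X + K @ (Y - H @ X)

      rng = np.random.default_rng(0)
      X = rng.standard_normal((40, 25))               # 40 states, 25 members
      H = np.eye(5, 40)                               # observe the first 5 states
      R = 0.1 * np.eye(5)
      Xa = enkf_analysis(X, np.ones(5), H, R, rng)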
- Ensemble-based Chemical Data Assimilation III: Filter Localization
  Constantinescu, Emil M.; Sandu, Adrian; Chai, Tianfeng; Carmichael, Gregory R. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006-03-01)
  Data assimilation is the process of integrating observational data and model predictions to obtain an optimal representation of the state of the atmosphere. As more chemical observations in the troposphere become available, chemical data assimilation is expected to play an essential role in air quality forecasting, similar to the role it has in numerical weather prediction. Considerable progress has been made recently in the development of variational tools for chemical data assimilation. In this paper we implement and assess the performance of a localized "perturbed observations" ensemble Kalman filter (LEnKF). We analyze different settings of the ensemble localization, and investigate the joint assimilation of the state, emissions, and boundary conditions. Results with a real model and real observations show that LEnKF is a promising approach for chemical data assimilation. The results also point to several issues that require further research.
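  The covariance-localization idea behind LEnKF, in minimal form (a Gaussian taper stands in here for the usual Gaspari-Cohn function; coordinates and the length scale L are illustrative):

      import numpy as np

      def localize(P, coords, L):
          """Schur (elementwise) product of the sample covariance with a smooth,
          distance-based taper that zeroes out long-range sample correlations."""
          d = np.abs(coords[:, None] - coords[None, :])
          return np.exp(-(d / L) ** 2) * P

  Using the localized covariance inside the Kalman gain suppresses the spurious long-range correlations a small ensemble produces, which is the filter-divergence mechanism the localization settings in the paper control.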
- Ensemble-based chemical data assimilation I: An idealized setting
  Constantinescu, Emil M.; Sandu, Adrian; Chai, Tianfeng; Carmichael, Gregory R. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006-03-01)
  Data assimilation is the process of integrating observational data and model predictions to obtain an optimal representation of the state of the atmosphere. As more chemical observations in the troposphere become available, chemical data assimilation is expected to play an essential role in air quality forecasting, similar to the role it has in numerical weather prediction. Considerable progress has been made recently in the development of variational tools for chemical data assimilation. In this paper we assess the performance of the ensemble Kalman filter (EnKF). Results in an idealized setting show that EnKF is promising for chemical data assimilation.
- Forward, Tangent Linear, and Adjoint Runge Kutta Methods in KPP-2.2 for Efficient Chemical Kinetic Simulations
  Sandu, Adrian; Miehe, Philipp (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006-07-01)
  The Kinetic PreProcessor (KPP) is a widely used software environment that generates Fortran90, Fortran77, Matlab, or C code for the simulation of chemical kinetic systems. High computational efficiency is attained by exploiting the sparsity patterns of the Jacobian and Hessian. In this paper we report on the implementation of two new families of stiff numerical integrators in the new version 2.2 of KPP: fully implicit three-stage Runge Kutta methods and singly diagonally implicit Runge Kutta methods. For each family, tangent linear models for direct decoupled sensitivity analysis and adjoint models for adjoint sensitivity analysis of chemical kinetic systems are also implemented. To the best of our knowledge, this work provides the first implementation of the direct decoupled sensitivity method and of the discrete adjoint sensitivity method with Runge Kutta methods. Numerical experiments with a chemical system used in atmospheric chemistry illustrate the power of the stiff Runge Kutta integrators and their tangent linear and discrete adjoint models. Through their integration in KPP-2.2, these numerical techniques become easily available to a wide community interested in the simulation of chemical kinetic systems.
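  A sketch of the continuous sensitivity (tangent linear) equations S' = (df/dy) S that such tangent linear models discretize, on a toy two-species system (the system, its parameters, and the use of a generic SciPy integrator are illustrative; KPP's direct decoupled approach advances these equations with its own stiff Runge Kutta integrators):

      import numpy as np
      from scipy.integrate import solve_ivp

      a, b, c = 1.0, 0.5, 0.8                    # illustrative rate parameters

      def f(t, y):                               # toy nonlinear "kinetic" system
          return [a * y[0] - b * y[0] * y[1],
                  b * y[0] * y[1] - c * y[1]]

      def jac(t, y):
          return np.array([[a - b * y[1], -b * y[0]],
                           [b * y[1],      b * y[0] - c]])

      def tlm_rhs(t, z):                         # state plus sensitivity matrix S
          y, S = z[:2], z[2:].reshape(2, 2)
          return np.concatenate([f(t, y), (jac(t, y) @ S).ravel()])

      z0 = np.concatenate([[2.0, 1.0], np.eye(2).ravel()])
      sol = solve_ivp(tlm_rhs, (0.0, 5.0), z0, rtol=1e-10, atol=1e-10)
      S = sol.y[2:, -1].reshape(2, 2)            # S = d y(T) / d y(0)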
- Stabilized Explicit Time Integration for Parallel Air Quality Models
  Srivastava, Anurag (Virginia Tech, 2006-08-18)
  Air quality models predict and simulate air pollutant concentrations over a certain period of time. The predictions can be used in setting limits for the emission levels of industrial facilities. The input data for air quality models are very large and encompass various environmental conditions such as wind speed, turbulence, temperature, and cloud density. Most air quality models are based on advection-diffusion equations. These differential equations are moderately stiff and require appropriate techniques for fast integration over large intervals of time. Implicit time stepping techniques, being unconditionally stable, are considered suitable for the solution. However, implicit time stepping imposes data dependencies that can make the parallelization of air quality models inefficient. The approach taken here uses the Runge Kutta Chebyshev explicit method for the solution of the advection-diffusion equations. It is found that even though the explicit method is computationally more expensive in serial execution, it takes less execution time when parallelized because of the simpler data dependencies of explicit time stepping. Implicit time stepping, on the other hand, cannot be parallelized as efficiently because of its inherently complicated data dependencies.
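  A sketch of the stabilized-explicit idea behind Runge-Kutta-Chebyshev methods, using the undamped first-order Chebyshev recursion for brevity (production RKC adds damping and a second-order formula; the heat-equation test and all constants are illustrative). The s-stage step is stable on the real interval [-2*s**2, 0], so stability is bought with stages growing like sqrt(stiffness) instead of with tiny steps.

      import numpy as np

      def cheb_step(f, y, h, s):
          """One undamped, first-order Chebyshev step with s stages."""
          y0, y1 = y, y + (h / s**2) * f(y)
          for _ in range(2, s + 1):                 # three-term Chebyshev recurrence
              y0, y1 = y1, 2.0 * y1 - y0 + (2.0 * h / s**2) * f(y1)
          return y1

      # 1-D heat equation as a moderately stiff stand-in for diffusion terms
      N = 100
      dx = 1.0 / N
      rho = 4.0 / dx**2                             # spectral radius bound of the Laplacian
      def f(u):
          du = np.zeros_like(u)
          du[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
          return du

      h = 50.0 * 2.0 / rho                          # 50x the forward Euler limit
      s = int(np.ceil(np.sqrt(h * rho / 2.0))) + 1  # 9 stages replace 50 Euler steps
      u = np.sin(np.pi * np.linspace(0.0, 1.0, N + 1))
      for _ in range(20):
          u = cheb_step(f, u, h, s)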
- Autoregressive Models of Background Errors for Chemical Data Assimilation
  Constantinescu, Emil M.; Chai, Tianfeng; Sandu, Adrian; Carmichael, Gregory R. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2006-10-01)
  The task of providing an optimal analysis of the state of the atmosphere requires the development of dynamic data-driven systems that efficiently integrate the observational data and the models. Data assimilation (DA) is the process of adjusting the states or parameters of a model in such a way that its outcome (prediction) is close, in some distance metric, to the observed (real) states. It is widely accepted that a key ingredient of successful data assimilation is a realistic estimation of the background error distribution. This paper introduces a new method for estimating background errors, modeled as autoregressive processes. The proposed approach is computationally inexpensive and captures the error correlations along the flow lines.
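  The one-dimensional AR(1) building block of such models, as a sketch (the grid, alpha, and sigma are illustrative; the paper constructs the processes along flow lines rather than along a fixed grid line):

      import numpy as np

      def ar1_background_errors(n, m, alpha, sigma, rng):
          """Draw m background-error samples on n grid points from a stationary
          AR(1) model e[i] = alpha*e[i-1] + sqrt(1-alpha**2)*w[i], w ~ N(0,1).
          The implied covariance is B[i,j] = sigma**2 * alpha**|i-j|."""
          e = np.empty((n, m))
          e[0] = rng.standard_normal(m)
          for i in range(1, n):
              e[i] = alpha * e[i - 1] + np.sqrt(1.0 - alpha**2) * rng.standard_normal(m)
          return sigma * e

  Sampling costs O(n) per member and the inverse of the implied covariance is tridiagonal, which is what makes this class of background-error models computationally inexpensive.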
- Efficient Uncertainty Quantification with the Polynomial Chaos Method for Stiff Systems
  Cheng, Haiyan; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  The polynomial chaos method has been widely adopted as a computationally feasible approach for uncertainty quantification. Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a nonlinear system of equations at every time step. Using the Galerkin approach, the size of the system state increases from n to S x n, where S is the number of polynomial chaos basis functions, and solving such systems with full linear algebra causes the computational cost to increase from O(n^3) to O(S^3 n^3). The S^3-fold increase can make the computational cost prohibitive. This paper explores computationally efficient uncertainty quantification techniques for stiff systems using the Galerkin, collocation, and collocation least-squares formulations of polynomial chaos. In the Galerkin approach, we propose a modification of the implicit time stepping process that uses an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a run time reduction with a small impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach has accuracy similar to the Galerkin approach, is more efficient, and does not require any modification of the original code.
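  The collocation least-squares formulation in one stochastic dimension, as a sketch (the test function g and the node count are illustrative assumptions; in the paper the evaluations come from runs of the stiff model):

      import numpy as np
      from math import factorial
      from numpy.polynomial import hermite_e as He
      from scipy.stats import norm, qmc

      g = lambda xi: np.sin(1.0 + 0.5 * xi)         # stand-in for the model output
      deg = 6
      # low-discrepancy Halton points mapped to Gaussian collocation nodes
      xi = norm.ppf(qmc.Halton(d=1, seed=0).random(64)).ravel()
      V = He.hermevander(xi, deg)                   # probabilists' Hermite basis
      c, *_ = np.linalg.lstsq(V, g(xi), rcond=None) # collocation least-squares fit

      mean = c[0]                                   # E[He_k] = 0 for k >= 1
      var = sum(c[k]**2 * factorial(k) for k in range(1, deg + 1))  # E[He_k^2] = k!

  Taking more nodes than basis functions (64 vs. 7) gives the least-squares variant; making them equal recovers plain collocation. Neither touches the model's time stepping code, which is the non-intrusiveness the paper exploits for stiff systems.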
- A Polynomial Chaos Based Bayesian Approach for Estimating Uncertain Parameters of Mechanical Systems – Part II: Applications to Vehicle Systems
  Blanchard, Emmanuel; Sandu, Adrian; Sandu, Corina (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  This is the second part of a two-part article. In the first part, a new computational approach for parameter estimation was proposed based on the application of polynomial chaos theory. The maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. In this part, the new parameter estimation method is illustrated on a nonlinear four-degree-of-freedom roll plane model of a vehicle in which an uncertain mass with an uncertain position is added on the roll bar. The value of the mass and its position are estimated from periodic observations of the displacements and velocities across the suspensions. Appropriate excitations are needed in order to obtain accurate results. For some excitations, different combinations of uncertain parameters lead to essentially the same time responses, and no estimation method can work without additional information; regularization techniques can still yield the most likely values among the combinations of uncertain parameters that result in the same time responses as the ones observed. With appropriate excitations, the results obtained with this approach are close to the actual values of the parameters. The accuracy of the estimates is sensitive to the number of terms used in the polynomial expressions and to the number of collocation points, so the method may become computationally expensive when very high accuracy is desired. However, the noise level in the measurements affects the accuracy of the estimates as well, so it is usually not necessary to use a large number of terms or a very large number of collocation points: beyond a certain precision, adding terms affects the results less than the measurement noise does. Possible applications of this theory to the field of vehicle dynamics simulations include the estimation of mass, inertia properties, and other parameters of interest.
- On Consistency Properties of Discrete Adjoint Linear Multistep Methods
  Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  In this paper we analyze the consistency properties of discrete adjoints of linear multistep methods. Discrete adjoints are very popular in optimization and control since they can be constructed automatically by reverse mode automatic differentiation. The consistency analysis reveals that the discrete linear multistep adjoints are, in general, inconsistent approximations of the adjoint ODE solution along the trajectory. However, the discrete adjoints at the initial time (and therefore the discrete adjoint gradients) converge to the adjoint ODE solution with the same order as the original linear multistep method. Discrete adjoints inherit the zero-stability properties of the forward method. Numerical results confirm the theoretical findings.
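  For reference, the continuous objects the consistency analysis compares against, in standard notation (not reproduced from the paper): for y' = f(y), y(0) = y_0, and a cost Psi = g(y(T)), the adjoint system is

      \lambda'(t) = -\left(\frac{\partial f}{\partial y}(y(t))\right)^{T} \lambda(t),
      \qquad
      \lambda(T) = \left(\frac{\partial g}{\partial y}(y(T))\right)^{T},
      \qquad
      \nabla_{y_0}\Psi = \lambda(0).

  The paper's result is that the discrete linear multistep adjoint variables need not approximate lambda(t) at interior times, yet lambda(0), the gradient, still converges at the order of the underlying method.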
- Uncertainty Quantification and Apportionment in Air Quality Models using the Polynomial Chaos Method
  Cheng, Haiyan; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  Simulations of large-scale physical systems are often affected by uncertainties in data and in model parameters, and by incomplete knowledge of the underlying physics. Traditional deterministic simulations do not account for such uncertainties. It is of interest to extend simulation results with "error bars" that quantify the degree of uncertainty; this added information provides a confidence level for the simulation result. For example, an air quality forecast with associated uncertainty information is very useful for making policy decisions regarding environmental protection. Techniques such as Monte Carlo (MC) and response surface methods are popular for uncertainty quantification, but accurate results require a large number of runs. This incurs a high computational cost, which may be prohibitive for large-scale models. The polynomial chaos (PC) method was proposed as a practical and efficient approach for uncertainty quantification and has been successfully applied in many engineering fields. Polynomial chaos uses a spectral representation of uncertainty and can handle both linear and nonlinear problems with either Gaussian or non-Gaussian uncertainties. This work extends the functionality of the polynomial chaos method to Source Uncertainty Apportionment (SUA), i.e., we use the polynomial chaos approach to attribute the uncertainty in model results to different sources of uncertainty. The uncertainty quantification and source apportionment are implemented in the Sulfur Transport Eulerian Model (STEM-III). This allows us to assess the combined effects of the different sources of uncertainty on the ozone forecast, and to quantify the contribution of each source to the total uncertainty in the predicted ozone levels.
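  The apportionment step in miniature, as a sketch: fit a two-source Hermite chaos and split the PC variance according to which random source each basis term involves (the toy output u and the node count are illustrative assumptions, not the STEM-III setup):

      import numpy as np
      from scipy.stats import norm, qmc

      # total-degree-2 Hermite basis in two independent Gaussian sources
      def basis(x1, x2):
          return np.column_stack([np.ones_like(x1), x1, x2,
                                  x1**2 - 1.0, x1 * x2, x2**2 - 1.0])

      norms = np.array([1.0, 1.0, 1.0, 2.0, 1.0, 2.0])    # E[phi_k**2]
      pts = norm.ppf(qmc.Halton(d=2, seed=0).random(256))
      u = np.exp(0.4 * pts[:, 0]) + 0.3 * pts[:, 0] * pts[:, 1]  # toy model output
      c, *_ = np.linalg.lstsq(basis(pts[:, 0], pts[:, 1]), u, rcond=None)

      var = np.sum(c[1:]**2 * norms[1:])                  # total PC variance
      v1 = c[1]**2 + c[3]**2 * 2.0                        # terms in source 1 only
      v2 = c[2]**2 + c[5]**2 * 2.0                        # terms in source 2 only
      v12 = c[4]**2                                       # interaction term
      print(v1 / var, v2 / var, v12 / var)                # fractional apportionment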
- A Polynomial Chaos Based Bayesian Approach for Estimating Uncertain Parameters of Mechanical Systems – Part I: Theoretical Approach
  Blanchard, Emmanuel; Sandu, Adrian; Sandu, Corina (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  This is the first part of a two-part article. A new computational approach for parameter estimation is proposed based on the application of polynomial chaos theory. The polynomial chaos method has been shown to be considerably more efficient than Monte Carlo in the simulation of systems with a small number of uncertain parameters. In the new approach presented in this paper, the maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. Direct stochastic collocation is used as a less computationally expensive alternative to the traditional Galerkin approach for propagating the uncertainties through the system in the polynomial chaos framework. The approach is applied to very simple mechanical systems in order to illustrate how the cost function can be affected by undersampling, non-identifiability of the system, non-observability, and by excitation signals that are not rich enough. When the system is non-identifiable, regularization techniques can still yield the most likely values among the combinations of uncertain parameters that result in the same time responses as the ones observed; this is illustrated using a simple spring-mass system. Possible applications of this theory to the field of vehicle dynamics simulations include the estimation of mass, inertia properties, and other parameters of interest. In the second part of this article, the new parameter estimation method is illustrated on a nonlinear four-degree-of-freedom roll plane model of a vehicle in which an uncertain mass with an uncertain position is added on the roll bar.
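  The estimation loop in miniature, as a sketch: a polynomial surrogate of the model response fitted at collocation nodes (standing in for the polynomial chaos expansion) inside a maximum likelihood cost derived from Bayes' theorem with a flat prior. The exponential-decay "model", noise level, and parameter bounds are toy assumptions.

      import numpy as np
      from scipy.optimize import minimize_scalar
      from numpy.polynomial import polynomial as P

      t = np.linspace(0.0, 1.0, 20)
      def model(theta):                      # toy stand-in for the dynamic model
          return np.exp(-theta * t)

      theta_true, sigma = 2.5, 0.02
      rng = np.random.default_rng(0)
      y = model(theta_true) + sigma * rng.standard_normal(t.size)

      # direct collocation: polynomial surrogate of the response over the prior range
      nodes = np.linspace(1.0, 4.0, 9)
      C = P.polyfit(nodes, np.array([model(th) for th in nodes]), 5)

      def neglog(theta):                     # negative log-likelihood, up to a constant
          return 0.5 * np.sum((P.polyval(theta, C) - y) ** 2) / sigma**2

      res = minimize_scalar(neglog, bounds=(1.0, 4.0), method="bounded")
      print(res.x)                           # close to theta_true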
- Discrete Second Order Adjoints in Atmospheric Chemical Transport Modeling
  Sandu, Adrian; Zhang, Lin (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007)
  Atmospheric chemical transport models (CTMs) are essential tools for the study of air pollution, for environmental policy decisions, for the interpretation of observational data, and for producing air quality forecasts. Many air quality studies require sensitivity analyses, i.e., the computation of derivatives of the model output with respect to model parameters. The derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through adjoint sensitivity analysis. While the traditional (first order) adjoint models give the gradient of the cost functional with respect to parameters, second order adjoint models give second derivative information in the form of products between the Hessian of the cost functional and a user defined vector. In this paper we discuss the mathematical foundations of the discrete second order adjoint sensitivity method and present a complete set of computational tools for performing second order sensitivity studies in three-dimensional atmospheric CTMs. The tools include discrete second order adjoints of Runge Kutta and of Rosenbrock time stepping methods for stiff equations together with efficient implementation strategies. Numerical examples illustrate the use of these computational tools in important applications like sensitivity analysis, optimization, uncertainty quantification, and the calculation of directions of maximal error growth in three-dimensional atmospheric CTMs.
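  Why Hessian-vector products suffice: a sketch approximating H @ u from two gradient (i.e., first-order adjoint) evaluations. A second-order adjoint model delivers the same product at a similar cost but without the finite-difference error; the example cost is illustrative.

      import numpy as np

      def hessvec(grad, x, u, eps=1.0e-6):
          """Central-difference approximation of the Hessian-vector product
          H(x) @ u; the Hessian itself is never formed."""
          return (grad(x + eps * u) - grad(x - eps * u)) / (2.0 * eps)

      A = np.array([[3.0, 1.0], [1.0, 2.0]])
      grad = lambda x: A @ x                    # gradient of 0.5 * x.T @ A @ x
      print(hessvec(grad, np.zeros(2), np.array([1.0, 0.0])))  # ~ first column of A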
- Update on Multirate Timestepping Methods for Hyperbolic Conservation Laws
  Constantinescu, Emil M.; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-03-01)
  This paper constructs multirate time discretizations for hyperbolic conservation laws that allow different time steps to be used in different parts of the spatial domain. The proposed family of discretizations is second-order accurate in time and has conservation and linear and nonlinear stability properties under local CFL conditions. Multirate timestepping avoids the need to take small global time steps (restricted by the largest value of the Courant number on the grid) and therefore results in more efficient algorithms. Numerical results obtained for the advection and Burgers equations confirm the theoretical findings.
- Large-Scale Simulations Using First and Second Order Adjoints with Applications in Data Assimilation
  Zhang, Lin (Virginia Tech, 2007-06-09)
  In large-scale air quality simulations we are interested in the factors that influence changes in pollutant concentrations, and in optimization methods that improve forecasts. Both problems can be addressed with adjoint models, which compute efficiently the derivatives of a functional with respect to a large number of model parameters. In this research we employ first order adjoints in air quality simulations. Moreover, we explore theoretically the computation of second order adjoints for chemical transport models and illustrate their feasibility in several respects. We apply first order adjoints to sensitivity analysis and data assimilation. Through sensitivity analysis we can discover the area that has the largest influence on changes of ozone concentrations at a receptor. For data assimilation, we assess the performance of several optimization methods that use first order adjoints under different scenarios; the results indicate that the L-BFGS method is the most efficient. In contrast to first order adjoints, second order adjoints have not been used to date in air quality simulation. To explore their utility, we show the construction of second order adjoints for chemical transport models and demonstrate several applications, including sensitivity analysis, optimization, uncertainty quantification, and Hessian singular vectors. Since second order adjoints provide second order information in the form of Hessian-vector products instead of the entire Hessian matrix, applications that require second order derivatives become feasible for large-scale models. Finally, we conclude that second order adjoints for chemical transport models are computationally feasible and effective.
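  The optimization setup in miniature: L-BFGS driven by the gradient of a 4D-Var-like cost (here an algebraic toy where the gradient is written by hand; in the thesis it is supplied by the first-order adjoint model, and all names are illustrative):

      import numpy as np
      from scipy.optimize import minimize

      n = 50
      rng = np.random.default_rng(1)
      xb = np.zeros(n)                          # background (prior) state
      H = rng.standard_normal((10, n))          # observation operator
      y = H @ np.ones(n) + 0.1 * rng.standard_normal(10)

      def cost_and_grad(x):                     # 4D-Var-like cost and its gradient
          r = H @ x - y
          J = 0.5 * np.dot(x - xb, x - xb) + 0.5 * np.dot(r, r)
          g = (x - xb) + H.T @ r                # the part an adjoint model supplies
          return J, g

      res = minimize(cost_and_grad, xb, jac=True, method="L-BFGS-B")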
- Multirate explicit Adams methods for time integration of conservation laws
  Sandu, Adrian; Constantinescu, Emil M. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-08-01)
  This paper constructs multirate linear multistep time discretizations based on Adams-Bashforth methods. These methods are aimed at solving conservation laws and allow different time steps to be used in different parts of the spatial domain. The proposed family of discretizations is second-order accurate in time and has conservation and linear and nonlinear stability properties under local CFL conditions. Multirate timestepping avoids the need to take small global time steps, restricted by the largest value of the Courant number on the grid, and therefore results in more efficient computations. Numerical results obtained for the advection and Burgers' equations confirm the theoretical findings.
- DENSERKS: Fortran Sensitivity Solvers Using Continuous, Explicit Runge-Kutta Schemes
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-10-01)
  DENSERKS is a Fortran sensitivity equation solver package designed for integrating models whose evolution can be described by ordinary differential equations (ODEs). A salient feature of DENSERKS is its support for both forward and adjoint sensitivity analyses, with built-in integrators for both first and second order continuous adjoint models. The software implements explicit Runge-Kutta methods with adaptive timestepping and high-order dense output schemes for interpolating the forward and tangent linear model trajectories. Implementations of six Runge-Kutta methods are provided, with orders of accuracy ranging from two to eight, making DENSERKS suitable for a wide range of practical applications. The use of dense output, a novel approach in adjoint sensitivity analysis solvers, allows for high-order, cost-effective interpolation. This is a necessary feature when solving adjoints of nonlinear systems with highly accurate Runge-Kutta methods (order five and above). To minimize memory requirements and make long-time integrations computationally efficient, DENSERKS implements a two-level checkpointing mechanism. The code is tested on a selection of problems illustrating first and second order sensitivity analysis with respect to initial model conditions. The resulting derivative information is also used in a gradient-based optimization algorithm to minimize cost functionals dependent on a given set of model parameters.
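  The role dense output plays, sketched with SciPy's interpolant standing in for DENSERKS's high-order dense output (the pendulum model and the cost are illustrative): the continuous adjoint is integrated backward against an interpolated forward trajectory, so the forward solution need not be stored at every adjoint step. DENSERKS combines this with two-level checkpointing so the interpolant never has to span the whole interval.

      import numpy as np
      from scipy.integrate import solve_ivp

      def f(t, y):                              # forward model: a pendulum
          return np.array([y[1], -np.sin(y[0])])

      fwd = solve_ivp(f, (0.0, 10.0), [1.0, 0.0], dense_output=True,
                      rtol=1e-10, atol=1e-10)

      def adj_rhs(t, lam):                      # continuous first-order adjoint
          y = fwd.sol(t)                        # forward state via dense output
          J = np.array([[0.0, 1.0], [-np.cos(y[0]), 0.0]])
          return -J.T @ lam

      # cost g = y2(T): lambda(T) = dg/dy(T); lambda(0) is the gradient w.r.t. y(0)
      adj = solve_ivp(adj_rhs, (10.0, 0.0), [0.0, 1.0], rtol=1e-10, atol=1e-10)
      print(adj.y[:, -1])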
- On the discrete adjoints of adaptive time stepping algorithms
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008-04-01)
  We investigate the behavior of adaptive time stepping numerical algorithms under the reverse mode of automatic differentiation (AD). By differentiating the time step controller and the error estimator of the original algorithm, reverse mode AD generates spurious adjoint derivatives of the time steps. The resulting discrete adjoint models become inconsistent with the adjoint ODE and yield incorrect derivatives. To regain consistency, one has to cancel out the contributions of the non-physical derivatives in the discrete adjoint model. We demonstrate that the discrete adjoint models of one-step, explicit adaptive algorithms, such as the Runge-Kutta schemes, can be made consistent with their continuous analogs using simple code modifications. Furthermore, we extend the analysis to cover second order adjoint models derived through an extra forward-mode differentiation of the discrete adjoint code. Two numerical examples support the mathematical derivations.
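  In standard notation (not the paper's), the mechanism is easy to state: with the controller producing the step size h_n = H(y_n) and a step y_{n+1} = y_n + h_n * Phi(y_n, h_n), reverse-mode AD of the full step generates

      \frac{\partial y_{n+1}}{\partial y_n}
      = I + h_n \frac{\partial \Phi}{\partial y_n}
      + \left(\Phi + h_n \frac{\partial \Phi}{\partial h}\right)
        \frac{\partial H}{\partial y_n},

  and the last term is the spurious step-size derivative. Treating h_n as a constant during the reverse sweep (the simple code modification referred to above) removes it and restores consistency with the adjoint ODE.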
- Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation
  Constantinescu, Emil Mihai (Virginia Tech, 2008-05-26)
  Numerical simulation is necessary to understand natural phenomena, to make assessments and predictions in various research and engineering fields, and to develop new technologies. New algorithms are needed to take advantage of increasing computational resources and to utilize the emerging hardware and software infrastructure with maximum efficiency. Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation, anchoring the simulation in reality. In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for, but not restricted to, atmospheric chemistry applications. A generic approach for describing the structure of the uncertainty in initial conditions, applicable to the most popular data assimilation approaches, is also presented. We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework remain to be investigated in the future.