Browsing by Author "Cheng, Haiyan"
Now showing 1 - 7 of 7
- Efficient Formulation and Implementation of Data Assimilation Methods. Nino-Ruiz, Elias D.; Sandu, Adrian; Cheng, Haiyan (MDPI, 2018-07-06). This Special Issue presents efficient formulations and implementations of sequential and variational data assimilation methods. The methods address three important issues in the context of operational data assimilation: efficient implementation of localization methods, sampling methods for approximating posterior ensembles under nonlinear model errors, and adjoint-free formulations of four-dimensional variational methods.
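The editorial above mentions localization without spelling it out. As a generic, minimal sketch of covariance localization (one common technique, not necessarily the formulations in this issue), the Python snippet below tapers a sampled ensemble covariance with the Gaspari-Cohn function; the grid size, ensemble size, and radius are illustrative assumptions.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order, compactly supported correlation taper.
    r = distance / localization radius; returns weights in [0, 1]."""
    r = np.abs(np.asarray(r, dtype=float))
    w = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    w[m1] = (((-0.25 * x + 0.5) * x + 0.625) * x - 5.0 / 3.0) * x**2 + 1.0
    y = r[m2]
    w[m2] = ((((y / 12.0 - 0.5) * y + 0.625) * y + 5.0 / 3.0) * y - 5.0) * y \
            + 4.0 - 2.0 / (3.0 * y)
    return w

# Toy setup: a small ensemble on a periodic 1-D grid (sizes are illustrative).
n, n_ens, radius = 40, 10, 5.0
rng = np.random.default_rng(0)
perts = rng.standard_normal((n, n_ens))
perts -= perts.mean(axis=1, keepdims=True)
P_ens = perts @ perts.T / (n_ens - 1)        # noisy, rank-deficient sample covariance

i = np.arange(n)
dist = np.abs(i[:, None] - i[None, :])
dist = np.minimum(dist, n - dist)            # periodic grid distance
P_loc = gaspari_cohn(dist / radius) * P_ens  # Schur product damps distant correlations
```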
- Efficient Uncertainty Quantification with the Polynomial Chaos Method for Stiff Systems. Cheng, Haiyan; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007). The polynomial chaos method has been widely adopted as a computationally feasible approach for uncertainty quantification. Most studies to date have focused on non-stiff systems. When stiff systems are considered, implicit numerical integration requires the solution of a nonlinear system of equations at every time step. Using the Galerkin approach, the size of the system state increases from $n$ to $S \times n$, where $S$ is the number of polynomial chaos basis functions. Solving such systems with full linear algebra causes the computational cost to increase from $O(n^3)$ to $O(S^3 n^3)$. The $S^3$-fold increase can make the computational cost prohibitive. This paper explores computationally efficient uncertainty quantification techniques for stiff systems using the Galerkin, collocation, and collocation least-squares formulations of polynomial chaos. In the Galerkin approach, we propose a modification of the implicit time stepping process that uses an approximation of the Jacobian matrix to reduce the computational cost. The numerical results show a run-time reduction with a small impact on accuracy. In the stochastic collocation formulation, we propose a least-squares approach based on collocation at a low-discrepancy set of points. Numerical experiments illustrate that the collocation least-squares approach for uncertainty quantification has accuracy similar to that of the Galerkin approach, is more efficient, and does not require any modifications of the original code.
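A minimal sketch of the collocation least-squares idea described above, not the authors' implementation: Legendre PC coefficients of a toy one-parameter model are fit by least squares at a low-discrepancy (van der Corput) point set, using more points than coefficients. The model, truncation order, and oversampling factor are assumptions for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def van_der_corput(n_pts, base=2):
    """Low-discrepancy 1-D van der Corput sequence in (0, 1)."""
    seq = np.zeros(n_pts)
    for i in range(n_pts):
        k, f, x = i + 1, 1.0, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Toy model with one uniform uncertain parameter xi on [-1, 1].
def model(xi):
    return np.exp(0.5 * xi) * np.sin(1.0 + xi)

order = 5                                   # PC truncation order
n_pts = 3 * (order + 1)                     # oversample: more points than coefficients
xi = 2.0 * van_der_corput(n_pts) - 1.0      # map low-discrepancy points to [-1, 1]

A = legendre.legvander(xi, order)           # Legendre basis values, (n_pts, order+1)
c, *_ = np.linalg.lstsq(A, model(xi), rcond=None)   # PC coefficients by least squares

# Moments follow from the coefficients: E[P_k^2] = 1/(2k+1) for uniform inputs.
norms = 1.0 / (2.0 * np.arange(order + 1) + 1.0)
mean = c[0]
var = np.sum(c[1:] ** 2 * norms[1:])
```

Because only model evaluations at the collocation points are needed, the original simulation code is left untouched, which is the non-intrusiveness advantage the abstract refers to.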
- A Hybrid Approach to Estimating Error Covariances in Variational Data Assimilation. Cheng, Haiyan; Jardak, Mohamed; Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009-03-01). Data assimilation (DA) combines observational data with the underlying dynamical principles governing the system under observation. In this work we combine the advantages of two prominent advanced data assimilation approaches: 4D-Var and ensemble methods. The proposed method identifies the subspace spanned by the major 4D-Var error-reduction directions; these directions are then removed from the background covariance through a Galerkin-type projection. This generates updated error covariance information at both end points of an assimilation window. The error covariance information is updated between assimilation windows to capture the ``error of the day''. Numerical results on a nonlinear model demonstrate how the updated background covariance matrix leads to an error covariance update that improves the 4D-Var DA results.
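The report's projection formula is not reproduced in the abstract. Assuming the identified 4D-Var error-reduction directions are collected in an orthonormal basis V, one natural reading of a ``Galerkin-type projection'' is the sketch below, which removes the subspace spanned by V from a background covariance B; all matrices are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 30, 4                          # state size, number of retained directions

# Illustrative static background covariance with decaying correlations.
i = np.arange(n)
B = np.exp(-np.abs(i[:, None] - i[None, :]) / 3.0)

# Stand-in for the dominant 4D-Var error-reduction directions; in practice
# they would be extracted from the minimization, here random and orthonormal.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Galerkin-type projection: remove the subspace spanned by V from B, so the
# updated covariance reflects the error reduction achieved along V.
P = np.eye(n) - V @ V.T               # projector onto the orthogonal complement
B_updated = P @ B @ P.T
```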
- A Hybrid Variational/Ensemble Filter Approach to Data Assimilation. Sandu, Adrian; Cheng, Haiyan (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009-08-01). Two families of methods are widely used in data assimilation: the four-dimensional variational (4D-Var) approach and the ensemble Kalman filter (EnKF) approach. The two families have been developed largely through parallel research efforts, and each method has its advantages and disadvantages. It is of interest to combine the two approaches and develop hybrid data assimilation algorithms. This paper investigates the theoretical equivalence between the suboptimal 4D-Var method (where only a small number of optimization iterations are performed) and the practical EnKF method (where only a small number of ensemble members are used) in a linear Gaussian setting. The analysis motivates a new hybrid algorithm: the optimization directions obtained from a short-window 4D-Var run are used to construct the EnKF initial ensemble. Numerical results show that the proposed hybrid ensemble filter method performs better than the regular EnKF method for both linear and nonlinear test problems.
- Revision of TR-09-25: A Hybrid Variational/Ensemble Filter Approach to Data Assimilation. Sandu, Adrian; Cheng, Haiyan (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010-03-01). Two families of methods are widely used in data assimilation: the four-dimensional variational (4D-Var) approach and the ensemble Kalman filter (EnKF) approach. The two families have been developed largely through parallel research efforts, and each method has its advantages and disadvantages. It is of interest to develop hybrid data assimilation algorithms that combine the relative strengths of the two approaches. This paper proposes a subspace approach to investigate the theoretical equivalence between the suboptimal 4D-Var method (where only a small number of optimization iterations are performed) and the practical EnKF method (where only a small number of ensemble members are used) in a linear Gaussian setting. The analysis motivates a new hybrid algorithm: the optimization directions obtained from a short-window 4D-Var run are used to construct the EnKF initial ensemble. The proposed hybrid method is computationally less expensive than a full 4D-Var, as only short assimilation windows are considered, and it has the potential to perform better than the regular EnKF due to its look-ahead property. Numerical results show that the proposed hybrid ensemble filter method performs better than the regular EnKF method for both linear and nonlinear test problems.
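As a rough sketch of the hybrid construction shared by the two reports above (not their exact algorithm), the snippet below seeds an EnKF initial ensemble from a set of directions standing in for those produced by a short-window 4D-Var minimization; the directions, perturbation scale, and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens = 30, 8                               # state and ensemble sizes (illustrative)

x_b = rng.standard_normal(n)                   # stand-in background state

# Stand-in for directions produced during a short-window 4D-Var minimization
# (in the hybrid scheme these come from the optimizer); orthonormalized here.
D, _ = np.linalg.qr(rng.standard_normal((n, n_ens)))

scale = 0.5                                    # perturbation amplitude (a tuning choice)
ens = x_b[:, None] + scale * D                 # one member per direction
ens += x_b[:, None] - ens.mean(axis=1, keepdims=True)  # recenter the mean on x_b
```

The resulting ensemble is centered on the background and spans the directions along which 4D-Var reduced the error, which is one way to understand the look-ahead property mentioned above; a standard EnKF then propagates and updates it.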
- Uncertainty Quantification and Apportionment in Air Quality Models using the Polynomial Chaos Method. Cheng, Haiyan; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007). Simulations of large-scale physical systems are often affected by uncertainties in data and in model parameters, and by incomplete knowledge of the underlying physics. Traditional deterministic simulations do not account for such uncertainties. It is of interest to extend simulation results with ``error bars'' that quantify the degree of uncertainty; this added information provides a confidence level for the simulation result. For example, an air quality forecast with associated uncertainty information is very useful for making policy decisions regarding environmental protection. Techniques such as Monte Carlo (MC) and response surface methods are popular for uncertainty quantification, but accurate results require a large number of runs. This incurs a high computational cost, which may be prohibitive for large-scale models. The polynomial chaos (PC) method was proposed as a practical and efficient approach for uncertainty quantification and has been successfully applied in many engineering fields. Polynomial chaos uses a spectral representation of uncertainty and can handle both linear and nonlinear problems with either Gaussian or non-Gaussian uncertainties. This work extends the functionality of the polynomial chaos method to Source Uncertainty Apportionment (SUA), i.e., we use the polynomial chaos approach to attribute the uncertainty in model results to different sources of uncertainty. The uncertainty quantification and source apportionment are implemented in the Sulfur Transport Eulerian Model (STEM-III). This allows us to assess the combined effects of different sources of uncertainty on the ozone forecast, and to quantify the contribution of each source to the total uncertainty in the predicted ozone levels.
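The STEM-III apportionment machinery is not reproduced here, but the principle can be illustrated on a two-source toy model: because the PC basis is orthogonal, the output variance splits exactly into contributions from terms involving only the first source, only the second, and their interaction. The model and truncation order below are assumptions.

```python
import numpy as np
from itertools import product
from numpy.polynomial import legendre

# Toy model with two independent uniform sources xi1, xi2 on [-1, 1].
def model(x1, x2):
    return 1.0 + 0.8 * x1 + 0.3 * x2**2 + 0.2 * x1 * x2

deg = 3
# Tensor-product Gauss-Legendre quadrature to project onto the PC basis.
nodes, weights = legendre.leggauss(deg + 1)
X1, X2 = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(weights, weights) / 4.0        # uniform density 1/2 per dimension
Y = model(X1, X2)

shares = {"xi1 only": 0.0, "xi2 only": 0.0, "interaction": 0.0}
for i, j in product(range(deg + 1), repeat=2):
    if i == 0 and j == 0:
        continue                            # the mean term carries no variance
    Pij = legendre.Legendre.basis(i)(X1) * legendre.Legendre.basis(j)(X2)
    norm = 1.0 / ((2 * i + 1) * (2 * j + 1))   # E[P_i^2] * E[P_j^2]
    c = np.sum(W * Y * Pij) / norm             # Galerkin projection coefficient
    key = "xi1 only" if j == 0 else ("xi2 only" if i == 0 else "interaction")
    shares[key] += c**2 * norm                 # variance contributed by this term

total = sum(shares.values())
print({k: round(v / total, 3) for k, v in shares.items()})
```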
- Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations. Cheng, Haiyan (Virginia Tech, 2009-07-15). Modeling and simulations of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making; they can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions and generates one result without considering uncertainties. It is of great interest to include uncertainty information in the simulation. By ``Uncertainty Quantification'' we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level for the model forecast. For example, in environmental modeling, the model forecast together with the quantified uncertainty information can assist policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through data assimilation (DA) techniques, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly; this poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations, and new algorithms are needed to quantify and reduce uncertainties in such settings. This research explores novel uncertainty quantification and reduction techniques suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources; the apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), predicting pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and their potential impact on environmental protection policy making. ``Uncertainty Reduction'' describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach.
Each method has its advantages and disadvantages. By exploiting the error-reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach that constructs an updated error covariance matrix, improving on the static background error covariance matrix used in current 4D-Var practice. Updating the covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) of the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality.
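The dissertation's exact linear-algebra modification is not given in the abstract. One natural variant of the idea, sketched below for a scalar stiff test problem with an uncertain decay rate, replaces the fully coupled Galerkin backward-Euler matrix with its cheap mean-mode (diagonal) approximation inside simplified-Newton-style iterations; for an n-dimensional state the same trick yields S independent n-by-n solves in place of one (Sn)-by-(Sn) factorization. All problem parameters are illustrative.

```python
import numpy as np
from numpy.polynomial import legendre

# Stiff scalar test problem y' = -lam * y with an uncertain decay rate
# lam = lam0 + lam1 * xi, xi uniform on [-1, 1] (all values illustrative).
S, lam0, lam1, h, n_steps = 6, 50.0, 5.0, 0.05, 20

# Galerkin matrix M[k, j] = E[lam * P_j * P_k] / E[P_k^2], by Gauss quadrature.
nodes, weights = legendre.leggauss(2 * S)
P = np.array([legendre.Legendre.basis(k)(nodes) for k in range(S)])
norms = 1.0 / (2.0 * np.arange(S) + 1.0)
lam = lam0 + lam1 * nodes
M = ((P * (lam * weights / 2.0)) @ P.T) / norms[:, None]

y = np.zeros(S)
y[0] = 1.0                                  # deterministic initial condition
A = np.eye(S) + h * M                       # fully coupled backward-Euler matrix

for _ in range(n_steps):
    y_old = y.copy()
    # Backward Euler needs A @ y_new = y_old. Instead of factorizing the
    # coupled A (O(S^3) here, O(S^3 n^3) for an n-dimensional state), iterate
    # using only the mean-mode diagonal 1 + h * lam0 as approximate Jacobian.
    for _ in range(50):
        r = A @ y - y_old
        y -= r / (1.0 + h * lam0)
        if np.linalg.norm(r) < 1e-12:
            break

mean, var = y[0], np.sum(y[1:] ** 2 * norms[1:])
print(f"mean = {mean:.6f}, variance = {var:.3e}")
```

The iteration converges quickly here because the uncertain part of the decay rate is small relative to its mean; this matches the abstract's observation that the approximation trades a little accuracy per step for a large reduction in linear-algebra cost.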