Browsing by Author "Moosavi, Azam"
Now showing 1 - 5 of 5
- Cluster Sampling Filters for Non-Gaussian Data Assimilation. Attia, A.; Moosavi, Azam; Sandu, Adrian (2016-08-19). This paper presents a fully non-Gaussian version of the Hamiltonian Monte Carlo (HMC) sampling filter. The Gaussian prior assumption in the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. Using the data likelihood function, the posterior density is then formulated as a mixture density, and is sampled using an HMC approach (or any other scheme capable of sampling multimodal densities in high-dimensional subspaces). The main filter developed herein is named the "cluster HMC sampling filter" (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. The new methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm.
- Cluster Sampling Filters for Non-Gaussian Data Assimilation. Attia, Ahmed; Moosavi, Azam; Sandu, Adrian (MDPI, 2018-05-31). This paper presents a fully non-Gaussian filter for sequential data assimilation. The filter is named the “cluster sampling filter”, and works by directly sampling the posterior distribution following a Markov Chain Monte-Carlo (MCMC) approach, while the prior distribution is approximated using a Gaussian Mixture Model (GMM). Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a GMM to the prior ensemble. Using the data likelihood function, the posterior density is then formulated as a mixture density, and is sampled following an MCMC approach. Four versions of the proposed filter, namely CℓMCMC, CℓHMC, MC-CℓMCMC, and MC-CℓHMC, are presented. CℓMCMC uses a Gaussian proposal density to sample the posterior, and CℓHMC is an extension of the Hamiltonian Monte-Carlo (HMC) sampling filter. MC-CℓMCMC and MC-CℓHMC are multi-chain versions of the cluster sampling filters CℓMCMC and CℓHMC, respectively. The multi-chain versions are proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. The new methodologies are tested using a simple one-dimensional example, and a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption, especially in the HMC filtering paradigm.
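The core idea of the cluster sampling filters above can be illustrated in one dimension: a GMM prior combined with a Gaussian likelihood yields a posterior that is again a Gaussian mixture with analytically reweighted components. The following is a minimal NumPy sketch of that reweighting, not the paper's implementation; function names and the identity observation operator are assumptions for illustration.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Scalar Gaussian density N(x; mean, var)."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def gmm_posterior(weights, means, variances, obs, obs_var):
    """Analytic posterior of a 1-D GMM prior under a Gaussian likelihood
    with identity observation operator. Each prior component N(m_i, P_i)
    maps to a posterior component with
        P_i' = (1/P_i + 1/R)^(-1),   m_i' = P_i' * (m_i/P_i + y/R),
    and its weight is rescaled by the component evidence N(y; m_i, P_i + R)."""
    w = np.asarray(weights, dtype=float)
    m = np.asarray(means, dtype=float)
    p = np.asarray(variances, dtype=float)
    post_var = 1.0 / (1.0 / p + 1.0 / obs_var)
    post_mean = post_var * (m / p + obs / obs_var)
    evidence = gaussian_pdf(obs, m, p + obs_var)
    post_w = w * evidence
    post_w /= post_w.sum()
    return post_w, post_mean, post_var
```

For a bimodal prior with modes at -2 and 2 and an observation at 2, the weight of the second component dominates the posterior mixture, which is exactly the behaviour the multi-chain variants are designed to sample from without missing the minor mode.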
- Efficient Construction of Local Parametric Reduced Order Models Using Machine Learning Techniques. Moosavi, Azam; Stefanescu, Razvan; Sandu, Adrian (2015-11-11). Reduced order models are computationally inexpensive approximations that capture the important dynamical characteristics of large, high-fidelity computer models of physical systems. This paper applies machine learning techniques to improve the design of parametric reduced order models. Specifically, machine learning is used to develop feasible regions in the parameter space where the admissible target accuracy is achieved with a predefined reduced order basis, to construct parametric maps, to choose, from an accuracy standpoint, the two best existing bases for a new parameter configuration, and to pre-select the optimal dimension of the reduced basis so as to meet the desired accuracy. By combining available information using basis concatenation and interpolation, as well as interpolation of high-fidelity solutions, we are able to build accurate reduced order models associated with new parameter settings. Promising numerical results with a viscous Burgers model illustrate the potential of machine learning approaches to help design better reduced order models.
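The "feasible region" idea above amounts to a binary classification problem over the parameter space: label each sampled parameter by whether a given reduced basis meets the error tolerance, then classify new parameters. The sketch below is a hypothetical stand-in using a hand-rolled k-nearest-neighbour vote, not the classifiers from the paper; all names and the toy error data are assumptions.

```python
import numpy as np

def knn_feasible(train_params, train_errors, tol, query, k=3):
    """Classify a query parameter as feasible (ROM error <= tol) by
    majority vote of its k nearest training parameters.
    train_params: (n, d) array of sampled parameter configurations.
    train_errors: (n,) array of measured ROM errors at those parameters."""
    labels = np.asarray(train_errors, dtype=float) <= tol
    dists = np.linalg.norm(np.asarray(train_params, dtype=float)
                           - np.asarray(query, dtype=float), axis=1)
    nearest = labels[np.argsort(dists)[:k]]
    return bool(nearest.sum() > k / 2)
```

With ROM errors that grow away from the training regime, a query near the low-error parameters is classified feasible and a distant one is not, which is the qualitative behaviour such a feasible-region map is meant to capture.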
- Machine learning based algorithms for uncertainty quantification in numerical weather prediction models. Moosavi, Azam; Rao, Vishwas; Sandu, Adrian (Elsevier, 2021-03-01). Complex numerical weather prediction models incorporate a variety of physical processes, each described by multiple alternative physical schemes with specific parameters. The selection of the physical schemes and the choice of the corresponding physical parameters during model configuration can significantly impact the accuracy of model forecasts. There is no combination of physical schemes that works best for all times, at all locations, and under all conditions. It is therefore of considerable interest to understand the interplay between the choice of physics and the accuracy of the resulting forecasts under different conditions. This paper demonstrates the use of machine learning techniques to study the uncertainty in numerical weather prediction models due to the interaction of multiple physical processes. The first problem addressed herein is the estimation of systematic model errors in output quantities of interest at future times, and the use of this information to improve the model forecasts. The second problem considered is the identification of those specific physical processes that contribute most to the forecast uncertainty in the quantity of interest under specified meteorological conditions. In order to address these questions we employ two machine learning approaches, random forests and artificial neural networks. The discrepancies between model results and observations at past times are used to learn the relationships between the choice of physical processes and the resulting forecast errors. Numerical experiments are carried out with the Weather Research and Forecasting (WRF) model. The output quantity of interest is the model precipitation, a variable that is both extremely important and very challenging to forecast. The physical processes under consideration include various micro-physics schemes, cumulus parameterizations, and short-wave and long-wave radiation schemes. The experiments demonstrate the strong potential of machine learning approaches to aid the study of model errors.
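The first problem described above, learning systematic model error from past model-observation discrepancies and using it to correct forecasts, can be sketched in its simplest form as a per-configuration bias estimate. This is a deliberately minimal stand-in for the random forest and neural network regressors the paper actually uses; the scheme labels and numbers below are purely illustrative.

```python
import numpy as np

def learn_bias(scheme_ids, discrepancies):
    """Estimate the mean past discrepancy (observation minus forecast)
    for each physics-scheme combination. A toy surrogate for the
    learned error model; scheme_ids are hashable configuration labels."""
    samples = {}
    for sid, d in zip(scheme_ids, discrepancies):
        samples.setdefault(sid, []).append(d)
    return {sid: float(np.mean(ds)) for sid, ds in samples.items()}

def corrected_forecast(forecast, scheme_id, bias):
    """Add the learned systematic error back onto a raw forecast;
    unseen configurations are left uncorrected."""
    return forecast + bias.get(scheme_id, 0.0)
```

In practice the regressors also condition on meteorological predictors, which is what lets them rank which physical processes drive the forecast uncertainty; the lookup table above only captures the bias-correction step.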
- Multivariate predictions of local reduced-order-model errors and dimensions. Moosavi, Azam; Stefanescu, Razvan; Sandu, Adrian (2017-01-16). This paper introduces multivariate input-output models to predict the errors and basis dimensions of local parametric Proper Orthogonal Decomposition reduced-order models. We refer to these multivariate mappings as the MP-LROM models. We employ Gaussian Processes and Artificial Neural Networks to construct approximations of these multivariate mappings. Numerical results with a viscous Burgers model illustrate the performance and potential of the machine learning based regression MP-LROM models to approximate the characteristics of parametric local reduced-order models. The predicted reduced-order model errors are compared against the predictions of the multi-fidelity correction and reduced-order-model error surrogate methods, whereas the predicted reduced-order dimensions are tested against the standard method based on the spectrum of the snapshots matrix. Since the MP-LROM models incorporate more features and elements to construct the probabilistic mappings, they achieve more accurate results. However, for high-dimensional parametric spaces, the MP-LROM models might suffer from the curse of dimensionality. Scalability challenges of MP-LROM models and the feasible ways of addressing them are also discussed in this study.
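The Gaussian Process regression used above to map (parameter, basis dimension) inputs to ROM errors can be sketched with a plain RBF-kernel predictor. This is a generic GP mean prediction in NumPy, assuming normalized inputs and an illustrative length scale, not the paper's trained MP-LROM models.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    """Squared-exponential kernel between row vectors of A and B."""
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-0.5 * np.sum(diff ** 2, axis=-1) / length_scale ** 2)

def gp_predict(X_train, y_train, X_query, noise=1e-6, length_scale=0.3):
    """GP posterior mean at X_query given noisy observations y_train.
    X_train: (n, d) normalized inputs, e.g. (parameter, basis dimension)."""
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    return rbf_kernel(X_query, X_train, length_scale) @ np.linalg.solve(K, y_train)
```

The curse-of-dimensionality caveat in the abstract shows up directly here: the kernel matrix is built over the full multivariate input, so coverage of the training design degrades as the parameter dimension d grows.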