Likelihood-Free Bayesian Inference with Efficient Uncertainty Quantification
Abstract
Uncertainty quantification (UQ) in inverse problems is essential for reliable parameter estimation in scientific and engineering applications. This thesis studies two frameworks that separately quantify two fundamental types of uncertainty: aleatoric uncertainty, arising from inherent measurement noise and non-identifiability in the inverse mapping, and epistemic uncertainty, stemming from limited training data and model inadequacy.
For aleatoric uncertainty quantification, a conditional Wasserstein Generative Adversarial Network with Full Gradient Penalty (cWGAN-GP) is employed to approximate the posterior distribution over parameters given observations. The trained generator enables efficient posterior sampling through a single forward pass, providing credible intervals and capturing potential multimodality in the solution space. A physics-informed extension, SGML-cWGAN, incorporates domain knowledge through physics-based loss terms to improve estimation accuracy. For epistemic uncertainty quantification, Prediction with Neural Network Corrections (PNC) is utilized, leveraging Neural Tangent Kernel theory to provide theoretically grounded uncertainty estimates. Bootstrap and stacking resampling methods generate multiple model instances, with the prediction variance across instances serving as the epistemic uncertainty measure. Both frameworks are evaluated on two benchmark problems: the FitzHugh-Nagumo (FHN) dynamical system and the Pacejka tire model. Results demonstrate that PNC achieves excellent performance on clean and structured noisy datasets, while cWGAN scales efficiently to large datasets containing up to 864,000 samples. The physics-informed SGML-cWGAN achieves up to 33% improvement in mean squared error over the baseline cWGAN on the Pacejka dataset. However, a fundamental trade-off emerges: PNC faces computational constraints limiting its applicability to datasets smaller than approximately 7,000 samples, while cWGAN requires a minimum of 8,000 samples for reliable performance. This incompatibility highlights the need for scalable epistemic uncertainty methods that complement data-hungry generative models.
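The single-forward-pass posterior sampling described above can be sketched in a few lines. This is a minimal illustration, not the thesis implementation: the `generator` function below is a hypothetical stand-in for a trained cWGAN generator G(z, y), and the toy mapping, observation value, and sample count are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained cWGAN generator G(z, y): it maps
# latent noise z and an observation y to a posterior sample of the
# parameters. A real generator would be a trained neural network.
def generator(z, y):
    return y + 0.1 * z  # toy linear mapping, for illustration only

y_obs = np.array([1.5])               # a single observation
z = rng.standard_normal((1000, 1))    # one latent draw per posterior sample

# One batched forward pass yields 1000 posterior samples at once.
samples = generator(z, y_obs)

# An empirical 95% credible interval from the sampled posterior.
lo, hi = np.percentile(samples, [2.5, 97.5])
print(lo, hi)
```

Because sampling costs only a forward pass, credible intervals and multimodality checks amortize over arbitrarily many observations once the generator is trained.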
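The bootstrap-based epistemic uncertainty measure can also be sketched compactly. As a hedged illustration only: the linear least-squares model below stands in for the trained networks of the PNC framework (which additionally uses NTK-based corrections), so that the resampling-and-variance logic stays visible; the data, query point, and resample count are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data standing in for a real training set.
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.shape)

def fit_predict(xs, ys, x_query):
    # Fit y = a*x + b by least squares and predict at x_query.
    A = np.vstack([xs, np.ones_like(xs)]).T
    a, b = np.linalg.lstsq(A, ys, rcond=None)[0]
    return a * x_query + b

# Bootstrap: refit on resampled datasets; the variance of predictions
# across the resulting model instances is the epistemic uncertainty.
x_query = 0.5
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), size=len(x))
    preds.append(fit_predict(x[idx], y[idx], x_query))

epistemic_var = np.var(preds)
print(epistemic_var)
```

Stacking follows the same pattern but partitions the data into folds instead of resampling with replacement; in either case the spread across model instances, not any single model's output, carries the epistemic signal.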
The findings demonstrate the viability of neural network-based approaches for uncertainty quantification in inverse problems, while identifying key limitations and directions for future research, including alternative simulation-based inference methods and improved posterior evaluation metrics.