Likelihood-Free Bayesian Inference with Efficient Uncertainty Quantification
| dc.contributor.author | Nouri, Arash | en |
| dc.contributor.committeechair | Sandu, Adrian | en |
| dc.contributor.committeemember | Rudi, Johann | en |
| dc.contributor.committeemember | Lourentzou, Ismini | en |
| dc.contributor.department | Computer Science and Applications | en |
| dc.date.accessioned | 2026-03-05T09:00:08Z | en |
| dc.date.available | 2026-03-05T09:00:08Z | en |
| dc.date.issued | 2026-02-09 | en |
| dc.description.abstract | Uncertainty quantification (UQ) in inverse problems is essential for reliable parameter estimation in scientific and engineering applications. This thesis studies two frameworks that separately quantify two fundamental types of uncertainty: aleatoric uncertainty, arising from inherent measurement noise and non-identifiability in the inverse mapping, and epistemic uncertainty, stemming from limited training data and model inadequacy. For aleatoric uncertainty quantification, a conditional Wasserstein Generative Adversarial Network with Full Gradient Penalty (cWGAN-GP) is employed to approximate the posterior distribution over parameters given observations. The trained generator enables efficient posterior sampling through a single forward pass, providing credible intervals and capturing potential multimodality in the solution space. A physics-informed extension, SGML-cWGAN, incorporates domain knowledge through physics-based loss terms to improve estimation accuracy. For epistemic uncertainty quantification, Prediction with Neural Network Corrections (PNC) is utilized, leveraging Neural Tangent Kernel theory to provide theoretically grounded uncertainty estimates. Bootstrap and stacking resampling methods generate multiple model instances, with prediction variance across instances serving as the epistemic uncertainty measure. The frameworks are evaluated on two benchmark problems: the FitzHugh-Nagumo (FHN) dynamical system and the Pacejka tire model. Results demonstrate that PNC achieves excellent performance on clean and structured noisy datasets, while cWGAN scales efficiently to large datasets containing up to 864,000 samples. The physics-informed SGML-cWGAN achieves up to 33% improvement in mean squared error over the baseline cWGAN on the Pacejka dataset.
However, a fundamental trade-off emerges: PNC faces computational constraints that limit its applicability to datasets smaller than approximately 7,000 samples, while cWGAN requires a minimum of 8,000 samples for reliable performance. This incompatibility highlights the need for scalable epistemic uncertainty methods that complement data-hungry generative models. The findings demonstrate the viability of neural network-based approaches for uncertainty quantification in inverse problems, while identifying key limitations and directions for future research, including alternative simulation-based inference methods and improved posterior evaluation metrics. | en |
| dc.description.abstractgeneral | Many scientific and engineering problems require estimating unknown parameters from measured data, a process known as an inverse problem. A critical challenge in these problems is understanding how confident we can be in our estimates: are the measurements precise enough to pinpoint a unique answer, or could multiple parameter values explain the data equally well? This thesis studies two computational frameworks that quantify two types of uncertainty in inverse problems. The first type, called aleatoric uncertainty, represents the fundamental ambiguity that exists even with perfect methods: some inverse problems simply have multiple valid solutions, or measurement noise makes precise estimation impossible. The second type, called epistemic uncertainty, represents uncertainty due to limited knowledge; having more data or better models would reduce this uncertainty. To capture aleatoric uncertainty, this work employs a type of neural network called a Generative Adversarial Network (GAN), which learns to generate plausible parameter values that could have produced the observed measurements. Rather than providing a single "best guess," this approach produces a range of possibilities along with their relative likelihoods. For epistemic uncertainty, a different technique called Prediction with Neural Network Corrections (PNC) is used, which estimates how much predictions might change if different training data were available. The framework was tested on two applications: a mathematical model of nerve cell behavior (the FitzHugh-Nagumo model) and a model used in automotive engineering to describe tire-road interactions (the Pacejka model). Results show that both methods successfully quantify uncertainty when meaningful patterns exist in the data, while appropriately indicating high uncertainty when analyzing pure noise. A physics-informed version for the Pacejka data that incorporates known physical laws achieved better accuracy than the baseline model. | en |
| dc.description.degree | Master of Science | en |
| dc.format.medium | ETD | en |
| dc.identifier.other | vt_gsexam:45623 | en |
| dc.identifier.uri | https://hdl.handle.net/10919/141671 | en |
| dc.language.iso | en | en |
| dc.publisher | Virginia Tech | en |
| dc.rights | In Copyright | en |
| dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
| dc.subject | Uncertainty Quantification | en |
| dc.subject | Inverse Problem | en |
| dc.subject | Estimation | en |
| dc.subject | Efficient Bootstrapping | en |
| dc.title | Likelihood-Free Bayesian Inference with Efficient Uncertainty Quantification | en |
| dc.type | Thesis | en |
| thesis.degree.discipline | Computer Science & Applications | en |
| thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
| thesis.degree.level | masters | en |
| thesis.degree.name | Master of Science | en |
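The abstract notes that a trained cWGAN-GP generator yields posterior samples in a single forward pass, from which credible intervals follow by simple Monte Carlo. The sketch below illustrates only that sampling-and-summarizing step; the `generator` function here is a hypothetical toy stand-in (a shifted, scaled Gaussian), not the thesis's trained network, and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained cWGAN-GP generator G(z, y): it maps
# latent noise z and an observation y to a posterior parameter sample.
# A real generator would be a neural network; this toy version simply
# mimics a posterior centered near y with spread 0.1.
def generator(z, y):
    return y + 0.1 * z

y_obs = 2.5                      # observed measurement (illustrative value)
z = rng.standard_normal(10_000)  # batch of latent noise draws

# One vectorized "forward pass" produces all posterior draws at once.
samples = generator(z, y_obs)

mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])  # 95% credible interval
print(f"posterior mean ~ {mean:.3f}, 95% CI ~ [{lo:.3f}, {hi:.3f}]")
```

Because sampling is a single batched evaluation rather than an MCMC chain, the cost of UQ is dominated by training, not inference; multimodal posteriors would show up directly in the histogram of `samples`.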
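The abstract also describes the epistemic side: bootstrap resampling produces multiple model instances, and the spread of their predictions serves as the uncertainty measure. A minimal sketch of that resampling loop follows, assuming a toy linear-regression model in place of the thesis's PNC/neural-network setup; the data, model, and variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (illustrative, not from the thesis benchmarks).
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    A = np.stack([xs, np.ones_like(xs)], axis=1)
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return coef

# Bootstrap: refit the model on resampled datasets and collect
# each instance's prediction at a query point.
x_query = 0.5
preds = []
for _ in range(200):
    idx = rng.integers(0, x.size, size=x.size)  # resample with replacement
    a, b = fit_linear(x[idx], y[idx])
    preds.append(a * x_query + b)

preds = np.asarray(preds)
# Variance across bootstrap instances is the epistemic uncertainty estimate.
print(f"prediction {preds.mean():.3f} +/- {preds.std():.3f} (epistemic std)")
```

The cost scales linearly with the number of refits, which is consistent with the trade-off the abstract reports: resampling-based epistemic UQ becomes expensive on exactly the large datasets where the generative approach thrives.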