Gradient-Based Sensitivity Analysis with Kernels
dc.contributor.author | Wycoff, Nathan Benjamin | en |
dc.contributor.committeechair | Gramacy, Robert B. | en |
dc.contributor.committeemember | Wild, Stefan M. | en |
dc.contributor.committeemember | Binois, Mickael | en |
dc.contributor.committeemember | Leman, Scotland C. | en |
dc.contributor.committeemember | Higdon, David | en |
dc.contributor.department | Statistics | en |
dc.date.accessioned | 2021-08-21T08:00:07Z | en |
dc.date.available | 2021-08-21T08:00:07Z | en |
dc.date.issued | 2021-08-20 | en |
dc.description.abstract | Emulation of computer experiments via surrogate models can be difficult when the number of input parameters determining the simulation grows beyond a few dozen. In this dissertation, we explore dimension reduction in the context of computer experiments. The active subspace method is a linear dimension reduction technique which uses the gradients of a function to determine important input directions. Unfortunately, we cannot expect to always have access to the gradients of our black-box functions. We thus begin by developing an estimator for the active subspace of a function which uses kernel methods to indirectly estimate the gradient. We then demonstrate how to deploy the learned input directions to improve the predictive performance of local regression models by "undoing" the active subspace. Finally, we develop notions of sensitivity which are local to certain parts of the input space, and use them to build a Bayesian optimization algorithm that can exploit locally important directions. (An illustrative sketch of the gradient-based estimation step appears after this record.) | en |
dc.description.abstractgeneral | Increasingly, scientists and engineers developing new understanding or products rely on computers to simulate complex phenomena. Sometimes, these computer programs are so detailed that the amount of time they take to run becomes a serious issue. Surrogate modeling is the problem of predicting a computer experiment's result without actually running it, based on the observed behavior of similar simulations. Typically, computer experiments have different settings which induce different behavior. When there are many settings to tweak, typical surrogate modeling approaches can struggle. In this dissertation, we develop a technique for deciding which input settings, or even which combinations of input settings, deserve our attention when predicting the output of a computer experiment. We then deploy this technique both to predict computer experiment outputs and to search for the input settings that yield a particular desired result. | en |
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:31835 | en |
dc.identifier.uri | http://hdl.handle.net/10919/104683 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Sensitivity Analysis | en |
dc.subject | Nonparametric Regression | en |
dc.subject | Dimension Reduction | en |
dc.subject | Derivative Free Optimization | en |
dc.title | Gradient-Based Sensitivity Analysis with Kernels | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Statistics | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |
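The abstract above describes the standard active subspace recipe: form the matrix C = E[grad f(x) grad f(x)^T], eigendecompose it, and keep the leading eigenvectors as the important input directions, with a kernel surrogate supplying gradient estimates when the black box provides none. Below is a minimal sketch of that generic construction in Python, assuming a Gaussian-process surrogate whose posterior-mean gradients stand in for the unavailable black-box gradients. The toy function, RBF kernel, lengthscale, and nugget are all illustrative assumptions; this is not the dissertation's estimator, only the idea it builds on.

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0):
        # Squared-exponential kernel matrix between the rows of X1 and X2.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def gp_mean_grad(Xstar, X, alpha, lengthscale=1.0):
        # Gradient of the GP posterior mean m(x) = sum_i alpha_i k(x, x_i):
        # for the RBF kernel, dm/dx = sum_i alpha_i k(x, x_i) (x_i - x) / ell^2.
        K = rbf_kernel(Xstar, X, lengthscale)      # (m, n)
        diff = X[None, :, :] - Xstar[:, None, :]   # (m, n, d)
        return np.einsum("mn,mnd->md", K * alpha[None, :], diff) / lengthscale**2

    rng = np.random.default_rng(0)
    d, n = 10, 200
    # Toy black box that varies only along two hidden directions,
    # i.e. it has a two-dimensional active subspace spanned by W.
    W = np.linalg.qr(rng.standard_normal((d, 2)))[0]
    f = lambda X: np.sin(3.0 * (X @ W[:, 0])) + (X @ W[:, 1]) ** 2

    X = rng.uniform(-1.0, 1.0, (n, d))
    y = f(X)

    # Fit the GP: alpha = (K + nugget * I)^{-1} y gives the posterior-mean weights.
    K = rbf_kernel(X, X) + 1e-4 * np.eye(n)
    alpha = np.linalg.solve(K, y)

    # Average outer products of the surrogate gradients to estimate
    # C = E[grad f(x) grad f(x)^T]; the leading eigenvectors of C span
    # the estimated active subspace.
    G = gp_mean_grad(X, X, alpha)
    C = G.T @ G / n
    eigvals, eigvecs = np.linalg.eigh(C)
    U = eigvecs[:, ::-1][:, :2]  # top-two estimated directions

    # Sanity check: cosines of the principal angles between span(U) and span(W);
    # values near 1 indicate the subspace was recovered.
    print(np.round(np.linalg.svd(U.T @ W, compute_uv=False), 3))

Because the RBF kernel is differentiable, the surrogate's posterior mean has a closed-form gradient, which is what allows the gradient to be estimated indirectly, without finite-differencing the expensive simulator itself.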