Augmented Neural Network Surrogate Models for Polynomial Chaos Expansions and Reduced Order Modeling
dc.contributor.author | Cooper, Rachel Gray | en |
dc.contributor.committeechair | Sandu, Adrian | en |
dc.contributor.committeemember | Xiao, Heng | en |
dc.contributor.committeemember | Karpatne, Anuj | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2021-05-21T08:00:34Z | en |
dc.date.available | 2021-05-21T08:00:34Z | en |
dc.date.issued | 2021-05-20 | en |
dc.description.abstract | Mathematical models describing real-world processes are becoming increasingly complex to better match the dynamics of the true system. While this is a positive step towards more complete knowledge of our world, numerical evaluation of these models becomes increasingly expensive, requiring more computational resources or time. This has led to the need for simplified surrogates of these complex mathematical models. A growing surrogate modeling approach is the use of neural networks. Neural networks (NNs) are known to generalize an approximation across a diverse dataset and to minimize the solution along complex nonlinear boundaries. Additionally, these surrogate models can be constructed using only incomplete knowledge of the true dynamics. However, NN surrogates often suffer from a lack of interpretability: the decisions made in the training process are not fully understood, and the roles of individual neurons are not well defined. We present two approaches to address this lack of interpretability. The first mimics polynomial chaos (PC) modeling techniques, modifying the structure of a NN to produce polynomial approximations of the underlying dynamics. This methodology allows meaning to be extracted from the network and improves accuracy over traditional PC methods. Second, we examine the construction of a reduced order modeling scheme using NN autoencoders, guiding the decisions of the training process to better match the real dynamics. This guiding process is performed via a physics-informed (PI) penalty, which speeds up training convergence but still yields poor performance compared to traditional schemes. | en |
dc.description.abstractgeneral | The world is an elaborate system of relationships between diverse processes. To accurately represent these relationships, increasingly complex models are defined to better match what is physically observed. These complex models can lead to issues when used to predict a realistic outcome, requiring either immensely powerful computers to run the simulations or long computation times to produce a solution. To address this, surrogates, or approximations to these complex models, are used. These surrogate models aim to reduce the resources needed to calculate a solution while remaining as accurate to the more complex model as possible. One way to build these surrogate models is through neural networks. Neural networks mimic a brain, learning connections between the inputs and outputs given to the network. In the case of surrogate modeling, the input is some current state of the true process, and the output is what is seen later from the same system. But much like in the human brain, the reasoning behind the choices made when connecting the inputs and outputs is often largely unknown. In this thesis, we seek to add meaning to neural network surrogate models in two different ways. In the first, we change what each piece in a neural network represents in order to build large polynomials (e.g., $x^5 + 4x^2 + 2$) that approximate the larger complex system. We show that building these polynomials via neural networks performs much better than traditional construction methods. In the second, we guide the choices made by the neural network by enforcing restrictions on the connections it can make. We do this by using additional information from the larger system to ensure the connections made focus on the most important information first before trying to match the less important patterns. This guiding process leads to more information being captured when the surrogate model is compressed into only a few dimensions compared to traditional methods. Additionally, it allows for a faster learning time compared to similar surrogate models without this information. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:31087 | en |
dc.identifier.uri | http://hdl.handle.net/10919/103423 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Machine learning | en |
dc.subject | Neural Networks | en |
dc.subject | Polynomial Chaos | en |
dc.subject | Reduced Order Modeling | en |
dc.title | Augmented Neural Network Surrogate Models for Polynomial Chaos Expansions and Reduced Order Modeling | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |