College of Engineering (COE)
Note: The Department of Biological Systems Engineering is listed within the College of Agriculture and Life Sciences (CALS).
Browsing College of Engineering (COE) by Content Type "Dissertation"
Now showing 1 - 20 of 22
- Accelerating Atmospheric Modeling Through Emerging Multi-core Technologies. Linford, John Christian (Virginia Tech, 2010-05-05). The new generations of multi-core chipset architectures achieve unprecedented levels of computational power while respecting physical and economic constraints. The cost of this power is bewildering program complexity. Atmospheric modeling is a grand-challenge problem that could make good use of these architectures if they were more accessible to the average programmer. To that end, software tools and programming methodologies that greatly simplify the acceleration of atmospheric modeling and simulation with emerging multi-core technologies are developed. A general model is developed to simulate atmospheric chemical transport and atmospheric chemical kinetics. The Cell Broadband Engine Architecture (CBEA), General Purpose Graphics Processing Units (GPGPUs), and homogeneous multi-core processors (e.g., the Intel quad-core Xeon) are introduced. These architectures are used in case studies of transport modeling and kinetics modeling and demonstrate per-kernel speedups as high as 40x. A general analysis and code generation tool for chemical kinetics called "KPPA" is developed. KPPA generates highly tuned C, Fortran, or Matlab code that uses every layer of heterogeneous parallelism in the CBEA, GPGPU, and homogeneous multi-core architectures. A scalable method for simulating chemical transport is also developed. The Weather Research and Forecasting Model with Chemistry (WRF-Chem) is accelerated with these methods with good results: real forecasts of air quality for the Eastern United States are generated 65% faster than with state-of-the-art models.
- Adaptive Numerical Methods for Large Scale Simulations and Data Assimilation. Constantinescu, Emil Mihai (Virginia Tech, 2008-05-26). Numerical simulation is necessary to understand natural phenomena, make assessments and predictions in various research and engineering fields, develop new technologies, etc. New algorithms are needed to take advantage of the increasing computational resources and utilize the emerging hardware and software infrastructure with maximum efficiency. Adaptive numerical discretization methods can accommodate problems with various physical, scale, and dynamic features by adjusting the resolution, order, and the type of method used to solve them. In applications that simulate real systems, the numerical accuracy of the solution is typically just one of the challenges. Measurements can be included in the simulation to constrain the numerical solution through a process called data assimilation in order to anchor the simulation in reality. In this thesis we investigate adaptive discretization methods and data assimilation approaches for large-scale numerical simulations. We develop and investigate novel multirate and implicit-explicit methods that are appropriate for multiscale and multiphysics numerical discretizations. We construct and explore data assimilation approaches for, but not restricted to, atmospheric chemistry applications. A generic approach for describing the structure of the uncertainty in initial conditions that can be applied to the most popular data assimilation approaches is also presented. We show that adaptive numerical methods can effectively address the discretization of large-scale problems. Data assimilation complements the adaptive numerical methods by correcting the numerical solution with real measurements. Test problems and large-scale numerical experiments validate the theoretical findings. Synergistic approaches that use adaptive numerical methods within a data assimilation framework need to be investigated in the future.
- Adjoint based solution and uncertainty quantification techniques for variational inverse problems. Hebbur Venkata Subba Rao, Vishwas (Virginia Tech, 2015-09-25). Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive since it involves running the simulation computer model (the "forward model") and the associated "adjoint model" multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with "model errors". Similarly, it is not possible to measure the physical quantities exactly, and hence the measurements are associated with "data errors". The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems. Variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the "parareal" approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps (forward solutions, gradients, and Hessian-vector products) involve only ideally parallel computations and therefore are highly scalable. This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to the variational inverse problems. The computational algorithm makes use of first and second order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecast model. The presence of outliers in measurement data is common, and this negatively impacts the solution to variational inverse problems. The traditional approach, where the inverse problem is formulated as a minimization problem in the $L_2$ norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the $L_1$ and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three and four dimensional variational data assimilation using the $L_1$ and Huber norms. The power of this approach is demonstrated by solving data assimilation problems where measurements contain outliers.
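To make the role of the robust norms concrete, here is a minimal sketch (a hypothetical toy regression, not the dissertation's PDE-constrained formulation) contrasting an $L_2$ data misfit with a Huber misfit in the presence of outliers:

```python
# Minimal sketch: effect of L2 vs. Huber data-fitting norms on outliers.
# Hypothetical toy problem, not the dissertation's variational setup.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0 + 0.05 * rng.standard_normal(t.size)
y[::10] += 5.0                       # inject gross outliers

def residual(p):                     # p = (slope, intercept)
    return p[0] * t + p[1] - y

def l2_cost(p):
    return 0.5 * np.sum(residual(p) ** 2)

def huber_cost(p, delta=0.1):        # quadratic near 0, linear in the tails
    r = residual(p)
    quad = 0.5 * r ** 2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.sum(np.where(np.abs(r) <= delta, quad, lin))

p_l2 = minimize(l2_cost, x0=[0.0, 0.0]).x
p_hub = minimize(huber_cost, x0=[0.0, 0.0]).x
print("L2 fit:    slope=%.3f intercept=%.3f" % tuple(p_l2))
print("Huber fit: slope=%.3f intercept=%.3f" % tuple(p_hub))  # closer to (2, 1)
```

The Huber cost is quadratic for small residuals and linear for large ones, so gross outliers pull the estimate far less than under the $L_2$ norm.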
- Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems. Alexe, Mihai (Virginia Tech, 2011-03-18). Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not extensively benefited from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solutions, in the context of time-dependent, adaptive-mesh, and adaptive-step models. The discrete framework addresses all the necessary ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms.
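As a minimal sketch of the discrete adjoint idea the framework builds on, the following computes the gradient of a terminal cost for a forward-Euler discretization of a toy scalar ODE and checks it against finite differences; the problem and all names are illustrative assumptions, far simpler than the adaptive-mesh setting above:

```python
# Minimal sketch of the discrete adjoint method: gradient of a terminal cost
# J = 0.5*y(T)^2 for y' = -p*y discretized with forward Euler. Toy problem
# assumed for illustration only.
import numpy as np

def forward(p, y0=1.0, T=1.0, n=100):
    h = T / n
    y = np.empty(n + 1); y[0] = y0
    for k in range(n):
        y[k + 1] = y[k] + h * (-p * y[k])    # forward Euler step
    return y, h

def gradient_adjoint(p, y, h):
    n = len(y) - 1
    lam = y[-1]                              # lambda_n = dJ/dy_n
    g = 0.0
    for k in range(n - 1, -1, -1):
        g += lam * (-h * y[k])               # explicit d(step)/dp contribution
        lam = lam * (1.0 - h * p)            # adjoint of the Euler step
    return g

p = 0.7
y, h = forward(p)
g_adj = gradient_adjoint(p, y, h)
eps = 1e-6
g_fd = (0.5 * forward(p + eps)[0][-1] ** 2
        - 0.5 * forward(p - eps)[0][-1] ** 2) / (2 * eps)
print(g_adj, g_fd)                           # the two gradients should agree
```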
- Advanced Sampling Methods for Solving Large-Scale Inverse Problems. Attia, Ahmed Mohamed Mohamed (Virginia Tech, 2016-09-19). Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of the methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to the traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The family of algorithms therefore also includes computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions to the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation.
In the presence of nonlinear model dynamics, nonlinear observation operators, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable. In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. Here, the Gaussian prior assumption in the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and biharmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility in configuring data assimilation studies.
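For readers unfamiliar with the HMC building block underlying the sampling filter and smoother, the following is a minimal sketch of one HMC step (leapfrog proposal plus Metropolis accept/reject) on a toy one-dimensional Gaussian posterior; the target density is an illustrative stand-in for the data assimilation posterior sampled in the dissertation:

```python
# Minimal sketch of the Hamiltonian Monte Carlo building block: leapfrog
# proposals plus a Metropolis accept/reject test. Toy 1-D Gaussian target
# assumed for illustration only.
import numpy as np

def neg_log_post(x):        # potential U(x); here a standard Gaussian
    return 0.5 * x ** 2

def grad_U(x):
    return x

def hmc_step(x, rng, eps=0.2, n_leap=10):
    p = rng.standard_normal()                 # sample auxiliary momentum
    x_new, p_new = x, p - 0.5 * eps * grad_U(x)
    for _ in range(n_leap):                   # leapfrog integration
        x_new = x_new + eps * p_new
        p_new = p_new - eps * grad_U(x_new)
    p_new += 0.5 * eps * grad_U(x_new)        # undo the extra half step
    dH = (neg_log_post(x_new) + 0.5 * p_new ** 2) \
         - (neg_log_post(x) + 0.5 * p ** 2)
    return x_new if rng.random() < np.exp(-dH) else x

rng = np.random.default_rng(1)
x, chain = 3.0, []
for _ in range(5000):
    x = hmc_step(x, rng)
    chain.append(x)
print(np.mean(chain), np.var(chain))          # ~0 and ~1 for this posterior
```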
- Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification. Narayanamurthi, Mahesh (Virginia Tech, 2020-01-29). Simulation and optimization of complex physical systems are an integral part of modern science and engineering. The systems of interest in many fields have a multiphysics nature, with complex interactions between physical, chemical, and in some cases even biological processes. This dissertation seeks to advance forward and adjoint numerical time integration methodologies for the simulation and optimization of semi-discretized multiphysics partial differential equations (PDEs), and to estimate and control numerical errors via a goal-oriented a posteriori error framework. We extend exponential propagation iterative methods of Runge-Kutta type (EPIRK) by [Tokman, JCP 2011] to build EPIRK-W and EPIRK-K time integration methods that admit approximate Jacobians in the matrix-exponential-like operations. EPIRK-W methods extend the W-method theory of [Steihaug and Wolfbrandt, Math. Comp. 1979] to preserve their order of accuracy under arbitrary Jacobian approximations. EPIRK-K methods extend the theory of K-methods by [Tranquilli and Sandu, JCP 2014] to EPIRK and use a Krylov-subspace based approximation of Jacobians to gain computational efficiency. New families of partitioned exponential methods for multiphysics problems are developed using the classical order condition theory via particular variants of T-trees and corresponding B-series. The new partitioned methods are found to perform better than traditional unpartitioned exponential methods for some problems in mild-to-medium stiffness regimes. Subsequently, partitioned stiff exponential Runge-Kutta (PEXPRK) methods, which extend stiffly accurate exponential Runge-Kutta methods from [Hochbruck and Ostermann, SINUM 2005] to a multiphysics context, are constructed and analyzed. PEXPRK methods show full convergence under various splittings of a diffusion-reaction system. We address the problem of estimation of numerical errors in a multiphysics discretization by developing a goal-oriented a posteriori error framework. Discrete adjoints of GARK methods are derived from their forward formulation [Sandu and Guenther, SINUM 2015]. Based on these, we build a posteriori estimators for both spatial and temporal discretization errors. We validate the estimators on a number of reaction-diffusion systems and use them to simultaneously refine spatial and temporal grids.
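The following minimal sketch shows the basic exponential-integrator step these methods generalize: exponential Euler applied to a small linear test problem, with the phi_1 function computed by dense linear algebra (the dissertation's methods replace this with Krylov and approximate-Jacobian techniques). The test matrix is an illustrative assumption:

```python
# Minimal sketch of an exponential integrator step (exponential Euler) for a
# linear test problem y' = A y: y_{n+1} = y_n + h*phi_1(h A)*f(y_n), where
# phi_1(Z) = Z^{-1}(e^Z - I). Dense linear algebra stands in for the Krylov
# approximations developed in the dissertation; toy setup assumed.
import numpy as np
from scipy.linalg import expm, solve

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])        # stiff-ish linear operator
y = np.array([1.0, 0.5])
h, T = 0.1, 1.0

def phi1(Z):
    return solve(Z, expm(Z) - np.eye(Z.shape[0]))

P = phi1(h * A)                     # constant here since A is fixed
for _ in range(int(T / h)):
    y = y + h * P @ (A @ y)         # exponential Euler step

print(y)                            # for linear problems this step is exact:
print(expm(T * A) @ np.array([1.0, 0.5]))
```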
- Combining Data-driven and Theory-guided Models in Ensemble Data Assimilation. Popov, Andrey Anatoliyevich (Virginia Tech, 2022-08-23). There once was a dream that data-driven models would replace their theory-guided counterparts. We have awoken from this dream. We now know that data cannot replace theory. Data-driven models still have their advantages, mainly in computational efficiency, but they also provide us with some special sauce that is unreachable by our current theories. This dissertation aims to provide a way in which both the accuracy of theory-guided models and the computational efficiency of data-driven models can be combined. This combination of theory-guided and data-driven modeling allows us to draw on ideas from a much broader set of disciplines, and can help pave the way for robust and fast methods.
- A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation. Cioaca, Alexandru (Virginia Tech, 2013-09-04). A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
- Computational Techniques for the Analysis of Large Scale Biological Systems. Ahn, Tae-Hyuk (Virginia Tech, 2016-09-27). An accelerated pace of discovery in biological sciences is made possible by a new generation of computational biology and bioinformatics tools. In this dissertation we develop novel computational, analytical, and high performance simulation techniques for biological problems, with applications to the yeast cell division cycle and to the RNA-Sequencing of the yellow fever mosquito. The cell cycle system exhibits stochastic effects when small numbers of molecules react with each other; consequently, the stochastic effects of the cell cycle are important, and the evolution of cells is best described statistically. The stochastic simulation algorithm (SSA), the standard stochastic method for chemical kinetics, is often slow because it accounts for every individual reaction event. This work develops a stochastic version of a deterministic cell cycle model in order to capture the stochastic aspects of the evolution of budding yeast wild-type and mutant strain cells. In order to efficiently run large ensembles to compute statistics of cell evolution, the dissertation investigates parallel simulation strategies, and presents a new probabilistic framework to analyze the performance of dynamic load balancing algorithms. This work also proposes new accelerated stochastic simulation algorithms based on a fully implicit approach and on stochastic Taylor expansions. Next Generation RNA-Sequencing, a high-throughput technology to sequence cDNA in order to get information about a sample's RNA content, is becoming an efficient genomic approach to uncover new genes and to study gene expression and alternative splicing. This dissertation develops efficient algorithms and strategies to find new genes in Aedes aegypti, which is the most important vector of dengue fever and yellow fever. We report the discovery of a large number of new gene transcripts, and the identification and characterization of genes that showed male-biased expression profiles. This basic information may open important avenues to control mosquito-borne infectious diseases.
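As a minimal sketch of the SSA bottleneck described above, here is Gillespie's direct method for a toy birth-death process; the rates and horizon are illustrative assumptions:

```python
# Minimal sketch of Gillespie's direct-method SSA for a birth-death process
# (0 -> X at rate k1, X -> 0 at rate k2*X). Toy rates assumed.
import numpy as np

def ssa_birth_death(k1=10.0, k2=0.5, x0=0, t_end=20.0, seed=0):
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    while True:
        a = np.array([k1, k2 * x])           # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)       # time to next reaction event
        if t > t_end:
            return x
        x += 1 if rng.random() * a0 < a[0] else -1   # pick the firing reaction

samples = [ssa_birth_death(seed=s) for s in range(500)]
print(np.mean(samples))    # stationary mean is k1/k2 = 20 for this process
```

Because every single reaction event is simulated, the cost grows with the total event count, which is what motivates the accelerated algorithms and parallel ensemble strategies described above.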
- Efficient Computational Tools for Variational Data Assimilation and Information Content Estimation. Singh, Kumaresh (Virginia Tech, 2010-08-10). The overall goals of this dissertation are to advance the field of chemical data assimilation, and to develop efficient computational tools that allow the atmospheric science community to benefit from state-of-the-art assimilation methodologies. Data assimilation is the procedure by which data from observations are combined with model predictions to obtain a more accurate representation of the state of the atmosphere. As models become more complex, determining the relationships between pollutants and their sources and sinks becomes computationally more challenging. The construction of an adjoint model (capable of efficiently computing sensitivities of a few model outputs with respect to many input parameters) is a difficult, labor intensive, and error prone task. This work develops adjoint systems for two of the most widely used chemical transport models: Harvard's GEOS-Chem global model and the Environmental Protection Agency's CMAQ regional air quality model. Both the GEOS-Chem and CMAQ adjoint models are now used by the atmospheric science community to perform sensitivity analysis and data assimilation studies. Despite the continuous increase in capabilities, models remain imperfect, and models alone cannot provide accurate long term forecasts. Observations of the atmospheric composition are now routinely taken from sondes, ground stations, aircraft, satellites, etc. This work develops three- and four-dimensional variational data assimilation capabilities for GEOS-Chem and CMAQ which allow one to estimate chemical states that best fit the observed reality. Most data assimilation systems to date use diagonal approximations of the background covariance matrix, which ignore error correlations and may lead to inaccurate estimates. This dissertation develops computationally efficient representations of covariance matrices that capture spatial error correlations in data assimilation. Not all observations used in data assimilation are of equal importance. Erroneous and redundant observations not only affect the quality of an estimate but also add unnecessary computational expense to the assimilation system. This work proposes techniques to quantify the information content of observations used in assimilation; information-theoretic metrics are used. The four-dimensional variational approach to data assimilation provides accurate estimates but requires an adjoint construction and uses considerable computational resources. This work studies versions of the four-dimensional variational method (quasi-4D-Var) that use approximate gradients and are less expensive to develop and run. Variational and Kalman filter approaches are both used in data assimilation, but their relative merits and disadvantages in the context of chemical data assimilation have not been assessed. This work provides a careful comparison on a chemical assimilation problem with real data sets. The assimilation experiments performed here demonstrate for the first time the benefit of using satellite data to improve estimates of tropospheric ozone.
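To fix ideas, a minimal 3D-Var sketch follows: it minimizes the standard background-plus-observation cost function with dense toy covariances; the matrices and values are illustrative assumptions, not the GEOS-Chem/CMAQ configuration:

```python
# Minimal sketch of a 3D-Var analysis: minimize
# J(x) = 0.5*(x-xb)' B^{-1} (x-xb) + 0.5*(Hx-y)' R^{-1} (Hx-y).
# Small dense toy matrices assumed for illustration only.
import numpy as np
from scipy.optimize import minimize

n, m = 4, 2
xb = np.zeros(n)                            # background (prior) state
B = 0.5 * np.eye(n)                         # background error covariance
H = np.array([[1.0, 0, 0, 0],
              [0, 0, 1.0, 0]])              # observe components 0 and 2
R = 0.1 * np.eye(m)                         # observation error covariance
y = np.array([1.0, -0.5])                   # observations

Bi, Ri = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    dx, dy = x - xb, H @ x - y
    return 0.5 * dx @ Bi @ dx + 0.5 * dy @ Ri @ dy

def grad(x):
    return Bi @ (x - xb) + H.T @ Ri @ (H @ x - y)

xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x
print(xa)   # analysis pulls observed components toward y, others stay at xb
```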
- Efficient formulation and implementation of ensemble based methods in data assimilation. Nino Ruiz, Elias David (Virginia Tech, 2016-01-11). Ensemble-based methods have gained widespread popularity in the field of data assimilation. An ensemble of model realizations encapsulates information about the error correlations driven by the physics and the dynamics of the numerical model. This information can be used to obtain improved estimates of the state of non-linear dynamical systems such as the atmosphere and/or the ocean. This work develops efficient ensemble-based methods for data assimilation. A major bottleneck in ensemble Kalman filter (EnKF) implementations is the solution of a linear system at each analysis step. To alleviate it, an EnKF implementation based on an iterative Sherman-Morrison formula is proposed. The rank deficiency of the ensemble covariance matrix is exploited in order to efficiently compute the analysis increments during the assimilation process. The computational effort of the proposed method is comparable to that of the best EnKF implementations found in the current literature. The stability of the new algorithm is theoretically proven based on the positiveness of the data error covariance matrix. In order to improve the background error covariance matrices in ensemble-based data assimilation, we explore the use of shrinkage covariance matrix estimators from ensembles. The resulting filter has attractive features in terms of both memory usage and computational complexity. Numerical results show that it performs better than traditional EnKF formulations. In geophysical applications the correlations between errors corresponding to distant model components decrease rapidly with the distance. We propose a new and efficient implementation of the EnKF based on a modified Cholesky decomposition for inverse covariance matrix estimation. This approach exploits the conditional independence of background errors between distant model components with regard to a predefined radius of influence. Consequently, sparse estimators of the inverse background error covariance matrix can be obtained. This implies huge memory savings during the assimilation process under realistic weather forecast scenarios. Rigorous error bounds for the resulting estimator in the context of data assimilation are theoretically proven. The conclusion is that the resulting estimator converges to the true inverse background error covariance matrix when the ensemble size is of the order of the logarithm of the number of model components. We explore high-performance implementations of the proposed EnKF algorithms. When the observational operator can be locally approximated for different regions of the domain, efficient parallel implementations of the EnKF formulations presented in this dissertation can be obtained. The parallel computation of the analysis increments is performed making use of domain decomposition. Local analysis increments are computed on (possibly) different processors. Once all local analysis increments have been computed, they are mapped back onto the global domain to recover the global analysis.
Tests performed with an atmospheric general circulation model at T-63 resolution, varying the number of processors from 96 to 2,048, reveal that the assimilation time can be decreased multiple fold for all the proposed EnKF formulations. Ensemble-based methods can also be used to reformulate strong-constraint four-dimensional variational data assimilation so as to avoid the construction of adjoint models, which can be complicated for operational models. We propose a trust-region approach based on ensembles in which the analysis increments are computed in the space of an ensemble of snapshots. The quality of the resulting increments in the ensemble space is compared against the gains in the full space. Decisions on whether to accept or reject solutions rely on trust-region updating formulas. Results based on an atmospheric general circulation model at T-42 resolution reveal that this methodology can improve the analysis accuracy.
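For context, the following is a minimal sketch of the textbook stochastic (perturbed-observations) EnKF analysis step whose linear-system bottleneck the iterative Sherman-Morrison and modified-Cholesky implementations address; all dimensions and values are illustrative assumptions, not the dissertation's optimized algorithms:

```python
# Minimal sketch of a stochastic (perturbed-observations) EnKF analysis step.
# Textbook formulation with small toy dimensions assumed.
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 6, 3, 20                       # state size, obs size, ensemble size
X = rng.standard_normal((n, N)) + 2.0    # forecast ensemble (columns = members)
H = np.zeros((m, n)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0
R = 0.25 * np.eye(m)
y = np.array([1.0, 0.0, -1.0])

Xm = X.mean(axis=1, keepdims=True)
A = (X - Xm) / np.sqrt(N - 1)            # scaled ensemble anomalies
HA = H @ A
K = A @ HA.T @ np.linalg.inv(HA @ HA.T + R)   # ensemble Kalman gain
Y = y[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed obs
Xa = X + K @ (Y - H @ X)                 # analysis ensemble
print(Xa.mean(axis=1))                   # analysis mean, pulled toward y
```

The explicit m-by-m inverse here is the step that scales poorly; the Sherman-Morrison and sparse inverse-covariance approaches above replace it with cheaper updates.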
- Efficient Time Stepping Methods and Sensitivity Analysis for Large Scale Systems of Differential Equations. Zhang, Hong (Virginia Tech, 2014-09-09). Many fields in science and engineering require large-scale numerical simulations of complex systems described by differential equations. These systems are typically multiphysics (they are driven by multiple interacting physical processes) and multiscale (the dynamics takes place on vastly different spatial and temporal scales). Numerical solution of such systems is highly challenging due to the dimension of the resulting discrete problem, and to the complexity that comes from incorporating multiple interacting components with different characteristics. The main contributions of this dissertation are the creation of new families of time integration methods for multiscale and multiphysics simulations, and the development of industrial-strength tools for sensitivity analysis. This work develops novel implicit-explicit (IMEX) general linear time integration methods for multiphysics and multiscale simulations typically involving both stiff and non-stiff components. In an IMEX approach, one uses an implicit scheme for the stiff components and an explicit scheme for the non-stiff components, such that the combined method has the desired stability and accuracy properties. Practical schemes with favorable properties, such as maximized stability, high efficiency, and no order reduction, are constructed and applied in extensive numerical experiments to validate the theoretical findings and to demonstrate their advantages. The approximate matrix factorization (AMF) technique exploits the structure of the Jacobian of the implicit parts, which may lead to further efficiency improvements for IMEX schemes. We have explored the application of AMF within some high order IMEX Runge-Kutta schemes in order to achieve high efficiency. Sensitivity analysis gives quantitative information about the changes in a dynamical model's outputs caused by small changes in the model inputs. This information is crucial for data assimilation, model-constrained optimization, inverse problems, and uncertainty quantification. We develop a high performance software package for sensitivity analysis in the context of stiff and nonstiff ordinary differential equations. Efficiency is demonstrated by direct comparisons against existing state-of-the-art software on a variety of test problems.
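A minimal sketch of the IMEX idea follows: a first-order IMEX Euler step for an additively split system, with the stiff linear part treated implicitly (one linear solve per step) and the non-stiff part explicitly. The toy splitting is an illustrative assumption; the dissertation's IMEX general linear methods are higher-order generalizations:

```python
# Minimal sketch of an IMEX (implicit-explicit) Euler step for a split system
# y' = f(y) + L y: non-stiff f treated explicitly, stiff linear L implicitly,
# so each step costs one linear solve. Toy splitting assumed.
import numpy as np

L = np.array([[-50.0, 0.0],
              [0.0, -60.0]])          # stiff linear part
def f(y):                             # non-stiff nonlinear part
    return np.array([np.sin(y[1]), np.cos(y[0])])

y = np.array([1.0, 0.0])
h, T = 0.01, 1.0
I = np.eye(2)
for _ in range(int(T / h)):
    rhs = y + h * f(y)                        # explicit contribution
    y = np.linalg.solve(I - h * L, rhs)       # implicit solve for stiff part
print(y)
```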
- Lightly-Implicit Methods for the Time Integration of Large Applications. Tranquilli, Paul J. (Virginia Tech, 2016-08-09). Many scientific and engineering applications require the solution of large systems of initial value problems arising from method of lines discretization of partial differential equations. For systems with widely varying time scales, or with complex physical dynamics, implicit time integration schemes are preferred due to their superior stability properties. However, for very large systems accurate solution of the implicit terms can be impractical. For this reason approximations are widely used in the implementation of such methods. The primary focus of this work is on the development of novel "lightly-implicit" time integration methodologies. These methods consider the time integration and the solution of the implicit terms as a single computational process. We propose several classes of lightly-implicit methods that can be constructed to allow for different, specific approximations. Rosenbrock-Krylov and exponential-Krylov methods are designed to permit low accuracy Krylov based approximations of the implicit terms, while maintaining full order of convergence. These methods are matrix-free, have low memory requirements, and are particularly well suited to parallel architectures. Linear stability analysis of K-methods is leveraged to construct implementation improvements for both Rosenbrock-Krylov and exponential-Krylov methods. Linearly-implicit Runge-Kutta-W methods are designed to permit arbitrary, time dependent, and stage varying approximations of the linear stiff dynamics of the initial value problem. The methods presented here are constructed with approximate matrix factorization in mind, though the framework is flexible and can be extended to many other approximations. The flexibility of lightly-implicit methods, and their ability to leverage computationally favorable approximations, makes them an ideal alternative to standard explicit and implicit schemes for large parallel applications.
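Two matrix-free ingredients behind the Krylov-based methods can be sketched compactly: Jacobian-vector products by finite differences, and an Arnoldi loop that builds the small Krylov basis in which the implicit terms are approximated. The toy right-hand side below is an illustrative assumption:

```python
# Minimal sketch of matrix-free ingredients used by Rosenbrock-Krylov-type
# methods: (1) Jacobian-vector products by finite differences, so the Jacobian
# is never formed, and (2) an Arnoldi loop building a small Krylov basis.
import numpy as np

def rhs(y):                                # toy stiff-ish right-hand side
    return np.array([-20.0 * y[0] + y[1],
                     y[0] - 30.0 * y[1] + y[2],
                     y[1] - 40.0 * y[2]])

def jac_vec(y, v, eps=1e-7):               # J(y) @ v without forming J
    return (rhs(y + eps * v) - rhs(y)) / eps

def arnoldi(y, b, m=2):                    # m-dimensional Krylov basis for J
    n = b.size
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = jac_vec(y, V[:, j])
        for i in range(j + 1):             # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

y = np.array([1.0, 0.5, -0.2])
V, H = arnoldi(y, rhs(y))
print(H[:-1, :])    # small projected Jacobian used inside the implicit stages
```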
- Modeling, Sensitivity Analysis, and Optimization of Hybrid, Constrained Mechanical Systems. Corner, Sebastien Marc (Virginia Tech, 2018-03-29). This dissertation provides a complete mathematical framework to compute the sensitivities with respect to system parameters for any second order hybrid Ordinary Differential Equation (ODE) system and for rank 1 and rank 3 Differential Algebraic Equation (DAE) systems. The hybrid system is characterized by discontinuities in the velocity state variables due to impulsive forces at the time of an event. At the time of the event, such a system may also exhibit a change in the equations of motion or in the kinematic constraints. The analytical methodology that solves the sensitivities for hybrid systems is structured based on jump conditions for both the velocity state variables and the sensitivity matrix. The proposed analytical approach is then benchmarked against a known numerical method. The mathematical framework is extended to compute sensitivities of the states of the model and of general cost functionals with respect to model parameters, for both unconstrained and constrained hybrid mechanical systems. This dissertation emphasizes the penalty formulation for modeling constrained mechanical systems, since this formalism has the advantage of incorporating the kinematic constraints inside the equations of motion: it eases the numerical integration, works well with redundant constraints, and avoids kinematic bifurcations. In addition, this dissertation provides a unified mathematical framework for performing the direct and the adjoint sensitivity analysis for general hybrid systems associated with general cost functions. The mathematical framework computes the jump sensitivity matrix of the direct sensitivities, which is found by computing the Jacobian of the jump conditions with respect to the sensitivities right before the event. The main idea is then to use the transpose of the jump sensitivity matrix to compute the jump conditions for the adjoint sensitivities. Finally, the methodology developed obtains the sensitivity matrix of cost functions with respect to parameters for general hybrid ODE systems. Such a matrix is a key result for design analysis, as it identifies the parameters that affect the given cost functions the most. These results can be applied to gradient-based algorithms, control optimization, implicit time integration methods, deep learning, etc.
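A minimal sketch of such a hybrid system follows: a bouncing ball integrated with scipy's event detection, with the velocity state jumping at each impact event. The restitution law and data are illustrative assumptions; propagating sensitivities through the jump is the part the dissertation's framework supplies:

```python
# Minimal sketch of a hybrid second-order ODE: a bouncing ball whose velocity
# jumps (v -> -e*v) at each impact event. Toy data assumed.
import numpy as np
from scipy.integrate import solve_ivp

g, e = 9.81, 0.8                      # gravity, coefficient of restitution

def dynamics(t, z):                   # z = [height, velocity]
    return [z[1], -g]

def impact(t, z):                     # event: ball hits the ground
    return z[0]
impact.terminal = True
impact.direction = -1

t, z, t_end = 0.0, [1.0, 0.0], 3.0
while t < t_end:
    sol = solve_ivp(dynamics, (t, t_end), z, events=impact, max_step=0.01)
    t = sol.t[-1]
    z = sol.y[:, -1].copy()
    if sol.status == 1:               # event fired: apply the velocity jump
        z[0], z[1] = 0.0, -e * z[1]
print(t, z)                           # state after a few impacts
```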
- Multimethods for the Efficient Solution of Multiscale Differential Equations. Roberts, Steven Byram (Virginia Tech, 2021-08-30). Mathematical models involving ordinary differential equations (ODEs) play a critical role in scientific and engineering applications. Advances in computing hardware and numerical methods have allowed these models to become larger and more sophisticated. Increasingly, problems can be described as multiphysics and multiscale as they combine several different physical processes with different characteristics. If just one part of an ODE is stiff, nonlinear, chaotic, or rapidly evolving, this can force an expensive method or a small timestep to be used. A method which applies a discretization and timestep uniformly across a multiphysics problem poorly utilizes computational resources and can be prohibitively expensive. The focus of this dissertation is on "multimethods", which apply different methods to different partitions of an ODE. Well-designed multimethods can drastically reduce the computation costs by matching methods to the individual characteristics of each partition while making minimal concessions to stability and accuracy. However, they are not without their limitations. High order methods are difficult to derive and may suffer from order reduction. Also, the stability of multimethods is difficult to characterize and analyze. The goals of this work are to develop new, practical multimethods and to address these issues. First, new implicit multirate Runge–Kutta methods are analyzed with a special focus on stability. This is extended into implicit multirate infinitesimal methods. We introduce approaches for constructing implicit-explicit methods based on Runge–Kutta and general linear methods. Finally, some unique applications of multimethods are considered, including using surrogate models to accelerate Runge–Kutta methods and eliminating order reduction on linear ODEs with time-dependent forcing.
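As a minimal sketch of the multirate idea, the following first-order scheme advances a slow component with one macro step while substepping a fast component (with the slow state frozen over the macro step). The partitioned toy system is an illustrative assumption; the dissertation's multirate Runge–Kutta and infinitesimal methods are the rigorous higher-order versions:

```python
# Minimal sketch of a multirate scheme: one slow macro step per M fast
# substeps, with frozen slow coupling. Toy partitioned system assumed.
import numpy as np

def f_slow(ys, yf):
    return -0.1 * ys + 0.05 * yf           # slow dynamics, weak coupling

def f_fast(ys, yf):
    return -50.0 * yf + np.sin(ys)         # fast, stiff-in-time dynamics

ys, yf = 1.0, 0.0
H, M, T = 0.01, 20, 1.0                    # macro step, substeps, horizon
for _ in range(int(T / H)):
    ys_frozen = ys                         # freeze slow state over macro step
    ys = ys + H * f_slow(ys, yf)           # one slow (macro) step
    h = H / M
    for _ in range(M):                     # M fast (micro) steps
        yf = yf + h * f_fast(ys_frozen, yf)
print(ys, yf)
```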
- Parametric Optimal Design of Uncertain Dynamical Systems. Hays, Joseph T. (Virginia Tech, 2011-08-25). This research effort develops a comprehensive computational framework to support the parametric optimal design of uncertain dynamical systems. Uncertainty comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance, because all real-life systems are affected by it; not accounting for uncertainty may result in poor robustness, sub-optimal performance, and higher manufacturing costs. Contemporary methods for the quantification of uncertainty in dynamical systems are computationally intensive, which has so far made a robust design optimization methodology prohibitive. Some existing algorithms address uncertainty in sensors and actuators during an optimal design; however, a comprehensive design framework that can treat all kinds of uncertainty with diverse distribution characteristics in a unified way is currently unavailable. The computational framework uses generalized polynomial chaos methodology to quantify the effects of various sources of uncertainty found in dynamical systems; a least-squares collocation method is used to solve the corresponding uncertain differential equations. This technique is significantly faster computationally than traditional sampling methods and makes the construction of a parametric optimal design framework for uncertain systems feasible. The novel framework allows uncertainty to be treated directly in the parametric optimal design process. Specifically, the following design problems are addressed: motion planning of fully-actuated and under-actuated systems; multi-objective robust design optimization; and optimal uncertainty apportionment concurrently with robust design optimization. The framework advances the state-of-the-art and enables engineers to produce more robust and optimally performing designs at an optimal manufacturing cost.
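A minimal sketch of the generalized polynomial chaos machinery follows: a scalar output of a standard Gaussian input is expanded in probabilists' Hermite polynomials with coefficients computed by Gauss-Hermite quadrature, and the mean and variance are read off the coefficients. The toy map is an illustrative assumption (the dissertation uses a least-squares collocation solver for uncertain differential equations):

```python
# Minimal sketch of generalized polynomial chaos: expand u = g(xi),
# xi ~ N(0,1), in probabilists' Hermite polynomials He_k, with coefficients
# from Gauss-Hermite quadrature. Toy scalar map assumed.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def g(xi):                    # model output as a function of the uncertain input
    return np.exp(0.3 * xi)

nodes, weights = hermegauss(20)
weights = weights / np.sqrt(2 * np.pi)   # normalize to the standard Gaussian

P = 5
coeffs = np.zeros(P + 1)
for k in range(P + 1):
    ek = np.zeros(k + 1); ek[k] = 1.0
    Hk = hermeval(nodes, ek)             # He_k at the quadrature nodes
    coeffs[k] = np.sum(weights * g(nodes) * Hk) / factorial(k)  # E[He_k^2] = k!

mean_pce = coeffs[0]
var_pce = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, P + 1))
print(mean_pce, var_pce)                 # exact mean is e^{0.045} ~ 1.046
samples = g(np.random.default_rng(3).standard_normal(200_000))
print(samples.mean(), samples.var())     # Monte Carlo check, far more samples
```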
- Polynomial Chaos Approaches to Parameter Estimation and Control Design for Mechanical Systems with Uncertain Parameters. Blanchard, Emmanuel (Virginia Tech, 2010-03-26). Mechanical systems operate under parametric and external excitation uncertainties. The polynomial chaos approach has been shown to be more efficient than Monte Carlo approaches for quantifying the effects of such uncertainties on the system response. This work uses the polynomial chaos framework to develop new methodologies for the simulation, parameter estimation, and control of mechanical systems with uncertainty. This study has led to new computational approaches for parameter estimation in nonlinear mechanical systems. The first approach is a polynomial-chaos based Bayesian approach in which maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. The second approach is based on the Extended Kalman Filter (EKF). The error covariances needed for the EKF approach are computed from polynomial chaos expansions, and the EKF is used to update the polynomial chaos representation of the uncertain states and the uncertain parameters. The advantages and drawbacks of each method have been investigated. This study has demonstrated the effectiveness of the polynomial chaos approach for control systems analysis. For control system design the study has focused on the LQR problem when dealing with parametric uncertainties. The LQR problem was written as an optimality problem using Lagrange multipliers in an extended form associated with the polynomial chaos framework. The solution to the H∞ problem as well as the H2 problem can be seen as extensions of the LQR problem. This method may therefore be a first step towards the development of computationally efficient numerical methods for H∞ design with parametric uncertainties. I would like to gratefully acknowledge the support provided for this work under NASA Grant NNL05AA18A.
- Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification. Zavar Moosavi, Azam Sadat (Virginia Tech, 2018-03-13). Simulations and modeling of large-scale systems are vital to understanding real world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein are inherently stochastic, and numerical approximations suffer from stability and accuracy issues. The second class of models are partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality, and have uncertainties due to missing dynamics that are not captured by the models. The third class are low-fidelity models, which are fast approximations of very expensive high-fidelity models; these reduced-order models have uncertainty due to the loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods, where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes.
- Sensitivity Analysis and Optimization of Multibody Systems. Zhu, Yitao (Virginia Tech, 2015-01-05). Multibody dynamics simulations are currently widely accepted as valuable means for the dynamic performance analysis of mechanical systems. The evolution of theoretical and computational aspects of the multibody dynamics discipline makes it conducive these days to applications beyond pure simulation. One very important such application is design optimization for multibody systems. Sensitivity analysis of multibody system dynamics, which is performed before optimization or in parallel, is essential for optimization. Current sensitivity approaches have limitations in terms of efficiently performing sensitivity analysis for complex systems with respect to multiple design parameters. Thus, we bring new contributions to the state-of-the-art in analytical sensitivity approaches in this study. A direct differentiation method is developed for multibody dynamic models that employ Maggi's formulation. An adjoint variable method is developed for explicit and implicit first order Maggi's formulations, the second order Maggi's formulation, and first and second order penalty formulations. The resulting sensitivities are employed to perform optimization of different multibody system case studies. The collection of benchmark problems includes a five-bar mechanism, a full vehicle model, and a passive dynamic robot. The five-bar mechanism is used to test and validate the sensitivity approaches derived in this work by comparing them with other sensitivity approaches. The full vehicle system is used to demonstrate the capability of the adjoint variable method based on the penalty formulation to perform sensitivity analysis and optimization for large and complex multibody systems with respect to multiple design parameters with high efficiency. In addition, a new multibody dynamics software library, MBSVT (Multibody Systems at Virginia Tech), is developed in Fortran 2003, with forward kinematics and dynamics, sensitivity analysis, and optimization capabilities. Several different contact and friction models, which can be used to model point contact and surface contact, are developed and included in MBSVT. Finally, this study employs reference point coordinates and the penalty formulation to perform dynamic analysis for the passive dynamic robot, simplifying the modeling stage and making the robotic system more stable. The passive dynamic robot is also used to test and validate all the point contact and surface contact models developed in MBSVT.
- Time Integration Methods for Large-scale Scientific Simulations. Glandon Jr, Steven Ross (Virginia Tech, 2020-06-26). The solution of initial value problems is a fundamental component of many scientific simulations of physical phenomena. In many cases these initial value problems arise from a method of lines approach to solving partial differential equations, resulting in very large systems of equations that require the use of numerical time integration methods to solve. Many problems of scientific interest exhibit stiff behavior for which implicit methods are favorable; however, standard implicit methods are computationally expensive. They require the solution of one or more large nonlinear systems at each timestep, which can be impractical to solve exactly and can behave poorly when solved approximately. The recently introduced "lightly-implicit" K-methods seek to avoid this issue by directly coupling the time integration methods with a Krylov based approximation of linear system solutions, treating a portion of the problem implicitly and the remainder explicitly. This work pursues two primary objectives: evaluation of these K-methods in large-scale parallel applications, and development of new linearly implicit methods for contexts where improvements can be made. To this end, Rosenbrock-Krylov methods, the first K-methods, are examined in a scalability study, and two new families of time integration methods are introduced: biorthogonal Rosenbrock-Krylov methods and linearly implicit multistep methods. For the scalability evaluation of Rosenbrock-Krylov methods, two parallel contexts are considered: a GPU accelerated model and a distributed MPI parallel model. In both cases, the most significant performance bottleneck is the need for many vector dot products, which require costly parallel reduce operations. Biorthogonal Rosenbrock-Krylov methods are an extension of the original Rosenbrock-Krylov methods which replace the Arnoldi iteration used to produce the Krylov approximation with Lanczos biorthogonalization, which requires fewer vector dot products, leading to lower overall cost for stiff problems. Linearly implicit multistep methods are a new family of implicit multistep methods that require only a single linear solve per timestep; the family includes W- and K-method variants, which admit arbitrary or Krylov based approximations of the problem Jacobian while maintaining the order of accuracy. This property allows for a wide range of implementation optimizations. Finally, all the new methods proposed herein are implemented efficiently in the MATLODE package, a Matlab ODE solver and sensitivity analysis toolbox, to make them available to the community at large.
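As a minimal sketch of the single-linear-solve-per-step pattern shared by these methods, here is a linearly implicit (Rosenbrock-Euler) integrator for a stiff toy problem; the exact dense Jacobian used here is exactly what the W- and K-method variants are allowed to replace with approximations:

```python
# Minimal sketch of a linearly implicit (Rosenbrock-Euler) step: one linear
# solve with (I - h*J) per step, no Newton iteration. Toy problem and exact
# dense Jacobian assumed for clarity.
import numpy as np

def rhs(y):
    return np.array([-100.0 * y[0] + 99.0 * y[1], -y[1]])

def jac(y):                               # exact here; W/K-methods allow
    return np.array([[-100.0, 99.0],      # arbitrary or Krylov approximations
                     [0.0, -1.0]])

y = np.array([2.0, 1.0])
h, T = 0.05, 1.0
I = np.eye(2)
for _ in range(int(T / h)):
    k = np.linalg.solve(I - h * jac(y), rhs(y))   # single linear solve
    y = y + h * k                                 # linearly implicit Euler step
print(y)     # exact solution: y2 = e^{-t}, y1 = e^{-t} + e^{-100 t}
```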