Browsing by Author "de Sturler, Eric"
Now showing 1 - 20 of 43
- Adjoint based solution and uncertainty quantification techniques for variational inverse problems
  Hebbur Venkata Subba Rao, Vishwas (Virginia Tech, 2015-09-25)
  Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive since it involves running the simulation computer model (the forward model) and the associated adjoint model multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with model errors. Similarly, it is not possible to measure the physical quantities exactly and hence the measurements are associated with data errors. The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems. Variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the parareal approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps - forward solutions, gradients, and Hessian-vector products - involve only ideally parallel computations and therefore are highly scalable. This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to the variational inverse problems. The computational algorithm makes use of first and second order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecast model. The presence of outliers in measurement data is common, and it negatively impacts the solution to variational inverse problems. The traditional approach, where the inverse problem is formulated as a minimization problem in the L₂ norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the L₁ and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three- and four-dimensional variational data assimilation using the L₁ and Huber norms. The power of this approach is demonstrated by solving data assimilation problems where measurements contain outliers.
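  As a schematic illustration of the robust-norm idea above (a sketch only; the thesis's exact operators, observation-error weighting, and notation may differ), the usual L₂ data-mismatch term of a 4D-Var cost function is replaced by a Huber penalty ρ:

  ```latex
  % Schematic 4D-Var cost with a robust (Huber) data-mismatch term.
  % x_0: initial state, x_b: background, B: background-error covariance,
  % y_k: observations at time t_k, H_k: observation operator, M_{0->k}: model solution operator.
  J(x_0) = \tfrac{1}{2}(x_0 - x_b)^T B^{-1} (x_0 - x_b)
         + \sum_{k}\sum_{i} \rho\!\left( \big[ y_k - H_k(M_{0\to k}(x_0)) \big]_i \right),
  \qquad
  \rho(r) =
  \begin{cases}
    \tfrac{1}{2} r^2,                            & |r| \le \delta, \\
    \delta\left(|r| - \tfrac{1}{2}\delta\right), & |r| > \delta.
  \end{cases}
  ```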
- Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems
  Alexe, Mihai (Virginia Tech, 2011-03-18)
  Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters for a given model are estimated based on available measurement information. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research in inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that make exclusive use of uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solutions, in the context of time-dependent, adaptive-mesh, adaptive-step models. The discrete framework addresses all the necessary ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms.
- Advanced Time Integration Methods with Applications to Simulation, Inverse Problems, and Uncertainty Quantification
  Narayanamurthi, Mahesh (Virginia Tech, 2020-01-29)
  Simulation and optimization of complex physical systems are an integral part of modern science and engineering. The systems of interest in many fields have a multiphysics nature, with complex interactions between physical, chemical, and in some cases even biological processes. This dissertation seeks to advance forward and adjoint numerical time integration methodologies for the simulation and optimization of semi-discretized multiphysics partial differential equations (PDEs), and to estimate and control numerical errors via a goal-oriented a posteriori error framework. We extend exponential propagation iterative methods of Runge-Kutta type (EPIRK) by [Tokman, JCP 2011] to build EPIRK-W and EPIRK-K time integration methods that admit approximate Jacobians in the matrix-exponential-like operations. EPIRK-W methods extend the W-method theory of [Steihaug and Wolfbrandt, Math. Comp. 1979] to preserve their order of accuracy under arbitrary Jacobian approximations. EPIRK-K methods extend the theory of K-methods by [Tranquilli and Sandu, JCP 2014] to EPIRK and use a Krylov-subspace-based approximation of Jacobians to gain computational efficiency. New families of partitioned exponential methods for multiphysics problems are developed using the classical order condition theory via particular variants of T-trees and corresponding B-series. The new partitioned methods are found to perform better than traditional unpartitioned exponential methods for some problems in mild-to-medium stiffness regimes. Subsequently, partitioned stiff exponential Runge-Kutta (PEXPRK) methods -- which extend stiffly accurate exponential Runge-Kutta methods from [Hochbruck and Ostermann, SINUM 2005] to a multiphysics context -- are constructed and analyzed. PEXPRK methods show full convergence under various splittings of a diffusion-reaction system. We address the problem of estimating numerical errors in a multiphysics discretization by developing a goal-oriented a posteriori error framework. Discrete adjoints of GARK methods are derived from their forward formulation [Sandu and Guenther, SINUM 2015]. Based on these, we build a posteriori estimators for both spatial and temporal discretization errors. We validate the estimators on a number of reaction-diffusion systems and use them to simultaneously refine spatial and temporal grids.
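  For orientation only, the following is a minimal exponential-Euler step, the simplest member of the exponential-integrator family the abstract builds on. It is an illustrative sketch, not the EPIRK-W/K schemes themselves; the diffusion-reaction test problem, sizes, and step size are made up.

  ```python
  # Illustrative exponential-Euler step for a semilinear ODE u' = A u + g(u):
  # u_new = e^{hA} u + h * phi_1(hA) g(u). Not the dissertation's EPIRK-W/K methods;
  # it only shows the matrix-exponential building block those methods generalize.
  import numpy as np
  from scipy.linalg import expm

  def exp_euler_step(A, g, u, h):
      n = u.size
      # Augmented-matrix trick: expm([[hA, h g(u)], [0, 0]]) holds e^{hA} in the
      # top-left block and phi_1(hA) * (h g(u)) in the last column.
      M = np.zeros((n + 1, n + 1))
      M[:n, :n] = h * A
      M[:n, n] = h * g(u)
      E = expm(M)
      return E[:n, :n] @ u + E[:n, n]

  # Hypothetical 1D diffusion-reaction toy problem.
  n = 50
  A = ((np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
        + np.diag(np.ones(n - 1), -1)) * (n + 1) ** 2)     # 1D Laplacian
  g = lambda u: u - u ** 3                                  # mild nonlinearity
  u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])        # initial condition
  for _ in range(10):
      u = exp_euler_step(A, g, u, h=1e-3)
  ```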
- Analysis of GMRES for Low-Rank and Small-Norm Perturbations of the Identity Matrix
  Carr, Arielle K.; de Sturler, Eric; Embree, Mark P. (Wiley, 2023-03-24)
- Analysis of the BiCG Method
  Renardy, Marissa (Virginia Tech, 2013-05-31)
  The Biconjugate Gradient (BiCG) method is an iterative Krylov subspace method that utilizes a 3-term recurrence. BiCG is the basis of several very popular methods, such as BiCGStab. The short recurrence makes BiCG preferable to other Krylov methods because of decreased memory usage and CPU time. However, BiCG does not satisfy any optimality conditions, and it has been shown that for up to n/2 - 1 iterations, a special choice of the left starting vector can cause BiCG to follow any 3-term recurrence. Despite this apparent sensitivity, BiCG often converges well in practice. This paper seeks to explain why BiCG converges so well, and what conditions can cause BiCG to behave poorly. We use tools such as the singular value decomposition and eigenvalue decomposition to establish bounds on the residuals of BiCG and make links between BiCG and optimal Krylov methods.
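  The coupled 3-term recurrences the abstract refers to look as follows in a minimal textbook sketch (generic illustration, not code from the thesis; no breakdown safeguards or preconditioning, and the test matrix is hypothetical):

  ```python
  # Textbook BiCG: two coupled short recurrences, one with A and one with A^T.
  import numpy as np

  def bicg(A, b, x0=None, tol=1e-8, maxiter=200):
      x = np.zeros(b.size) if x0 is None else x0.copy()
      r = b - A @ x            # residual
      rs = r.copy()            # shadow residual (choice of left starting vector)
      p, ps = r.copy(), rs.copy()
      rho = rs @ r
      for _ in range(maxiter):
          Ap = A @ p
          alpha = rho / (ps @ Ap)
          x += alpha * p
          r -= alpha * Ap
          rs -= alpha * (A.T @ ps)
          if np.linalg.norm(r) < tol * np.linalg.norm(b):
              break
          rho_new = rs @ r
          beta = rho_new / rho
          p = r + beta * p
          ps = rs + beta * ps
          rho = rho_new
      return x

  # Hypothetical nonsymmetric test problem.
  rng = np.random.default_rng(0)
  A = np.eye(100) + 0.1 * rng.standard_normal((100, 100))
  b = rng.standard_normal(100)
  x = bicg(A, b)
  print(np.linalg.norm(A @ x - b))
  ```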
- A Computational Framework for Assessing and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation
  Cioaca, Alexandru (Virginia Tech, 2013-09-04)
  A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning of redundant data, and identification of the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks, while preserving their ability to capture the essential features of the system under consideration.
- Computing Reduced Order Models via Inner-Outer Krylov Recycling in Diffuse Optical Tomography
  O'Connell, Meghan; Kilmer, Misha E.; de Sturler, Eric; Gugercin, Serkan (SIAM Publications, 2017-01-01)
  In nonlinear imaging problems whose forward model is described by a partial differential equation (PDE), the main computational bottleneck in solving the inverse problem is the need to solve many large-scale discretized PDEs at each step of the optimization process. In the context of absorption imaging in diffuse optical tomography, one recently proposed approach to addressing this bottleneck (de Sturler et al., 2015) reformulates the forward problem as a differential algebraic system and then employs model order reduction (MOR). However, the construction of the reduced model requires the solution of several full order problems (i.e., the full discretized PDE for multiple right-hand sides) to generate a candidate global basis. This step is then followed by a rank-revealing factorization of the matrix containing the candidate basis in order to compress the basis to a size suitable for constructing the reduced transfer function. The present paper addresses the costs associated with the global basis approximation in two ways. First, we use the structure of the matrix to rewrite the full order transfer function, and corresponding derivatives, such that the full order systems to be solved are symmetric (positive definite in the zero frequency case). Then we apply MOR to the new formulation of the problem. Second, we give an approach to computing the global basis approximation dynamically as the full order systems are solved. In this phase, only the incrementally new, relevant information is added to the existing global basis, and redundant information is not computed. This new approach is achieved by an inner-outer Krylov recycling approach which has potential use in other applications as well. We show the value of the new approach to approximate global basis computation on two DOT absorption image reconstruction problems.
- CPU/GPU Code Acceleration on Heterogeneous Systems and Code Verification for CFD Applications
  Xue, Weicheng (Virginia Tech, 2021-01-25)
  Computational Fluid Dynamics (CFD) applications usually involve intensive computations, which can be accelerated through the use of accelerators, especially GPUs given their common use in the scientific computing community. In addition to code acceleration, it is important to ensure that the code and algorithm are implemented numerically correctly, a process called code verification. This dissertation focuses on accelerating research CFD codes on multi-CPUs/GPUs using MPI and OpenACC, as well as on code verification for turbulence model implementations using the method of manufactured solutions and code-to-code comparisons. First, a variety of performance optimizations, both agnostic and specific to applications and platforms, are developed in order to 1) improve the heterogeneous CPU/GPU compute utilization; 2) improve the memory bandwidth to the main memory; 3) reduce communication overhead between the CPU host and the GPU accelerator; and 4) reduce the tedious manual tuning work for GPU scheduling. Both finite difference and finite volume CFD codes and multiple platforms with different architectures are used to evaluate the performance optimizations. A maximum speedup of over 70 is achieved on 16 V100 GPUs over 16 Xeon E5-2680v4 CPUs for multi-block test cases. In addition, systematic studies of code verification are performed for a second-order accurate finite volume research CFD code. Cross-term sinusoidal manufactured solutions are applied to verify the Spalart-Allmaras and k-omega SST model implementations, both in 2D and 3D. This dissertation shows that the spatial and temporal schemes are implemented numerically correctly.
- Development of a Novel Performance Index and a Performance Prediction Model for Metallic Drinking Water Pipelines
  St. Clair, Alison Marie (Virginia Tech, 2013-04-23)
  Previous authors have developed many different types of water pipe condition and failure models using the various methodologies available. In contrast, current utilities are struggling to maintain their water infrastructure systems due to the lack of effective prediction tools at hand. The gap between the methodologies available in academic research and the tools available to current water utilities needs to be addressed. This work presents a fuzzy inference prediction model used to forecast the performance rating of individual drinking water pipeline sections (node to node), which utilities can easily apply to their drinking water infrastructure systems. Prior to the development of the prediction model, a thorough literature and current-practice review is completed detailing and summarizing all the available mathematical models. Following this, an infrastructure overview is presented detailing the various pipe materials, lifecycles, and failure modes and mechanisms. A data structure is also detailed which lists all parameters that affect the condition and/or performance of a pipeline. All of these tools are successfully used to develop a fuzzy inference performance model. The fuzzy inference performance model is considered novel in that it considers close to 30 pipe parameters. Moreover, the performance model is applied using the Western Virginia Water Authority (WVWA) and the Washington Suburban Sanitary Commission (WSSC) databases to evaluate and verify the predicted results. Lab testing of several pipe samples is also used to evaluate the model. The testing consists of a ring bearing test, which is used to calculate the rupture modulus of the pipe. Comparing the original and the current rupture modulus determines the remaining strength of the pipe, which can then be used to assess the performance results predicted by the fuzzy inference model. Further, a framework is set forth which utilizes the model's predicted performance ratings to develop deterioration curves that can be used as a tool to forecast and plan future inspection, repair, rehabilitation, and replacement of water pipelines. The deterioration model is made up of a Markov chain approach coupled with a non-optimization technique.
- Development of Protocols and Methods for Predicting the Remaining Economic Life of Wastewater Pipe Infrastructure Assets
  Uslu, Berk (Virginia Tech, 2017-12-07)
  Performance prediction modeling is a crucial step in assessing the remaining service life of pipelines. Sound infrastructure deterioration models are essential for accurately predicting future performance and, in turn, are critical tools for efficient maintenance, repair, and rehabilitation decision making. The objective of this research is to develop a gravity and force main pipe performance deterioration model for predicting the remaining economic life of wastewater pipes for infrastructure asset management. For condition assessment of gravity pipes, the defect indices currently in practice use CCTV inspection and a defect coding scale to assess the internal condition of the wastewater pipes. Unfortunately, in practice, the distress indices are unable to capture all the deterioration mechanisms and distresses on pipes to provide a comprehensive and accurate evaluation of pipe performance. Force main pipes present a particular challenge in performance prediction modeling. The consequence of failure can be higher for force mains relative to gravity pipes, which increases the risk associated with these assets. However, unlike gravity pipes, there are no industry standards for inspection and condition assessment of force mains. Furthermore, accessibility issues for inspection add to this challenge. Under the Water Environment & Reuse Foundation (WE&RF)'s Strategic Asset Management (SAM) Challenge, a three-phase development of this performance prediction model was planned. Only Phases 1 and 2 were completed for gravity pipes under the SAM Challenge. Currently, 37 nationally distributed utilities have provided data and support for this research. Data standards are developed to capture the physical, operational, structural, environmental, financial, and other factors affecting performance. These data standards were reviewed by various participating utilities and service providers for completeness and accuracy. The performance of the gravity and force main pipes is assessed by incorporating the single and combined effects of these parameters on performance. The resulting indices assess performance regarding integrity, corrosion, surface wear, joints, lining, blockage, I&I, root intrusion, and capacity, and they are used for the long-term prediction of performance. However, due to limitations in historical performance data, an advanced integrated method for probabilistic performance modeling has been developed to construct workable transition probabilities for predicting long-term performance. A selection process within this method chooses a suitable prediction model for a given situation in terms of available historical data. Prediction models using time- and state-dependent data were developed for reliable long-term performance prediction. The reliability of performance assessments and long-term predictions is tested with the developed verification and validation (V&V) framework. The V&V framework incorporates piloting the performance index and prediction models with artificial, field, and forensic data collected from participating utilities. The deterioration model and the supporting data were integrated with PIPEiD (Pipeline Infrastructure Database) for effective dissemination and outreach.
- Efficient methods for computing observation impact in 4D-Var data assimilation
  Cioaca, Alexandru; Sandu, Adrian; de Sturler, Eric (Springer, 2013-12-01)
  This paper presents a practical computational approach to quantify the effect of individual observations in estimating the state of a system. Such an analysis can be used for pruning redundant measurements and for designing future sensor networks. The mathematical approach is based on computing the sensitivity of the reanalysis (unconstrained optimization solution) with respect to the data. The computational cost is dominated by the solution of a linear system whose matrix is the Hessian of the cost function and is only available in operator form. The right-hand side is the gradient of a scalar cost function that quantifies the forecast error of the numerical model. The use of adjoint models to obtain the necessary first- and second-order derivatives is discussed. We study various strategies to accelerate the computation, including matrix-free iterative solvers, preconditioners, and an in-house multigrid solver. Experiments are conducted on both a small-size shallow-water equations model and a large-scale numerical weather prediction model in order to illustrate the capabilities of the new methodology.
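  In schematic form (a sketch of the standard 4D-Var observation-sensitivity calculation the abstract describes; the paper's exact operators and notation may differ), the dominant cost is the Hessian solve:

  ```latex
  % Schematic observation-impact computation in 4D-Var.
  % x_a: analysis (reanalysis), e(x_a): scalar forecast-error functional,
  % H: linearized observation operator, R: observation-error covariance, y: observations.
  \nabla^2_{x} J(x_a)\,\mu = \nabla_{x} e(x_a)   % large linear system; Hessian available only as an operator
  \frac{\partial e}{\partial y} = R^{-1} H \mu   % sensitivity of the forecast error to the observations
  ```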
- Improved Scaling for Quantum Monte Carlo on Insulators
  Ahuja, Kapil; Clark, Bryan K.; de Sturler, Eric; Ceperley, David M.; Kim, Jeongnim (SIAM Publications, 2011)
  Quantum Monte Carlo (QMC) methods are often used to calculate properties of many-body quantum systems. The main cost of many QMC methods, for example the variational Monte Carlo (VMC) method, is in constructing a sequence of Slater matrices and computing the ratios of determinants for successive Slater matrices. Recent work has improved the scaling of constructing Slater matrices for insulators, so that the cost of constructing Slater matrices in these systems is now linear in the number of particles, whereas computing determinant ratios remains cubic in the number of particles. With the long-term aim of simulating much larger systems, we improve the scaling of computing the determinant ratios in the VMC method for simulating insulators by using preconditioned iterative solvers. The main contribution of this paper is the development of a method to efficiently compute for the Slater matrices a sequence of preconditioners that make the iterative solver converge rapidly. This involves cheap preconditioner updates, an effective reordering strategy, and a cheap method to monitor instability of incomplete LU decomposition with threshold and pivoting (ILUTP) preconditioners. Using the resulting preconditioned iterative solvers to compute determinant ratios of consecutive Slater matrices reduces the scaling of QMC algorithms from O(n³) per sweep to roughly O(n²), where n is the number of particles and a sweep is a sequence of n steps, each attempting to move a distinct particle. We demonstrate experimentally that we can achieve the improved scaling without increasing statistical errors. Our results show that preconditioned iterative solvers can dramatically reduce the cost of VMC for large(r) systems.
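  To illustrate the determinant-ratio computation that dominates the cost (a generic sketch under simplifying assumptions, not the paper's preconditioner-updating, reordering, or monitoring scheme): when one particle moves, one row of the Slater matrix changes, and det(A_new)/det(A) reduces to an inner product with one column of A⁻¹, obtainable from a single preconditioned iterative solve. The "Slater matrix" below is a random stand-in.

  ```python
  # Determinant ratio for a single-row update of a Slater matrix via one
  # ILU-preconditioned GMRES solve (SciPy's spilu uses threshold + pivoting).
  import numpy as np
  import scipy.sparse as sp
  from scipy.sparse.linalg import gmres, spilu, LinearOperator

  n, k = 200, 7                                  # particles, index of moved particle (hypothetical)
  rng = np.random.default_rng(1)
  A = sp.csc_matrix(np.eye(n) + 0.05 * rng.standard_normal((n, n)))   # stand-in Slater matrix
  v = rng.standard_normal(n)                     # proposed new row k after the particle move

  ilu = spilu(A, drop_tol=1e-4, fill_factor=10)  # ILUTP-style preconditioner
  M = LinearOperator((n, n), matvec=ilu.solve, dtype=float)

  e_k = np.zeros(n); e_k[k] = 1.0
  z, info = gmres(A, e_k, M=M)                   # z ~= A^{-1} e_k
  ratio = v @ z                                  # det(A_new) / det(A) for the row-k update
  print(info, ratio)
  ```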
- Inexact Solves in Interpolatory Model Reduction
  Wyatt, Sarah A. (Virginia Tech, 2009-05-04)
  Dynamical systems are mathematical models characterized by a set of differential or difference equations. Due to the increasing demand for more accuracy, the number of equations involved may reach the order of thousands and even millions. With so many equations, it often becomes computationally cumbersome to work with these large-scale dynamical systems. Model reduction aims to replace the original system with a reduced system of significantly smaller dimension which will still describe the important dynamics of the large-scale model. Interpolation is one method used to obtain the reduced order model. This requires that the reduced order model interpolates the full order model at selected interpolation points. Reduced order models are obtained through the Krylov reduction process, which involves solving a sequence of linear systems. The Iterative Rational Krylov Algorithm (IRKA) iterates this Krylov reduction process to obtain an optimal Η₂ reduced model. Especially in the large-scale setting, these linear systems often require inexact solves. The aim of this thesis is to investigate the impact of inexact solves on interpolatory model reduction. We considered preconditioning the linear systems, varying the stopping tolerances, employing GMRES and BiCG as the inexact solvers, and using different initial shift selections. For just one step of Krylov reduction, we verified theoretical properties of the interpolation error. Also, we found a linear improvement in the subspace angles between the inexact and exact subspaces provided that a good shift selection was used. For a poor shift selection, these angles often remained of the same order regardless of how accurately the linear systems were solved. These patterns were reflected in the Η₂ and Η∞ errors between the inexact and exact reduced models, since these errors improved linearly with a good shift selection and were typically of the same order with a poor shift. We found that the shift selection also influenced the overall model reduction error between the full model and the inexact reduced model, as these error norms were often several orders larger when a poor shift selection was used. For a given shift selection, the overall model reduction error typically remained of the same order for tolerances smaller than 1 × 10⁻³, which suggests that larger tolerances for the inexact solver may be used without necessarily augmenting the model reduction error. With preconditioned linear systems as well as BiCG, we found smaller errors between the inexact and exact models while the order of the overall model reduction error remained the same. With IRKA, we observed similar patterns as with just one step of Krylov reduction. However, we also found additional benefits associated with using an initial guess in the inexact solve and with varying the tolerance of the inexact solve.
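  One step of the interpolatory (rational Krylov) projection described above can be sketched as follows. This is a generic illustration for a state-space system (E, A, B, C) with made-up shifts and dense direct solves; in the large-scale setting these shifted solves are exactly the linear systems that would be done inexactly with GMRES or BiCG.

  ```python
  # Interpolatory projection for a SISO system E x' = A x + B u, y = C x.
  # Columns of V and W enforce interpolation of the transfer function at the shifts.
  import numpy as np

  def interpolatory_rom(E, A, B, C, shifts):
      V = np.column_stack([np.linalg.solve(s * E - A, B).ravel() for s in shifts])
      W = np.column_stack([np.linalg.solve((s * E - A).T, C.T).ravel() for s in shifts])
      V, _ = np.linalg.qr(V)                 # orthonormalize bases (spans are preserved)
      W, _ = np.linalg.qr(W)
      Er, Ar = W.T @ E @ V, W.T @ A @ V
      Br, Cr = W.T @ B, C @ V
      return Er, Ar, Br, Cr

  # Hypothetical stable test system and interpolation points.
  n = 300
  rng = np.random.default_rng(2)
  A = -np.diag(np.linspace(1.0, 100.0, n)) + 0.01 * rng.standard_normal((n, n))
  E, B, C = np.eye(n), rng.standard_normal((n, 1)), rng.standard_normal((1, n))
  Er, Ar, Br, Cr = interpolatory_rom(E, A, B, C, shifts=[1.0, 5.0, 25.0])

  # Check interpolation: H(s) = C (sE - A)^{-1} B matches the reduced model at a shift.
  s = 5.0
  H  = (C @ np.linalg.solve(s * E - A, B)).item()
  Hr = (Cr @ np.linalg.solve(s * Er - Ar, Br)).item()
  print(abs(H - Hr))
  ```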
- Issues in Interpolatory Model Reduction: Inexact Solves, Second-order Systems and DAEs
  Wyatt, Sarah Alice (Virginia Tech, 2012-05-01)
  Dynamical systems are mathematical models characterized by a set of differential or difference equations. Model reduction aims to replace the original system with a reduced system of significantly smaller dimension that still describes the important dynamics of the large-scale model. Interpolatory model reduction methods define a reduced model that interpolates the full model at selected interpolation points. The reduced model may be obtained through a Krylov reduction process or by using the Iterative Rational Krylov Algorithm (IRKA), which iterates this Krylov reduction process to obtain an optimal ℋ₂ reduced model. This dissertation studies interpolatory model reduction for first-order descriptor systems, second-order systems, and DAEs. The main computational cost of interpolatory model reduction is the associated linear systems. Especially in the large-scale setting, inexact solves become desirable if not necessary. With the introduction of inexact solutions, however, exact interpolation no longer holds. While the effect of this loss of interpolation has previously been studied, we extend the discussion to the preconditioned case. Then we utilize IRKA's convergence behavior to develop preconditioner updates. We also consider the interpolatory framework for DAEs and second-order systems. While interpolation results still hold, the singularity associated with the DAE often results in unbounded model reduction errors. Therefore, we present a theorem that guarantees interpolation and a bounded model reduction error. Since this theorem relies on expensive projectors, we demonstrate how interpolation can be achieved without explicitly computing the projectors for index-1 and Hessenberg index-2 DAEs. Finally, we study reduction techniques for second-order systems. Many of the existing methods for second-order systems rely on the model's associated first-order system, which results in computations of a 2𝑛 system. As a result, we present an IRKA framework for the reduction of second-order systems that does not involve the associated 2𝑛 system. The resulting algorithm is shown to be effective for several dynamical systems.
- Krylov subspace recycling for evolving structures
  Bolten, Matthias; de Sturler, Eric; Hahn, Camilla; Parks, Michael Lawrence (Elsevier, 2022-03-01)
  Krylov subspace recycling is a powerful tool when solving a long series of large, sparse linear systems that change only slowly over time. In PDE constrained shape optimization, these series appear naturally, as typically hundreds or thousands of optimization steps are needed with only small changes in the geometry. In this setting, however, applying Krylov subspace recycling can be a difficult task. As the geometry evolves, in general, so does the finite element mesh defined on or representing this geometry, including the numbers of nodes and elements and element connectivity. This is especially the case if re-meshing techniques are used. As a result, the number of algebraic degrees of freedom in the system changes, and in general the linear system matrices resulting from the finite element discretization change size from one optimization step to the next. Changes in the mesh connectivity also lead to structural changes in the matrices. In the case of re-meshing, even if the geometry changes only a little, the corresponding mesh might differ substantially from the previous one. Obviously, this prevents any straightforward mapping of the approximate invariant subspace of the linear system matrix (the focus of recycling in this paper) from one optimization step to the next; similar problems arise for other selected subspaces. In this paper, we present an algorithm to map an approximate invariant subspace of the linear system matrix for the previous optimization step to an approximate invariant subspace of the linear system matrix for the current optimization step, for general meshes. This is achieved by exploiting the map from coefficient vectors to finite element functions on the mesh, combined with interpolation or approximation of functions on the finite element mesh. We demonstrate the effectiveness of our approach numerically with several proof of concept studies for a specific meshing technique.
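  The coefficient-to-function mapping idea can be sketched as below. This is a simplified illustration only: generic scatter-point interpolation in 2D stands in for true finite element evaluation, and the node coordinates and recycled basis are hypothetical.

  ```python
  # Map recycled (approximate invariant-subspace) basis vectors from an old mesh to a
  # new mesh by treating each coefficient vector as a nodal function, evaluating it at
  # the new mesh's nodes, and re-orthonormalizing.
  import numpy as np
  from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

  def map_recycled_basis(old_nodes, new_nodes, U_old):
      """old_nodes: (n_old, 2), new_nodes: (n_new, 2), U_old: (n_old, k)."""
      cols = []
      for j in range(U_old.shape[1]):
          vals = LinearNDInterpolator(old_nodes, U_old[:, j])(new_nodes)
          mask = np.isnan(vals)              # points outside the old convex hull
          if mask.any():
              vals[mask] = NearestNDInterpolator(old_nodes, U_old[:, j])(new_nodes[mask])
          cols.append(vals)
      Q, _ = np.linalg.qr(np.column_stack(cols))   # basis for the mapped subspace
      return Q

  # Hypothetical meshes (random point clouds) and a random "recycled" basis.
  rng = np.random.default_rng(3)
  old_nodes, new_nodes = rng.random((500, 2)), rng.random((650, 2))
  Q = map_recycled_basis(old_nodes, new_nodes, rng.standard_normal((500, 6)))
  print(Q.shape)
  ```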
- Lightly-Implicit Methods for the Time Integration of Large Applications
  Tranquilli, Paul J. (Virginia Tech, 2016-08-09)
  Many scientific and engineering applications require the solution of large systems of initial value problems arising from method of lines discretization of partial differential equations. For systems with widely varying time scales, or with complex physical dynamics, implicit time integration schemes are preferred due to their superior stability properties. However, for very large systems accurate solution of the implicit terms can be impractical. For this reason approximations are widely used in the implementation of such methods. The primary focus of this work is on the development of novel "lightly-implicit" time integration methodologies. These methods consider the time integration and the solution of the implicit terms as a single computational process. We propose several classes of lightly-implicit methods that can be constructed to allow for different, specific approximations. Rosenbrock-Krylov and exponential-Krylov methods are designed to permit low-accuracy Krylov-based approximations of the implicit terms, while maintaining full order of convergence. These methods are matrix free, have low memory requirements, and are particularly well suited to parallel architectures. Linear stability analysis of K-methods is leveraged to construct implementation improvements for both Rosenbrock-Krylov and exponential-Krylov methods. Linearly-implicit Runge-Kutta-W methods are designed to permit arbitrary, time dependent, and stage varying approximations of the linear stiff dynamics of the initial value problem. The methods presented here are constructed with approximate matrix factorization in mind, though the framework is flexible and can be extended to many other approximations. The flexibility of lightly-implicit methods, and their ability to leverage computationally favorable approximations, makes them an ideal alternative to standard explicit and implicit schemes for large parallel applications.
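  For intuition only: the simplest linearly implicit (Rosenbrock-type) Euler step, with the stage linear system solved by a modest number of Krylov (GMRES) iterations in the spirit of the "lightly-implicit" idea. This is a generic first-order sketch, not the Rosenbrock-Krylov or exponential-Krylov methods of the dissertation; the heat-equation test problem and iteration counts are hypothetical.

  ```python
  # Linearized Euler step with the stage system (I - hJ) k = h f(y) solved only
  # approximately by a few GMRES iterations, i.e. a Krylov approximation of the
  # implicit term, applied matrix-free.
  import numpy as np
  from scipy.sparse import diags
  from scipy.sparse.linalg import LinearOperator, gmres

  def lin_euler_krylov_step(f, J, y, h):
      n = y.size
      M = LinearOperator((n, n), matvec=lambda v: v - h * (J @ v), dtype=float)  # I - hJ
      k, _ = gmres(M, h * f(y), restart=30, maxiter=2)       # loose Krylov solve
      return y + k

  # Toy stiff problem: 1D heat equation with a constant source.
  n = 100
  J = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1) ** 2
  f = lambda y: J @ y + 1.0
  y = np.zeros(n)
  for _ in range(50):
      y = lin_euler_krylov_step(f, J, y, h=1e-3)
  print(y.max())
  ```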
- Mathematical models of immune responses following vaccination with application to Brucella infection
  Kadelka, Mirjam Sarah (Virginia Tech, 2015-06-17)
  For many years bovine brucellosis was a zoonosis endemic in large parts of the world. While it is still endemic in some parts, such as the Middle East or India, several countries such as Australia and Canada have successfully eradicated brucellosis in cattle by applying vaccines, improving the hygienic standards in cattle breeding, and slaughtering or quarantining infected animals. The large economic impact of bovine brucellosis, and its virulence for humans who come into direct contact with fluid discharges from infected animals, makes the eradication of bovine brucellosis important to achieve. To achieve this goal, several vaccines have been developed in the past decades. Today the two most commonly used vaccines are Brucella abortus vaccine strain 19 and strain RB51. Both vaccines have been shown to be effective, but the mechanisms of immune responses following vaccination with either of the vaccines are not yet understood. In this thesis we analyze the immunological data obtained through vaccination with the two strains using mathematical modeling. We first design a measure that allows us to separate the subjects into good and bad responders. Then we investigate differences in the immune responses following vaccination with strain 19 or strain RB51 and boosting with strain RB51. We develop a mathematical model of immune responses that accounts for the formation of antagonistic pro- and anti-inflammatory cells and of memory cells. We show that different characteristics of pro-inflammatory cell development and activity have an impact on the number of memory cells obtained after vaccination.
- Multi-level Parallelism with MPI and OpenACC for CFD Applications
  McCall, Andrew James (Virginia Tech, 2017-06-14)
  High-level parallel programming approaches, such as OpenACC, have recently become popular in complex fluid dynamics research since they are cross-platform and easy to implement. OpenACC is a directive-based programming model that, unlike low-level programming models, abstracts the details of implementation on the GPU. Although OpenACC generally limits the performance of the GPU, this model significantly reduces the work required to port an existing code to any accelerator platform, including GPUs. The purpose of this research is twofold: to investigate the effectiveness of OpenACC in developing a portable and maintainable GPU-accelerated code, and to determine the capability of OpenACC to accelerate large, complex programs on the GPU. In both of these studies, the OpenACC implementation is optimized and extended to a multi-GPU implementation while maintaining a unified code base. OpenACC is shown as a viable option for GPU computing with CFD problems. In the first study, a CFD code that solves incompressible cavity flows is accelerated using OpenACC. Overlapping communication with computation improves performance for the multi-GPU implementation by up to 21%, achieving up to 400 times faster performance than a single CPU and 99% weak scalability efficiency with 32 GPUs. The second study ports the execution of a more complex CFD research code to the GPU using OpenACC. Challenges using OpenACC with modern Fortran are discussed. Three test cases are used to evaluate performance and scalability. The multi-GPU performance using 27 GPUs is up to 100 times faster than a single CPU and maintains a weak scalability efficiency of 95%.
- A New State Transition Model for Forecasting-Aided State Estimation for the Grid of the Future
  Hassanzadeh, Mohammadtaghi (Virginia Tech, 2014-07-09)
  The grid of the future will be more decentralized due to the significant increase in distributed generation and microgrids. In addition, due to the proliferation of large-scale intermittent wind power, the randomness in the power system state will increase to unprecedented levels. This dissertation proposes a new state transition model for power system forecasting-aided state estimation, which aims at capturing the increasingly stochastic nature of the states of the grid of the future. The proposed state forecasting model is based on time-series modeling of filtered system states and takes spatial correlation among the states into account. Once the states with high spatial correlation are identified, time-series models are developed to capture the dependency of voltages and angles in time and among each other. The temporal correlation in power system states (i.e., voltage angles and magnitudes) is modeled using autoregression, while the spatial correlation among the system states (i.e., voltage angles) is modeled using vector autoregression. Simulation results show significant improvement in power system state forecasting accuracy, especially in the presence of distributed generation and microgrids.
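  The autoregressive/vector-autoregressive forecasting idea can be illustrated with a minimal VAR(1) fit by least squares. This is a generic sketch on synthetic data; the dissertation's filtering, model orders, and grouping of spatially correlated states are not reproduced.

  ```python
  # Fit a first-order vector autoregression x_{t+1} = A x_t + e_t by least squares
  # and use it for one-step-ahead state forecasting. Synthetic data stand in for
  # filtered voltage-angle states.
  import numpy as np

  rng = np.random.default_rng(4)
  n_states, T = 5, 500
  A_true = 0.9 * np.eye(n_states) + 0.05 * rng.standard_normal((n_states, n_states))
  X = np.zeros((T, n_states))
  for t in range(T - 1):
      X[t + 1] = A_true @ X[t] + 0.01 * rng.standard_normal(n_states)

  # Least-squares estimate of the transition matrix from the state history.
  A_hat, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)   # solves X[:-1] @ A^T = X[1:]
  A_hat = A_hat.T
  forecast = A_hat @ X[-1]             # one-step-ahead forecast of the next state
  print(np.linalg.norm(A_hat - A_true))
  ```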
- Nonconforming Immersed Finite Element Methods for Interface Problems
  Zhang, Xu (Virginia Tech, 2013-05-04)
  In science and engineering, many simulations are carried out over domains consisting of multiple materials separated by curves/surfaces. If partial differential equations (PDEs) are used to model these simulations, this usually leads to the so-called interface problems of PDEs whose coefficients are discontinuous. In this dissertation, we consider nonconforming immersed finite element (IFE) methods and error analysis for interface problems. We first consider the second order elliptic interface problem with a discontinuous diffusion coefficient. We propose new IFE spaces based on the nonconforming rotated Q1 finite elements on Cartesian meshes. The degrees of freedom of these IFE spaces are determined by midpoint values or average integral values on edges. We investigate fundamental properties of these IFE spaces, such as unisolvency and partition of unity, and extend well-known trace inequalities and inverse inequalities to these IFE functions. Through interpolation error analysis, we prove that these IFE spaces have optimal approximation capabilities. We use these IFE spaces to develop partially penalized Galerkin (PPG) IFE schemes whose bilinear forms contain penalty terms over interface edges. Error estimation is carried out for these IFE schemes. We prove that the PPG schemes with IFE spaces based on integral-value degrees of freedom have optimal convergence in an energy norm. Following a similar approach, we prove that the interior penalty discontinuous Galerkin schemes based on these IFE functions also have optimal convergence. However, for the PPG schemes based on midpoint-value degrees of freedom, we prove that they have at least sub-optimal convergence. Numerical experiments are provided to demonstrate features of these IFE methods and compare them with other related numerical schemes. We extend nonconforming IFE schemes to the planar elasticity interface problem with discontinuous Lamé parameters. Vector-valued nonconforming rotated Q1 IFE functions with integral-value degrees of freedom are unisolvent with appropriate interface jump conditions. More importantly, the Galerkin IFE scheme using these vector-valued nonconforming rotated Q1 IFE functions is "locking-free" for nearly incompressible elastic materials. In the last part of this dissertation, we consider potential applications of IFE methods to time dependent PDEs with moving interfaces. Using IFE functions in the discretization in space enables the applicability of the method of lines. Crank-Nicolson type fully discrete schemes are also developed as alternative approaches for solving moving interface problems.