Scholarly Works, Computational Science Laboratory
Browsing Scholarly Works, Computational Science Laboratory by Author "Alexe, Mihai"
- Adjoint-based space-time adaptive solution algorithms for sensitivity analysis and inverse problems
  Alexe, Mihai (Virginia Tech, 2011-03-18)
  Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters of a given model are estimated from available measurement information. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research on inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that use only uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This dissertation develops a complete framework for fully discrete adjoint sensitivity analysis and inverse problem solution in the context of time-dependent, adaptive-mesh, adaptive-step models. The discrete framework addresses all the ingredients of a state-of-the-art adaptive inverse solution algorithm: adaptive mesh and time step refinement, solution grid transfer operators, a priori and a posteriori error analysis and estimation, and discrete adjoints for sensitivity analysis of flux-limited numerical algorithms.
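As a small illustration of the adjoint-based a posteriori error estimation listed among these ingredients, the following Python/NumPy sketch (a hypothetical stand-in, not code from the dissertation) estimates the error in a linear target functional through an adjoint-weighted residual; for a linear model the estimate is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30

# Hypothetical discrete model A u = f and target functional J(u) = w^T u
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
f = rng.standard_normal(n)
w = rng.standard_normal(n)

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 1e-2 * rng.standard_normal(n)   # stand-in for an under-resolved solution

# Discrete adjoint problem for the functional: A^T lambda = w
lam = np.linalg.solve(A.T, w)

# Adjoint-weighted residual estimate of the error in the functional
eta = lam @ (f - A @ u_h)
print("true error      :", w @ (u_exact - u_h))
print("adjoint estimate:", eta)
```

For nonlinear models and flux-limited schemes the same construction only approximates the true error, which is what motivates the a priori and a posteriori analysis described above.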
- DENSERKS: Fortran Sensitivity Solvers Using Continuous, Explicit Runge-Kutta Schemes
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2007-10-01)
  DENSERKS is a Fortran sensitivity equation solver package designed for integrating models whose evolution can be described by ordinary differential equations (ODEs). A salient feature of DENSERKS is its support for both forward and adjoint sensitivity analyses, with built-in integrators for both first and second order continuous adjoint models. The software implements explicit Runge-Kutta methods with adaptive time stepping and high-order dense output schemes for interpolating the forward and tangent linear model trajectories. Implementations of six Runge-Kutta methods are provided, with orders of accuracy ranging from two to eight. This makes DENSERKS suitable for a wide range of practical applications. The use of dense output, a novel approach in adjoint sensitivity analysis solvers, allows for high-order, cost-effective interpolation. This is a necessary feature when solving adjoints of nonlinear systems with highly accurate Runge-Kutta methods (order five and above). To minimize memory requirements and make long-time integrations computationally efficient, DENSERKS implements a two-level checkpointing mechanism. The code is tested on a selection of problems illustrating first and second order sensitivity analysis with respect to initial model conditions. The resulting derivative information is also used in a gradient-based optimization algorithm to minimize cost functionals that depend on a given set of model parameters.
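The dense-output idea can be sketched in a few lines. The example below is a Python/SciPy analogue, not the Fortran DENSERKS code: the forward trajectory is stored as a continuous interpolant and then sampled while the first order adjoint ODE is integrated backward in time. The toy ODE, cost functional, and tolerances are assumptions chosen so the result can be checked analytically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward model: dy/dt = -y^2, cost J = y(T).
# Analytic solution y(t) = y0 / (1 + y0*t) gives dJ/dy0 = 1 / (1 + y0*T)^2.
y0, T = 2.0, 3.0

fwd = solve_ivp(lambda t, y: -y**2, (0.0, T), [y0],
                dense_output=True, rtol=1e-10, atol=1e-12)

# Adjoint ODE integrated backward in time:
#   dlam/dt = -(df/dy)^T lam = 2*y(t)*lam,  lam(T) = dJ/dy(T) = 1.
# The forward state y(t) is obtained from the dense-output interpolant.
def adj_rhs(t, lam):
    y = fwd.sol(t)[0]
    return 2.0 * y * lam

adj = solve_ivp(adj_rhs, (T, 0.0), [1.0], rtol=1e-10, atol=1e-12)

grad_adjoint = adj.y[0, -1]            # lam(0) = dJ/dy0
grad_exact = 1.0 / (1.0 + y0 * T) ** 2
print(grad_adjoint, grad_exact)
```

DENSERKS additionally combines such interpolants with two-level checkpointing, so long forward trajectories need not be held in memory all at once.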
- A fully discrete framework for the adaptive solution of inverse problems
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2012-12-01)
  We investigate and contrast the differences between the discretize-then-differentiate and differentiate-then-discretize approaches to the numerical solution of parameter estimation problems. The former approach is attractive in practice because automatic differentiation can generate the dual and optimality equations in the first-order KKT system. The latter strategy is more versatile, in that it allows one to formulate efficient mesh-independent algorithms over suitably chosen function spaces; however, it is significantly more difficult to implement, since automatic code generation is no longer an option. The starting point is a classical elliptic inverse problem. An a priori error analysis for the discrete optimality equation shows that consistency and stability are not inherited automatically from the primal discretization. In analogy with the concept of dual consistency, we introduce the concept of optimality consistency. The convergence properties can, however, be restored through suitable consistent modifications of the target functional. Numerical tests confirm the theoretical convergence order for the optimal solution. We then derive a posteriori error estimates for the error in the infinite-dimensional optimal solution, through a suitably chosen error functional. These estimates are constructed using second order derivative information for the target functional. For computational efficiency, the Hessian is replaced by a low order BFGS approximation. The efficiency of the error estimator is confirmed by a numerical experiment with multigrid optimization.
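A minimal discretize-then-differentiate sketch, under assumptions that are not the paper's own test setup: a 1D state equation A(q)u = f with A(q) = K + diag(q), and the least-squares functional J(q) = 0.5*||u(q) - u_obs||^2. The discrete adjoint equation then delivers the gradient of J, which can be checked against finite differences.

```python
import numpy as np

# Hypothetical 1D elliptic-type inverse problem (not the paper's test case):
# state equation A(q) u = f with A(q) = K + diag(q), where K is the standard
# second-difference matrix, and cost J(q) = 0.5 * ||u(q) - u_obs||^2.
n = 20
h = 1.0 / (n + 1)
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.ones(n)
u_obs = np.linspace(0.0, 1.0, n)

def cost(q):
    u = np.linalg.solve(K + np.diag(q), f)
    return 0.5 * np.sum((u - u_obs) ** 2)

def gradient(q):
    A = K + np.diag(q)
    u = np.linalg.solve(A, f)
    p = np.linalg.solve(A.T, -(u - u_obs))   # discrete adjoint equation
    return p * u                             # dJ/dq_i = p_i u_i since dA/dq_i = e_i e_i^T

q = np.full(n, 5.0)
g_adj = gradient(q)

# Finite-difference check of the first gradient component
eps = 1e-6
dq = np.zeros(n)
dq[0] = eps
g_fd = (cost(q + dq) - cost(q - dq)) / (2.0 * eps)
print(g_adj[0], g_fd)
```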
- A Hybrid Approach to Estimating Error Covariances in Variational Data Assimilation
  Cheng, Haiyan; Jardak, Mohamed; Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2009-03-01)
  Data assimilation (DA) combines observational data with the underlying dynamical principles governing the system under observation. In this work we combine the advantages of two prominent advanced data assimilation approaches: 4D-Var and ensemble methods. The proposed method consists of identifying the subspace spanned by the major 4D-Var error reduction directions. These directions are then removed from the background covariance through a Galerkin-type projection. This yields updated error covariance information at both end points of an assimilation window. The error covariance information is updated between assimilation windows to capture the "error of the day". Numerical results using the new hybrid approach on a nonlinear model demonstrate how the resulting error covariance update improves the 4D-Var DA results.
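One plausible reading of the Galerkin-type projection described above is sketched below in Python/NumPy. The background covariance and the 4D-Var error-reduction directions are random stand-ins, and the exact projection operator used in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3

# Hypothetical background covariance (symmetric positive definite)
M = rng.standard_normal((n, n))
B = M @ M.T / n + np.eye(n)

# Hypothetical dominant 4D-Var error-reduction directions, orthonormalized
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Galerkin-type projection removing span(V) from the background covariance
P = np.eye(n) - V @ V.T
B_updated = P @ B @ P

# The removed directions carry (numerically) zero variance in the updated covariance
print(np.linalg.norm(B_updated @ V))
```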
- On the discrete adjoints of adaptive time stepping algorithms
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2008-04-01)
  We investigate the behavior of adaptive time stepping numerical algorithms under the reverse mode of automatic differentiation (AD). By differentiating the time step controller and the error estimator of the original algorithm, reverse mode AD generates spurious adjoint derivatives of the time steps. The resulting discrete adjoint models become inconsistent with the adjoint ODE and yield incorrect derivatives. To regain consistency, one has to cancel out the contributions of the non-physical derivatives in the discrete adjoint model. We demonstrate that the discrete adjoint models of one-step, explicit adaptive algorithms, such as Runge-Kutta schemes, can be made consistent with their continuous analogs using simple code modifications. Furthermore, we extend the analysis to cover second order adjoint models derived through an extra forward-mode differentiation of the discrete adjoint code. Two numerical examples support the mathematical derivations.
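The flavour of the required code modification can be shown on a toy problem. In the Python sketch below (an illustration, not the paper's algorithm), an explicit Euler integrator uses a simple hypothetical step-size controller; the reverse sweep reuses the recorded steps as constants, so the controller contributes no spurious adjoint terms, and the resulting derivative can be compared with the exact continuous sensitivity.

```python
import numpy as np

# Toy ODE dy/dt = -y^2 with exact sensitivity d y(T) / d y0 = 1 / (1 + y0*T)^2.
f  = lambda y: -y**2          # right-hand side
fy = lambda y: -2.0 * y       # Jacobian df/dy

y0, T, tol = 2.0, 3.0, 1e-4

# Forward sweep: explicit Euler with a simple (hypothetical) step-size controller.
# The accepted step sizes are recorded so the reverse sweep can reuse them.
t, y = 0.0, y0
ys, hs = [y], []
while t < T:
    h = min(np.sqrt(tol / max(abs(f(y)), 1e-12)), T - t)   # controller
    y = y + h * f(y)
    t += h
    ys.append(y)
    hs.append(h)

# Reverse sweep: discrete adjoint of the Euler recurrence. The step sizes are
# treated as constants, so the controller's contribution is cancelled (one
# reading of the "simple code modifications" mentioned in the abstract).
lam = 1.0                      # dJ/dy(T) for J = y(T)
for yn, hn in zip(reversed(ys[:-1]), reversed(hs)):
    lam = lam * (1.0 + hn * fy(yn))

print("discrete adjoint :", lam)
print("exact sensitivity:", 1.0 / (1.0 + y0 * T) ** 2)
```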
- Second order adjoints for solving PDE-constrained optimization problems
  Cioaca, Alexandru; Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010)
  Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as PDE-constrained optimization problems in which the optimal estimate of the uncertain parameters is the minimizer of a cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to model parameters. The first order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first order adjoint sensitivity analysis. Second order adjoint models give second derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second order derivatives for large scale models has been considered too costly; consequently, data assimilation applications employ optimization algorithms that use only first order derivative information, such as nonlinear conjugate gradients and quasi-Newton methods. In this paper we discuss the mathematical foundations of second order adjoint sensitivity analysis and show that it provides an efficient approach to obtaining Hessian-vector products. We study the benefits of using second order information in the numerical optimization process for data assimilation applications. The numerical studies are performed in a twin experiment setting with a two-dimensional shallow water model. Different scenarios are considered, with different discretization approaches, observation sets, and noise levels. Optimization algorithms that employ second order derivatives are tested against widely used methods that require only first order derivatives. Conclusions are drawn regarding the potential benefits and the limitations of using high-order information in large scale data assimilation problems.
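To show where such matrix-vector products plug in, the sketch below uses SciPy's Newton-CG optimizer, which accepts a Hessian-vector product callback (hessp). The cost function is the standard Rosenbrock test problem, and the Hessian-vector product is emulated here by differencing the gradient; a true second order adjoint model, as in the paper, would return H(x)v exactly.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Cost functional and its first order (adjoint-like) gradient.
fun, grad = rosen, rosen_der

# Stand-in for a second order adjoint model: a Hessian-vector product obtained
# by differencing the gradient. A real SOA model would compute H(x) @ v exactly.
def hessp(x, v, eps=1e-6):
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

x0 = np.zeros(5)

# First-order-only optimizer (quasi-Newton)
res_bfgs = minimize(fun, x0, jac=grad, method="BFGS")

# Optimizer that exploits Hessian-vector products
res_ncg = minimize(fun, x0, jac=grad, hessp=hessp, method="Newton-CG")

print("BFGS     :", res_bfgs.nit, "iterations, f =", res_bfgs.fun)
print("Newton-CG:", res_ncg.nit, "iterations, f =", res_ncg.fun)
```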
- Space-time adaptive solution of inverse problems with the discrete adjoint method
  Alexe, Mihai; Sandu, Adrian (Department of Computer Science, Virginia Polytechnic Institute & State University, 2010-11-01)
  Adaptivity in both space and time has become the norm for solving problems modeled by partial differential equations. The size of the discretized problem makes uniformly refined grids computationally prohibitive. Adaptive refinement of meshes and time steps makes it possible to capture the phenomena of interest while keeping the cost of a simulation tractable on current hardware. Many fields in science and engineering require the solution of inverse problems, where parameters of a given model are estimated from available measurement information. In contrast to forward (regular) simulations, inverse problems have not benefited extensively from adaptive solver technology. Previous research on inverse problems has focused mainly on the continuous approach to calculating sensitivities, and has typically employed fixed time and space meshes in the solution process. Inverse problem solvers that use only uniform or static meshes avoid complications such as the differentiation of mesh motion equations, or inconsistencies in the sensitivity equations between subdomains with different refinement levels. However, this comes at the cost of low computational efficiency. More efficient computations are possible through judicious use of adaptive mesh refinement, adaptive time steps, and the discrete adjoint method. This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time-dependent, adaptive-grid, adaptive-step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in the accuracy of the discrete adjoint sensitivities may appear due to the intergrid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided for the discontinuous Galerkin (DG) method. The adjoint model development is considerably simplified by decoupling the adaptive mesh refinement mechanism from the forward model solver, and by selectively applying automatic differentiation to individual algorithms. In forward models, discontinuous Galerkin discretizations can efficiently handle high orders of accuracy, h/p-refinement, and parallel computation. The analysis reveals that this approach, paired with Runge-Kutta time stepping, is well suited for the adaptive solution of inverse problems. The usefulness of discrete discontinuous Galerkin adjoints is illustrated on a two-dimensional adaptive data assimilation problem.
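The decoupling of mesh adaptation from the solver, and the role of the intergrid transfer operators in the adjoint, can be illustrated with a linear toy model (a sketch under assumed operators, not the DG solver from the paper): the forward sweep alternates a per-grid step operator with a prolongation to a refined grid, and the adjoint sweep applies the transposed operators in reverse order.

```python
import numpy as np

rng = np.random.default_rng(1)

def prolongation(n_from, n_to):
    """Linear-interpolation transfer matrix from an n_from-point to an n_to-point 1D grid."""
    xc = np.linspace(0.0, 1.0, n_from)
    xf = np.linspace(0.0, 1.0, n_to)
    P = np.zeros((n_to, n_from))
    for i, x in enumerate(xf):
        j = min(max(np.searchsorted(xc, x), 1), n_from - 1)
        w = (x - xc[j - 1]) / (xc[j] - xc[j - 1])
        P[i, j - 1], P[i, j] = 1.0 - w, w
    return P

# Forward model: alternate a "time step" on the current grid with a transfer to
# a refined grid. The step operators S_k are random stand-ins for the real solver.
sizes = [8, 12, 20]
S = [np.eye(n) - 0.05 * rng.random((n, n)) for n in sizes]
P = [prolongation(sizes[k], sizes[k + 1]) for k in range(len(sizes) - 1)]
w = rng.random(sizes[-1])                       # cost J(u0) = w^T u_final

def forward(u0):
    u = u0
    for k in range(len(sizes)):
        u = S[k] @ u
        if k < len(P):
            u = P[k] @ u                        # intergrid transfer
    return w @ u

# Adjoint sweep: the same sequence in reverse order, every operator transposed.
lam = w.copy()
for k in reversed(range(len(sizes))):
    if k < len(P):
        lam = P[k].T @ lam
    lam = S[k].T @ lam
grad_adjoint = lam                              # dJ/du0

# Finite-difference check of the first component
u0 = rng.random(sizes[0])
eps = 1e-6
du = np.zeros(sizes[0])
du[0] = eps
grad_fd = (forward(u0 + du) - forward(u0 - du)) / (2.0 * eps)
print(grad_adjoint[0], grad_fd)
```

Because every forward operator has an explicit transpose, the gradient is exact for this linear model; the reversed structure mirrors the selective application of automatic differentiation to individual algorithms mentioned above.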