Browsing by Author "Sachs, Ekkehard W."
Now showing 1 - 7 of 7
- An augmented Lagrangian algorithm for optimization with equality constraints in Hilbert spaces
  Maruhn, Jan Hendrik (Virginia Tech, 2001-04-30)
  Since augmented Lagrangian methods were introduced by Powell and Hestenes, this class of methods has been investigated very intensively. While the finite dimensional case has been treated in a satisfactory manner, the infinite dimensional case has received much less attention. The general approach to solving an infinite dimensional optimization problem subject to equality constraints is as follows: first one proves convergence of a basic algorithm in the Hilbert space setting; then one discretizes the given spaces and operators to make numerical computations possible; finally, one constructs a discretized version of the infinite dimensional method and tries to transfer the convergence results to the finite dimensional version of the basic algorithm. In this thesis we discuss a globally convergent augmented Lagrangian algorithm and discretize it in terms of functional analytic restriction operators. In this setting, we prove global convergence of the discretized version of the algorithm to a stationary point of the infinite dimensional optimization problem. The proposed algorithm includes an explicit rule for updating the discretization level and the penalty parameter from one iteration to the next, questions that had previously been left unanswered. In particular, the latter update rule guarantees that the penalty parameters stay bounded away from zero, which prevents the Hessian of the discretized augmented Lagrangian functional from becoming increasingly ill-conditioned.
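  A minimal finite-dimensional sketch of the classical Powell-Hestenes multiplier and penalty updates this line of work builds on; the objective, constraint, tolerances, and the hard lower bound on the penalty parameter are all illustrative assumptions, not the thesis's update rule:

```python
import numpy as np
from scipy.optimize import minimize

# Augmented-Lagrangian sketch for min f(x) s.t. c(x) = 0 (toy problem).
# L_A(x; lam, mu) = f(x) + lam^T c(x) + 1/(2 mu) ||c(x)||^2.

def f(x):                       # illustrative objective
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

def c(x):                       # illustrative equality constraint
    return np.array([x[0] + x[1] - 3.0])

def aug_lagrangian(x, lam, mu):
    cx = c(x)
    return f(x) + lam @ cx + 0.5 / mu * (cx @ cx)

x, lam, mu = np.zeros(2), np.zeros(1), 1.0
for k in range(20):
    x = minimize(aug_lagrangian, x, args=(lam, mu)).x  # inner minimization
    lam = lam + c(x) / mu                              # multiplier update
    if np.linalg.norm(c(x)) <= 1e-8:
        break
    # Decrease the penalty parameter, but keep it bounded away from zero;
    # a crude stand-in for the update rule the abstract refers to.
    mu = max(0.5 * mu, 1e-6)

print(x, lam)   # x near the constrained minimizer (0.75, 2.25)
```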
- The Clarke Derivative and Set-Valued Mappings in the Numerical Optimization of Non-Smooth, Noisy Functions
  Krahnke, Andreas (Virginia Tech, 2001-04-30)
  In this work we present a new tool for the convergence analysis of numerical optimization methods. It is based on the concepts of the Clarke derivative and set-valued mappings. Our goal is to apply this tool to minimization problems with non-smooth and noisy objective functions. After deriving a necessary condition for minimizers of such functions, we examine two unconstrained optimization routines. First, we prove new convergence theorems for Implicit Filtering and General Pattern Search. Then we show how these results can be used in practice through numerical computations.
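  To give a flavor of the method class analyzed, here is a toy coordinate-direction pattern search on a non-smooth, noisy objective; the function and all constants are made up, and this is not the thesis's algorithm:

```python
import numpy as np

# Toy pattern search: poll the +/- coordinate directions, move on
# improvement, otherwise contract the step (mesh) size.

rng = np.random.default_rng(0)

def noisy_f(x):
    # |x1| + |x2| plus small evaluation noise: non-smooth at the minimizer.
    return np.abs(x).sum() + 1e-3 * rng.standard_normal()

x = np.array([2.0, -3.0])
fx = noisy_f(x)
step = 1.0
directions = np.vstack([np.eye(2), -np.eye(2)])   # +/- coordinate pattern

while step > 1e-6:
    trials = [x + step * d for d in directions]
    vals = [noisy_f(t) for t in trials]
    best = int(np.argmin(vals))
    if vals[best] < fx:          # successful poll: accept the move
        x, fx = trials[best], vals[best]
    else:                        # unsuccessful poll: refine the mesh
        step *= 0.5

print(x)   # near the minimizer (0, 0), up to the noise level
```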
- Inexact Kleinman-Newton method for Riccati equations
  Feitzinger, Franziska; Hylla, Timo; Sachs, Ekkehard W. (SIAM Publications, 2009-03)
  In this paper we consider the numerical solution of the algebraic Riccati equation using Newton's method. We propose an inexact variant which allows one to control the number of inner iterations used in an iterative solver for each Newton step. Conditions are given under which the monotonicity and global convergence result of Kleinman also hold for the inexact Newton iterates. Numerical results illustrate the efficiency of this method.
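  A dense-matrix sketch of the underlying Kleinman-Newton iteration, assuming standard LQR data (A, B, Q, R here are random placeholders); the paper's inexact variant would replace the exact Lyapunov solve below with a truncated iterative inner solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Kleinman-Newton for A^T X + X A - X B R^{-1} B^T X + Q = 0.
rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n)) - 3 * np.eye(n)   # shifted to be stable,
B = rng.standard_normal((n, m))                   # so X0 = 0 is admissible
Q, R = np.eye(n), np.eye(m)

X = np.zeros((n, n))
for k in range(30):
    K = np.linalg.solve(R, B.T @ X)               # K_k = R^{-1} B^T X_k
    Ak = A - B @ K
    # Newton step: solve A_k^T X + X A_k = -(Q + K^T R K).
    X_new = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    if np.linalg.norm(X_new - X) < 1e-10:
        X = X_new
        break
    X = X_new

print(np.linalg.norm(X - solve_continuous_are(A, B, Q, R)))   # ~ 0
```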
- Mesh independence of Kleinman-Newton iterations for Riccati equations in Hilbert space
  Burns, John A.; Sachs, Ekkehard W.; Zietsman, Lizette (SIAM Publications, 2008)
  In this paper we consider the convergence of the infinite dimensional version of the Kleinman-Newton algorithm for solving the algebraic Riccati operator equation associated with the linear quadratic regulator problem in a Hilbert space. We establish mesh independence for this algorithm and apply the result to systems governed by delay equations. Numerical examples are presented to illustrate the results.
- Numerical Analysis of Jump-Diffusion Models for Option Pricing
  Strauss, Arne Karsten (Virginia Tech, 2006-07-07)
  Jump-diffusion models can, under certain assumptions, be expressed as partial integro-differential equations (PIDEs). Such a PIDE typically involves a convection term and a nonlocal integral term, as in the Merton and Kou models considered here. We transform the PIDE to eliminate the convection term and discretize it implicitly using finite differences and the second order backward difference formula (BDF2) on a uniform grid. The resulting dense linear system is solved by an iterative method, either a splitting technique or a circulant-preconditioned conjugate gradient method. Exploiting the Fast Fourier Transform (FFT) yields the solution in only $O(n\log n)$ operations, and only a few vectors need to be stored. Second order accuracy is obtained on the whole computational domain for Merton's model, whereas for Kou's model first order is obtained on the whole computational domain and second order locally around the strike price. The solution of the PIDE with convection term can oscillate in a neighborhood of the strike price depending on the choice of parameters, whereas the solution obtained from the transformed problem is stabilized.
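  The $O(n\log n)$ cost rests on applying the Toeplitz-structured dense matrix via circulant embedding and the FFT; a sketch with a placeholder kernel (not Merton or Kou data):

```python
import numpy as np
from scipy.linalg import toeplitz

# O(n log n) Toeplitz matrix-vector product via circulant embedding + FFT.

def toeplitz_matvec(col, row, x):
    """Compute T @ x where T has first column `col` and first row `row`."""
    n = len(x)
    # Embed T into a 2n x 2n circulant, which the FFT diagonalizes.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x, 2 * n))
    return y[:n].real

n = 8
col = np.exp(-np.arange(n, dtype=float))        # placeholder kernel values
row = np.exp(-2.0 * np.arange(n, dtype=float))
row[0] = col[0]
x = np.ones(n)

T = toeplitz(col, row)                          # dense reference
print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))   # True
```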
- Probability-One Homotopy Maps for Mixed Complementarity Problems
  Ahuja, Kapil (Virginia Tech, 2007-03-19)
  Probability-one homotopy algorithms have strong convergence characteristics under mild assumptions. Such algorithms for mixed complementarity problems (MCPs) have potentially wide impact because MCPs are pervasive in science and engineering. A probability-one homotopy algorithm for MCPs was developed earlier by Billups and Watson based on the default homotopy mapping. This algorithm had guaranteed global convergence under some mild conditions and was able to solve most of the MCPs from the MCPLIB test library. This thesis extends that work by presenting some other homotopy mappings, enabling the solution of all the remaining problems from MCPLIB. The homotopy maps employed are the Newton homotopy and homotopy parameter embeddings.
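  A minimal sketch of tracking the Newton homotopy $\rho(x,\lambda) = F(x) - (1-\lambda)F(a)$ by stepping $\lambda$ and Newton-correcting; a smooth toy $F$ stands in for a reformulated MCP residual, and the step sizes are arbitrary:

```python
import numpy as np

# Newton homotopy: at lam = 0 the start point a is a root by construction;
# at lam = 1 we have recovered F(x) = 0.

def F(x):
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):   # Jacobian of F
    return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

a = np.array([1.0, 1.0])
Fa = F(a)
x = a.copy()
for lam in np.linspace(0.0, 1.0, 21)[1:]:
    # Newton-correct x onto the zero curve at this value of lam.
    for _ in range(20):
        r = F(x) - (1.0 - lam) * Fa
        if np.linalg.norm(r) < 1e-12:
            break
        x = x - np.linalg.solve(J(x), r)

print(x, F(x))   # x near (1, 2), where F(x) = 0
```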
- Sensitivities in Option Pricing Models
  Timsina, Tirtha Prasad (Virginia Tech, 2007-08-13)
  The inverse problem in finance consists of determining the unknown parameters of the pricing equation from the values quoted in the market. We formulate the inverse problem as a minimization problem for an appropriate cost function that measures the difference between the solution of the model and the market observations. Efficient gradient-based optimization requires accurate gradient estimation of the cost function. In this thesis we highlight the adjoint method for computing gradients of the cost function in the context of gradient-based optimization and show its importance. We derive the continuous adjoint equations, with appropriate boundary conditions, for three main option pricing models: the Black-Scholes model, the Heston model, and the jump-diffusion model, for European type options. These adjoint equations can be used to compute the gradient of the cost function accurately for parameter estimation problems. The adjoint method allows efficient evaluation of the gradient of a cost function F(σ) with respect to parameters σ when F depends on σ indirectly, via an intermediate variable. Compared to the finite difference method and the sensitivity equation method, the adjoint equation method is very efficient in computing the gradient of the cost function. The sensitivity equation method requires solving one PDE per parameter in the model to estimate the gradient of the cost function, whereas the adjoint method requires solving a single adjoint equation once. Hence, for a large number of parameters in the model, the adjoint equation method is very efficient. Due to its nature, the adjoint equation has to be solved backward in time. The adjoint equation derived from the jump-diffusion model is harder to solve due to its nonlocal integral term, but algorithms that solve the partial integro-differential equation (PIDE) arising from the jump-diffusion model can be modified to solve the adjoint equation derived from the PIDE.
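  A discrete toy version of the adjoint calculation, with $F(\sigma) = g(u(\sigma))$ and $A(\sigma)u = b$ standing in for the pricing PDE (all matrices and the cost here are illustrative): one adjoint solve yields the whole gradient, versus one sensitivity solve per parameter.

```python
import numpy as np

# Adjoint gradient of F(sigma) = g(u(sigma)) subject to A(sigma) u = b:
# solve A^T lam = dg/du once, then dF/dsigma_i = -lam^T (dA/dsigma_i) u.

n, p = 5, 3
rng = np.random.default_rng(2)
A0 = 4.0 * np.eye(n)
dA = [rng.standard_normal((n, n)) for _ in range(p)]   # dA/dsigma_i
b = rng.standard_normal(n)
sigma = np.array([0.1, 0.2, 0.3])

A = A0 + sum(s * D for s, D in zip(sigma, dA))
u = np.linalg.solve(A, b)                 # forward (state) solve
dg_du = 2.0 * u                           # cost g(u) = ||u||^2
lam = np.linalg.solve(A.T, dg_du)         # single adjoint solve
grad = np.array([-lam @ (D @ u) for D in dA])

# Finite-difference check of the first gradient component.
eps = 1e-7
s2 = sigma.copy(); s2[0] += eps
u2 = np.linalg.solve(A0 + sum(s * D for s, D in zip(s2, dA)), b)
print(grad[0], (u2 @ u2 - u @ u) / eps)   # the two values should agree
```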