Rate of convergence in nonlinear programming

Date
1972-05-06
Publisher
Virginia Tech
Abstract

The rate of convergence is a useful measure of an algorithm's performance. Knowledge of the rate can help determine which algorithm is best suited to a given problem. This research studies the rate of convergence of several algorithms used for nonlinear programming problems. The Newton-Raphson procedure and a higher order procedure for the solution of nonlinear equations are studied. Both the convergence and the rate of convergence of the multivariate Newton-Raphson procedure are presented in the simple format of the Newton-Raphson procedure for scalar functions. A higher order procedure, which results directly from a Taylor series expansion, is presented; its convergence is established and a measure of its rate of convergence is obtained. A multivariate generalization of this higher order procedure is seen to have little practical value.
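
As an illustration (a minimal sketch, not the thesis's own development), the code below contrasts the scalar Newton-Raphson iteration with one standard higher order iteration obtained from a second-order Taylor expansion (Chebyshev's method, which converges cubically); the test function and starting point are arbitrary choices.

def newton(f, fp, x0, tol=1e-12, max_iter=50):
    # Newton-Raphson: x_{k+1} = x_k - f(x_k)/f'(x_k); quadratic convergence.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def chebyshev(f, fp, fpp, x0, tol=1e-12, max_iter=50):
    # Third-order iteration from a second-order Taylor expansion:
    # x_{k+1} = x_k - u - u^2 f''(x_k) / (2 f'(x_k)), where u = f/f'.
    x = x0
    for _ in range(max_iter):
        u = f(x) / fp(x)
        step = u + 0.5 * u * u * fpp(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative use on f(x) = x^3 - 2 (root 2^(1/3), about 1.2599):
f, fp, fpp = lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, lambda x: 6.0 * x
print(newton(f, fp, 1.0), chebyshev(f, fp, fpp, 1.0))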

In analyzing the simplex algorithm, it was not possible to obtain an expression for its rate of convergence; however, an expression for the improvement in the objective function between successive iterations is obtained. This expression is stated entirely in terms of the original problem rather than intermediate computations.
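
For orientation (a standard identity, not necessarily the expression derived in the thesis): if the entering variable x_q has reduced cost \bar{c}_q and the minimum-ratio test yields step length \theta, one simplex pivot changes the objective by

\[
  z_{\text{new}} - z_{\text{old}} = \theta\,\bar{c}_q,
  \qquad
  \theta = \min_{i \,:\, \bar{a}_{iq} > 0} \frac{\bar{b}_i}{\bar{a}_{iq}},
\]

which is stated in terms of the updated tableau; the thesis's expression is instead given in terms of the original problem data.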

The bound, due to Kantorovich, on the rate of convergence of the optimal gradient method for solving a system of linear equations is shown to hold for the general unconstrained quadratic programming problem. The result is then extended to the general directional procedure.
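
For reference, the classical Kantorovich bound in question (standard statement; the notation below is ours): for the quadratic f(x) = (1/2) x^T Q x - b^T x with Q symmetric positive definite and extreme eigenvalues \lambda_{\max}, \lambda_{\min}, steepest descent with exact line search satisfies

\[
  f(x_{k+1}) - f(x^{*}) \le
  \left( \frac{\lambda_{\max} - \lambda_{\min}}{\lambda_{\max} + \lambda_{\min}} \right)^{2}
  \bigl( f(x_k) - f(x^{*}) \bigr),
\]

so the linear rate is governed by the condition number of Q.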

Rosen presented a bound for the rate of convergence of his algorithm, obtained under very strict assumptions about the computational procedure. It is shown that, under the same assumptions, tighter bounds are available for Rosen's method, and that these bounds also hold under less stringent assumptions about the computational procedure.
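
As a minimal sketch of the step underlying Rosen's gradient projection method (assuming the active constraints are linear, Ax = b; the function name and NumPy usage are illustrative choices, not the thesis's):

import numpy as np

def rosen_direction(grad, A):
    # Project the negative gradient onto the nullspace of the active
    # constraint normals (rows of A): d = -(I - A^T (A A^T)^{-1} A) grad.
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    return -P @ grad

# Illustrative use: one active constraint x1 + x2 = 1 in R^2.
A = np.array([[1.0, 1.0]])
g = np.array([3.0, 1.0])        # gradient of some objective at the current point
print(rosen_direction(g, A))    # [-1.  1.], a direction within the constraint plane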

Keywords
algorithms