A Second Derivative SQP Method: Local Convergence and Practical Issues (2010)

First Author: Gould, N.

Attributed to: Algorithms for Large-Scale Nonlinearly Constrained Optimization, funded by EPSRC


Gould and Robinson [SIAM J. Optim., 20 (2010), pp. 2023-2048] proved global convergence of a second-derivative SQP method for minimizing the exact l1-merit function for a fixed value of the penalty parameter. That result relied on the properties of a so-called Cauchy step, which was itself computed from a so-called predictor step, and it allowed for the optional computation of a variety of accelerator steps intended to improve the efficiency of the algorithm. The main purpose of this paper is to prove that a nonmonotone variant of the algorithm is quadratically convergent for two specific realizations of the accelerator step; this is verified with preliminary numerical results on the Hock and Schittkowski test set. Once fast local convergence is established, we consider two aspects of the algorithm that are important for an efficient implementation. First, we discuss a strategy for defining the positive-definite matrix B_k used in computing the predictor step, based on a limited-memory BFGS update. Second, we provide a simple strategy for updating the penalty parameter based on approximately minimizing the l1-penalty function over a sequence of increasing values of the penalty parameter.
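The exact l1-merit function and the increasing-penalty outer loop mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; the problem data (`f`, `c`), the inner solver `minimize_phi`, and all tolerances are hypothetical stand-ins, and the inner approximate minimization is left abstract:

```python
import numpy as np

def l1_penalty(f, c, x, rho):
    """Exact l1-penalty function: phi(x; rho) = f(x) + rho * ||c(x)||_1,
    where c(x) collects the (equality) constraint residuals."""
    return f(x) + rho * np.linalg.norm(c(x), 1)

# Illustrative problem (not from the paper):
# minimize f(x) subject to c(x) = 0.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
c = lambda x: np.array([x[0] + x[1] - 2.0])

def update_penalty(minimize_phi, x0, rho0=1.0, factor=10.0,
                   tol=1e-8, max_rounds=10):
    """Hypothetical outer loop: approximately minimize the l1-penalty
    function for a sequence of increasing penalty parameters, stopping
    once the constraint violation is sufficiently small."""
    x, rho = np.asarray(x0, dtype=float), rho0
    for _ in range(max_rounds):
        x = minimize_phi(x, rho)            # inner (approximate) minimization
        if np.linalg.norm(c(x), 1) <= tol:  # feasible enough: accept rho
            break
        rho *= factor                       # still infeasible: raise penalty
    return x, rho
```

The key property exploited here is exactness: once rho exceeds the largest multiplier magnitude, minimizers of the penalty function coincide with solutions of the constrained problem, so rho need only be increased finitely often.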

Bibliographic Information

Digital Object Identifier: http://dx.doi.org/10.1137/080744554

Publication URI: http://dx.doi.org/10.1137/080744554

Type: Journal Article/Review

Volume: 20

Parent Publication: SIAM Journal on Optimization

Issue: 4