
A new approach to symmetric rank-one updating

IMA Journal of Numerical Analysis (1999) 19, 497-507

M. R. Osborne (Program in Advanced Computation, School of Mathematical Sciences, The Australian National University, Canberra, ACT 2601, Australia) and Linping Sun (Department of Mathematics, Nanjing University, Nanjing 210008, People's Republic of China)

[Received 28 October 1997 and in revised form 15 November 1998]

A stabilized version of the symmetric rank-one updating method for solving unconstrained optimization problems is developed by introducing a scaling parameter to ensure that successive estimates of the inverse Hessian are positive definite. The properties of this update are studied, and a new algorithm based on this procedure is proposed. This algorithm uses Davidon's idea of optimal conditioning in order to devise heuristics for selecting the scaling parameter automatically. Numerical testing shows that the new method compares favourably with good implementations of the BFGS method. Thus it appears very competitive in the class of methods which use only function and gradient information.

1. Introduction

We consider the unconstrained optimization problem

    min f(x),    (1.1)

where f : R^n → R is a twice continuously differentiable function, and it is assumed that a solution x* to this problem exists. The methods considered here fall into the class of quasi-Newton algorithms. Typically, these start each iteration given an estimate H of the inverse of the Hessian ∇²f(x) at the current point x and move to a new point x_+ given by

    x_+ = x − α H g,    (1.2)

where g = ∇f(x) and α > 0 is chosen so that f(x_+) < f(x) and certain auxiliary conditions are satisfied (see (2.3) for example). The calculation of α is called the line search step. The iteration concludes by updating H to give a revised estimate H_+ satisfying the quasi-Newton condition

    H_+ y = s,    (1.3)

where s = x_+ − x and y = g_+ − g.
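The iteration (1.2) can be illustrated with a minimal sketch. The step-length rule below is a simple Armijo backtracking line search, used here as a stand-in for the auxiliary conditions (2.3), which are not reproduced in this excerpt; the function and names are illustrative, not the authors' implementation.

```python
import numpy as np

def quasi_newton_step(f, grad, x, H, alpha0=1.0, beta=0.5, c=1e-4):
    """One quasi-Newton move x_+ = x - alpha * H * g, with alpha chosen
    by backtracking until a sufficient-decrease (Armijo) test holds."""
    g = grad(x)
    d = -H @ g                      # quasi-Newton search direction
    alpha = alpha0
    while f(x + alpha * d) > f(x) + c * alpha * (g @ d):
        alpha *= beta               # shrink the step until f decreases enough
    return x + alpha * d

# Example on a convex quadratic f(x) = 0.5 x^T A x, starting from H = I:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
x_plus = quasi_newton_step(f, grad, x, np.eye(2))
print(f(x_plus) < f(x))  # True
```

With H positive definite, d is a descent direction, so the backtracking loop terminates and each step strictly decreases f.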
Broyden (1967) proposed a rank-one updating formula for H_+:

    H_+ = H + (s − H y) q^T / (q^T y),    (1.4)

where q is a vector to be chosen.

© Oxford University Press 1999
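The symmetric rank-one (SR1) update of the title is the special case q = s − H y of Broyden's formula (1.4). The sketch below shows that case and checks the quasi-Newton condition (1.3); it does not reproduce the authors' stabilization by a scaling parameter, and the denominator safeguard is a standard assumption added here because SR1 can break down when q^T y ≈ 0.

```python
import numpy as np

def sr1_update(H, s, y, tol=1e-8):
    """Return H_+ = H + (s - H y)(s - H y)^T / ((s - H y)^T y),
    i.e. Broyden's formula (1.4) with q = s - H y.

    The update is skipped (H returned unchanged) when the denominator
    is negligible relative to |s - H y| |y|."""
    r = s - H @ y
    denom = r @ y
    if abs(denom) <= tol * np.linalg.norm(r) * np.linalg.norm(y):
        return H  # safeguard: denominator nearly zero
    return H + np.outer(r, r) / denom

# The updated matrix satisfies the quasi-Newton condition H_+ y = s:
H = np.eye(3)
s = np.array([1.0, 0.5, -0.2])
y = np.array([0.8, 0.3, 0.1])
H_plus = sr1_update(H, s, y)
print(np.allclose(H_plus @ y, s))  # True
```

Note that, unlike BFGS, this update does not guarantee that H_+ stays positive definite; ensuring that via a scaling parameter is the contribution described in the abstract.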


IMA Journal of Numerical Analysis, Volume 19 (4), Oct 1, 1999



Publisher
Oxford University Press
Copyright
Copyright © 2015 Institute of Mathematics and its Applications
ISSN
0272-4979
eISSN
1464-3642
DOI
10.1093/imanum/19.4.497


Journal

IMA Journal of Numerical Analysis, Oxford University Press

Published: Oct 1, 1999
