436. On finite termination of quasi-Newton methods on quadratic problems
Invited abstract in session MD-9: Generalized convexity and monotonicity 3, stream Generalized convexity and monotonicity.
Monday, 16:30-18:30, Room: B100/8013
Authors (first author is the speaker)
1. Aban Ansari-Önnestam, Mathematics, Linköping University
2. Anders Forsgren, Department of Mathematics, KTH Royal Institute of Technology
Abstract
Quasi-Newton methods form an important class of methods for solving nonlinear optimization problems. In such methods, first-order information is used to approximate the second derivative, with the aim of mimicking the fast convergence guaranteed by Newton-based methods. In the best case, quasi-Newton methods far outperform steepest descent and other first-order methods, without the computational cost of computing the exact second derivative. These guarantees are a local result that follows from the fact that a strongly convex objective function can be approximated by a quadratic function close to the solution. Understanding the performance of quasi-Newton methods on quadratic problems with a symmetric positive definite Hessian is therefore of vital importance. This talk discusses ways in which the reliance on line search and on a full approximation of the Hessian can be relaxed, and how these changes affect the behavior of quasi-Newton methods. Using only first-order information, and without any line search except for the last step, it is possible to achieve termination in at most one iteration more than methods such as CG and BFGS.
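The finite-termination property discussed above can be illustrated numerically. Below is a minimal sketch (not the authors' method) of standard BFGS with exact line search on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x; with an identity initial approximation, this classical setting terminates in at most n iterations. Function and variable names are illustrative.

```python
import numpy as np

def bfgs_quadratic(A, b, x0, tol=1e-8, max_iter=100):
    """BFGS with exact line search on f(x) = 0.5 x^T A x - b^T x.

    For symmetric positive definite A and H0 = I, the iterates
    coincide with conjugate gradients, so the method terminates
    in at most n iterations (in exact arithmetic)."""
    n = len(b)
    H = np.eye(n)                 # inverse-Hessian approximation
    x = x0.astype(float)
    g = A @ x - b                 # gradient of the quadratic
    iters = 0
    while np.linalg.norm(g) > tol and iters < max_iter:
        p = -H @ g                            # quasi-Newton direction
        alpha = -(g @ p) / (p @ A @ p)        # exact line search step
        s = alpha * p
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        rho = 1.0 / (y @ s)
        # BFGS update of the inverse-Hessian approximation
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)
        g = g_new
        iters += 1
    return x, iters
```

On a well-conditioned 5-dimensional quadratic, the solver returns the exact minimizer A^{-1} b after at most 5 iterations, which is the baseline the talk's relaxed line-search variants are compared against.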
Keywords
- Large-scale optimization
Status: accepted