EURO 2024 Copenhagen
Abstract Submission

2731. On the behavior of limited-memory quasi-Newton methods for quadratic problems

Invited abstract in session WB-32: Beyond First-Order Optimization Methods, stream Advances in large scale nonlinear optimization.

Wednesday, 10:30-12:00
Room: 41 (building: 303A)

Authors (first author is the speaker)

1. Aban Ansari-Önnestam
Mathematics, Linköping University
2. Anders Forsgren
Department of Mathematics, KTH Royal Institute of Technology

Abstract

Quasi-Newton methods form an important class of methods for solving nonlinear optimization problems. In such methods, first-order information is used to approximate the second derivative, with the aim of mimicking the fast local convergence guaranteed by Newton-based methods. In the best case, quasi-Newton methods far outperform steepest descent and other first-order methods, without the computational cost of evaluating the exact second derivative. These convergence guarantees are local, reflecting the fact that a strongly convex objective function is well approximated by a quadratic function near the solution. Understanding the behavior of quasi-Newton methods on quadratic problems with a symmetric positive definite Hessian is therefore of vital importance. In the classic setting, an approximation of the Hessian is updated at every iteration and exact line search is used. It is well known that the algorithm then terminates finitely, even when the Hessian approximation is memoryless, i.e., built from only the most recent information. This talk addresses ways in which the reliance on exact line search and the dependence on conjugate search directions can be relaxed, and how these changes affect the behavior of quasi-Newton methods.
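The classic setting referred to in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name and the small test problem are my own, while the memoryless BFGS inverse-Hessian update H = (I - rho*s*y^T)(I - rho*y*s^T) + rho*s*s^T with rho = 1/(y^T s), and the exact step length alpha = -g^T p / (p^T A p), are the standard formulas for a quadratic objective.

```python
import numpy as np

def memoryless_bfgs_quadratic(A, b, x0, tol=1e-10, max_iter=100):
    """Minimize f(x) = 0.5 x^T A x - b^T x, with A symmetric positive
    definite, using a memoryless BFGS update and exact line search.

    Illustrative sketch: the memoryless update keeps only the most
    recent pair (s, y), so H is never stored as a matrix; only
    matrix-vector products with the implicit H are formed.
    """
    x = x0.copy()
    g = A @ x - b          # gradient of the quadratic
    s = y = None           # most recent step and gradient difference
    n_iter = 0
    while np.linalg.norm(g) > tol and n_iter < max_iter:
        if s is None:
            p = -g         # first iteration: steepest descent
        else:
            # Apply H = (I - rho s y^T)(I - rho y s^T) + rho s s^T to g.
            rho = 1.0 / (y @ s)
            v = g - rho * (s @ g) * y      # (I - rho y s^T) g
            Hg = v - rho * (y @ v) * s     # (I - rho s y^T) v
            Hg = Hg + rho * (s @ g) * s    # + rho s s^T g
            p = -Hg
        # Exact line search for a quadratic: alpha = -g^T p / (p^T A p).
        alpha = -(g @ p) / (p @ (A @ p))
        s = alpha * p
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        g = g_new
        n_iter += 1
    return x, n_iter
```

With exact line search on a quadratic, this memoryless scheme generates conjugate search directions, so on an n-dimensional problem it terminates in at most n iterations (one per distinct Hessian eigenvalue), which is the finite-termination property the talk takes as its starting point.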

Keywords

Status: accepted
