284. An acceleration strategy for gradient methods in convex quadratic programming
Invited abstract in session MB-35: Nonlinear Optimization Algorithms and Applications: 1, stream Continuous and mixed-integer nonlinear programming: theory and algorithms.
Monday, 10:30-12:00, Room: Michael Sadler LG15
Authors (first author is the speaker)
1. Gerardo Toraldo, Università degli Studi della Campania
2. Serena Crisci, Department of Mathematics and Physics, University of Campania "L. Vanvitelli"
3. Anna De Magistris, Department of Mathematics and Physics
4. Valentina De Simone, Department of Mathematics and Physics, University of Campania "L. Vanvitelli"
Abstract
It is well known that the Cauchy steepest descent (SD) algorithm performs very badly, even on mildly ill-conditioned problems, since the worst case predicted by the theory is likely to occur. Barzilai-Borwein-type steplength rules have been shown to be much more efficient than SD, with surprisingly good computational results even when generalized to nonquadratic and constrained problems. The design of effective steplength rules is the common ground on which all new gradient methods are based.
In this work we propose a different approach to accelerating the convergence of gradient methods, based on the idea of performing, in selected iterations, acceleration steps which are not gradient related.
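For context, the Barzilai-Borwein steplength rules mentioned above admit a very compact implementation. The following is a minimal sketch of a gradient method with the BB1 steplength on a strictly convex quadratic; the function name bb_gradient and the test problem are illustrative assumptions, and the non-gradient-related acceleration steps proposed in this work are not shown here.

```python
import numpy as np

def bb_gradient(A, b, x0, tol=1e-8, max_iter=1000):
    """Gradient method with the Barzilai-Borwein (BB1) steplength
    for the strictly convex quadratic f(x) = 0.5 x'Ax - b'x, A SPD.
    Illustrative sketch; not the acceleration scheme of this work."""
    x = x0.copy()
    g = A @ x - b                      # gradient of the quadratic
    alpha = 1.0 / np.linalg.norm(g)    # conservative first steplength
    for k in range(max_iter):
        x_new = x - alpha * g          # gradient step
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        x, g = x_new, g_new
        if np.linalg.norm(g) <= tol:
            break
        alpha = (s @ s) / (s @ y)      # BB1 rule: alpha = s's / s'y
    return x, k + 1

# Example: a mildly ill-conditioned 2-D quadratic
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
x_star, iters = bb_gradient(A, b, np.zeros(2))
```

Note that for SPD A the denominator s'y = s'As is positive whenever s is nonzero, so the BB1 steplength is well defined away from the solution.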
Keywords
- Continuous Optimization
- Programming, Quadratic
Status: accepted