485. An acceleration strategy for gradient methods in convex quadratic programming
Invited abstract in session TB-3: Theoretical and algorithmic advances in large scale nonlinear optimization and applications Part 1, stream Large scale optimization: methods and algorithms.
Tuesday, 10:30-12:30, Room: B100/4011
Authors (first author is the speaker)
1. Gerardo Toraldo, Università degli Studi della Campania
2. Serena Crisci, Department of Mathematics and Physics, University of Campania "L. Vanvitelli"
3. Anna De Magistris, Department of Mathematics and Physics
4. Valentina De Simone, Mathematics and Physics, University of Campania "L. Vanvitelli"
Abstract
We propose a new acceleration strategy for gradient-based methods for solving strictly convex quadratic programming (QP) problems. The strategy is based on the idea of performing, at selected iterations, minimization steps along descent directions other than the negative gradient, or even within low-dimensional affine subspaces. In particular, separately accounting for the contributions of the linear and quadratic parts of the objective function can be useful in designing the line searches employed in the acceleration steps. We present numerical tests to investigate how acceleration steps, incorporated into different gradient methods, influence their behavior. The experiments cover randomly generated QP and box-constrained QP test problems, designed to assess the algorithms under various conditions, such as matrix dimension, condition number, and initialization strategy. Our experiments show that the use of acceleration steps in some Barzilai-Borwein (BB) methods significantly improves the computational results.
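The program page gives no implementation details, so the following is only a minimal illustrative sketch, in Python/NumPy, of the kind of scheme the abstract describes: a BB gradient iteration for a strictly convex QP in which, at selected iterations, an exact-line-search step is taken along a descent direction other than the negative gradient. The choice of acceleration direction (the latest displacement), the accel_every frequency, the function name, and the random test problem are assumptions made for illustration, not the rule proposed by the authors.

```python
import numpy as np

def accelerated_bb_gradient(A, b, x0, tol=1e-8, max_iter=5000, accel_every=10):
    """Minimize f(x) = 0.5*x'Ax - b'x, with A symmetric positive definite,
    by a Barzilai-Borwein (BB1) gradient method in which, every `accel_every`
    iterations, a step is taken along a descent direction other than the
    negative gradient, using the exact line search available for quadratics.
    This is an illustrative sketch, not the authors' method."""
    x = np.asarray(x0, dtype=float).copy()
    g = A @ x - b                                  # gradient of the quadratic objective
    x_prev = g_prev = None
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = alpha = None
        if x_prev is not None and k % accel_every == 0:
            # Acceleration step: exact line search along the latest displacement
            # (an illustrative choice of non-gradient descent direction).
            d = x - x_prev
            gd = g @ d
            if gd != 0.0:
                if gd > 0:                         # flip so d is a descent direction
                    d, gd = -d, -gd
                alpha = -gd / (d @ (A @ d))        # exact minimizer of f along d
            else:
                d = None                           # degenerate direction: fall back
        if d is None:
            d = -g                                 # standard negative-gradient direction
            if x_prev is None:
                alpha = (g @ g) / (g @ (A @ g))    # exact (Cauchy) step at k = 0
            else:
                s, y = x - x_prev, g - g_prev
                alpha = (s @ s) / (s @ y)          # Barzilai-Borwein (BB1) step length
        x_prev, g_prev = x, g
        x = x + alpha * d
        g = A @ x - b
    return x, k

# Example usage on a randomly generated strictly convex QP (illustrative only).
rng = np.random.default_rng(0)
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)                        # symmetric positive definite
b = rng.standard_normal(n)
x_star, iters = accelerated_bb_gradient(A, b, np.zeros(n))
print(iters, np.linalg.norm(A @ x_star - b))
```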
Keywords
- Computational mathematical optimization
- Linear and nonlinear optimization
Status: accepted