447. Accelerated Gradient Methods via Inertial Systems with Hessian-driven Damping
Invited abstract in session TB-10: First order methods: new perspectives for machine learning, stream Large scale optimization: methods and algorithms.
Tuesday, 10:30-12:30, Room: B100/8011
Authors (first author is the speaker)
1. Juan Peypouquet, Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence, University of Groningen
Abstract
We analyze the convergence rate of a family of inertial algorithms, which can be obtained by discretizing an inertial system with Hessian-driven damping. For smooth, strongly convex functions, we recover a convergence rate that improves on Nesterov's scheme by up to a factor of two. As a byproduct of our analysis, we also derive linear convergence rates for convex functions satisfying a quadratic growth condition or the Polyak-Łojasiewicz inequality. A significant feature of our results is that the dependence of the convergence rate on the parameters of the inertial system/algorithm is made explicit, which yields a better understanding of the acceleration mechanism underlying an inertial algorithm.
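The abstract does not spell out the dynamics or the discretization, but a standard form in the Hessian-driven damping literature is the system x''(t) + a x'(t) + b ∇²f(x(t)) x'(t) + ∇f(x(t)) = 0, whose Hessian term can be discretized by a difference of consecutive gradients, so the resulting algorithm evaluates only gradients. The Python sketch below illustrates one scheme of this kind under that assumption; the function name inertial_hessian_damped and the parameter values are hypothetical and should not be read as the authors' exact algorithm or constants.

```python
import numpy as np

def inertial_hessian_damped(grad, x0, step, alpha, beta, iters=500):
    """Minimal sketch of an inertial scheme with Hessian-driven damping.

    Assumption: the Hessian term of the continuous-time system
        x''(t) + a x'(t) + b * Hess f(x(t)) x'(t) + grad f(x(t)) = 0
    is discretized via the gradient difference
        Hess f(x_k) x'(t_k)  ~  (grad f(x_k) - grad f(x_{k-1})) / h,
    so only gradient evaluations are needed.  The names step, alpha,
    beta are illustrative; rate-optimal choices depend on the smoothness
    and strong-convexity moduli of f.
    """
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    g_prev = grad(x_prev)
    for _ in range(iters):
        g = grad(x)
        # inertial extrapolation plus a gradient-difference correction,
        # the discrete counterpart of the Hessian-driven damping term
        y = x + alpha * (x - x_prev) - beta * (g - g_prev)
        x_prev, g_prev = x, g
        x = y - step * grad(y)  # gradient step from the extrapolated point
    return x

# Hypothetical usage on a strongly convex quadratic f(x) = 0.5 * x^T A x
if __name__ == "__main__":
    A = np.diag([1.0, 10.0, 100.0])  # smoothness L = 100, modulus mu = 1
    grad_f = lambda x: A @ x
    x_star = inertial_hessian_damped(grad_f, np.ones(3),
                                     step=1 / 100, alpha=0.8, beta=0.05)
    print(np.linalg.norm(x_star))  # distance to the minimizer 0
```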
Keywords
- Large-scale optimization
- First-order optimization
Status: accepted