EUROPT 2024
Abstract Submission

214. Exact convergence rates of the last iterate in subgradient methods

Invited abstract in session TD-2: Recent advances in computer-aided analyses of optimization algorithms III, stream Conic optimization: theory, algorithms and applications.

Thursday, 14:10 - 15:50
Room: M:O

Authors (first author is the speaker)

1. François Glineur
ICTEAM/INMA & CORE, Université catholique de Louvain (UCLouvain)
2. Moslem Zamani
Mathematical Engineering, UCLouvain

Abstract

We study the convergence of the last iterate in subgradient methods applied to the minimization of a nonsmooth convex function with bounded subgradients.

We first introduce a proof technique that generalizes the standard analysis of subgradient methods. It is based on tracking the distance between the current iterate and a different reference point at each iteration. Using this technique, we obtain the exact worst-case convergence rate for the objective accuracy of the last iterate of the projected subgradient method with either constant step sizes or constant step lengths. Tightness is established with a worst-case instance matching the derived convergence rate. We also derive the optimal constant step size for a given number of iterations.
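The two stepping rules analyzed above can be sketched as follows. This is a minimal NumPy illustration on a simple ℓ1 example, not an implementation from the paper; the function names, the test problem, and the step-size value are purely illustrative.

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, n_iters, h=0.05, mode="size"):
    """Projected subgradient method with a constant stepping rule.

    mode="size":   x_{k+1} = P(x_k - h * g_k)             (constant step size h)
    mode="length": x_{k+1} = P(x_k - h * g_k / ||g_k||)   (constant step length h)
    Returns the last iterate, whose accuracy is the quantity studied above.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = subgrad(x)
        gnorm = np.linalg.norm(g)
        if gnorm == 0.0:
            break  # a zero subgradient certifies optimality
        d = h * g if mode == "size" else (h / gnorm) * g
        x = project(x - d)
    return x

# Illustrative problem: minimize f(x) = ||x||_1 over the box [-1, 1]^2.
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)          # one valid subgradient of the 1-norm
project = lambda x: np.clip(x, -1.0, 1.0)

x_last = projected_subgradient(subgrad, project, x0=[1.0, -1.0], n_iters=100)
```

With a constant step size the last iterate stalls at an objective level proportional to the step size, which is consistent with the worst-case behavior of this step-size regime on nonsmooth problems.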

Finally, we introduce a new optimal subgradient method that achieves the best possible last-iterate accuracy after a given number of iterations. Its convergence rate matches exactly the lower bound on the performance of any black-box method on the considered problem class. We also show that no single memoryless method can achieve this optimal rate for every number of iterations.

[The paper is available at https://arxiv.org/abs/2307.11134]

Status: accepted
