225. General Tail Bounds for Non-Smooth Stochastic Mirror Descent
Invited abstract in session WB-1: Advances in stochastic and non-Euclidean first order methods, in the stream Zeroth and first-order optimization methods.
Wednesday, 10:30-12:30, Room: B100/1001
Authors (first author is the speaker)
1. Andrea Paudice, Computer Science, Aarhus University
2. Khaled Eldowa, Politecnico di Milano
Abstract
We consider the problem of minimizing a convex non-smooth Lipschitz function over a convex domain when the optimizer is given access to noisy stochastic estimates of its subgradients. We analyze the classical Stochastic Mirror Descent method and provide novel tail bounds on the optimization error for both the average and the last iterate. Our results extend the existing tail bounds from the classical light-tailed sub-Gaussian noise case to heavier-tailed noise regimes. We instantiate our results in two important cases: a class of noise with exponential tails and one with polynomial tails. A remarkable feature of our results is that they do not require an upper bound on the diameter of the domain. Finally, we support our theory with illustrative experiments that compare the behavior of the average of the iterates with that of the last iterate in heavy-tailed noise regimes.
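The following is a minimal, hypothetical Python sketch (not the authors' code or experiments) of the setting the abstract describes: Stochastic Mirror Descent with the negative-entropy mirror map on the probability simplex, run with heavy-tailed subgradient noise, comparing the optimization error of the averaged iterate with that of the last iterate. The linear objective, the Student-t noise model, and the 1/sqrt(t) step size are illustrative assumptions, not details taken from the abstract.

```python
# Minimal illustrative sketch: Stochastic Mirror Descent (entropic mirror map
# on the simplex) under heavy-tailed gradient noise. All modeling choices here
# (objective, noise, step size) are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 5000
c = rng.uniform(0.0, 1.0, size=d)           # linear objective f(x) = <c, x>

def noisy_subgradient(x):
    # True subgradient of f is c; add heavy-tailed (Student-t, 3 dof) noise.
    return c + 0.5 * rng.standard_t(df=3, size=d)

x = np.full(d, 1.0 / d)                      # start at the simplex center
x_avg = np.zeros(d)
for t in range(1, T + 1):
    g = noisy_subgradient(x)
    eta = 1.0 / np.sqrt(t)                   # standard 1/sqrt(t) step size
    # Entropic mirror step: multiplicative update followed by normalization.
    w = x * np.exp(-eta * g)
    x = w / w.sum()
    x_avg += (x - x_avg) / t                 # running average of the iterates

f_star = c.min()                             # optimum puts all mass on argmin c
print("last iterate gap:   ", c @ x - f_star)
print("average iterate gap:", c @ x_avg - f_star)
```

The multiplicative update is the standard entropic mirror step; replacing the mirror map with the Euclidean one would reduce the same loop to projected stochastic subgradient descent.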
Keywords
- Large-scale optimization
Status: accepted