599. Universal Gradient Methods for Stochastic Convex Optimization
Invited abstract in session MC-32: Advances in Complexity of Convex and Nonconvex Problems, stream Advances in large scale nonlinear optimization.
Monday, 12:30-14:00, Room: 41 (building: 303A)
Authors (first author is the speaker)
1. Anton Rodomanov, CISPA Helmholtz Center for Information Security
Abstract
We develop universal gradient methods for Stochastic Convex Optimization (SCO). Our algorithms automatically adapt not only to the oracle's noise but also to the Hölder smoothness of the objective function, without a priori knowledge of the particular setting. The key ingredient is a novel strategy for adjusting the step-size coefficients in Stochastic Gradient Descent (SGD). Unlike AdaGrad, which accumulates gradient norms, our Universal Gradient Method accumulates appropriate combinations of gradient and iterate differences. The resulting algorithm has state-of-the-art worst-case convergence rate guarantees for the entire Hölder class, including, in particular, both nonsmooth functions and those with Lipschitz-continuous gradient. We also present the Universal Fast Gradient Method for SCO, which enjoys optimal efficiency estimates.
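The abstract describes the step-size strategy only at a high level, so the sketch below is a speculative illustration rather than the authors' algorithm. It contrasts a scalar AdaGrad-style rule (accumulating squared stochastic-gradient norms) with a hypothetical rule that instead accumulates products of gradient differences and iterate differences, in the spirit of the accumulation described above. The function names (`adagrad_sgd`, `universal_style_sgd`), the exact accumulation formula, the parameter values, and the test problem are all assumptions introduced for illustration.

```python
# Illustrative sketch only: the exact update rule of the Universal Gradient
# Method is not given in the abstract. The formulas below are assumptions
# chosen to contrast the two accumulation strategies, not the paper's method.
import numpy as np


def adagrad_sgd(grad, x0, n_steps=1000, base_lr=1.0, eps=1e-12):
    """Scalar AdaGrad-style SGD: step sizes shrink with the accumulated
    sum of squared stochastic-gradient norms."""
    x = np.asarray(x0, dtype=float)
    acc = 0.0  # running sum of squared gradient norms
    for _ in range(n_steps):
        g = grad(x)
        acc += np.dot(g, g)
        x = x - base_lr / np.sqrt(acc + eps) * g
    return x


def universal_style_sgd(grad, x0, n_steps=1000, base_lr=0.1, eps=1e-12):
    """Hypothetical variant (an assumption, not the paper's rule): the
    accumulator collects products of gradient differences and iterate
    differences between consecutive steps."""
    x = np.asarray(x0, dtype=float)
    x_prev = x.copy()
    g_prev = grad(x)
    acc = np.dot(g_prev, g_prev)  # seed the accumulator with the first gradient
    x = x - base_lr / np.sqrt(acc + eps) * g_prev
    for _ in range(n_steps - 1):
        g = grad(x)
        # accumulate a combination of gradient and iterate differences
        acc += np.linalg.norm(g - g_prev) * np.linalg.norm(x - x_prev)
        x_prev, g_prev = x.copy(), g
        x = x - base_lr / np.sqrt(acc + eps) * g
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Noisy gradient oracle for f(x) = 0.5 * ||x||^2 (a smooth convex test problem).
    noisy_grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)
    x0 = np.ones(10)
    print("AdaGrad-style final norm:  ", np.linalg.norm(adagrad_sgd(noisy_grad, x0)))
    print("Universal-style final norm:", np.linalg.norm(universal_style_sgd(noisy_grad, x0)))
```

The intended point of the comparison is that the second accumulator depends on how the gradient changes relative to the iterate movement, which is the kind of quantity that can adapt to an unknown Hölder smoothness level, whereas the AdaGrad-style accumulator reacts only to gradient magnitudes.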
Keywords
- Convex Optimization
- Stochastic Optimization
Status: accepted