330. Risk-averse guarantees for stochastic min-max problems
Invited abstract in session TC-1: First-Order Methods for Structured Optimization and Sampling, stream Zeroth and first-order optimization methods.
Tuesday, 14:00-16:00, Room: B100/1001
Authors (first author is the speaker)
1. Yassine Laguel, Université Côte d'Azur
2. Mert Gürbüzbalaban, Rutgers University
3. Necdet Serhat Aybat, Penn State University
4. Yasa Syed, Rutgers University
Abstract
We investigate the stochastic accelerated primal-dual algorithm for strongly-convex-strongly-concave (SCSC) saddle point problems, which arise in distributionally robust learning, game theory, and fairness in machine learning. The algorithm achieves optimal complexity in several settings, and we provide high-probability guarantees for convergence to a neighbourhood of the saddle point. For quadratic problems under Gaussian perturbations, we derive analytical formulas for the limiting covariance matrix, together with lower bounds showing that our general analysis for SCSC problems is tight. Our risk-averse convergence analysis characterises the trade-offs between bias and risk in approximate solutions. We present numerical experiments on zero-sum games and robust learning problems. Finally, we extend our results to nonconvex-PL problems.
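To make the setting concrete, here is a minimal sketch of one common stochastic accelerated primal-dual update pattern (an extrapolated ascent step in y followed by a descent step in x) on a toy quadratic SCSC problem with Gaussian gradient noise. The problem instance, the step sizes tau and sigma, and the momentum theta are illustrative assumptions, not the paper's exact method or tuned parameters.

```python
# Sketch of a stochastic accelerated primal-dual iteration on the toy
# quadratic SCSC saddle problem
#   L(x, y) = (mu_x/2)||x||^2 + y^T A x - (mu_y/2)||y||^2,
# whose unique saddle point is (0, 0). Gradients are perturbed by
# Gaussian noise, so iterates converge to a neighbourhood of the saddle.
import numpy as np

rng = np.random.default_rng(0)
d, mu_x, mu_y, noise = 5, 1.0, 1.0, 0.1
A = rng.standard_normal((d, d))

def grad_x(x, y):  # noisy partial gradient of L in x
    return mu_x * x + A.T @ y + noise * rng.standard_normal(d)

def grad_y(x, y):  # noisy partial gradient of L in y
    return A @ x - mu_y * y + noise * rng.standard_normal(d)

tau, sigma, theta = 0.05, 0.05, 0.9  # illustrative step sizes / momentum
x, y = np.ones(d), np.ones(d)
gy_prev = grad_y(x, y)
for _ in range(2000):
    gy = grad_y(x, y)
    # ascent in y with momentum/extrapolation on the dual gradient
    y = y + sigma * (gy + theta * (gy - gy_prev))
    # descent in x using the freshly updated y
    x = x - tau * grad_x(x, y)
    gy_prev = gy

# Both norms settle in a noise ball around 0 rather than converging exactly,
# which is the bias/risk trade-off the abstract's analysis quantifies.
print(np.linalg.norm(x), np.linalg.norm(y))
```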
Keywords
- Large-scale optimization
- Stochastic optimization
Status: accepted