128. Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique
Invited abstract in session WC-5: Optimization for learning I, stream Optimization for learning.
Wednesday, 10:05 - 11:20, Room: M:N
Authors (first author is the speaker)
1. TaeHo Yoon, Applied Mathematics and Statistics, Johns Hopkins University
2. Jaeyeon Kim, Seoul National University
3. Jaewook Suh, Seoul National University
4. Ernest Ryu, Seoul National University
Abstract
Recently, accelerated algorithms using the anchoring mechanism for minimax optimization and fixed-point problems have been proposed, and matching complexity lower bounds establish their optimality. In this work, we present the surprising observation that the optimal acceleration mechanism in minimax optimization and fixed-point problems is not unique. Our new algorithms achieve exactly the same worst-case convergence rates as existing anchor-based methods while using materially different acceleration mechanisms. Specifically, these new algorithms are dual to the prior anchor-based accelerated methods in the sense of H-duality. This finding opens a new avenue of research on accelerated algorithms since we now have a family of methods that empirically exhibit varied characteristics while having the same optimal worst-case guarantee.
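For readers unfamiliar with the anchoring mechanism referenced above, the sketch below illustrates a generic Halpern-type anchored fixed-point iteration, which pulls each iterate back toward the starting point (the "anchor"). The operator T, the rotation example, and the coefficient schedule 1/(k+2) are illustrative assumptions for exposition only, not the specific algorithms studied in this work.

```python
# A minimal sketch (assumptions noted above, not the authors' exact method)
# of the anchoring mechanism for fixed-point problems:
#     x_{k+1} = (1/(k+2)) * x_0 + (1 - 1/(k+2)) * T(x_k)
import numpy as np

def plain_iteration(T, x0, num_iters):
    """Standard Picard iteration x_{k+1} = T(x_k)."""
    x = x0.copy()
    for _ in range(num_iters):
        x = T(x)
    return x

def anchored_iteration(T, x0, num_iters):
    """Halpern-type anchored iteration with anchor coefficients 1/(k+2)."""
    x = x0.copy()
    for k in range(num_iters):
        beta = 1.0 / (k + 2)               # anchor weight shrinks as k grows
        x = beta * x0 + (1.0 - beta) * T(x)
    return x

if __name__ == "__main__":
    # Example: T is a plane rotation, a nonexpansive map whose only fixed point is 0.
    theta = 0.5
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    T = lambda x: R @ x
    x0 = np.array([1.0, 0.0])
    for name, method in [("plain", plain_iteration), ("anchored", anchored_iteration)]:
        x = method(T, x0, 200)
        print(name, "residual ||x - T(x)|| =", np.linalg.norm(x - T(x)))
```

On this rotation example the plain iteration leaves the fixed-point residual unchanged, while the anchored iteration drives it toward zero, which is the qualitative behavior the anchoring mechanism is designed to provide.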
Keywords
- Complexity and efficiency of optimization algorithms
- Complementarity and variational problems
- SS - Optimal and stochastic optimal control and games
Status: accepted