568. Parameter-free second-order methods for min-max optimization
Invited abstract in session WB-5: Recent advances in min-max optimization, stream Optimization for machine learning.
Wednesday, 10:30-12:30, Room: B100/4013
Authors (first author is the speaker)
1. Ali Kavis, ECE Department, The University of Texas at Austin
2. Ruichen Jiang, ECE Department, The University of Texas at Austin
3. Qiujiang Jin, Goldman Sachs
4. Sujay Sanghavi, ECE Department, The University of Texas at Austin
5. Aryan Mokhtari, ECE Department, The University of Texas at Austin
Abstract
I will talk about adaptive, line-search-free second-order methods with an optimal rate of convergence for solving convex-concave min-max problems. By means of an adaptive step size, our algorithms feature a simple update rule that requires solving only one linear system per iteration, eliminating the need for line-search or backtracking mechanisms. Specifically, we base our algorithms on the optimistic method and appropriately combine it with second-order information. Moreover, distinct from common adaptive schemes, we define the step size recursively as a function of the gradient norm and the prediction error in the optimistic update. We first analyze a variant in which the step size requires knowledge of the Lipschitz constant of the Hessian. Under the additional assumption of Lipschitz continuous gradients, we further design a parameter-free version by tracking the Hessian Lipschitz constant locally and ensuring that the iterates remain bounded. We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization. Inspired by the adaptive design of the step size, we propose a heuristic initialization rule that performs competitively across different problems and scenarios and eliminates the need to fine-tune the step size.
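To illustrate the ingredients mentioned in the abstract (an optimistic update combined with second-order information, one linear system per iteration, and a step size driven recursively by the gradient norm and a prediction error), here is a minimal, hypothetical sketch in Python. The specific update rule, the step-size recursion, and the names `F`, `JF`, and `L_hess` are assumptions introduced for illustration only; they are not the authors' exact algorithm or analysis.

```python
import numpy as np

def optimistic_second_order_sketch(F, JF, z0, num_iters=50, eta0=1.0, L_hess=1.0):
    """Hypothetical sketch of a second-order optimistic update for a monotone
    operator F (e.g., the gradient field of a convex-concave saddle function).
    Illustration only; not the speakers' exact method."""
    z = z0.astype(float).copy()
    z_prev = z.copy()
    eta = eta0
    for _ in range(num_iters):
        g = F(z)            # current operator value
        g_prev = F(z_prev)  # operator value at the previous iterate
        J = JF(z)           # Jacobian of F (second-order information)
        # One linear system per iteration (assumed form):
        # (I + eta * J) d = -eta * (2*g - g_prev), an optimistic
        # (extrapolated) direction corrected by curvature.
        d = np.linalg.solve(np.eye(z.size) + eta * J, -eta * (2.0 * g - g_prev))
        z_prev, z = z, z + d
        # Heuristic recursive step size (assumption): shrink eta when the
        # prediction error of the local linear model is large relative to
        # the current gradient norm.
        pred_err = np.linalg.norm(F(z) - (g + J @ d))
        grad_norm = max(np.linalg.norm(F(z)), 1e-12)
        eta = eta / (1.0 + L_hess * pred_err / grad_norm)
    return z
```

As a usage sketch, for a bilinear saddle problem min_x max_y x^T A y one could set `F = lambda z: np.concatenate([A @ z[n:], -A.T @ z[:n]])` with the corresponding block Jacobian `JF`; the point of the sketch is only to show where the single linear solve and the recursive step-size update sit in the loop.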
Keywords
- Second- and higher-order optimization
- Monotone inclusion problems
Status: accepted