54. Stochastic Optimization under Hidden Convexity
Invited abstract in session WF-6: Stochastic Gradient Methods: Bridging Theory and Practice, stream Challenges in nonlinear programming.
Wednesday, 16:20 - 18:00, Room: M:H
Authors (first author is the speaker)
1. Ilyas Fatkhullin, Computer Science, ETH Zürich
2. Niao He, Industrial and Enterprise Systems Engineering, University of Illinois at Urbana-Champaign
3. Yifan Hu, EPFL
Abstract
In this work, we consider constrained stochastic optimization problems under hidden convexity, i.e., those that admit a convex reformulation via a non-linear (but invertible) map c(⋅). A number of non-convex problems, ranging from optimal control and revenue and inventory management to convex reinforcement learning, admit such a hidden convex structure. Unfortunately, in the majority of applications considered, the map c(⋅) is unavailable or implicit, so directly solving the convex reformulation is not possible. On the other hand, stochastic gradients with respect to the original variable are often easy to obtain. Motivated by these observations, we examine the basic projected stochastic (sub)gradient methods for solving such problems under hidden convexity. We provide the first sample complexity guarantees for global convergence in both smooth and non-smooth settings. Additionally, in the smooth setting, we improve our results to last-iterate convergence in terms of the function value gap using a momentum variant of projected stochastic gradient descent.
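As an illustration of the basic method the abstract refers to, here is a minimal sketch of projected stochastic gradient descent with an optional heavy-ball momentum buffer. This is not the paper's algorithm or its step-size schedule; the projection set (a Euclidean ball), the learning rate, the momentum parameterization, and the toy objective are all assumptions made for the example. Note that, as in the abstract's setting, the hidden convex reformulation via c(⋅) is never used; only stochastic gradients in the original variable are queried.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius} (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def projected_sgd(stoch_grad, x0, steps=1000, lr=0.01, momentum=0.0, project=project_ball):
    """Projected stochastic gradient method with optional heavy-ball momentum.

    stoch_grad(x) returns an unbiased stochastic gradient at x; the map c(.)
    from the convex reformulation is never accessed.
    """
    x = project(x0)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = stoch_grad(x)
        v = momentum * v + g      # momentum buffer (reduces to plain SGD when momentum = 0)
        x = project(x - lr * v)   # projected step keeps iterates feasible
    return x

# Toy usage: noisy gradients of the convex quadratic f(x) = ||x - b||^2 / 2
rng = np.random.default_rng(0)
b = np.array([2.0, -1.0])
noisy_grad = lambda x: (x - b) + 0.1 * rng.standard_normal(x.shape)
x_out = projected_sgd(noisy_grad, x0=np.zeros(2), steps=5000, lr=0.05, momentum=0.9)
print(x_out)  # approximately the projection of b onto the unit ball
```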
Keywords
- Convex and non-smooth optimization
- Optimization under uncertainty and applications
- Complexity and efficiency of optimization algorithms
Status: accepted