75. Double-proximal augmented Lagrangian methods with improved convergence condition
Invited abstract in session WD-35: Bilevel optimization and augmented Lagrangian methods, stream Continuous and mixed-integer nonlinear programming: theory and algorithms.
Wednesday, 14:30-16:00, Room: Michael Sadler LG15
Authors (first author is the speaker)
1. Jianchao Bai, School of Mathematics and Statistics, Northwestern Polytechnical University
Abstract
In this talk, a novel double-proximal augmented Lagrangian method (DP-ALM) will be presented for solving a family of linearly constrained convex minimization problems whose objective function is not necessarily smooth. The DP-ALM not only enjoys a flexible dual stepsize but also involves a proximal subproblem with a relatively small proximal parameter. By means of a new prediction-correction reformulation of the DP-ALM, together with variational characterizations of both the saddle points of the problem and the generated sequences, we establish its global convergence and sublinear convergence rates in both the ergodic and nonergodic senses. A toy example illustrates that the presented lower bound on the proximal parameter is optimal, i.e., the smallest possible. We also present a relaxed accelerated version and a linearized version of the DP-ALM for objective functions with composite structure. Preliminary experimental results show that the proposed methods outperform several well-established methods.
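The abstract does not spell out the iteration. For orientation only, a generic proximal ALM for $\min_x f(x)$ subject to $Ax = b$, with penalty $\beta > 0$, primal proximal parameter $\sigma > 0$, and dual stepsize $\tau > 0$, might read as follows; all symbols here are illustrative assumptions, not the scheme from the paper:

$$
\begin{aligned}
% primal step: minimize the augmented Lagrangian plus a proximal term (assumed form)
x^{k+1} &\in \operatorname*{arg\,min}_{x}\; f(x) + \langle \lambda^k,\, Ax - b\rangle + \frac{\beta}{2}\|Ax - b\|^2 + \frac{\sigma}{2}\|x - x^k\|^2,\\
% dual ascent step with stepsize factor tau (assumed form)
\lambda^{k+1} &= \lambda^k + \tau\beta\,(Ax^{k+1} - b).
\end{aligned}
$$

In this reading, the "double-proximal" scheme of the talk would add a second proximal regularization, and the stated contribution is that convergence holds under a smaller lower bound on the proximal parameter together with a flexible range for the dual stepsize; the precise updates and conditions are given in the paper.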
Keywords
- Convex Optimization
- Global Optimization
- Large Scale Optimization
Status: accepted