440. The Augmented Lagrangian Method for Infeasible Convex Optimization
Invited abstract in session MB-8: Systematic and computer-aided analyses I: Analyses of proximal splittings methods & friends, stream Systematic and computer-aided analyses of optimization algorithms.
Monday, 10:30-12:30, Room: B100/7007
Authors (first author is the speaker)
1. Roland Andrews (INRIA)
Abstract
The augmented Lagrangian approach is widely used for solving constrained optimization problems. This work presents a precise study of its behavior when the problem under consideration is infeasible. In particular, we show that the generated sequences of iterates converge to solutions of the closest feasible problem. In this context, we present a hierarchy of assumptions that allows us to establish a variety of precise results, ranging from sequence convergence (without rates) to explicit convergence rates. Our study leverages the classical relationship between augmented Lagrangian algorithms and proximal point methods applied to dual problems. A key tool in this analysis is a simple (and apparently new) result on the behavior of the proximal point algorithm applied to functions that may be unbounded below; specifically, it shows the convergence of convex conjugate values and subgradients, along with their respective convergence rates.
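The phenomenon described above can be illustrated numerically. The following is a minimal sketch (not taken from the talk; the problem instance and parameter choices are illustrative assumptions): the augmented Lagrangian method is run on a quadratic program whose equality constraints Ax = b are infeasible because b lies outside the range of A. The primal iterates nevertheless converge to the solution of the closest feasible problem, i.e. the one with b replaced by its projection onto the range of A, while the constraint residual converges to the smallest possible infeasibility shift.

```python
import numpy as np

# Illustrative infeasible instance: minimize (1/2)||x||^2 subject to A x = b,
# where b is NOT in the range of A, so no feasible point exists.
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])   # rank-1: range(A) = span([1, 1])
b = np.array([0.0, 2.0])     # outside range(A) -> infeasible constraints

rho = 1.0                    # penalty parameter (illustrative choice)
y = np.zeros(2)              # dual (multiplier) iterate
n = A.shape[1]

for _ in range(60):
    # Primal step: x = argmin_x (1/2)||x||^2 + y^T(Ax - b) + (rho/2)||Ax - b||^2.
    # The objective is quadratic, so the minimizer has a closed form.
    x = np.linalg.solve(np.eye(n) + rho * A.T @ A, A.T @ (rho * b - y))
    # Dual step: multiplier update along the constraint residual.
    y = y + rho * (A @ x - b)

# The projection of b = (0, 2) onto range(A) is (1, 1), so the closest feasible
# problem is min (1/2)||x||^2 s.t. x1 = 1, solved by x = (1, 0).
print(np.round(x, 6))          # primal iterate, close to (1, 0)
print(np.round(A @ x - b, 6))  # residual, close to (1, -1): the minimal shift
```

In this run the multipliers diverge (as expected for an infeasible problem), yet the primal iterates and the residual both converge, matching the kind of behavior the abstract describes.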
Keywords
- Computational mathematical optimization
- Complexity and efficiency of algorithms
- Multi-level optimization
Status: accepted