446. Equivalence between Linear Convergence and Error Bounds for Optimization Algorithms
Invited abstract in session TC-6: Structured nonsmooth optimization -- Part I, stream Nonsmooth and nonconvex optimization.
Tuesday, 14:00-16:00, Room: B100/7013
Authors (first author is the speaker)
1. Kira van Treek, Econometrics and Operations Research, Tilburg University
2. Javier Peña, Carnegie Mellon University
3. Juan Vera, Econometrics and OR, Tilburg University
4. Luis Zuluaga, Lehigh University
Abstract
Most common iterative optimization algorithms are fundamentally fixed-point iterations of an averaged operator, which typically lead to sublinear convergence rates in worst-case scenarios. We establish that a specific error bound condition for these algorithms is equivalent to their linear convergence. This result has significant implications, since proving linear convergence is reduced to verifying a regularity condition on the corresponding operator, which enables the use of tools such as Hoffman bounds to compute convergence rates. As an illustration, we enhance existing results on the linear convergence of the Alternating Direction Method of Multipliers (ADMM) and the Douglas-Rachford algorithm for structured problems, including piecewise linear, quadratic, and linear optimization. For linear optimization we show a convergence rate that is independent of the problem data. Furthermore, we use our results to prove finite termination of the Douglas-Rachford method when applied to polyhedral feasibility problems.
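For context, the following is a minimal sketch of the standard argument by which an error bound yields linear convergence of a fixed-point iteration; the operator T, the averagedness parameter alpha, and the error-bound constant kappa below are assumed notation for illustration, not the exact quantities or statement presented in the talk.

Let $T$ be $\alpha$-averaged with $\operatorname{Fix} T \neq \emptyset$ and consider the iteration $x^{k+1} = T x^k$. Averagedness gives, for every $x^\ast \in \operatorname{Fix} T$,
\[
\|Tx - x^\ast\|^2 \;\le\; \|x - x^\ast\|^2 - \frac{1-\alpha}{\alpha}\,\|x - Tx\|^2 .
\]
If in addition the error bound $\operatorname{dist}(x, \operatorname{Fix} T) \le \kappa\,\|x - Tx\|$ holds along the iterates, then taking $x^\ast$ to be the projection of $x$ onto $\operatorname{Fix} T$ yields
\[
\operatorname{dist}(x^{k+1}, \operatorname{Fix} T)^2 \;\le\; \Bigl(1 - \frac{1-\alpha}{\alpha\,\kappa^2}\Bigr)\,\operatorname{dist}(x^k, \operatorname{Fix} T)^2 ,
\]
i.e., the distance to the fixed-point set contracts at a linear (geometric) rate determined by $\alpha$ and $\kappa$.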
Keywords
- Computational mathematical optimization
- Complexity and efficiency of algorithms
- Monotone inclusion problems
Status: accepted