314. The Douglas-Rachford algorithm with variable stepsizes as a relocated fixed-point iteration
Invited abstract in session TB-9: Variational Analysis I, stream Variational analysis: theory and algorithms.
Tuesday, 10:30-12:30, Room: B100/8013
Authors (first author is the speaker)
1. Felipe Atenas, University of Melbourne
2. Heinz Bauschke, University of British Columbia
3. Minh N. Dao, RMIT University
4. Matthew Tam, School of Mathematics and Statistics, University of Melbourne
Abstract
The Douglas-Rachford algorithm is a popular proximal splitting method that breaks complex problems with a sum structure into simpler pieces that are easier to handle. Traditional convergence guarantees assume a constant stepsize, whereas the theory for variable stepsizes is scarce. The fundamental challenge in varying this parameter is that the fixed point set of the Douglas-Rachford operator depends on the stepsize, which prevents the use of classical arguments to deduce convergence. To address this limitation, we propose a novel variant of the Douglas-Rachford algorithm that allows the stepsize to be updated between iterations, obtained by composing the original Douglas-Rachford iteration with a "fixed point relocator" operator. For finding a zero of the sum of two maximally monotone operators in a Hilbert space, and under mild assumptions on the asymptotic behavior of the sequence of stepsizes, we show that the resulting relocated Douglas-Rachford method converges weakly to a fixed point of the limiting iteration operator. Furthermore, we establish that the corresponding shadow sequence converges weakly to a solution of the monotone inclusion problem.
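For context, the classical constant-stepsize Douglas-Rachford iteration referred to in the abstract can be sketched as below. This is not the relocated, variable-stepsize variant proposed in the talk; the concrete choices here (A = ∂|·|, B(x) = x − c, stepsize γ = 1, and all function names) are illustrative assumptions chosen so that the resolvents have closed forms.

```python
# Sketch of the classical Douglas-Rachford iteration (constant stepsize)
# for the monotone inclusion 0 in A(x) + B(x), here with A = ∂|.| and
# B(x) = x - c, i.e. minimising |x| + (x - c)^2 / 2 on the real line.

def prox_abs(v, gamma):
    # Resolvent J_{gamma A} of A = ∂|.|: soft-thresholding.
    if v > gamma:
        return v - gamma
    if v < -gamma:
        return v + gamma
    return 0.0

def prox_quad(v, gamma, c):
    # Resolvent J_{gamma B} of B(x) = x - c.
    return (v + gamma * c) / (1.0 + gamma)

def douglas_rachford(c=3.0, gamma=1.0, iters=200):
    z = 0.0                         # governing sequence z_k
    for _ in range(iters):
        x = prox_abs(z, gamma)      # shadow sequence x_k = J_{gamma A}(z_k)
        y = prox_quad(2.0 * x - z, gamma, c)
        z = z + y - x               # z_{k+1} = z_k + J_{gamma B}(2x_k - z_k) - x_k
    return x

# The shadow sequence converges to the soft-threshold of c by 1;
# for c = 3 the solution of the inclusion is x* = 2.
```

With a fixed γ the sequence z_k converges to a fixed point of the (γ-dependent) Douglas-Rachford operator; the point of the relocated variant in the abstract is precisely to keep convergence when γ changes between iterations, which this sketch does not attempt.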
Keywords
- Computational mathematical optimization
- Monotone inclusion problems
- Non-smooth optimization
Status: accepted