197. Learning firmly nonexpansive operators
Invited abstract in session MB-3: First-order methods in modern optimization (Part I), stream Large scale optimization: methods and algorithms.
Monday, 10:30-12:30, Room: B100/4011
Authors (first author is the speaker)
1. Jonathan Chirinos Rodriguez (IRIT, INP Toulouse)
2. Emanuele Naldi (Mathematics, Università di Genova)
3. Kristian Bredies (Institute for Mathematics and Scientific Computing, University of Graz)
Abstract
In this talk, we propose a data-driven approach for constructing (firmly) nonexpansive operators. We demonstrate its applicability to Plug-and-Play (PnP) methods, where classical algorithms such as Forward-Backward splitting, the Chambolle--Pock primal-dual iteration, the Douglas--Rachford iteration, or the alternating direction method of multipliers (ADMM) are modified by replacing one proximal map with a learned firmly nonexpansive operator. We provide a sound mathematical foundation for the problem of learning such an operator via expected and empirical risk minimization. We prove that, as the number of training points increases, the empirical risk minimization problem converges (in the sense of Gamma-convergence) to the expected risk minimization problem. Further, we derive a solution strategy that yields firmly nonexpansive, piecewise affine operators on the convex hull of the training set. We show that this operator converges to the best empirical solution as the number of points in the hull increases in an appropriate way. Finally, the experimental section details practical implementations of the method and presents an application to image denoising, in which we consider a novel, interpretable PnP Chambolle--Pock primal-dual iteration.
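To make the PnP construction above concrete, here is a minimal NumPy sketch of a Chambolle--Pock iteration for denoising in which the dual proximal map is replaced by an operator `T` assumed to be firmly nonexpansive. This is not the authors' implementation: the function names (`grad`, `div`, `pnp_chambolle_pock`), the step sizes, and the stand-in choice of `T` are all illustrative assumptions.

```python
import numpy as np

def grad(u):
    """Forward-difference image gradient (the linear operator K)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return np.stack([gx, gy])

def div(p):
    """Discrete divergence, chosen so that div = -K^T (adjoint of grad)."""
    px, py = p
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def pnp_chambolle_pock(b, T, tau=0.25, sigma=0.25, theta=1.0, iters=200):
    """PnP primal-dual iteration for min_x 0.5*||x - b||^2 + g(Kx),
    with prox_{sigma g*} replaced by a learned operator T (assumed
    firmly nonexpansive). Step sizes satisfy tau*sigma*||K||^2 <= 1,
    since ||K||^2 <= 8 for the discrete gradient."""
    x = b.copy()
    x_bar = b.copy()
    y = np.zeros((2,) + b.shape)
    for _ in range(iters):
        y = T(y + sigma * grad(x_bar))                   # learned dual update
        x_old = x
        x = (x + tau * div(y) + tau * b) / (1.0 + tau)   # exact prox of data term
        x_bar = x + theta * (x - x_old)                  # extrapolation step
    return x

# Stand-in for the learned operator: projection onto the dual TV ball of
# radius lam. Projections onto convex sets are firmly nonexpansive, and
# this particular choice recovers classical TV denoising.
lam = 0.1
T = lambda p: p / np.maximum(1.0, np.sqrt((p ** 2).sum(axis=0, keepdims=True)) / lam)

noisy = np.clip(np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
                + 0.1 * np.random.randn(64, 64), 0.0, 1.0)
denoised = pnp_chambolle_pock(noisy, T)
```

In the talk's setting, the projection stand-in would be swapped for the learned piecewise affine firmly nonexpansive operator; the surrounding iteration is unchanged, which is what makes the scheme interpretable as a standard primal-dual method.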
Keywords
- Optimization for learning and data analysis
- Data-driven optimization
- Computational mathematical optimization
Status: accepted