2694. Accelerating convergent Plug-and-Play methods
Invited abstract in session MC-34: Optimization and learning for data science and imaging (Part III), stream Advances in large scale nonlinear optimization.
Monday, 12:30-14:00, Room: 43 (building: 303A)
Authors (first author is the speaker)
1. Andrea Sebastiani, Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia
2. Tatiana Bubba, University of Bath
3. Luca Ratti, University of Bologna
Abstract
Plug-and-Play methods are obtained by replacing the proximal operator in many first-order proximal optimization algorithms with off-the-shelf denoisers. Under suitable hypotheses, it is possible to derive the functional whose proximal operator corresponds to a particular class of denoisers, referred to as Gradient Step denoisers. This characterization allows Plug-and-Play schemes to be interpreted as the minimization of an underlying non-convex cost function, enabling the study of the convergence of such methods. In particular, this analysis can be extended to ensure the convergence of several accelerated methods. The non-convexity of the cost function strongly limits any improvement in the theoretical convergence rate. However, numerical experiments empirically demonstrate the benefits of this acceleration, which reduces the computational demand and workload required to reconstruct an image.
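For intuition, the following is a minimal Python sketch of an accelerated Plug-and-Play iteration in the spirit described above: a FISTA-style extrapolation wrapped around a forward-backward step in which the denoiser stands in for the proximal operator. The function names, step size, and the placeholder shrinkage "denoiser" are illustrative assumptions, not the authors' implementation; in the setting of the abstract, a Gradient Step denoiser of the form D = Id - grad(g), with a learned potential g, would take the denoiser's place.

```python
import numpy as np

def pnp_fista(grad_f, denoiser, x0, step, n_iter=100):
    """Sketch of an accelerated Plug-and-Play scheme: the proximal
    step of the regularizer is replaced by a denoiser (assumption:
    FISTA-style acceleration; names are hypothetical)."""
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(n_iter):
        # forward (gradient) step on the data-fidelity term f
        z = y - step * grad_f(y)
        # backward step: the denoiser plays the role of the proximal operator
        x = denoiser(z)
        # Nesterov-style extrapolation (the acceleration)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev

# Toy usage: identity forward operator (pure denoising) and a
# simple shrinkage map standing in for a learned denoiser.
if __name__ == "__main__":
    A = np.eye(16)
    b = np.random.randn(16)
    grad_f = lambda x: A.T @ (A @ x - b)  # gradient of 0.5*||Ax - b||^2
    denoiser = lambda z: 0.9 * z          # placeholder, not a trained network
    x_hat = pnp_fista(grad_f, denoiser, np.zeros(16), step=1.0, n_iter=50)
    print(x_hat[:4])
```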
Keywords
- Non-smooth Optimization
- Machine Learning
- Medical Applications
Status: accepted