1671. A Nested Primal–Dual Iterated Tikhonov Method for Regularized Convex Optimization
Invited abstract in session MB-34: Optimization and learning for data science and imaging (Part II), stream Advances in large scale nonlinear optimization.
Monday, 10:30-12:00, Room: 43 (building: 303A)
Authors (first author is the speaker)
1. Stefano Aleotti, Scienza ed alta tecnologia, Università degli Studi dell'Insubria
2. Silvia Bonettini, Università degli studi di Modena e Reggio Emilia
3. Marco Donatelli, Università degli studi dell'Insubria
4. Marco Prato, Università degli studi di Modena e Reggio Emilia
5. Simone Rebegoldi, Dipartimento di Scienze Fisiche, Informatiche e Matematiche, Università di Modena e Reggio Emilia
Abstract
Proximal-gradient methods are iterative first-order techniques that are useful in various applications, such as image deblurring and denoising. Besides their potentially slow convergence, a crucial challenge is that the proximal operator is typically assumed to be computable in closed form. Adopting a variable metric and integrating an extrapolation step can enhance the efficiency of these methods; however, a significant concern then arises from the inexact computation of the proximal operator, which is often addressed through a nested primal-dual solver.
In this work, we introduce a nested primal-dual method designed for efficiently solving regularized convex optimization problems. Our proposed method approximates a variable metric proximal-gradient step with extrapolation by executing a predetermined number of primal-dual iterates while adjusting the step length parameter through an appropriate backtracking procedure.
Furthermore, we investigate the numerical performance of our proposed method on an image deblurring problem, defining a scaling matrix inspired by the Iterated Tikhonov method. The numerical results demonstrate that the combination of such scaling matrices and Nesterov-like extrapolation parameters yields an effective acceleration towards the solution of the original problem.
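The overall scheme described in the abstract can be illustrated with a minimal sketch: an outer proximal-gradient loop with Nesterov-like extrapolation, where the proximal operator of a regularizer with no closed-form prox is approximated by a fixed number of inner dual iterations. This is not the authors' implementation; the concrete choices below (a 1D least-squares data term, a first-difference regularizer, a dual projected-gradient inner solver, and the function name `inexact_prox_grad`) are illustrative assumptions standing in for the variable-metric, backtracked method of the talk.

```python
import numpy as np

def inexact_prox_grad(A, b, lam, n_outer=200, n_inner=10):
    """Sketch of an inexact proximal-gradient method for
    min_x 0.5*||A x - b||^2 + lam*||D x||_1, D = first differences.
    The prox of lam*||D.||_1 has no closed form, so it is approximated
    by n_inner dual projected-gradient steps (the nested inner solver).
    Illustrative only: fixed step lengths replace backtracking, and the
    identity metric replaces the Iterated-Tikhonov-inspired scaling."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    tau = 1.0 / np.linalg.norm(A, 2) ** 2    # outer step length, 1/L
    sigma = 1.0 / np.linalg.norm(D, 2) ** 2  # inner (dual) step length
    x = x_prev = np.zeros(n)
    u = np.zeros(n - 1)                      # dual variable, warm-started
    for k in range(1, n_outer + 1):
        beta = (k - 1.0) / (k + 2.0)         # Nesterov-like extrapolation
        y = x + beta * (x - x_prev)
        v = y - tau * A.T @ (A @ y - b)      # forward (gradient) step
        for _ in range(n_inner):             # nested primal-dual inner loop:
            # projected dual ascent on the prox subproblem
            u = np.clip(u + sigma * D @ (v - D.T @ u),
                        -lam * tau, lam * tau)
        x_prev, x = x, v - D.T @ u           # inexact proximal step
    return x
```

Warm-starting the dual variable `u` across outer iterations is what makes a small, fixed inner iteration budget viable, mirroring the "predetermined number of primal-dual iterates" in the abstract.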
Keywords
- Convex Optimization
- Large Scale Optimization
Status: accepted