EUROPT 2025
Abstract Submission

325. Greedy learning to optimise with convergence guarantees

Invited abstract in session MD-10: Interactions between optimization and machine learning, stream Zeroth and first-order optimization methods.

Monday, 16:30-18:30
Room: B100/8011

Authors (first author is the speaker)

1. Patrick Fahy
University of Bath
2. Mohammad Golbabaee
University of Bristol
3. Matthias J. Ehrhardt
University of Bath

Abstract

Learning to optimise (L2O) leverages training data to accelerate the solution of optimisation problems. Many existing methods parameterise update steps via unrolling, which often incurs prohibitive memory costs and offers no convergence guarantees. We introduce a greedy strategy that learns iteration-specific parameters by minimising the function value at the next iterate. This approach enables training over significantly more iterations while keeping GPU memory usage constant. We focus on preconditioned gradient descent with several parameterisations, including a novel convolutional preconditioner. Our method ensures that learning the parameters is no harder than solving the original optimisation problem, and it comes with convergence guarantees. We validate the approach on inverse problems such as image deblurring and computed tomography, where our learned convolutional preconditioners outperform classical methods such as Nesterov's accelerated gradient and L-BFGS.
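
To make the greedy training idea concrete, the following is a minimal sketch, not the authors' implementation: it greedily learns one scalar step size per iteration (the simplest form of preconditioner) on toy least-squares problems. The training set, function names, search interval, and the SciPy-based inner minimisation are all illustrative assumptions; the convolutional preconditioner and the convergence safeguards described in the abstract are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy training set (assumed): least-squares problems f_i(x) = 0.5 * ||A_i x - b_i||^2
rng = np.random.default_rng(0)
problems = [(rng.standard_normal((20, 10)), rng.standard_normal(20)) for _ in range(5)]

def f(A, b, x):
    r = A @ x - b
    return 0.5 * r @ r

def grad(A, b, x):
    return A.T @ (A @ x - b)

# Greedy training: at iteration k, pick the step size alpha_k that minimises the
# average objective value at the *next* iterate, then fix it and move on.
# No unrolling through iterations, so training memory stays constant in k.
K = 30
xs = [np.zeros(A.shape[1]) for A, _ in problems]
alphas = []
for k in range(K):
    def next_value(alpha):
        return np.mean([f(A, b, x - alpha * grad(A, b, x))
                        for (A, b), x in zip(problems, xs)])
    # Search interval (0, 1) is an illustrative assumption.
    alpha_k = minimize_scalar(next_value, bounds=(0.0, 1.0), method="bounded").x
    alphas.append(alpha_k)
    xs = [x - alpha_k * grad(A, b, x) for (A, b), x in zip(problems, xs)]

print("learned per-iteration step sizes:", np.round(alphas[:5], 4))
```

Because each alpha_k is fixed before the next iteration is trained, the per-iteration learning problem is a one-dimensional minimisation of the same objective, and memory does not grow with the number of iterations, which is the property the abstract highlights; the learned convolutional preconditioner would replace the scalar step with a parameterised operator applied to the gradient.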

Status: accepted

