212. An adaptively inexact first-order method for bilevel learning
Invited abstract in session FC-5: Recent advances in bilevel optimization III, stream Bilevel optimization: strategies for complex decision-making.
Friday, 11:25 - 12:40, Room: M:N
Authors (first author is the speaker)
1. Mohammad Sadegh Salehi, Independent Researcher
2. Matthias J. Ehrhardt, University of Bath
3. Lindon Roberts, Australian National University
Abstract
In various imaging and data science domains, tasks are modeled using variational regularization, where selecting regularization parameters by hand is challenging, especially for regularizers with a large number of hyperparameters. Gradient-based bilevel learning offers a scalable way to learn such parameters from data. However, exact function values and gradients with respect to the hyperparameters are unattainable, so one must rely on inexact evaluations. State-of-the-art inexact gradient-based methods face difficulties in selecting accuracy sequences and appropriate step sizes, since the Lipschitz constant of the hypergradient is unknown.
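For context, bilevel learning of this kind is often written as a nested problem. The following formulation is an illustrative sketch only: the forward operator A, training pairs (y_i, x_i^\star), and parameterized regularizer R_\theta are generic placeholders, not notation taken from this abstract.

```latex
\min_{\theta}\ \frac{1}{m}\sum_{i=1}^{m}\bigl\|\hat{x}_i(\theta)-x_i^\star\bigr\|^2
\quad\text{s.t.}\quad
\hat{x}_i(\theta)\in\operatorname*{arg\,min}_{x}\ \tfrac{1}{2}\|Ax-y_i\|^2+R_\theta(x)
```

Because each lower-level solution \hat{x}_i(\theta) is itself computed by an iterative solver, the upper-level objective and its hypergradient are only available up to the solver's tolerance, which is the source of the inexactness discussed above.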
In this talk, we present our algorithm, the Method of Adaptive Inexact Descent (MAID), which features a provably convergent backtracking line search that incorporates inexact function evaluations and hypergradients. The line search ensures convergence to a stationary point while adaptively determining the accuracy required at each iteration. Numerical experiments on an image denoising problem demonstrate MAID's practical superiority over state-of-the-art methods; importantly, MAID is robust to the choice of initial accuracy and step size.
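To make the adaptive mechanism concrete, below is a minimal, hypothetical Python sketch of one step of an inexact backtracking line search in the spirit of MAID. The relaxed Armijo test, the accuracy-halving rule, and all function names here are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

def maid_style_step(f, grad, theta, alpha, eps,
                    rho=0.5, eta=1e-4, max_backtracks=20):
    """One illustrative inexact-descent step (a sketch, not the authors' MAID).

    f(theta, eps)    -- function value, assumed accurate to within eps
    grad(theta, eps) -- hypergradient, assumed accurate to within eps
    Returns the new iterate, step size, and accuracy level.
    """
    g = grad(theta, eps)
    f0 = f(theta, eps)
    for _ in range(max_backtracks):
        trial = theta - alpha * g
        # Armijo-type sufficient-decrease test, relaxed by 2*eps because
        # both function evaluations may each be off by up to eps.
        if f(trial, eps) <= f0 - eta * alpha * np.dot(g, g) + 2.0 * eps:
            return trial, alpha / rho, eps   # accept; cautiously enlarge the step
        alpha *= rho                          # reject; shrink the step and retry
    # Backtracking failed: the evaluations are likely too inexact to certify
    # decrease, so tighten the accuracy and keep the current iterate.
    return theta, alpha, eps / 2.0

# Toy usage: a quadratic with artificially eps-bounded noisy evaluations.
rng = np.random.default_rng(0)
f_noisy = lambda th, eps: 0.5 * np.dot(th, th) + eps * rng.uniform(-1.0, 1.0)
g_noisy = lambda th, eps: th + eps * rng.uniform(-1.0, 1.0, size=th.shape)

theta, alpha, eps = np.ones(3), 1.0, 1e-1
for _ in range(50):
    theta, alpha, eps = maid_style_step(f_noisy, g_noisy, theta, alpha, eps)
print(theta, eps)  # theta is driven toward 0 up to the achieved accuracy
```

The point of the sketch is the interplay claimed in the abstract: the step size is found by backtracking rather than from a Lipschitz constant, and the accuracy is tightened only when the line search can no longer certify descent under the current tolerance.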
Keywords
- Large- and Huge-scale optimization
- Multilevel optimization
- Optimization for learning and data analysis
Status: accepted