421. Learning Proximal Neural Networks at Equilibrium Without Jacobian
Invited abstract in session WB-4: Optimization and learning for estimation problems, stream Optimization for machine learning.
Wednesday, 10:30-12:30, Room: B100/5013
Authors (first author is the speaker)
| # | Author | Affiliation |
|---|--------|-------------|
| 1. | Leo Davy | Lab. Physique, ENS Lyon |
| 2. | Nelly Pustelnik | Univ Lyon, Ens de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon |
| 3. | Luis Briceño-Arias | Matemática, Universidad Técnica Federico Santa María |
Abstract
Proximal Neural Networks (PNNs) have been introduced as a principled framework for designing neural networks with desirable properties such as interpretability, stability, and convergence guarantees. In this talk, we focus on using PNNs to solve parametric variational formulations and address the bilevel optimization problem of learning hyperparameters. We consider the Deep Equilibrium formalism and demonstrate how the convergence properties of convex optimization algorithms can simplify the learning process by justifying the use of a technique known in the literature as Jacobian-Free Backpropagation. Finally, we present numerical results in image restoration to illustrate the effectiveness of our approach.
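To make the idea of Jacobian-Free Backpropagation concrete, here is a minimal sketch in a Deep Equilibrium setting: the fixed-point iteration is run without gradient tracking, and only a single tracked application of the layer at equilibrium is differentiated, avoiding the implicit-function Jacobian solve. The layer (`layer`), proximal step (`soft_threshold`), and parameters (`W`, `lam`) are hypothetical stand-ins, not the PNN architecture of the talk.

```python
import torch

def soft_threshold(v, lam):
    # Proximity operator of lam * ||.||_1 (soft-thresholding).
    return torch.sign(v) * torch.clamp(v.abs() - lam, min=0.0)

def layer(z, x, W, lam):
    # One illustrative forward-backward-type update: prox applied after a
    # linear step. The actual PNN layer in the talk may differ.
    return soft_threshold(z - z @ W.T + x, lam)

def deq_forward_jfb(x, W, lam, iters=200):
    # Run the fixed-point iteration to (approximate) equilibrium WITHOUT
    # tracking gradients; convergence of the underlying convex optimization
    # scheme is what makes this equilibrium well defined.
    z = torch.zeros_like(x)
    with torch.no_grad():
        for _ in range(iters):
            z = layer(z, x, W, lam)
    # Jacobian-Free Backpropagation: differentiate one tracked layer
    # application at equilibrium instead of solving the linear system
    # involving (I - J)^{-1} from the implicit function theorem.
    return layer(z.detach(), x, W, lam)

# Usage sketch: learn the hyperparameter lam (and W) by backpropagating
# a reconstruction loss through the single tracked step only.
x_obs = torch.randn(4, 16)
target = torch.randn(4, 16)
W = torch.nn.Parameter(0.05 * torch.randn(16, 16))
lam = torch.nn.Parameter(torch.tensor(0.1))
opt = torch.optim.Adam([W, lam], lr=1e-2)
for _ in range(10):
    opt.zero_grad()
    loss = torch.mean((deq_forward_jfb(x_obs, W, lam) - target) ** 2)
    loss.backward()
    opt.step()
```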
Keywords
- Multi-level optimization
- Optimization for learning and data analysis
- Non-smooth optimization
Status: accepted