304. Algorithms with learned deviations
Invited abstract in session TD-2: Recent advances in computer-aided analyses of optimization algorithms III, stream Conic optimization: theory, algorithms and applications.
Thursday, 14:10 - 15:50, Room: M:O
Authors (first author is the speaker)
1. Sebastian Banert, Uni Bremen
Abstract
Deviations are a way to modify existing optimization algorithms. In contrast to, e.g., momentum terms parametrized by a single number, they can have the same dimensions as the optimization variables. Traditionally interpreted as unwanted errors, they can be used to tailor an algorithm to a specific class of problems.
Provided the deviations are chosen small enough, convergence rates can still be guaranteed.
Our bounds on the norms of the deviations depend on known quantities rather than on summability assumptions. The performance of specialized algorithms for certain problem classes can be improved by training the deviations with deep learning. This talk is based on joint work with Jevgenija Rudzusika, Ozan Öktem, Jonas Adler, Hamed Sadeghi, and Pontus Giselsson.
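The deviation mechanism described above can be illustrated with a minimal sketch. This is not the authors' actual scheme: it is an illustrative example assuming plain gradient descent on a convex quadratic, with a randomly generated stand-in for a learned deviation whose norm is capped at a fixed fraction of the nominal step, a bound computable from quantities known at each iteration rather than a summability condition.

```python
import numpy as np

def gradient_descent_with_deviations(grad, x0, step, dev_fraction, n_iter, rng):
    """Gradient descent where each update may deviate from the nominal step.

    The deviation d has the same dimension as the iterate; its norm is
    capped at dev_fraction times the norm of the nominal step, so the
    bound uses only quantities available at the current iteration.
    """
    x = x0.copy()
    for _ in range(n_iter):
        nominal = -step * grad(x)          # nominal gradient step
        d = rng.standard_normal(x.shape)   # stand-in for a learned deviation
        cap = dev_fraction * np.linalg.norm(nominal)
        norm_d = np.linalg.norm(d)
        if norm_d > cap:                   # enforce the norm bound
            d *= cap / norm_d
        x = x + nominal + d
    return x

# Minimize f(x) = 0.5 * ||A x - b||^2 (a strongly convex quadratic)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad = lambda x: A.T @ (A @ x - b)

rng = np.random.default_rng(0)
x = gradient_descent_with_deviations(grad, np.zeros(2), step=0.1,
                                     dev_fraction=0.1, n_iter=200, rng=rng)
print(np.linalg.norm(A @ x - b))  # small residual despite the deviations
```

With `dev_fraction=0.1` and this step size, the perturbed update remains a contraction toward the minimizer for this problem, so the iterates converge even though every step is perturbed; in the work the talk describes, such deviations would be produced by a trained network rather than drawn at random.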
Keywords
- Convex and non-smooth optimization
- Data driven optimization
- Large- and Huge-scale optimization
Status: accepted