Program for stream: Optimization for learning
Wednesday
Wednesday, 10:05 - 11:20
WC-05: Optimization for learning I
Stream: Optimization for learning
Room: M:N
Chair(s): Manu Upadhyaya

- Incorporating History and Deviations in Forward-Backward Splitting
  Pontus Giselsson
- Optimal Acceleration for Minimax and Fixed-Point Problems is Not Unique
  TaeHo Yoon, Jaeyeon Kim, Jaewook Suh, Ernest Ryu
- Accelerated Algorithms for Nonlinear Matrix Decomposition with the ReLU Function
  Giovanni Seraghiti, Arnaud Vandaele, Margherita Porcelli, Nicolas Gillis
Wednesday, 11:25 - 12:40
WD-05: Optimization for learning II
Stream: Optimization for learning
Room: M:N
Chair(s): Manu Upadhyaya

- Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization
  Hanmin Li, Avetik Karagulyan, Peter Richtarik
- Optimization flows landing on the Stiefel manifold: continuous-time flows, deterministic and stochastic algorithms
  Bin Gao, P.-A. Absil
- Is maze-solving parallelizable?
  Romain Cosson
Thursday
Thursday, 10:05 - 11:20
TB-05: Optimization for learning III
Stream: Optimization for learning
Room: M:N
Chair(s): Max Nilsson

- MAST: Model-Agnostic Sparsified Training
  Egor Shulgin, Peter Richtarik
- Online Learning and Information Exponents: The Importance of Batch size & Time/Complexity Tradeoffs
  Ludovic Stephan
- Compressed and distributed least-squares regression: convergence rates with applications to Federated Learning
  Constantin Philippenko, Aymeric Dieuleveut