Program for the stream: Optimization for machine learning
Monday
Monday, 10:30-12:30
MB-05: Optimization and machine learning I
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Laurent Condat, Avetik Karagulyan
- Laplacian Regularization in Semi-Supervised Learning with Functional Data
  Zhengang Zhong
- When to Forget? Complexity Trade-offs in Machine Unlearning
  Martin Van Waerebeke, Marco Lorenzi, Giovanni Neglia, Kevin Scaman
- Searching for optimal per-coordinate stepsizes with multidimensional backtracking
  Frederik Kunstner, Victor Sanches Portella, Nick Harvey, Mark Schmidt
- Automatic recommendation of optimization methods through their worst-case complexity
  Sofiane Tanji, François Glineur
Monday, 14:00-16:00
MC-05: Optimization and machine learning II
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Laurent Condat
- Stabilized Proximal-Point Methods for Federated Optimization
  Xiaowen Jiang, Anton Rodomanov, Sebastian Stich
- A primal-dual algorithm for variational image reconstruction with learned convex regularizers
  Hok Shing Wong
- Entropic Mirror Descent for Linear Systems: Polyak’s Stepsize and Implicit Bias
  Alexander Posch, Yura Malitsky
Monday, 16:30-18:30
MD-05: Relaxed Smoothness and Convexity Assumptions in Optimization for Machine Learning
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Eduard Gorbunov
- Loss Landscape Characterization of Neural Networks without Over-Parametrization
  Rustem Islamov
- Methods for Convex (L0, L1)-Smooth Optimization: Clipping, Acceleration, and Adaptivity
  Eduard Gorbunov
- Optimizing (L0, L1)-Smooth Functions by Gradient Methods
  Anton Rodomanov, Daniil Vankov, Angelia Nedich, Lalitha Sankar, Sebastian Stich
- A Third-Order Perspective on Newton’s Method and its Application in Federated Learning
  Slavomír Hanzely
Tuesday
Tuesday, 10:30-12:30
TB-05: Randomized optimization algorithms I
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Laurent Condat
- Scalable Second-Order Optimization Algorithms for Minimizing Low-Rank Functions
  Edward Tansley, Coralia Cartis, Zhen Shao
- Distributed Optimization with Communication Compression
  Yuan Gao, Sebastian Stich
- Stochastic Gradient Descent without Variance Assumption: A Tight Lyapunov Analysis
  Lucas Ketels, Daniel Cortild, Guillaume Garrigos, Juan Peypouquet
- Communication-Efficient Algorithms for Federated Learning and Weakly Coupled Games
  Sebastian Stich, Ali Zindari, Parham Yazdkhasti, Anton Rodomanov, Tatjana Chavdarova
Tuesday, 14:00-16:00
TC-05: Randomized optimization algorithms II
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Laurent Condat
- Derivative-free stochastic bilevel optimization for inverse problems
  Mathias Staudigl, Simon Weissmann, Tristan van Leeuwen
- Proximal splitting algorithms in nonlinear spaces
  Russell Luke
- SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning
  Avetik Karagulyan
- A Stochastic Newton-type Method for Non-smooth Optimization
  Titus Pinta
Wednesday
Wednesday, 10:30-12:30
WB-04: Optimization and learning for estimation problems
Stream: Optimization for machine learning
Room: B100/5013
Chair(s): Laurent Condat
- Multilevel Plug-and-Play Image Restoration
  Nils Laurent, Julian Tachella, Elisa Riccietti, Nelly Pustelnik
- Anomaly Detection Using the Cloud of Spheres Classification Method
  Paula Amaral, Tiago Dias
- Learning Proximal Neural Networks at Equilibrium Without Jacobian
  Leo Davy, Nelly Pustelnik, Luis Briceño-Arias
- Resource-Constrained Plug-and-Play Imaging: a block proximal heavy ball approach
  Andrea Sebastiani, Federica Porta, Simone Rebegoldi
WB-05: Recent advances in min-max optimization
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Ali Kavis, Aryan Mokhtari
- Steering Towards Success: Efficient Methods for Nonconvex-Nonconcave Minimax Problems
  Pontus Giselsson, Anton Åkerman, Max Nilsson, Manu Upadhyaya, Sebastian Banert
- A Universally Optimal Primal-Dual Method for Minimizing Heterogeneous Compositions
  Benjamin Grimmer
- Parameter-free second-order methods for min-max optimization
  Ali Kavis, Ruichen Jiang, Qiujiang Jin, Sujay Sanghavi, Aryan Mokhtari
Wednesday, 14:00-16:00
WC-04: Large Scale Optimization for Statistical Learning
Stream: Optimization for machine learning
Room: B100/5013
Chair(s): Selin Ahipasaoglu
- Spatial branch-and-bound methods for solving the k-Hyperplane clustering problem
  Stefano Coniglio, Montree Jaidee
- Solving the Optimal Experiment Design Problem with Mixed-Integer Convex Methods
  Deborah Hendrych, Mathieu Besançon, Sebastian Pokutta
- A column generation approach to exact experimental design
  Selin Ahipasaoglu, Stefano Cipolla, Jacek Gondzio
WC-05: Recent Advances in Stochastic Optimization
Stream: Optimization for machine learning
Room: B100/4013
Chair(s): Chuan He
- Almost sure convergence rates for stochastic gradient methods
  Simon Weissmann
- A Hessian-Aware Stochastic Differential Equation for Modelling SGD
  Xiang Li
- Complexity guarantees for risk-neutral generalized Nash equilibrium problems
  Meggie Marschner, Mathias Staudigl
- Stochastic first-order methods can leverage arbitrarily higher-order smoothness for acceleration
  Chuan He
WC-08: Advances in non-convex optimization
Stream: Optimization for machine learning
Room: B100/7007
Chair(s): Radu-Alexandru Dragomir
- Benign landscapes for synchronization on spheres via normalized Laplacian matrices
  Andrew McRae
- Diagonal linear networks and the regularization path of the LASSO
  Raphael Berthier
- Long-time convergence of a consensus-based optimization method
  Victor Priser, Pascal Bianchi, Radu-Alexandru Dragomir
- Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data
  Nikola Konstantinov