Program for the stream "Large scale optimization: methods and algorithms"
Monday
Monday, 10:30-12:30
MB-03: First-order methods in modern optimization (Part I)
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Simone Rebegoldi, Andrea Sebastiani
- Learning firmly nonexpansive operators
  Jonathan Chirinos Rodriguez, Emanuele Naldi, Kristian Bredies
- Line search based stochastic gradient methods for learning applications
  Federica Porta
- Convergence rates of regularized quasi-Newton methods without strong convexity
  Shida Wang, Jalal Fadili, Peter Ochs
- LoCoDL: Communication-Efficient Distributed Optimization with Local Training and Compression
  Laurent Condat, Arto Maranjyan, Peter Richtarik
Monday, 14:00-16:00
MC-03: First-order methods in modern optimization (Part II)
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Simone Rebegoldi, Andrea Sebastiani
- Optimization Techniques for Learning Multi-Index Models
  Hippolyte Labarrière, Shuo Huang, Ernesto De Vito, Lorenzo Rosasco, Tomaso Poggio
- Neural Blind Deconvolution for Poisson Data
  Alessandro Benfenati, Ambra Catozzi, Valeria Ruggiero
- Adaptively Inexact Bilevel Learning via Primal-Dual Differentiation
  Mohammad Sadegh Salehi
- Alternate Through the Epochs Stochastic Gradient for Multi-Task Neural Networks
  Stefania Bellavia, Francesco Della Santa, Alessandra Papini
Tuesday
Tuesday, 10:30-12:30
TB-03: Theoretical and algorithmic advances in large scale nonlinear optimization and applications Part 1
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Stefania Bellavia, Benedetta Morini
- Fully stochastic trust-region methods with Barzilai-Borwein steplengths
  Benedetta Morini, Mahsa Yousefi, Stefania Bellavia
- prunAdag: an adaptive pruning-aware gradient method
  Giovanni Seraghiti, Margherita Porcelli, Philippe L. Toint
- An acceleration strategy for gradient methods in convex quadratic programming
  Gerardo Toraldo, Serena Crisci, Anna De Magistris, Valentina De Simone
- Corrective Frank-Wolfe: Unifying and Extending Correction Steps
  Jannis Halbey, Seta Rakotomandimby, Mathieu Besançon, Sebastian Pokutta
TB-10: First order methods: new perspectives for machine learning
Stream: Large scale optimization: methods and algorithms
Room: B100/8011
Chair(s): Cesare Molinari, Silvia Villa, Lorenzo Rosasco
- Convergence Analysis of Nonlinear Parabolic PDE Models with Neural Network Terms Trained with Gradient Descent
  Konstantin Riedl, Justin Sirignano, Konstantinos Spiliopoulos
- Randomized trust-region method for non-convex minimization
  Radu-Alexandru Dragomir
- Perspectives on the analysis and design of optimization algorithms: Lyapunov analyses and counter-examples
  Adrien Taylor
- Accelerated Gradient Methods via Inertial Systems with Hessian-driven Damping
  Juan Peypouquet
Tuesday, 14:00-16:00
TC-03: Theoretical and algorithmic advances in large scale nonlinear optimization and applications Part 2
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Stefania Bellavia, Benedetta Morini
- Advanced Techniques for Portfolio Optimization Under Uncertainty
  Valentina De Simone
- Inexact derivative-free methods for constrained bilevel optimization with applications to machine learning
  Marco Viola, Matteo Pernini, Gabriele Sanguin, Francesco Rinaldi
- Three Alternating Projection Methods for Matrix Completion
  Mattia Silei, Stefania Bellavia, Simone Rebegoldi
- Bi- and Multi-Level Optimization Strategies for Sparse and Interpretable Learning in NMF
  Laura Selicato
Wednesday
Wednesday, 10:30-12:30
WB-03: Recent Advances in Line-Search Based Optimization
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Matteo Lapucci
- Line Search Methods are Sharpness-Aware and Operate at the Edge of Stability
  Leonardo Galli
- A Gradient Method with Momentum for Riemannian Manifolds
  Diego Scuppa, Filippo Leggio, Marco Sciandrone
- A Variable Dimension Sketching Strategy for Nonlinear Least-Squares
  Greta Malaspina, Stefania Bellavia, Benedetta Morini
- Stochastic line-search-based optimization for training overparameterized models: convergence conditions and effective approaches to leverage momentum
  Davide Pucci, Matteo Lapucci
WB-08: Theoretical advances in nonconvex optimization
Stream: Large scale optimization: methods and algorithms
Room: B100/7007
Chair(s): Annette Dumas, Clément Royer
- Continuized Nesterov Acceleration to improve convergence speed in non convex optimization
  Julien Hermant, Jean-François Aujol, Charles Dossal, Aude Rondepierre
- Complexity of Newton-type methods with quadratic regularization for nonlinear least squares
  Iskander Legheraba, Clément Royer
- Cubic regularized Newton methods with stochastic Hessian evaluations and momentum-based variance reduction
  Yiming Yang
- Algorithms for nonconvex optimization on measure spaces
  Annette Dumas, Clément Royer
Wednesday, 14:00-16:00
WC-03: Acceleration Methods in Optimization
Stream: Large scale optimization: methods and algorithms
Room: B100/4011
Chair(s): Vuong Phan, Yingxin Zhou
- Anderson acceleration with adaptive relaxation for convergent fixed-point iterations
  Nicolas Lepage-Saucier
- Anderson Acceleration for Primal-Dual Hybrid Gradient
  Yingxin Zhou, Stefano Cipolla, Vuong Phan
- Accelerating Convergence of MPGP Algorithm
  Jakub Kruzik, David Horak