Program for the stream: Advances in large scale nonlinear optimization
Monday
Monday, 8:30-10:00
MA-32: Large Scale Constrained Optimization: Algorithms and Applications
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Matteo Lapucci, Marianna De Santis

- Feature selection in linear SVMs: a scalable SDP decomposition approach using a hard cardinality constraint
  Bo Peng, Immanuel Bomze, Laura Palagi
- A branch-and-cut algorithm for biclustering via semidefinite programming
  Antonio M. Sudoso
- Combining interior and exterior penalty for nonlinear constrained black-box optimization problems
  Andrea Brilli, Everton Silva, Ana Luisa Custodio, Giampaolo Liuzzi
- Estimation of Hydraulic Parameters Using the Augmented Lagrange Method
  Fabio Fortunato Filho, José Mario Martínez
MA-34: Optimization and learning for data science and imaging (Part I)
Stream: Advances in large scale nonlinear optimization
Room: 43 (building: 303A)
Chair(s): Simone Rebegoldi, Federica Porta, Elena Morotti, Alessandro Benfenati

- Alternating Projections Methods for Matrix Completion: Regularized and Inexact Projections
  Mattia Silei, Stefania Bellavia, Simone Rebegoldi
- Fractional graph Laplacian for image reconstruction
  Marco Donatelli, Stefano Aleotti, Alessandro Buccini
- A multiplicative components framework for joint correction and segmentation of magnetic resonance images
  Marco Viola, Laura Antonelli, Valentina De Simone
- Convergence analysis of optimization-by-continuation proximal gradient algorithm and some primal-dual extensions
  Ignace Loris
Monday, 10:30-12:00
MB-32: Algorithmic Advances in Large Scale Nonconvex Optimization
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Marianna De Santis, Matteo Lapucci

- Don’t be so Monotone: Relaxing Stochastic Line Search in Over-Parameterized Models
  Leonardo Galli
- A globally convergent gradient method with momentum
  Matteo Lapucci, Giampaolo Liuzzi, Stefano Lucidi
- Stochastic gradient descent with momentum and line-searches
  Davide Pucci, Matteo Lapucci
- A Block Cubic Newton method with Greedy Rule
  Andrea Cristofari
MB-34: Optimization and learning for data science and imaging (Part II)
Stream: Advances in large scale nonlinear optimization
Room: 43 (building: 303A)
Chair(s): Simone Rebegoldi, Federica Porta, Elena Morotti, Alessandro Benfenati

- A Nested Primal–Dual Iterated Tikhonov Method for Regularized Convex Optimization
  Stefano Aleotti, Silvia Bonettini, Marco Donatelli, Marco Prato, Simone Rebegoldi
- Parameter-free FISTA
  Luca Calatroni, Jean-François Aujol, Charles Dossal, Hippolyte Labarrière, Aude Rondepierre
- Solving large-scale nonlinear least-squares with random Gauss-Newton models
  Benedetta Morini, Stefania Bellavia, Greta Malaspina
- Mean square convergence analysis of nonlinear distributed recursive estimation under heavy-tailed noise
  Manojlo Vukovic, Dusan Jakovetic, Dragana Bajovic, Soummya Kar
Monday, 12:30-14:00
MC-32: Advances in Complexity of Convex and Nonconvex Problems
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Yurii Nesterov, Geovani Grapiglia
MC-34: Optimization and learning for data science and imaging (Part III)
Stream: Advances in large scale nonlinear optimization
Room: 43 (building: 303A)
Chair(s): Simone Rebegoldi, Federica Porta, Elena Morotti, Alessandro Benfenati

- Real data EIT reconstruction using virtual X-rays and deep learning
  Siiri Rautio
- Space-Variant Total Variation boosted by learning techniques for subsampled imaging problems
  Davide Evangelista
- On stochastic first order optimization methods for deep learning applications
  Federica Porta, Giorgia Franchini, Valeria Ruggiero, Ilaria Trombini, Luca Zanni
- Accelerating convergent Plug-and-Play methods
  Andrea Sebastiani, Tatiana Bubba, Luca Ratti
Monday, 14:30-16:00
MD-32: Algorithms for machine learning and inverse problems: adaptive strategies
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Silvia Villa, Luca Calatroni, Cesare Molinari

- Inertial methods beyond minimizer uniqueness
  Hippolyte Labarrière
- Adaptive restart of conservative dynamics for convex optimization
  Alessandro Scagliotti
- Monitoring the Convergence Speed of PDHG to Find Better Primal and Dual Step Sizes
  Olivier Fercoq
- Stochastic Primal Dual Hybrid Gradient Algorithm with Adaptive Step-Sizes
  Claire Delplancke, Antonin Chambolle, Matthias J. Ehrhardt, Carola-Bibiane Schönlieb, Junqi Tang
MD-34: Preconditioning for Large Scale Nonlinear Optimization
Stream: Advances in large scale nonlinear optimization
Room: 43 (building: 303A)
Chair(s): Panos Parpas

- Loraine – An Interior-Point Solver for Low-Rank Semidefinite Programming
  Soodeh Habibi, Michal Kocvara, Michael Stingl
- Simba: A Scalable Bilevel Preconditioned Gradient Method for Fast Evasion of Flat Areas and Saddle Points
  Nick Tsipinakis, Panos Parpas
- Parallel Neural Network Training via Nonlinearly Preconditioned Trust-Region Method
  Samuel Cruz
- Stochastic Mirror Descent for Convex Optimization with Consensus Constraints
  Panos Parpas
Tuesday
Tuesday, 8:30-10:00
TA-32: Nonsmooth optimization and applications, Part I
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Valentina De Simone, Gerardo Toraldo

- An interior proximal gradient method for nonconvex optimization
  Alberto De Marchi, Andreas Themelis
- Multilevel proximal methods for image restoration
  Guillaume Lauga, Elisa Riccietti, Nelly Pustelnik
- A Scaled Gradient Projection method for the realization of the Balancing Principle in TGV-based image restoration
  Germana Landi, Marco Viola, Fabiana Zama
- A Variational Model for graph p-Laplacian eigenfunctions under p-orthogonality constraints
  Giuseppe Antonio Recupero, Serena Morigi, Alessandro Lanza
TA-34: New Algorithms for Nonlinear Optimization
Stream: Advances in large scale nonlinear optimization
Room: 43 (building: 303A)
Chair(s): Benedetta Morini

- Enhancing the convergence speed of line search methods: Applications in Neural Network training
  José Ángel Martín-Baos, Ricardo Garcia-Rodenas, Luis Rodriguez-Benitez, Maria Luz Lopez
- Sphere covering and approximating tensor norms
  Zhening Li
- First-order Trust-region Methods with Adaptive Sampling
  Sara Shashaani
Tuesday, 10:30-12:00
TB-32: Nonsmooth optimization and applications, Part II
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Valentina De Simone, Gerardo Toraldo

- Bilevel learning optimization and applications
  Serena Crisci
- Nonsmooth optimization in sparse portfolio selection
  Zelda Marino, Stefania Corsaro, Valentina De Simone
- Parallel computing in optimization methods used in estimating risk-neutral densities through option prices
  Antonio Santos, Ana Monteiro
Tuesday, 12:30-14:00
TC-32: Algorithms for machine learning and inverse problems: zeroth-order optimisation
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Silvia Villa, Luca Calatroni, Cesare Molinari

- Zeroth-order implementation of the regularized Newton method with lazy approximated Hessians
  Geovani Grapiglia
- Accelerating Randomized Adaptive Subspace Trust-Region Derivative-Free Algorithms
  Stefan M. Wild, Kwassi Joseph Dzahini, Xiaoqian Liu
- An Optimal Structured Zeroth-order Algorithm for Non-smooth Optimization
  Cesare Molinari
- Stochastic derivative-free optimization algorithms using random subspace strategies
  Kwassi Joseph Dzahini, Stefan M. Wild
Tuesday, 14:30-16:00
TD-32: Algorithms for machine learning and inverse problems: optimisation for neural networks
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Silvia Villa, Luca Calatroni, Cesare Molinari

- Differentiating Nonsmooth Solutions to Parametric Monotone Inclusion Problems
  Antonio Silveti-Falls
- Inexact Restoration trust-region algorithm with random models for unconstrained noisy optimization
  Simone Rebegoldi, Benedetta Morini
- Conservation laws for gradient flows
  Sibylle Marcotte
- (Automatic) Iterative Differentiation: some old (& new) results
  Samuel Vaiter
Wednesday
Wednesday, 8:30-10:00
WA-32: Adaptive and Polyak step-size methods
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Dmitry Kamzolov, Martin Takac

- On the Stochastic Polyak Step Size for Machine Learning: Proximal and Momentum Versions
  Fabian Schaipp
- Unveiling the Power of Adaptive Methods Over SGD: A Parameter-Agnostic Perspective
  Xiang Li
- EXTRA-NEWTON: A First Approach to Noise-Adaptive Accelerated Second-Order Methods
  Kimon Antonakopoulos
- AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size
  Petr Ostroukhov
Wednesday, 10:30-12:00
WB-32: Beyond First-Order Optimization Methods
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Dmitry Kamzolov, Martin Takac

- Adaptive Quasi-Newton and Anderson Acceleration Framework with Explicit Global (Accelerated) Convergence Rates
  Damien Scieur
- Accelerated Adaptive Cubic Regularized Quasi-Newton Methods
  Dmitry Kamzolov
- On the behavior of limited-memory quasi-Newton methods for quadratic problems
  Aban Ansari-Önnestam, Anders Forsgren
- Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness
  Artem Agafonov, Dmitry Kamzolov, Alexander Gasnikov, Ali Kavis, Kimon Antonakopoulos, Volkan Cevher, Martin Takac
Wednesday, 12:30-14:00
WC-32: Computer-Assisted Proofs in Optimization
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Eduard Gorbunov

- Constructive approaches to the analysis and construction of optimization algorithms
  Adrien Taylor
- Automated tight Lyapunov analysis for first-order methods
  Manu Upadhyaya, Sebastian Banert, Adrien Taylor, Pontus Giselsson
- Counter-examples in first-order optimization: a constructive approach
  Aymeric Dieuleveut, Adrien Taylor, Baptiste Goujaud
- Provable non-accelerations of the heavy-ball method
  Baptiste Goujaud, Adrien Taylor, Aymeric Dieuleveut
Wednesday, 14:30-16:00
WD-32: Distributed and Federated Optimization
Stream: Advances in large scale nonlinear optimization
Room: 41 (building: 303A)
Chair(s): Eduard Gorbunov

- TAMUNA: Doubly-Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
  Laurent Condat, Peter Richtarik
- Machine Learning in Untrusted Distributed Environment
  Nirupam Gupta
- Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
  Eduard Gorbunov
- Beyond spectral gap: the role of the topology in decentralized learning
  Hadrien Hendrikx