273. LoCoDL: Communication-Efficient Distributed Optimization with Local Training and Compression
Invited abstract in session MB-3: First-order methods in modern optimization (Part I), stream Large scale optimization: methods and algorithms.
Monday, 10:30-12:30, Room: B100/4011
Authors (first author is the speaker)
1. Laurent Condat, KAUST
2. Arto Maranjyan, KAUST
3. Peter Richtarik, Computer Science, KAUST
Abstract
In distributed optimization, and even more so in federated learning, communication is the main bottleneck. We introduce LoCoDL, a communication-efficient algorithm that leverages two techniques: Local training, which reduces the communication frequency, and Compression with a large class of unbiased compressors that includes sparsification and quantization strategies. LoCoDL provably benefits from both mechanisms and enjoys a doubly-accelerated communication complexity, with respect to the condition number of the functions and the model dimension, in the general heterogeneous regime with strongly convex functions. The paper "LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression" was presented at ICLR 2025.
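As a minimal illustration (not taken from the paper), the following Python sketch shows one member of the class of unbiased compressors mentioned in the abstract: rand-k sparsification, where the k retained coordinates are rescaled by d/k so that the compressor is unbiased, i.e. E[C(x)] = x. The function name and parameters are illustrative assumptions, not LoCoDL's API.

```python
# Hypothetical sketch of an unbiased rand-k sparsification compressor,
# one example of the unbiased compressors referred to in the abstract.
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Keep k coordinates of x chosen uniformly at random, scaled by d/k for unbiasedness."""
    d = x.size
    mask = np.zeros(d)
    idx = rng.choice(d, size=k, replace=False)
    mask[idx] = d / k  # rescaling makes E[C(x)] = x
    return x * mask

# Usage: averaging many independent compressions approximates x,
# which empirically illustrates the unbiasedness property.
rng = np.random.default_rng(0)
x = rng.standard_normal(10)
est = np.mean([rand_k(x, k=3, rng=rng) for _ in range(20000)], axis=0)
print(np.allclose(est, x, atol=0.05))
```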
Keywords
- Distributed optimization
- First-order optimization
- Stochastic optimization
Status: accepted