354. TAMUNA: Doubly-Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
Invited abstract in session WF-5: Randomized optimization algorithms part 2/2, stream Randomized optimization algorithms.
Wednesday, 16:20 - 18:00, Room: M:N
Authors (first author is the speaker)
1. Laurent Condat (KAUST)
Abstract
In distributed optimization, a large number of machines perform computations in parallel and communicate back and forth with a distant server. Communication is the main bottleneck, as it is typically slow and costly. In addition to communication efficiency, a robust algorithm should allow for partial participation, since some machines may be unable to participate in the process at certain iterations. To reduce the communication load, two strategies are popular: 1) communicating less frequently; 2) compressing the communicated vectors. We introduce TAMUNA, the first algorithm that harnesses these two strategies jointly, provably benefits from them, and allows for partial participation. TAMUNA converges linearly to the minimizer of the sum of smooth and strongly convex functions, with double acceleration: its communication complexity exhibits a better dependency on both the condition number of the functions and the dimension.
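The abstract does not spell out the algorithm itself, so the following is only a minimal, hypothetical Python sketch of the two communication-reduction strategies it names, local training (several local gradient steps between communications) and compression (here, unbiased rand-k sparsification), together with partial participation. The names `rand_k_compress` and `distributed_gd` and all parameter values are illustrative assumptions; this naive combination lacks the correction mechanisms that let TAMUNA converge linearly and provably benefit from both strategies.

```python
# Hypothetical sketch of the setting, NOT the TAMUNA algorithm.
import numpy as np

rng = np.random.default_rng(0)

def rand_k_compress(v, k):
    """Unbiased rand-k sparsification: keep k random coordinates, rescaled."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)  # rescaling makes the compressor unbiased
    return out

def distributed_gd(grads, x0, lr=0.1, rounds=200,
                   local_steps=5, k=2, participation=0.8):
    """grads: per-machine gradient oracles of smooth, strongly convex f_i."""
    x = x0.copy()
    n = len(grads)
    for _ in range(rounds):
        # Partial participation: each machine joins a round with some probability.
        active = [i for i in range(n) if rng.random() < participation]
        if not active:
            continue
        updates = []
        for i in active:
            xi = x.copy()
            for _ in range(local_steps):     # local training: no communication here
                xi -= lr * grads[i](xi)
            updates.append(rand_k_compress(xi - x, k))  # compressed upload
        x += np.mean(updates, axis=0)        # server aggregates and broadcasts
    return x

# Toy usage: quadratics f_i(x) = 0.5 * ||x - c_i||^2, minimizer = mean of the c_i.
cs = [np.array([1.0, -2.0, 3.0]), np.array([-1.0, 0.0, 1.0])]
grads = [lambda x, c=c: x - c for c in cs]
print(distributed_gd(grads, np.zeros(3)))  # hovers near [0., -1., 2.]
```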
Keywords
- Large- and Huge-scale optimization
- Convex and non-smooth optimization
Status: accepted