1590. TAMUNA: Doubly-Accelerated Distributed Optimization with Local Training, Compression, and Partial Participation
Invited abstract in session WD-32: Distributed and Federated Optimization, stream Advances in large scale nonlinear optimization.
Wednesday, 14:30-16:00, Room: 41 (building: 303A)
Authors (first author is the speaker)
1. Laurent Condat, KAUST
2. Peter Richtarik, Computer Science, KAUST
Abstract
In distributed optimization and machine learning, a large number of machines perform computations in parallel and communicate back and forth with a distant server. Communication is typically slow and costly, and forms the main bottleneck in this setting. This is particularly true in federated learning, where a large number of users collaborate to optimize a global model based on their personal data, which are kept private. In addition to communication efficiency, a robust algorithm should allow for partial participation. To reduce the communication load, two strategies are popular: 1) communicate less frequently; 2) compress the communicated vectors. We introduce TAMUNA, the first algorithm that harnesses these two strategies jointly and allows for partial participation. TAMUNA converges linearly to an exact solution in the strongly convex setting and provably benefits from the two mechanisms of local training and compression: its communication complexity is doubly accelerated, with a better dependency on the condition number of the functions and on the model dimension.
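To make the two communication-saving strategies and partial participation concrete, the following is a minimal, hypothetical Python sketch of a generic federated loop on a toy least-squares problem. It is not the TAMUNA algorithm; all names, step sizes, and the random-sparsification compressor are illustrative assumptions.

```python
# Hypothetical sketch of local training + compression + partial participation
# on a toy problem (NOT the TAMUNA algorithm; parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: each of n clients holds a private strongly convex quadratic
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the goal is to minimize their average.
n_clients, dim = 10, 20
A = [rng.standard_normal((30, dim)) for _ in range(n_clients)]
b = [rng.standard_normal(30) for _ in range(n_clients)]

def grad(i, x):
    """Gradient of client i's local objective at x."""
    return A[i].T @ (A[i] @ x - b[i])

def sparsify(v, k):
    """Random-k sparsification: keep k coordinates, rescaled to stay unbiased."""
    mask = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    mask[idx] = v.size / k
    return v * mask

x = np.zeros(dim)        # global model kept by the server
lr = 1e-3                # client step size (assumed)
local_steps = 5          # communicate less frequently: several local steps
k = dim // 4             # compression level: coordinates sent per client
participation = 0.5      # fraction of clients active in each round

for _round in range(200):
    # Partial participation: only a random subset of clients takes part.
    active = rng.choice(n_clients, size=int(participation * n_clients),
                        replace=False)
    updates = []
    for i in active:
        xi = x.copy()
        # Local training: several gradient steps before communicating.
        for _ in range(local_steps):
            xi -= lr * grad(i, xi)
        # Compression: send only a sparsified model update to the server.
        updates.append(sparsify(xi - x, k))
    # Server aggregates the compressed updates from the active clients.
    x += np.mean(updates, axis=0)

avg_obj = np.mean([0.5 * np.linalg.norm(A[i] @ x - b[i]) ** 2
                   for i in range(n_clients)])
print("final average objective:", avg_obj)
```

Unlike this naive sketch, which only reduces the communication load heuristically, the abstract states that TAMUNA is designed so that local training and compression provably accelerate the communication complexity while still converging linearly to an exact solution.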
Keywords
- Convex Optimization
- Stochastic Optimization
Status: accepted