1050. Variance Reduced Gradient Tracking for Distributed Zeroth-Order Optimization
Invited abstract in session TC-35: Recent trends in zeroth order and simulation-based optimization: 1, stream Continuous and mixed-integer nonlinear programming: theory and algorithms.
Tuesday, 12:30-14:00, Room: Michael Sadler LG15
Authors (first author is the speaker)
1. Yujie Tang, Industrial Engineering & Management, Peking University
2. Huaiyi Mu, College of Engineering, Peking University
3. Zhongkui Li, Peking University
Abstract
In this talk, we investigate distributed zeroth-order optimization for smooth nonconvex problems. We propose a new variance-reduced gradient estimator that randomly renovates one orthogonal direction of the gradient estimate in each iteration while leveraging historical snapshots for variance correction. By integrating this estimator with the gradient tracking mechanism, we address the trade-off between convergence rate and per-iteration sampling cost that exists in current distributed zeroth-order optimization algorithms, which rely on either the 2-point or the 2d-point gradient estimator. We also derive convergence rate results for the proposed algorithm in the smooth nonconvex setting, showing that it achieves performance comparable to that of its first-order counterpart. Numerical simulations comparing our algorithm with existing methods further confirm its effectiveness and efficiency.
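To make the estimator idea concrete, here is a minimal sketch, not the authors' actual algorithm: a running zeroth-order gradient estimate in which one randomly chosen coordinate is renovated per iteration via a 2-point query, with an SVRG-style snapshot correction. All names (`zo_partial`, `renovate`) and the toy objective are illustrative assumptions; the single-agent loop omits the gradient tracking and consensus steps of the distributed method.

```python
import numpy as np

def zo_partial(f, x, i, delta=1e-5):
    """2-point finite-difference estimate of the i-th partial derivative."""
    e = np.zeros_like(x)
    e[i] = delta
    return (f(x + e) - f(x - e)) / (2.0 * delta)

def zo_gradient(f, x, delta=1e-5):
    """Full 2d-point estimator: one 2-point query per coordinate."""
    return np.array([zo_partial(f, x, i, delta) for i in range(x.size)])

def renovate(f, x, g, x_snap, g_snap, i, delta=1e-5):
    """Renovate coordinate i of the running estimate g, using the
    snapshot pair (x_snap, g_snap) for SVRG-style variance correction."""
    g = g.copy()
    g[i] = zo_partial(f, x, i, delta) - zo_partial(f, x_snap, i, delta) + g_snap[i]
    return g

# Toy single-agent run on a smooth quadratic (illustrative only).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
x = np.array([1.0, -2.0, 3.0])
x_snap, g_snap = x.copy(), zo_gradient(f, x)   # periodic full snapshot
g = g_snap.copy()
for _ in range(200):                           # two 2-point queries per step
    i = int(rng.integers(x.size))
    g = renovate(f, x, g, x_snap, g_snap, i)
    x = x - 0.1 * g
```

The per-iteration query cost matches the cheap 2-point estimator, while the snapshot correction keeps the stale coordinates of `g` anchored to an unbiased full estimate, which is the trade-off the abstract describes between the 2-point and 2d-point schemes.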
Keywords
- Algorithms
- Continuous Optimization
- Programming, Nonlinear
Status: accepted