63. Subsampled Trust-Region Methods with Deterministic Worst-Case Complexity Guarantees
Invited abstract in session TA-35: Nonlinear Optimization Algorithms and Applications: 4, stream Continuous and mixed-integer nonlinear programming: theory and algorithms.
Tuesday, 8:30-10:00, Room: Michael Sadler LG15
Authors (first author is the speaker)
1. Geovani Grapiglia, Applied Mathematics, Université catholique de Louvain
Abstract
In this work, we develop and analyze subsampled trust-region methods for solving finite-sum optimization problems. These methods use random subsampling strategies to approximate the gradient and Hessian of the objective function, thereby significantly reducing the overall computational cost. However, the sample size for gradients (or gradients and Hessians) is chosen deterministically, with the notable feature that a full sample may never be required. We establish deterministic worst-case iteration complexity bounds for obtaining approximate first- and second-order stationary points. Finally, numerical experiments illustrate the promising performance of the proposed algorithms.
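The abstract does not specify the algorithmic details, but the general idea it describes (a trust-region loop whose gradient and Hessian are averaged over a random subsample, with a deterministic sample-size schedule that stays strictly below the full sum) can be sketched as follows. Everything here is an illustrative assumption, not the authors' method: the sample-size schedule `min(n - 1, 2 + k)`, the Cauchy-point step, and the radius-update constants are all hypothetical choices.

```python
import numpy as np

def subsampled_trust_region(funcs, grads, hesss, x0, n, delta0=1.0,
                            eta=0.1, max_iters=100, tol=1e-8):
    """Toy subsampled trust-region loop (illustrative sketch only).

    funcs(i, x), grads(i, x), hesss(i, x) evaluate the i-th component of
    the finite sum f(x) = (1/n) * sum_i f_i(x).  The sample size grows
    deterministically with the iteration counter and is capped strictly
    below n, so a full sample is never drawn.
    """
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    delta = delta0
    for k in range(max_iters):
        m = min(n - 1, 2 + k)            # deterministic schedule, capped below n
        S = rng.choice(n, size=m, replace=False)
        g = np.mean([grads(i, x) for i in S], axis=0)
        if np.linalg.norm(g) < tol:
            break
        H = np.mean([hesss(i, x) for i in S], axis=0)
        # Cauchy step: minimize the quadratic model along -g inside the region.
        t = delta / np.linalg.norm(g)
        gHg = g @ H @ g
        if gHg > 0:
            t = min(t, (g @ g) / gHg)
        s = -t * g
        pred = -(g @ s + 0.5 * s @ H @ s)  # predicted model decrease (> 0)
        ared = (np.mean([funcs(i, x) for i in S])
                - np.mean([funcs(i, x + s) for i in S]))
        if ared >= eta * pred:             # ratio test on the sampled sum
            x = x + s
            delta = min(2.0 * delta, 1e2)
        else:
            delta *= 0.5
    return x
```

On a simple separable quadratic, f_i(x) = 0.5 * ||x - a_i||^2, the full minimizer is the mean of the a_i, and the loop above drives the iterate toward it using only subsampled information at every iteration.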
Keywords
- Continuous Optimization
- Algorithms
Status: accepted