63. Unifying Trust-region Algorithms with Adaptive Sampling for Nonconvex Stochastic Optimization
Invited abstract in session TB-1: Zeroth-Order Optimization Methods for Stochastic and Noisy Problems, stream Zeroth and first-order optimization methods.
Tuesday, 10:30-12:30, Room: B100/1001
Authors (first author is the speaker)
1. Sara Shashaani, Industrial and Systems Engineering, North Carolina State University
2. Yunsoo Ha, US National Renewable Energy Laboratory
Abstract
Continuous simulation optimization is challenging because the setting is derivative-free, noisy, and often nonconvex. Trust-region methods have shown remarkable robustness for this class of problems. Each iteration of a trust-region method constructs a local model within a neighborhood of the incumbent, which is used to verify sufficient reduction in the function estimate at the trial step. When the local model approximates the function well, larger neighborhoods are preferred for faster progress; conversely, unsuccessful approximations can be corrected by contracting the neighborhood. Traditional trust-region methods can be slowed down by incremental contractions that lead to numerous unnecessary iterations and significant simulation cost on the way to convergence to a stationary point. We propose a unified regime for adaptive sampling trust-region optimization (ASTRO) that can enjoy faster convergence in both iteration count and sampling effort by employing quadratic regularization and dynamically adjusting the trust-region size based on gradient estimates.
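To make the mechanics concrete, the following is a minimal generic sketch of an adaptive-sampling trust-region loop, not the authors' ASTRO algorithm: the noisy test objective, the sample-size rule `n ~ 1/delta^2`, the gradient-linked radius cap, and all constants are illustrative assumptions.

```python
import numpy as np

def make_noisy_quadratic(sigma=0.01):
    """Hypothetical noisy test objective: f(x) = ||x||^2 + Gaussian noise."""
    def f(x, rng):
        return float(x @ x + sigma * rng.standard_normal())
    return f

def sample_mean(f, x, n, rng):
    """Monte Carlo estimate of E[f(x)] from n simulation replications."""
    return sum(f(x, rng) for _ in range(n)) / n

def grad_estimate(f, x, n, rng, h=0.05):
    """Central finite-difference gradient from sample means (derivative-free)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (sample_mean(f, x + e, n, rng) - sample_mean(f, x - e, n, rng)) / (2 * h)
    return g

def adaptive_tr(f, x0, delta0=1.0, iters=30, seed=0):
    """Sketch of one adaptive-sampling trust-region scheme (assumed constants)."""
    rng = np.random.default_rng(seed)
    x, delta = np.asarray(x0, float), delta0
    for _ in range(iters):
        # Adaptive sampling: more replications as the trust region shrinks,
        # so estimation error stays small relative to the expected reduction.
        n = max(8, int(np.ceil(4.0 / delta**2)))
        g = grad_estimate(f, x, n, rng)
        gnorm = float(np.linalg.norm(g))
        if gnorm < 1e-6:
            break
        # Gradient-linked trust region: cap the radius by a multiple of ||g||.
        delta = min(delta, 2.0 * gnorm)
        s = -(delta / gnorm) * g            # Cauchy step along -g to the boundary
        pred = delta * gnorm                # reduction predicted by the linear model
        ared = sample_mean(f, x, n, rng) - sample_mean(f, x + s, n, rng)
        rho = ared / pred                   # ratio test on *estimated* reduction
        if rho > 0.1:                       # sufficient estimated reduction: accept
            x = x + s
            if rho > 0.75:
                delta *= 2.0                # good model fit: expand neighborhood
        else:
            delta *= 0.5                    # poor fit: contract neighborhood
    return x

x = adaptive_tr(make_noisy_quadratic(), x0=[2.0, -1.5])
```

The contraction rule halves the radius, so the sample size roughly quadruples with each unsuccessful iteration; tying the radius to the gradient-norm estimate is one simple way to avoid a long tail of incremental contractions near a stationary point.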
Keywords
- Stochastic optimization
- Derivative-free optimization
- First-order optimization
Status: accepted