556. Enhancing finite-difference-based derivative-free optimization with machine learning
Invited abstract in session MC-1: Strategies to Improve Zeroth-Order Optimization Methods, stream Zeroth and first-order optimization methods.
Monday, 14:00-16:00, Room: B100/1001
Authors (first author is the speaker)
1. Geovani Grapiglia, Applied Mathematics, Université catholique de Louvain
2. Timothé Taminiau, ICTEAM, INMA, UCLouvain
3. Estelle Massart
Abstract
Derivative-Free Optimization (DFO) comprises methods that rely solely on evaluations of the objective function. One of the earliest strategies for designing DFO methods is to adapt first-order methods by replacing gradients with finite-difference approximations. The execution of such methods generates a rich dataset about the objective function, including iterate points, function values, approximate gradients, and successful step sizes. In this work, we propose a simple auxiliary procedure that leverages this dataset to enhance the performance of finite-difference-based DFO methods. Specifically, our procedure trains a surrogate model on the available data and applies the gradient method with Armijo line search to the surrogate until it fails to ensure sufficient decrease in the true objective function, at which point we revert to the original algorithm and improve the surrogate with the newly available information. As a proof of concept, we integrate this procedure with the derivative-free method proposed in (Optim. Lett. 18: 195-213, 2024). Numerical results demonstrate significant performance improvements, particularly when the approximate gradients are also used to train the surrogates.
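The auxiliary procedure can be sketched in a few lines of code. The Python sketch below is only an illustration of the idea under stated assumptions, not the authors' implementation or the method of the cited paper: it assumes a quadratic surrogate fitted by least squares to stored iterates, function values, and finite-difference gradients, and all names (fd_gradient, fit_surrogate, surrogate_descent) and tolerances are hypothetical.

```python
import numpy as np

# Illustrative sketch: the quadratic surrogate, helper names, and tolerances
# are assumptions, not the authors' code.

def fd_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient of f at x (n extra evaluations)."""
    fx, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def _value_row(x):
    """Feature row making m(x) = c + b.x + 0.5 x'Ax linear in (c, b, A)."""
    n = x.size
    quad = [0.5 * x[p] ** 2 if p == q else x[p] * x[q]
            for p in range(n) for q in range(p, n)]
    return np.concatenate(([1.0], x, quad))

def _grad_row(x, i):
    """Feature row for component i of the model gradient b + A x."""
    n = x.size
    lin = np.zeros(1 + n)
    lin[1 + i] = 1.0
    quad = []
    for p in range(n):
        for q in range(p, n):
            if p == q:
                quad.append(x[p] if p == i else 0.0)
            else:
                quad.append(x[q] if p == i else (x[p] if q == i else 0.0))
    return np.concatenate((lin, quad))

def fit_surrogate(X, F, G):
    """Least-squares quadratic surrogate from iterates X, values F, gradients G."""
    rows, rhs = [], []
    for x, fval, g in zip(X, F, G):
        rows.append(_value_row(x)); rhs.append(fval)
        for i in range(x.size):            # also match the FD gradients
            rows.append(_grad_row(x, i)); rhs.append(g[i])
    theta, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    n = X[0].size
    c, b = theta[0], theta[1:1 + n]
    A, k = np.zeros((n, n)), 1 + n
    for p in range(n):
        for q in range(p, n):
            A[p, q] = A[q, p] = theta[k]; k += 1
    return (lambda x: c + b @ x + 0.5 * x @ A @ x), (lambda x: b + A @ x)

def surrogate_descent(f, m, gm, x, fx, rho=1e-4, max_steps=20):
    """Armijo gradient steps on the surrogate m, accepted only while they
    also give sufficient decrease of the true objective f; on failure,
    control (and the new data) returns to the base DFO method."""
    for _ in range(max_steps):
        g = gm(x)
        if g @ g < 1e-16:                  # surrogate is locally flat: stop
            break
        t = 1.0
        while m(x - t * g) > m(x) - rho * t * (g @ g) and t > 1e-10:
            t *= 0.5                       # backtracking line search on m
        x_new = x - t * g
        f_new = f(x_new)                   # one true evaluation per step
        if f_new > fx - rho * t * (g @ g):
            break                          # sufficient decrease on f failed
        x, fx = x_new, f_new
    return x, fx
```

A hypothetical usage, with the stored data standing in for the history produced by an (unspecified) finite-difference base method:

```python
f = lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2
X = [np.random.randn(2) for _ in range(6)]   # iterates from the base method
F = [f(x) for x in X]
G = [fd_gradient(f, x) for x in X]
m, gm = fit_surrogate(X, F, G)
x_best, f_best = surrogate_descent(f, m, gm, X[-1], F[-1])
```

Matching the finite-difference gradients in the least-squares fit, not only the function values, is what the abstract's final sentence suggests matters most in practice.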
Keywords
- Derivative-free optimization
- Black-box optimization
- AI-based optimization methods
Status: accepted