946. Gradient-Based Stochastic Optimization via Finite Differences: When To Randomize?
Invited abstract in session TA-35: Optimization under uncertainty: theory and solution algorithms, stream Stochastic, Robust and Distributionally Robust Optimization.
Tuesday, 8:30-10:00, Room: 44 (building: 303A)
Authors (first author is the speaker)
1. Michael Fu, Smith School of Business, University of Maryland
2. Jiaqiao Hu, Applied Mathematics and Statistics, Stony Brook University
Abstract
We consider optimization of noisy black-box functions via gradient search, where the gradient is estimated using finite differences. Applying a recently developed convergence rate analysis yields a finite-time error bound for a class of problems with convex differentiable structure. The results provide insight into when randomized gradient approaches such as simultaneous perturbation stochastic approximation (SPSA) might be advantageous, depending on problem dimension and noise level.
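To illustrate the two estimators contrasted in the abstract, the sketch below compares a coordinate-wise finite-difference gradient estimate (2d evaluations per step) with an SPSA estimate (2 evaluations per step, independent of dimension) on a noisy quadratic. This is not the authors' analysis or code; the objective `noisy_f`, step sizes, and perturbation size are illustrative assumptions.

```python
# Minimal sketch: finite-difference vs. SPSA gradient estimation for
# gradient search on a noisy black-box function. All names and
# parameters are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, noise_std=0.1):
    """Noisy black-box objective: a quadratic plus Gaussian noise."""
    return float(x @ x) + noise_std * rng.standard_normal()

def fd_gradient(f, x, c=1e-2):
    """Central finite differences: 2*d function evaluations per estimate."""
    d = x.size
    g = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = c
        g[i] = (f(x + e) - f(x - e)) / (2 * c)
    return g

def spsa_gradient(f, x, c=1e-2):
    """SPSA: 2 function evaluations per estimate, regardless of dimension.
    Perturbs all coordinates at once with a random Rademacher (+/-1) vector."""
    delta = rng.choice([-1.0, 1.0], size=x.size)
    return (f(x + c * delta) - f(x - c * delta)) / (2 * c) * (1.0 / delta)

def sgd(grad_est, x0, steps=200, a=0.1):
    """Stochastic gradient descent with a decaying step size a/k."""
    x = x0.copy()
    for k in range(1, steps + 1):
        x -= (a / k) * grad_est(noisy_f, x)
    return x

x0 = np.ones(20)
print("FD   final ||x||:", np.linalg.norm(sgd(fd_gradient, x0)))
print("SPSA final ||x||:", np.linalg.norm(sgd(spsa_gradient, x0)))
```

The trade-off the abstract examines is visible in the evaluation counts: the finite-difference estimator spends more function evaluations per step to reduce estimator noise, while SPSA keeps the per-step cost constant at the price of a noisier gradient estimate.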
Keywords
- Stochastic Optimization
Status: accepted