17. Spectral Stochastic Gradient Method with Additional Sampling for Finite and Infinite Sums
Invited abstract in session WC-4: Large scale optimization and applications 1, stream Large scale optimization and applications.
Wednesday, 10:00 - 11:30, Room: C105
Authors (first author is the speaker)
1. Nataša Krklec Jerinkić, Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad
2. Valeria Ruggiero, Università di Ferrara
3. Ilaria Trombini, Università di Ferrara
Abstract
In this paper, we propose a new stochastic gradient method for the numerical minimization of finite sums, together with a modified version applicable to more general problems in which the objective function takes the form of a mathematical expectation. The method is based on a strategy that exploits the effectiveness of the well-known Barzilai-Borwein (BB)-like rules for updating the step length in the standard gradient method. The proposed method adapts this strategy to the stochastic framework by reusing the same SAA (sample average approximation) estimator of the objective function for several consecutive iterations. Furthermore, the sample size is controlled by an additional sampling step, which also plays a role in deciding whether to accept the proposed iterate. Moreover, the number of "inner" iterations performed with the same sample is controlled by an adaptive rule that prevents the method from getting stuck with the same estimator for too long.
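For reference, the two classical Barzilai-Borwein step lengths on which such BB-like rules are based (standard definitions, not taken from the abstract itself) are

\[
\alpha_k^{\mathrm{BB1}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},
\qquad
\alpha_k^{\mathrm{BB2}} = \frac{s_{k-1}^{\top} y_{k-1}}{y_{k-1}^{\top} y_{k-1}},
\]

where \(s_{k-1} = x_k - x_{k-1}\) and \(y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1})\); in the stochastic setting, \(\nabla f\) is replaced by the gradient of the current SAA estimator.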
Convergence results are discussed for both the finite-sum and the infinite-sum (expectation) versions, for general and for strongly convex objective functions. Numerical experiments on well-known binary classification datasets show very promising performance of the method, without requiring specially tuned values for the hyperparameters on which it depends.
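To make the outer/inner mechanism concrete, here is a minimal Python sketch of a scheme of this kind, assuming a logistic-regression finite sum with labels in {-1, +1}. The safeguarded BB1 step, the inner stopping rule, the acceptance test on the additional sample, and the sample-growth rule are all hypothetical simplifications for illustration; they do not reproduce the authors' actual algorithm or its convergence guarantees.

```python
# Illustrative sketch (NOT the authors' algorithm): a BB-type stochastic
# gradient loop that reuses one SAA mini-batch estimator for a few "inner"
# iterations and draws an extra, independent sample to decide whether to
# accept the new iterate and whether to enlarge the sample size.
import numpy as np

def saa_loss_grad(w, X, y, idx):
    """Logistic-regression SAA estimate of loss and gradient on subsample idx."""
    Xb, yb = X[idx], y[idx]
    z = Xb @ w
    p = 1.0 / (1.0 + np.exp(-yb * z))
    loss = np.mean(np.log1p(np.exp(-yb * z)))
    grad = Xb.T @ (-(1.0 - p) * yb) / len(idx)
    return loss, grad

def bb_step(s, g_diff, lo=1e-5, hi=1e5):
    """BB1 step length, safeguarded to [lo, hi]; fallback when curvature fails."""
    denom = s @ g_diff
    if denom <= 0:
        return 1.0
    return float(np.clip((s @ s) / denom, lo, hi))

def spectral_sg(X, y, n_iters=50, batch=64, inner_max=5, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    alpha, w_old, g_old = 1e-2, None, None
    for _ in range(n_iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)  # SAA sample
        f_cur, g = saa_loss_grad(w, X, y, idx)
        for _ in range(inner_max):  # reuse the same SAA estimator
            if w_old is not None:
                alpha = bb_step(w - w_old, g - g_old)
            w_old, g_old = w, g
            w = w - alpha * g
            f_new, g = saa_loss_grad(w, X, y, idx)
            if f_new > f_cur:  # crude inner stopping rule (assumption)
                break
            f_cur = f_new
        # "additional sampling": an independent subsample checks the iterate
        extra = rng.choice(n, size=min(batch, n), replace=False)
        f_extra, _ = saa_loss_grad(w, X, y, extra)
        if f_extra > f_cur:            # hypothetical acceptance test
            w, g = w_old, g_old        # reject the step ...
            batch = min(2 * batch, n)  # ... and enlarge the sample
    return w
```

Called as spectral_sg(X, y) with an (n, d) feature matrix X and labels y in {-1, +1}, the loop mirrors the structure described in the abstract: a BB-like step reused over several inner iterations on one SAA estimator, with an additional sample governing acceptance and sample-size growth.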
Keywords
- Global optimization
- Large- and Huge-scale optimization
- Data driven optimization
Status: accepted