306. A Stochastic Newton-type Method for Non-smooth Optimization
Invited abstract in session TC-5: Randomized Optimization algorithms II, stream Optimization for machine learning.
Tuesday, 14:00-16:00, Room: B100/4013
Authors (first author is the speaker)
1. Titus Pinta, Mathematics, ENSTA
Abstract
We introduce a new framework for analyzing (Quasi-)Newton type methods applied to non-smooth optimization problems. The source of randomness comes from the evaluation of the (approximation of the) Hessian. Using a variant of Chernoff bounds for stopping times, we derive expectation and probability bounds for the random variable representing the number of iterations of the algorithm until approximate first-order optimality conditions are satisfied. As an important distinction from previous results in the literature, we do not require that the estimator be unbiased or have finite variance. We then showcase our theoretical results in a stochastic Quasi-Newton method for X-ray free electron laser orbital tomography and in a sketched Newton method for image denoising.
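To give a rough sense of the sketched-Hessian idea mentioned above (this is a generic illustration, not the authors' algorithm, and the function name, sketch size, and tolerance below are illustrative assumptions), the following minimal Python sketch runs a Newton-type iteration on a least-squares objective where the Hessian is replaced by a randomly sketched estimate, and the iteration count until approximate first-order optimality is itself a random variable.

```python
import numpy as np

def sketched_newton(A, b, tol=1e-6, sketch_size=100, max_iter=500, seed=0):
    """Illustrative sketched Newton iteration (not the authors' method):
    minimize f(x) = 0.5 * ||A x - b||^2 using a randomized Hessian estimate
    H_k = (S A)^T (S A) built from a Gaussian sketch S. The stopping time is
    the (random) first iteration with ||grad f(x_k)|| <= tol."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(max_iter):
        grad = A.T @ (A @ x - b)
        if np.linalg.norm(grad) <= tol:          # approximate first-order optimality
            return x, k
        # Gaussian sketch of the data matrix -> random estimate of the Hessian A^T A
        S = rng.standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
        SA = S @ A
        H = SA.T @ SA + 1e-8 * np.eye(n)         # small ridge for invertibility
        x = x - np.linalg.solve(H, grad)         # Newton-type step with random Hessian
    return x, max_iter

# Usage on a random least-squares instance: the returned iteration count is random.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
b = rng.standard_normal(200)
x_hat, iters = sketched_newton(A, b)
print(iters, np.linalg.norm(A.T @ (A @ x_hat - b)))
```

The abstract's analysis concerns exactly this kind of random stopping time, but under weaker assumptions: the Hessian estimator need not be unbiased nor have finite variance, and the objective may be non-smooth.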
Keywords
- Stochastic optimization
- Non-smooth optimization
Status: accepted