158. Analyzing the speed of convergence in nonsmooth optimization via the Goldstein epsilon-subdifferential
Invited abstract in session MD-6: Smoothing techniques for nonsmooth optimization, stream Nonsmooth and nonconvex optimization.
Monday, 16:30-18:30, Room: B100/7013
Authors (first author is the speaker)
1. Bennet Gebken, Department of Mathematics, Technical University of Munich
Abstract
In smooth optimization, Taylor expansion is a powerful tool for analyzing the speed of convergence of solution methods. In nonsmooth optimization, this tool is no longer available, as there is no suitable generalization of Taylor expansion for nonsmooth functions. As a result, while many solution methods for nonsmooth problems have been proposed, their speed of convergence is rarely analyzed. In this talk, I will present a novel approach, based on the Goldstein epsilon-subdifferential, for analyzing convergence behavior in nonsmooth optimization. More precisely, given a convergent sequence and upper bounds on the distance of the epsilon-subdifferential to zero as epsilon vanishes, we derive an estimate for the distance of the sequence to the minimizer. The required assumptions are polynomial growth around the minimizer and, depending on the order of growth, a higher-order generalization of semismoothness. After giving an overview of the assumptions and the theoretical results, I will show how they lead to a better understanding of the behavior of gradient sampling methods.
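For orientation, the two central objects can be stated precisely. For a locally Lipschitz function f, the Goldstein epsilon-subdifferential at x is the convex hull of the Clarke subgradients taken over an epsilon-ball around x, and polynomial growth of order p at a minimizer x̄ means f grows at least like the p-th power of the distance to x̄. These are standard definitions, not a statement of the talk's results:

```latex
\partial_\varepsilon f(x) \;=\; \operatorname{conv}\Bigl( \bigcup_{y \in \bar{B}_\varepsilon(x)} \partial f(y) \Bigr),
\qquad
f(x) - f(\bar{x}) \;\ge\; c \,\| x - \bar{x} \|^{p} \quad \text{for all } x \text{ near } \bar{x}.
```

The link to gradient sampling can be illustrated numerically: sampling gradients in the epsilon-ball and taking the minimum-norm element of their convex hull approximates dist(0, ∂_ε f(x)), the quantity whose decay (as epsilon vanishes) the talk relates to the distance to the minimizer. The following minimal Python sketch does this for the test function f(x) = |x_1| + x_2^2/2; the test function, sample size, and QP solver are illustrative choices, not taken from the talk:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    # simple nonsmooth test function: f(x) = |x_1| + x_2^2 / 2, minimizer at 0
    return abs(x[0]) + 0.5 * x[1] ** 2

def grad_f(x):
    # gradient where f is differentiable (almost everywhere); the sign(0)
    # ambiguity is harmless since random samples are a.s. differentiability points
    return np.array([np.sign(x[0]), x[1]])

def dist_to_eps_subdifferential(x, eps, n_samples=30, seed=None):
    """Approximate dist(0, Goldstein eps-subdifferential of f at x) by sampling
    gradients in the eps-ball and taking the min-norm convex combination."""
    rng = np.random.default_rng(seed)
    # sample points uniformly in the closed eps-ball around x
    dirs = rng.normal(size=(n_samples, x.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = eps * rng.random(n_samples) ** (1.0 / x.size)
    pts = x + radii[:, None] * dirs
    G = np.array([grad_f(p) for p in pts])  # rows are sampled gradients
    # min ||G^T lam||^2 over the simplex: a small convex QP, solved via SLSQP
    lam0 = np.full(n_samples, 1.0 / n_samples)
    res = minimize(
        lambda lam: np.sum((G.T @ lam) ** 2),
        lam0,
        bounds=[(0.0, 1.0)] * n_samples,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    return np.linalg.norm(G.T @ res.x)

x = np.array([1e-3, 1e-3])  # a point near the minimizer (0, 0)
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, dist_to_eps_subdifferential(x, eps, seed=0))
```

For large epsilon the ball straddles the kink of |x_1|, so gradients with both signs enter the convex hull and the min-norm value is small; for small epsilon it stays bounded away from zero unless x is close to the minimizer. This is exactly the kind of epsilon-dependent bound the talk turns into a distance estimate.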
Keywords
- Non-smooth optimization
- Complexity and efficiency of algorithms
- First-order optimization
Status: accepted