130. SDP performance estimation for the boosted DCA
Invited abstract in session WC-2: Systematic and computer-aided analyses VI: Systematic approaches to the analyses of proximal and higher-order methods, in the stream Systematic and computer-aided analyses of optimization algorithms.
Wednesday, 14:00-16:00, Room: B100/7011
Authors (first author is the speaker)
1. Etienne De Klerk, Econometrics and OR, Tilburg University
2. Hadi Abbaszadehpeivasti, Tilburg University
Abstract
The difference-of-convex algorithm (DCA) is a well-established nonlinear programming technique that solves a sequence of convex subproblems obtained from difference-of-convex (DC) decompositions of the objective and constraint functions. We investigate the worst-case performance of the unconstrained DCA, with and without boosting, where boosting simply performs an additional step along the direction generated by the usual DCA iteration. We show that, for certain classes of DC decompositions, the boosted DCA is provably better in the worst case than the usual DCA. The proof technique relies on semidefinite programming (SDP) performance estimation, as introduced by Drori and Teboulle. This talk is based on joint work with Hadi Abbaszadehpeivasti.
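To illustrate the two methods compared in the abstract, the sketch below shows a plain DCA step and a boosted step that moves further along the DCA direction with a backtracking line search. It is a minimal toy example, assuming a simple quadratic DC decomposition f(x) = g(x) - h(x) with g(x) = 0.5 x'Qx and h(x) = 0.5 x'Px; the instance, step sizes, and tolerances are hypothetical and are not the decompositions or the performance-estimation analysis studied in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Toy DC decomposition f(x) = g(x) - h(x), g(x) = 0.5 x'Qx, h(x) = 0.5 x'Px,
# with Q, P positive definite and Q - P positive semidefinite, so f has
# minimizer x* = 0.  Chosen only to make the iterations easy to follow.
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)
Q = P + 0.1 * np.eye(n)

def f(x):
    return 0.5 * x @ (Q - P) @ x

def dca_step(x):
    # Usual DCA: linearize h at x and solve the convex subproblem
    #   minimize g(y) - <grad h(x), y>,  i.e.  Q y = P x.
    return np.linalg.solve(Q, P @ x)

def bdca_step(x, lam0=2.0, beta=0.5, sigma=1e-4):
    # Boosted DCA: take the DCA iterate y, then an extra step along d = y - x,
    # with lambda chosen by backtracking until a sufficient decrease holds.
    y = dca_step(x)
    d = y - x
    lam = lam0
    while lam > 1e-12 and f(y + lam * d) > f(y) - sigma * lam**2 * (d @ d):
        lam *= beta
    return y + lam * d

x_dca = x_bdca = rng.standard_normal(n)
for _ in range(50):
    x_dca, x_bdca = dca_step(x_dca), bdca_step(x_bdca)

print("f after 50 DCA steps :", f(x_dca))
print("f after 50 BDCA steps:", f(x_bdca))
```

On this kind of instance the boosted iterates typically reach a lower objective value in the same number of iterations, which is the behaviour the worst-case SDP performance-estimation analysis quantifies rigorously.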
Keywords
- Complexity and efficiency of algorithms
- Global optimization
- Conic and semidefinite optimization
Status: accepted