49. Implicit Regularisation of Mirror Flow on Separable Classification Problems
Invited abstract in session FD-2: Deterministic and stochastic optimization beyond Euclidean geometry, stream Advances in first-order optimization.
Friday, 14:10 - 15:50
Room: M:O
Authors (first author is the speaker)
| 1. | Radu-Alexandru Dragomir | Telecom Paris |
Abstract
We study the continuous-time counterpart of mirror descent, namely mirror flow, on linearly separable classification problems. Such problems are minimised 'at infinity' and admit many possible solutions; we study which solution the algorithm selects depending on the mirror potential.
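To fix notation (a sketch under standard assumptions: the abstract does not specify the loss, so the exponential loss below is our assumption), mirror flow with potential φ is the ODE

```latex
% Mirror flow with a strictly convex potential \phi on data (x_i, y_i), y_i \in \{-1, +1\}.
% The exponential loss is an assumed stand-in; the abstract does not name the loss.
\[
  \frac{\mathrm{d}}{\mathrm{d}t}\, \nabla\phi(w_t) = -\nabla L(w_t),
  \qquad
  L(w) = \sum_{i=1}^{n} \exp\bigl( -y_i \langle x_i, w \rangle \bigr).
\]
% On separable data, L has infimum 0 but no minimiser, so \lVert w_t \rVert \to \infty
% and the meaningful question is which direction w_t / \lVert w_t \rVert converges to.
```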
We show that the iterates converge in direction towards the solution of a certain max-margin problem. This problem is determined by the horizon function of the potential, which can be seen as the norm induced by its shape 'at infinity'. When the potential is separable, a simple formula allows one to compute this function. We also prove the general existence of the horizon shape for subanalytic potentials.
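As a concrete illustration (a toy sketch, not the paper's method: the potential, loss, step size and data below are all assumed), consider mirror descent with the p-homogeneous potential φ(w) = (1/p) Σ_i |w_i|^p. Because this potential is homogeneous, its shape at infinity is the ℓp ball, so the normalised iterates should approach an ℓp max-margin direction:

```python
import numpy as np

# Toy linearly separable data: the first coordinate carries a margin of at least 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
X[:, 0] += y * 0.5  # push the two classes apart so the data is strictly separable

def loss_grad(w):
    """Gradient of the exponential loss L(w) = sum_i exp(-y_i <x_i, w>)."""
    margins = y * (X @ w)
    return -(X * (y * np.exp(-margins))[:, None]).sum(axis=0)

p = 1.5                  # potential phi(w) = (1/p) * sum_i |w_i|^p  (assumed choice)
q = p / (p - 1)          # conjugate exponent

def mirror(w):           # nabla phi
    return np.sign(w) * np.abs(w) ** (p - 1)

def mirror_inv(theta):   # (nabla phi)^{-1} = nabla phi^*
    return np.sign(theta) * np.abs(theta) ** (q - 1)

# Discretised mirror flow: one mirror-descent step in the dual, then map back.
w = np.full(2, 1e-3)
eta = 1e-2
for t in range(100_001):
    w = mirror_inv(mirror(w) - eta * loss_grad(w))
    if t % 20_000 == 0:
        print(t, w / np.linalg.norm(w))  # the direction stabilises while ||w|| grows
```

With p = 2 the potential is Euclidean and this reduces to plain gradient descent, whose implicit bias on separable data is the ℓ2 max-margin direction; varying p changes the max-margin norm, which is the phenomenon the horizon function captures in general.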
Joint work with Scott Pesme and Nicolas Flammarion.
Keywords
- Optimization for learning and data analysis
Status: accepted