189. Low rank of the matrix LASSO under RIP with consequences for fast large-scale algorithms
Invited abstract in session WE-4: Large-scale optimization I, stream Large-scale optimization.
Wednesday, 14:10 - 15:50, Room: M:M
Authors (first author is the speaker)
1. Andrew McRae, Institute of Mathematics, EPFL
Abstract
A popular and efficient algorithmic approach to large-scale low-rank matrix recovery is optimization over matrices constrained to be low rank. However, this approach has limited theoretical guarantees due to nonconvexity. We address this problem for the popular "matrix LASSO" estimator. In general, computing this estimator requires many full singular value decompositions of large matrices. To get around this, we first show that the LASSO estimator is exactly low-rank under conditions similar to those in classic low-rank matrix sensing error bounds. Second, this low solution rank guarantees the success of several low-rank algorithms. Specifically, we show that if (a) the ground truth is low-rank, (b) the (linear) sensing operator satisfies the matrix restricted isometry property (RIP), and (c) the measurement error is small enough relative to the amount of regularization, then the (unique) LASSO estimate has rank (approximately) bounded by that of the ground truth. From this, we conclude (a) that a low-rank-projected proximal algorithm (which can take advantage of fast partial-SVD routines) converges linearly to the global solution from any initialization, and (b) that the nonconvex landscape of a low-rank Burer-Monteiro-factorized problem formulation is benign in the sense that every second-order critical point yields the global solution.
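For concreteness, the matrix LASSO here is the convex program min_X (1/2)||A(X) - y||^2 + lambda*||X||_*, and the low-rank-projected proximal algorithm alternates a gradient step on the data-fit term with a rank-truncated singular-value soft-thresholding step. The sketch below is not the authors' implementation: all function names, sizes, and parameters are illustrative, the sensing operator is assumed to be given as a stack of matrices A_i with measurements y_i = <A_i, X>, and SciPy's svds stands in for a fast partial-SVD routine.

```python
import numpy as np
from scipy.sparse.linalg import svds

def sense(As, X):
    # Measurement operator A: y_i = <A_i, X>.
    return np.einsum('kij,ij->k', As, X)

def sense_adj(As, v):
    # Adjoint operator A*: sum_i v_i * A_i.
    return np.einsum('k,kij->ij', v, As)

def svt_rank(X, tau, r):
    # Rank-r-truncated singular-value soft-thresholding. Only the top r
    # singular triplets are computed (a partial SVD); this equals the exact
    # nuclear-norm prox whenever that prox has rank at most r.
    U, s, Vt = svds(X, k=r)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def matrix_lasso_pgd(As, y, lam, r, step, iters=500):
    # Proximal gradient descent on 0.5 * ||A(X) - y||^2 + lam * ||X||_*,
    # with the prox replaced by its rank-r-truncated version.
    _, n1, n2 = As.shape
    X = np.zeros((n1, n2))
    for _ in range(iters):
        grad = sense_adj(As, sense(As, X) - y)  # gradient of the data-fit term
        X = svt_rank(X - step * grad, step * lam, r)
    return X

# Illustrative run on synthetic noiseless data (all sizes arbitrary).
rng = np.random.default_rng(0)
n, r_true, m = 30, 2, 600
X_true = rng.standard_normal((n, r_true)) @ rng.standard_normal((r_true, n))
As = rng.standard_normal((m, n, n)) / np.sqrt(m)
y = sense(As, X_true)
L = np.linalg.norm(As.reshape(m, -1), 2) ** 2  # Lipschitz constant of the smooth term
X_hat = matrix_lasso_pgd(As, y, lam=0.1, r=r_true + 1, step=1.0 / L)
```

For conclusion (b), a similarly hedged sketch of plain gradient descent on a Burer-Monteiro factorization X = U V^T uses the identity ||X||_* = min over {U, V : X = U V^T} of (||U||_F^2 + ||V||_F^2)/2; it reuses sense and sense_adj from above, and again all names and step sizes are illustrative.

```python
def burer_monteiro_gd(As, y, lam, r, step, iters=3000, seed=0):
    # Gradient descent on the factorized objective
    #   f(U, V) = 0.5 * ||A(U V^T) - y||^2 + 0.5 * lam * (||U||_F^2 + ||V||_F^2),
    # whose global minimum matches the matrix LASSO once r exceeds the
    # solution rank. The abstract's claim (b) is that every second-order
    # critical point of this landscape is globally optimal.
    _, n1, n2 = As.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n1, r)) / np.sqrt(n1)
    V = rng.standard_normal((n2, r)) / np.sqrt(n2)
    for _ in range(iters):
        R = sense_adj(As, sense(As, U @ V.T) - y)  # gradient w.r.t. X at X = U V^T
        U, V = U - step * (R @ V + lam * U), V - step * (R.T @ U + lam * V)
    return U @ V.T
```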
Keywords
- Large- and Huge-scale optimization
- Optimization for learning and data analysis
- Convex and non-smooth optimization
Status: accepted