243. Implicit Bias in Matrix Factorization and Its Explicit Realization in a New Architecture
Invited abstract in session MC-2: Matrix factorization, stream Nonsmooth and nonconvex optimization.
Monday, 14:00-16:00, Room: B100/7011
Authors (first author is the speaker)
1. Yikun Hou
2. Suvrit Sra, Massachusetts Institute of Technology
3. Alp Yurtsever, Umeå University
Abstract
Gradient descent for matrix factorization is known to exhibit an implicit bias toward approximately low-rank solutions. While existing theories often assume the boundedness of iterates, empirically the bias persists even with unbounded sequences. We thus hypothesize that implicit bias is driven by divergent dynamics markedly different from the convergent dynamics for data fitting. Using this perspective, we introduce a new factorization model: $X\approx UDV^\top$, where $U$ and $V$ are constrained within norm balls, while $D$ is a diagonal factor allowing the model to span the entire search space. Our experiments reveal that this model exhibits a strong implicit bias regardless of initialization and step size, yielding truly (rather than approximately) low-rank solutions. Furthermore, drawing parallels between matrix factorization and neural networks, we propose a novel neural network model featuring constrained layers and diagonal components. This model achieves strong performance across various regression and classification tasks while finding low-rank solutions, resulting in efficient and lightweight networks.
Keywords
- Optimization for learning and data analysis
- First-order optimization
Status: accepted