Published on Mon Aug 31 2020

Low-rank matrix recovery with non-quadratic loss: projected gradient method and regularity projection oracle

Lijun Ding, Yuqian Zhang, Yudong Chen
Abstract

Existing results for low-rank matrix recovery largely focus on quadratic loss, which enjoys favorable properties such as restricted strong convexity/smoothness (RSC/RSM) and well conditioning over all low-rank matrices. However, many interesting problems involve non-quadratic losses that do not satisfy such properties; examples include one-bit matrix sensing, one-bit matrix completion, and rank aggregation. For these problems, standard nonconvex approaches such as projected gradient with a rank constraint alone (a.k.a. iterative hard thresholding) and the Burer-Monteiro approach may perform poorly in practice and lack a satisfactory theory guaranteeing global and efficient convergence. In this paper, we show that the critical component in low-rank recovery with non-quadratic loss is a regularity projection oracle, which restricts iterates to low-rank matrices within an appropriate bounded set, over which the loss function is well behaved and satisfies a set of relaxed RSC/RSM conditions. Accordingly, we analyze an (averaged) projected gradient method equipped with such an oracle and prove that it converges globally and linearly. Our results apply to a wide range of non-quadratic problems including rank aggregation, one-bit matrix sensing/completion, and more broadly generalized linear models with rank constraints.
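A minimal sketch of the projected gradient iteration described above, not the authors' implementation. It assumes a user-supplied gradient of the (possibly non-quadratic) loss, `grad_loss`, and uses a truncated SVD followed by a Frobenius-ball rescaling as a stand-in for the regularity projection oracle; the paper's actual oracle and bounded set may differ.

```python
import numpy as np

def project_low_rank_ball(X, rank, radius):
    """Stand-in regularity projection oracle (illustration only):
    truncate to the target rank, then pull the result back into a
    Frobenius-norm ball of the given radius."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[rank:] = 0.0                       # keep only the top-`rank` singular values
    X_r = (U * s) @ Vt
    norm = np.linalg.norm(X_r, 'fro')
    if norm > radius:                    # enforce the bounded set
        X_r *= radius / norm
    return X_r

def projected_gradient(grad_loss, X0, rank, radius, step_size=0.1, n_iters=200):
    """Projected gradient descent equipped with the projection oracle above.
    `grad_loss(X)` returns the gradient of the loss at X."""
    X = project_low_rank_ball(X0, rank, radius)
    for _ in range(n_iters):
        X = project_low_rank_ball(X - step_size * grad_loss(X), rank, radius)
    return X
```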

Mon Oct 26 2020
Machine Learning
Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number
Many problems in data science can be treated as estimating a low-rank matrix from highly incomplete, sometimes even corrupted, observations. One popular approach is to resort to matrix factorization. We propose scaled subgradient methods to minimize a family of nonsmooth and nonconvex formulations.
Mon May 18 2020
Machine Learning
Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science. A popular approach in practice is to factorize the matrix into two compact low-rank factors, and then optimize these factors directly via simple iterative methods such as gradient descent and alternating minimization.
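A brief sketch of the factorization approach mentioned in this summary, shown for a quadratic loss over observed entries. This is plain (unscaled) factored gradient descent for illustration only; ScaledGD additionally preconditions each factor update, which is not shown here. The function name and parameters are placeholders.

```python
import numpy as np

def factored_gradient_descent(Y, mask, rank, step_size=0.01, n_iters=500, seed=0):
    """Fit L @ R.T to the observed entries of Y (mask == 1 where observed)
    by gradient descent on the factors, minimizing
    0.5 * || mask * (L @ R.T - Y) ||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    L = rng.standard_normal((m, rank)) / np.sqrt(rank)
    R = rng.standard_normal((n, rank)) / np.sqrt(rank)
    for _ in range(n_iters):
        residual = mask * (L @ R.T - Y)   # error on observed entries only
        L_new = L - step_size * residual @ R
        R = R - step_size * residual.T @ L
        L = L_new
    return L, R
```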
Mon May 18 2015
Machine Learning
Towards Faster Rates and Oracle Property for Low-Rank Matrix Estimation
We present a unified framework for low-rank matrix estimation with nonconvex penalties. We first prove that the proposed estimator attains a faster statistical rate than the traditional low-rank matrix estimator with nuclear norm penalty.
Wed Feb 20 2019
Machine Learning
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
This paper studies noisy low-rank matrix completion. The goal is to estimate the underlying matrix faithfully and efficiently. The theoretical support for convex relaxation is still far from optimal in the noisy setting.
Tue Feb 21 2017
Machine Learning
A Unified Framework for Low-Rank plus Sparse Matrix Recovery
We propose a unified framework to solve general low-rank plus sparse matrix recovery problems based on matrix factorization. At the core of our theory is a novel structural Lipschitz gradient condition. We believe this condition is of independent interest for proving fast rates for superposition-structured models.
Wed Mar 08 2017
Machine Learning
On Approximation Guarantees for Greedy Low Rank Optimization
We provide new approximation guarantees for greedy low-rank matrix estimation. Our analysis uncovers previously unknown connections between low-rank estimation and combinatorial optimization. We also provide statistical recovery guarantees.