Published on Fri Jun 12 2015

Robust Structured Low-Rank Approximation on the Grassmannian

Clemens Hage, Martin Kleinsteuber


Abstract

Over the past years, Robust PCA has been established as a standard tool for reliable low-rank approximation of matrices in the presence of outliers. Recently, the Robust PCA approach via nuclear norm minimization has been extended to matrices with linear structures, which appear in applications such as system identification and data series analysis. At the same time, it has been shown how to control the rank of a structured approximation via matrix factorization approaches. The drawbacks of these methods lie either in the lack of robustness against outliers or in their static nature of repeated batch processing. We present a Robust Structured Low-Rank Approximation method on the Grassmannian that, on the one hand, allows for fast re-initialization in an online setting due to subspace identification with manifolds, and that, on the other hand, is robust against outliers due to a smooth approximation of the $\ell_p$-norm cost function. The method is evaluated in online time series forecasting tasks on simulated and real-world data.
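To make the kind of optimization described above concrete, here is a minimal illustrative sketch in Python: alternating gradient descent over an orthonormal basis U (a point on the Grassmannian) and coefficients W, with a smoothed $\ell_p$ surrogate of the residual cost. This is not the authors' implementation; the surrogate $(r^2 + \mu)^{p/2}$, the step size, and all function names are assumptions made for illustration.

```python
import numpy as np

def smoothed_lp(R, p=0.5, mu=1e-6):
    # Smooth surrogate for the entrywise l_p quasi-norm: sum (r^2 + mu)^(p/2)
    return np.sum((R**2 + mu) ** (p / 2))

def smoothed_lp_grad(R, p=0.5, mu=1e-6):
    # Elementwise derivative of the surrogate with respect to the residual
    return p * R * (R**2 + mu) ** (p / 2 - 1)

def robust_slra(X, k, p=0.5, mu=1e-6, step=1e-2, iters=500, U0=None):
    # U is an orthonormal basis representing a point on the Grassmannian,
    # W holds the coefficients; a QR retraction keeps U orthonormal.
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = U0 if U0 is not None else np.linalg.qr(rng.standard_normal((m, k)))[0]
    W = U.T @ X
    for _ in range(iters):
        R = X - U @ W                      # residual
        G = smoothed_lp_grad(R, p, mu)     # d(cost)/d(residual)
        grad_U = -G @ W.T                  # Euclidean gradient w.r.t. U
        grad_W = -U.T @ G                  # Euclidean gradient w.r.t. W
        grad_U -= U @ (U.T @ grad_U)       # project onto the tangent space
        U = np.linalg.qr(U - step * grad_U)[0]   # QR retraction
        W = W - step * grad_W
    return U, W
```

Passing a previously identified subspace as U0 is what a warm start would look like here, mirroring the fast online re-initialization the abstract refers to.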

Tue Oct 27 2009
Machine Learning
A Gradient Descent Algorithm on the Grassman Manifold for Matrix Completion
OptSpace is based on singular value decomposition and local manifold optimization. It can reconstruct the low-rank matrix exactly from a very small subset of its entries. The robustness of the algorithm with respect to noise, and its performance on actual collaborative filtering datasets, are also examined.
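For intuition only, here is a toy version of gradient descent on the Grassmannian for matrix completion, far simpler than OptSpace itself; the masked least-squares update, the step size, and the function name are assumptions.

```python
import numpy as np

def toy_completion(X_obs, mask, k, step=0.1, iters=300):
    # Fit U @ W to the observed entries only (mask is boolean); after each
    # gradient step the basis U is retracted back to an orthonormal frame.
    m, n = X_obs.shape
    rng = np.random.default_rng(0)
    U = np.linalg.qr(rng.standard_normal((m, k)))[0]
    for _ in range(iters):
        # Least-squares coefficients per column, using observed rows only
        W = np.column_stack([
            np.linalg.lstsq(U[mask[:, j]], X_obs[mask[:, j], j], rcond=None)[0]
            for j in range(n)
        ])
        R = np.where(mask, X_obs - U @ W, 0.0)   # residual on observed entries
        G = -R @ W.T
        G -= U @ (U.T @ G)                       # tangent-space projection
        U = np.linalg.qr(U - step * G)[0]        # QR retraction
    return U @ W
```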
Mon Jun 06 2016
Machine Learning
Low-rank Optimization with Convex Constraints
The problem of low-rank approximation with convex constraints is considered in data analysis, system identification, model order reduction, low-order controller design and low-complexity modelling. In many situations, this non-convex problem is convexified by nuclear norm regularization.
Mon Sep 03 2012
Machine Learning
Fixed-rank matrix factorizations and Riemannian low-rank optimization
The proposed algorithms generalize our previous results on fixed-rank symmetric positive semidefinite matrices. They apply to a broad range of applications and scale to high-dimensional problems.
Sun Dec 11 2011
Machine Learning
Low-rank optimization with trace norm penalty
The paper addresses the problem of low-rank trace norm minimization. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence.
Fri Dec 12 2014
Machine Learning
Adaptive Stochastic Gradient Descent on the Grassmannian for Robust Low-Rank Subspace Recovery and Clustering
GASG21 (Grassmannian Adaptive Stochastic Gradient) is an adaptive stochastic gradient algorithm to recover the low-rank subspace from a large matrix. In the presence of column outliers, we reformulate the batch mode matrix $L_{2,1}$-norm minimization problem as a stochastic optimization problem.
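As a hedged illustration of the column-wise stochastic idea (not the GASG21 code): summing $\|x_j - U w_j\|_2$ over columns gives the $L_{2,1}$ norm of the residual, so a single stochastic step can look like the sketch below; the step size and function name are assumptions.

```python
import numpy as np

def column_sgd_step(U, x, step, eps=1e-8):
    # One stochastic Grassmannian step for the cost ||x - U @ w||_2 of a
    # single sampled column x; summed over all columns, this cost is the
    # L_{2,1} norm of the residual matrix.
    w = U.T @ x                              # coefficients (U has orthonormal columns)
    r = x - U @ w                            # residual, orthogonal to span(U)
    g = -np.outer(r / (np.linalg.norm(r) + eps), w)   # gradient w.r.t. U
    return np.linalg.qr(U - step * g)[0]     # retraction back to the manifold
```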
Tue May 24 2016
Machine Learning
Riemannian stochastic variance reduced gradient on Grassmann manifold
Stochastic variance reduction algorithms have recently become popular for minimizing the average of a large, but finite, number of loss functions. We propose a novel Riemannian extension of the Euclidean stochastic variance reduced gradient algorithm to a compact manifold search space.
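A rough sketch of the variance-reduction pattern on the Grassmannian, with tangent-space projection standing in for a proper vector transport; the callable `grad_i`, the epoch schedule, and the QR retraction are assumptions, not the paper's algorithm.

```python
import numpy as np

def riemannian_svrg(grad_i, n, U0, step=1e-2, epochs=10, inner=50, seed=0):
    # grad_i(U, i) returns the Euclidean gradient of the i-th loss at U.
    rng = np.random.default_rng(seed)
    proj = lambda U, G: G - U @ (U.T @ G)    # projection onto the tangent space
    U = U0
    for _ in range(epochs):
        anchor = U.copy()
        full = sum(grad_i(anchor, i) for i in range(n)) / n   # full gradient
        for _ in range(inner):
            i = rng.integers(n)
            # Variance-reduced direction; projecting the anchor terms onto the
            # current tangent space stands in for vector transport.
            d = proj(U, grad_i(U, i)) - proj(U, grad_i(anchor, i) - full)
            U = np.linalg.qr(U - step * d)[0]                 # retraction
    return U
```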