Published on Tue Apr 05 2016

Sparse Recovery from Extreme Eigenvalues Deviation Inequalities

Sandrine Dallaporta, Yohann De Castro


Abstract

This article provides a new toolbox to derive sparse recovery guarantees from small deviations on extreme singular values or extreme eigenvalues obtained in Random Matrix Theory. This work is based on Restricted Isometry Constants (RICs), which are a pivotal notion in Compressed Sensing and High-Dimensional Statistics, as these constants finely assess how a linear operator is conditioned on the set of sparse vectors and hence how it performs in Stable and Robust Sparse Recovery (SRSR). While it is an open problem to construct deterministic matrices with apposite RICs, one can prove that such matrices exist using random matrix models. In this paper, we show upper bounds on RICs for Gaussian and Rademacher matrices using state-of-the-art small deviation estimates on their extreme eigenvalues. This allows us to derive a lower bound on the probability of achieving SRSR. One benefit of this paper is a direct and explicit derivation of upper bounds on RICs and lower bounds on SRSR from the small deviations on extreme eigenvalues given by Random Matrix Theory.
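The link between RICs and extreme singular values can be made concrete numerically. Below is a minimal NumPy sketch, not taken from the paper: the matrix sizes and the Monte Carlo sampling of supports are illustrative assumptions. It lower-bounds the RIC delta_s of a Gaussian matrix by computing the extreme singular values of randomly drawn s-column submatrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def ric_lower_bound(A, s, n_trials=2000, rng=rng):
    """Monte Carlo lower bound on the RIC delta_s of A.

    delta_s is the maximum over all s-column submatrices A_S of
    max(sigma_max(A_S)^2 - 1, 1 - sigma_min(A_S)^2); sampling
    random supports S therefore only gives a lower bound.
    """
    n, p = A.shape
    delta = 0.0
    for _ in range(n_trials):
        S = rng.choice(p, size=s, replace=False)
        sv = np.linalg.svd(A[:, S], compute_uv=False)  # sorted descending
        delta = max(delta, sv[0]**2 - 1.0, 1.0 - sv[-1]**2)
    return delta

# Gaussian matrix with N(0, 1/n) entries, so E ||A x||^2 = ||x||^2
n, p, s = 128, 512, 5
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))
print(f"Monte Carlo lower bound on delta_{s}: {ric_lower_bound(A, s):.3f}")
```

Exhaustive maximization over all binomial(p, s) supports is intractable, which is one reason deterministic constructions with certified RICs remain an open problem.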

Tue Jan 28 2020
Machine Learning
Sub-Gaussian Matrices on Sets: Optimal Tail Dependence and Applications
Random linear mappings are widely used in modern signal processing, compressed sensing and machine learning. The performance of these mappings is usually captured by how close they are to an isometry on the data. We study when a sub-Gaussian matrix can become a near isometry.
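As a toy illustration of the near-isometry property this abstract refers to (a sketch under assumed dimensions, with a finite point cloud standing in for a general set), one can measure the worst-case norm distortion of a Rademacher matrix, a standard sub-Gaussian example:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_distortion(A, X):
    """Worst-case relative distortion of ||A x|| versus ||x||
    over the columns of X (a finite proxy for a general set)."""
    norms_in = np.linalg.norm(X, axis=0)
    norms_out = np.linalg.norm(A @ X, axis=0)
    return np.max(np.abs(norms_out / norms_in - 1.0))

n, d, n_points = 256, 1024, 500
X = rng.normal(size=(d, n_points))                      # points of the set
A = rng.choice([-1.0, 1.0], size=(n, d)) / np.sqrt(n)   # Rademacher matrix
print(f"max distortion on the point set: {max_distortion(A, X):.3f}")
```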
Thu May 05 2016
Machine Learning
A Tight Bound of Hard Thresholding
This paper is concerned with the hard thresholding operator, which sets all but the largest (in absolute value) elements of a vector to zero. Building on a tight estimate for this operator, we bridge the connection between the restricted isometry property (RIP) and the sparsity parameter.
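For concreteness, here is a minimal implementation of the hard thresholding operator described above; the helper name `hard_threshold` and the tie-breaking via `argpartition` are choices of this sketch, not the paper's:

```python
import numpy as np

def hard_threshold(x, k):
    """Hard thresholding H_k: keep the k entries of x with largest
    absolute value and set every other entry to zero."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    if k > 0:
        idx = np.argpartition(np.abs(x), -k)[-k:]  # indices of k largest |x_i|
        out[idx] = x[idx]
    return out

print(hard_threshold([0.5, -3.0, 1.2, 0.1, 2.0], k=2))  # [ 0. -3.  0.  0.  2.]
```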
Thu Apr 19 2012
Machine Learning
Estimating Unknown Sparsity in Compressed Sensing
In the theory of compressed sensing (CS), the sparsity of a signal is commonly assumed to be a known parameter. However, it is typically unknown in practice. In this paper, we propose to estimate a stable measure of sparsity, s(x), which is a sharp lower bound on ||x||_0.
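To the best of our knowledge the measure in question is the numerical sparsity s(x) = ||x||_1^2 / ||x||_2^2, which by Cauchy-Schwarz satisfies s(x) <= ||x||_0; treat that identification as an assumption of this sketch, which also shows why s(x) is stable where ||x||_0 is not:

```python
import numpy as np

def stable_sparsity(x):
    """Numerical sparsity s(x) = ||x||_1^2 / ||x||_2^2, a robust
    lower bound on ||x||_0 (by Cauchy-Schwarz)."""
    x = np.asarray(x, dtype=float)
    return np.linalg.norm(x, 1)**2 / np.linalg.norm(x, 2)**2

x = np.zeros(1000)
x[:10] = 1.0               # exactly 10-sparse: s(x) = 10
print(stable_sparsity(x))  # 10.0
x[10:] = 1e-3              # tiny noise barely moves s(x) ...
print(stable_sparsity(x))  # ~12, while ||x||_0 jumps to 1000
```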
Thu Apr 17 2014
Machine Learning
Geometric Inference for General High-Dimensional Linear Inverse Problems
This paper presents a unified geometric framework for the statistical analysis of a general ill-posed linear inverse model. We propose computationally feasible convex programs for statistical inference including confidence intervals and hypothesis testing.
Sat Oct 19 2019
Machine Learning
Convex Reconstruction of Structured Matrix Signals from Linear Measurements (I): Theoretical Results
The contribution in this paper has two parts. The first part concerns conditions for stability and robustness in signal reconstruction via convex programming. The second part establishes upper and lower bounds on the number of measurements needed for robust reconstruction.
Tue Jun 16 2020
Machine Learning
Robust Compressed Sensing using Generative Models
The goal of compressed sensing is to estimate a high dimensional vector from an underdetermined system of noisy linear equations. Classical recovery approaches such as empirical risk minimization (ERM) are guaranteed to succeed when the measurement matrix is sub-Gaussian.
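As an illustration of classical recovery from such an underdetermined sub-Gaussian system, here is a sketch using iterative hard thresholding rather than the paper's generative-model approach; the dimensions, noise level, and unit step size are all assumptions of this example:

```python
import numpy as np

rng = np.random.default_rng(2)

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht(A, y, k, n_iters=100):
    """Iterative hard thresholding: a gradient step on the empirical
    risk ||A x - y||^2 followed by projection onto k-sparse vectors."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = hard_threshold(x + A.T @ (y - A @ x), k)
    return x

n, p, k = 200, 500, 10
A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, p))   # sub-Gaussian measurements
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=n)            # noisy underdetermined system
x_hat = iht(A, y, k)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```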