Published on Thu Oct 13 2011

A tail inequality for quadratic forms of subgaussian random vectors

Daniel Hsu, Sham M. Kakade, Tong Zhang

Abstract

We prove an exponential probability tail inequality for positive semidefinite quadratic forms in a subgaussian random vector. The bound is analogous to one that holds when the vector has independent Gaussian entries.
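
For reference, the paper's main bound can be sketched as follows (a paraphrase of the published version, not a verbatim statement; see the paper for the exact result). Let $A$ be a matrix, let $\Sigma := A^{\top}A$, and let $x$ be a random vector satisfying $\mathbb{E}\exp(\alpha^{\top}x) \le \exp(\|\alpha\|^2\sigma^2/2)$ for all vectors $\alpha$. Then for all $t > 0$,

\[
\Pr\!\left[\,\|Ax\|^2 > \sigma^2\left(\operatorname{tr}(\Sigma) + 2\sqrt{\operatorname{tr}(\Sigma^2)\,t} + 2\|\Sigma\|\,t\right)\right] \le e^{-t},
\]

which matches the Laurent-Massart tail bound for weighted sums of chi-squared variables, i.e., the case where $x$ has independent standard Gaussian entries.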

Mon Feb 11 2019
Machine Learning
A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm
In this note, we derive concentration inequalities for random vectors with a sub-Gaussian norm. The inequalities are tight up to logarithmic factors, and they generalize both sub-Gaussian random vectors and norm-bounded random vectors.
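
The underlying notion can be sketched as follows (a paraphrase of the definition I believe this note uses; see the note for the precise statement): a random vector $X$ in $\mathbb{R}^d$ has a sub-Gaussian norm with parameter $\sigma$ if

\[
\Pr\left[\|X - \mathbb{E}X\| \ge t\right] \le 2\exp\!\left(-\frac{t^2}{2\sigma^2}\right) \quad \text{for all } t \ge 0,
\]

a condition satisfied both by coordinate-wise sub-Gaussian vectors (with $\sigma$ scaling with $\sqrt{d}$) and by almost surely norm-bounded vectors.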
Sat Apr 09 2011
Machine Learning
Dimension-free tail inequalities for sums of random matrices
We derive exponential tail inequalities for sums of random matrices with no dependence on the explicit matrix dimensions. These are similar to the matrix versions of the Chernoff bound and Bernstein inequality, except that the explicit matrix dimension is replaced by a trace quantity that can be small even when the dimension is large.
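
For comparison, the classical matrix Bernstein inequality (in Tropp's form) for independent, zero-mean, symmetric $d \times d$ random matrices $X_1, \dots, X_n$ with $\|X_i\| \le L$ almost surely and $v := \|\sum_i \mathbb{E}X_i^2\|$ reads

\[
\Pr\left[\lambda_{\max}\!\Big(\sum_{i=1}^{n} X_i\Big) \ge t\right] \le d \cdot \exp\!\left(\frac{-t^2/2}{v + Lt/3}\right);
\]

dimension-free versions of this kind replace the explicit factor $d$ with a trace quantity (an effective dimension) that can be much smaller.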
Thu May 12 2011
Machine Learning
A Maximal Large Deviation Inequality for Sub-Gaussian Variables
In this short note we prove a maximal concentration lemma for sub-Gaussian random variables: a maximal large deviation inequality for partial sums of independent sub-Gaussian random variables.
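
A standard form such a maximal inequality takes (a sketch, not necessarily the note's exact statement): if $X_1, \dots, X_n$ are independent, mean-zero, $\sigma$-sub-Gaussian random variables and $S_k := X_1 + \cdots + X_k$, then $\exp(\lambda S_k)$ is a nonnegative submartingale, and Doob's maximal inequality combined with the usual Chernoff optimization yields

\[
\Pr\left[\max_{1 \le k \le n} S_k \ge t\right] \le \exp\!\left(-\frac{t^2}{2n\sigma^2}\right) \quad \text{for all } t > 0,
\]

so the running maximum of the partial sums obeys the same tail bound as the final sum $S_n$.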
Sun Oct 21 2018
Machine Learning
On the Non-asymptotic and Sharp Lower Tail Bounds of Random Variables
Non-asymptotic tail bounds of random variables play a crucial role in statistics and machine learning. Despite much success in developing upper bounds on tail probabilities in the literature, lower bounds on tail probabilities have received comparatively little attention.
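
A classical instance of such a lower bound is the Paley-Zygmund inequality: for a nonnegative random variable $Z$ with $\mathbb{E}[Z^2] < \infty$ and any $\theta \in [0, 1)$,

\[
\Pr\left[Z > \theta\,\mathbb{E}[Z]\right] \ge (1 - \theta)^2\,\frac{(\mathbb{E}[Z])^2}{\mathbb{E}[Z^2]},
\]

which bounds a tail probability from below using only the first two moments.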
Tue Feb 16 2021
Machine Learning
Concentration of measure and generalized product of random vectors with an application to Hanson-Wright-like inequalities
This article provides an expression for the concentration of functionals of random vectors. We illustrate the importance of this result through various generalizations of the Hanson-Wright concentration inequality.
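
For context, the classical Hanson-Wright inequality (in the Rudelson-Vershynin form) states that if $x \in \mathbb{R}^n$ has independent, mean-zero coordinates with sub-Gaussian constant at most $K$, then for any matrix $A \in \mathbb{R}^{n \times n}$ and all $t \ge 0$,

\[
\Pr\left[\left|x^{\top}Ax - \mathbb{E}\,x^{\top}Ax\right| > t\right] \le 2\exp\!\left(-c\,\min\!\left(\frac{t^2}{K^4\|A\|_F^2},\ \frac{t}{K^2\|A\|}\right)\right)
\]

for an absolute constant $c > 0$; the generalizations described here concern concentration of this type.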
Sun Aug 30 2020
Machine Learning
Sharp finite-sample concentration of independent variables
We prove an extension of Sanov's theorem on large deviations. The result has general scope, applies to samples of any size, and admits a short information-theoretic proof.
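
In its finite-alphabet, method-of-types form, Sanov's theorem already holds at every sample size: for $X_1, \dots, X_n$ drawn i.i.d. from a distribution $P$ on a finite alphabet $\mathcal{X}$, with empirical distribution $\hat{P}_n$, and for any set $A$ of distributions on $\mathcal{X}$,

\[
\Pr\left[\hat{P}_n \in A\right] \le (n + 1)^{|\mathcal{X}|}\,\exp\!\left(-n \inf_{Q \in A} D(Q \,\|\, P)\right),
\]

where $D(\cdot\,\|\,\cdot)$ denotes the Kullback-Leibler divergence; bounds of this kind are the natural point of comparison for the extension described here.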