Published on Mon Jul 01 2019

Sparse regular variation

Nicolas Meyer, Olivier Wintenberger

Regular variation provides a convenient theoretical framework to study large events. This paper introduces sparse regular variation, a concept based on the Euclidean projection onto the simplex, for which efficient algorithms are known.

Abstract

Regular variation provides a convenient theoretical framework to study large events. In the multivariate setting, the dependence structure of the positive extremes is characterized by a measure - the spectral measure - defined on the positive orthant of the unit sphere. This measure gathers information on the localization of extreme events and often has a sparse support, since severe events do not simultaneously occur in all directions. However, it is defined through weak convergence, which does not provide a natural way to capture this sparsity structure. In this paper, we introduce the notion of sparse regular variation, which allows one to better learn the dependence structure of extreme events. This concept is based on the Euclidean projection onto the simplex, for which efficient algorithms are known. We prove that under mild assumptions sparse regular variation and regular variation are equivalent notions, and we establish several results for sparsely regularly varying random vectors. Finally, we illustrate on numerical examples how this new concept allows one to detect extremal directions.
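The computational tool behind this definition, the Euclidean projection onto the simplex, is what produces exact zeros: unlike self-normalization v/||v||_1, it maps small coordinates to 0 and thereby exposes a sparse support. As a minimal illustrative sketch (not the authors' code), here is the standard sort-based O(d log d) projection algorithm of Duchi et al. (2008) in numpy; the function name and example vector are hypothetical.

import numpy as np

def project_onto_simplex(v, z=1.0):
    """Euclidean projection of v onto the simplex {w : w >= 0, sum(w) = z}."""
    u = np.sort(v)[::-1]                      # coordinates in decreasing order
    cumulative = np.cumsum(u)
    indices = np.arange(1, len(v) + 1)
    # rho: last position where the sorted coordinate exceeds the running threshold
    rho = np.nonzero(u * indices > cumulative - z)[0][-1]
    theta = (cumulative[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)         # soft-thresholding creates exact zeros

# Small coordinates are mapped exactly to zero, which is how the sparsity
# of the spectral measure's support becomes detectable:
x = np.array([0.9, 0.6, 0.2, 0.1])
print(project_onto_simplex(x))                # [0.65 0.35 0.   0.  ]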

Tue Jul 21 2015
Machine Learning
Sparsity in Multivariate Extremes with Applications to Anomaly Detection
Capturing the dependence structure of multivariate extreme events is a major concern in many fields involving the management of risks stemming from multiple sources. The present paper proposes a novel methodology aiming at exhibiting a sparsity pattern within the dependence structure of extremes.
Sun Apr 08 2018
Machine Learning
Moving Beyond Sub-Gaussianity in High-Dimensional Statistics: Applications in Covariance Estimation and Linear Regression
Concentration inequalities form an essential toolkit in the study of high-dimensional (HD) statistical methods. Most of the relevant statistical theory in this regard is based on sub-Gaussian or sub-exponential tail assumptions. We illustrate the usefulness of these inequalities through the analysis of four fundamental problems in HD statistics.
Tue Oct 11 2016
Machine Learning
Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach
We study statistical inference and distributionally robust solution methods for stochastic optimization problems. We give a principled method for choosing the size of distributional uncertainty regions so that the resulting one- and two-sided confidence intervals achieve exact coverage.
Mon May 27 2019
Machine Learning
One Method to Rule Them All: Variance Reduction for Data, Parameters and Many New Methods
We propose a remarkably general variance-reduced method suitable for solving empirical risk minimization problems. We provide a single theorem establishing linear convergence of the method under smoothness and quasi-strong convexity assumptions. With this theorem we recover best-known and sometimes improved rates for known methods.
Wed Jan 11 2017
Machine Learning
The empirical Christoffel function with applications in data analysis
We provide a thresholding scheme which allows one to approximate the support of a measure from a finite subset of its moments, with strong asymptotic guarantees. We illustrate the relevance of our results on simulated and real-world datasets for several applications in statistics and machine learning.
Tue Dec 15 2020
Machine Learning
Spectral Methods for Data Science: A Statistical Perspective
Spectral methods have emerged as a simple yet surprisingly effective approach for extracting information from massive, noisy and incomplete data. Due to their simplicity and effectiveness, spectral methods are not only used as stand-alone estimators but are also frequently employed to initialize other, more sophisticated algorithms.