Published on Tue Jul 09 2019

Iteratively Reweighted $\ell_1$-Penalized Robust Regression

Xiaoou Pan, Qiang Sun, Wen-Xin Zhou

This paper investigates tradeoffs among optimization errors, statistical rates of convergence, and the effect of heavy-tailed errors for high-dimensional robust regression with nonconvex regularization. Extension to a general class of robust loss functions is also considered.

Abstract

This paper investigates tradeoffs among optimization errors, statistical rates of convergence, and the effect of heavy-tailed errors for high-dimensional robust regression with nonconvex regularization. When the additive errors in linear models have only bounded second moments, we show that the iteratively reweighted $\ell_1$-penalized adaptive Huber regression estimator satisfies exponential deviation bounds and oracle properties, including the oracle convergence rate and variable selection consistency, under a weak beta-min condition. Computationally, we need as many as $O(\log s + \log\log d)$ iterations to reach such an oracle estimator, where $s$ and $d$ denote the sparsity and ambient dimension, respectively. Extension to a general class of robust loss functions is also considered. Numerical studies lend strong support to our methodology and theory.
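
To illustrate the kind of procedure the abstract describes, the following is a minimal sketch (not the authors' implementation) of iteratively reweighted $\ell_1$-penalized Huber regression: each outer step solves a weighted-$\ell_1$ Huber problem with a proximal gradient inner solver, then updates the weights from the derivative of a folded-concave (SCAD) penalty. The tuning constants for the penalty level `lam`, the robustification parameter `tau`, the step size, and the iteration counts are illustrative assumptions, not the calibrated choices from the paper.

```python
import numpy as np

def huber_grad(r, tau):
    """Gradient of the Huber loss: r if |r| <= tau, else tau * sign(r)."""
    return np.clip(r, -tau, tau)

def soft_threshold(z, t):
    """Proximal operator of the (elementwise weighted) l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_l1_huber(X, y, lam_vec, tau, step, n_iter=500):
    """Solve min_beta (1/n) * sum huber_tau(y - X beta) + sum_j lam_vec[j] * |beta_j|
    by proximal gradient descent (ISTA)."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iter):
        grad = -X.T @ huber_grad(y - X @ beta, tau) / n
        beta = soft_threshold(beta - step * grad, step * lam_vec)
    return beta

def scad_derivative(b, lam, a=3.7):
    """Derivative of the SCAD penalty; large coefficients get small weights."""
    b = np.abs(b)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def irw_l1_huber(X, y, lam, tau, n_outer=5):
    """Iteratively reweighted l1-penalized Huber regression (sketch)."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # crude step size from the smoothness bound
    lam_vec = np.full(d, lam)              # first iteration: plain l1 penalty
    beta = np.zeros(d)
    for _ in range(n_outer):
        beta = weighted_l1_huber(X, y, lam_vec, tau, step)
        lam_vec = scad_derivative(beta, lam)  # reweight: relax penalty on large signals
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, s = 200, 500, 5
    X = rng.standard_normal((n, d))
    beta_true = np.zeros(d)
    beta_true[:s] = 2.0
    eps = rng.standard_t(df=2.5, size=n)       # heavy-tailed noise with finite variance
    y = X @ beta_true + eps
    lam = 1.5 * np.sqrt(np.log(d) / n)         # illustrative penalty level
    tau = 1.5 * np.sqrt(n / np.log(d))         # illustrative adaptive Huber parameter
    beta_hat = irw_l1_huber(X, y, lam, tau)
    print("estimated support:", np.nonzero(np.abs(beta_hat) > 0.5)[0])
```

The first outer iteration reduces to an $\ell_1$-penalized (Lasso-type) Huber fit; subsequent reweighting steps shrink the penalty on coordinates that already look large, which is the mechanism behind the oracle rate and variable selection consistency discussed in the abstract.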