Published on Sat Jun 26 2010

Split Bregman method for large scale fused Lasso

Gui-Bo Ye, Xiaohui Xie

Ordering of coefficients occurs in many real-world applications, and Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients. However, solving the fused Lasso problem is computationally demanding, and existing solvers can only deal with problems of small or medium size.

Abstract

Ordering of regression or classification coefficients occurs in many real-world applications. Fused Lasso exploits this ordering by explicitly regularizing the differences between neighboring coefficients through an ℓ1 norm regularizer. However, due to nonseparability and nonsmoothness of the regularization term, solving the fused Lasso problem is computationally demanding. Existing solvers can only deal with problems of small or medium size, or a special case of the fused Lasso problem in which the predictor matrix is the identity matrix. In this paper, we propose an iterative algorithm based on the split Bregman method to solve a class of large-scale fused Lasso problems, including a generalized fused Lasso and a fused Lasso support vector classifier. We derive our algorithm using the augmented Lagrangian method and prove its convergence properties. The performance of our method is tested on both artificial data and real-world applications, including proteomic data from mass spectrometry and genomic data from array CGH. We demonstrate that our method is many times faster than existing solvers, and show that it is especially efficient for large p, small n problems.
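For illustration, below is a minimal Python sketch of a split Bregman iteration for the standard fused lasso objective 0.5*||y - X b||^2 + lam1*||b||_1 + lam2*sum_j |b_{j+1} - b_j|. It follows the generic split Bregman recipe rather than the authors' released code; the function name, the penalty parameters mu1/mu2, and the stopping rule are illustrative assumptions.

import numpy as np

def soft_threshold(z, t):
    # Elementwise soft-thresholding: proximal map of t*||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fused_lasso_split_bregman(X, y, lam1, lam2, mu1=1.0, mu2=1.0,
                              n_iter=500, tol=1e-6):
    # Split a = b and d = D b, where D is the first-difference operator.
    n, p = X.shape
    D = np.diff(np.eye(p), axis=0)                    # (p-1) x p difference matrix
    A = X.T @ X + mu1 * np.eye(p) + mu2 * (D.T @ D)   # quadratic subproblem matrix
    L = np.linalg.cholesky(A)                         # factor once, reuse every iteration
    Xty = X.T @ y

    beta = np.zeros(p)
    a, d = np.zeros(p), np.zeros(p - 1)               # splitting variables
    u, v = np.zeros(p), np.zeros(p - 1)               # Bregman (scaled dual) variables
    for _ in range(n_iter):
        # beta-step: solve (X'X + mu1*I + mu2*D'D) beta = rhs.
        rhs = Xty + mu1 * (a - u) + mu2 * (D.T @ (d - v))
        beta_new = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # a- and d-steps: soft-thresholding (prox of the two l1 terms).
        a = soft_threshold(beta_new + u, lam1 / mu1)
        d = soft_threshold(D @ beta_new + v, lam2 / mu2)
        # Bregman updates of the auxiliary variables.
        u += beta_new - a
        v += D @ beta_new - d
        if np.linalg.norm(beta_new - beta) <= tol * max(1.0, np.linalg.norm(beta)):
            beta = beta_new
            break
        beta = beta_new
    return beta

Because X'X + mu1*I + mu2*D'D is fixed across iterations, factoring it once keeps the per-iteration cost to triangular solves and soft-thresholding, which is what makes this style of solver attractive for large p, small n problems.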

Fri Feb 01 2019
Machine Learning
A dual Newton based preconditioned proximal point algorithm for exclusive lasso models
The exclusive lasso (also known as elitist lasso) regularization has become popular recently due to its superior performance on group sparsity. Compared to the group lasso regularization, which enforces competition on variables among different groups, the exclusive lasso enforces competition on variables within each group.
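For concreteness (a gloss not taken from the listed abstract): a commonly used form of the exclusive lasso regularizer is Ω(β) = Σ_g ‖β_g‖₁², which makes coefficients compete within each group g, whereas the group lasso uses Σ_g ‖β_g‖₂ and therefore tends to select or discard whole groups.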
Sat Oct 24 2015
Machine Learning
Fast and Scalable Lasso via Stochastic Frank-Wolfe Methods with a Convergence Guarantee
Frank-Wolfe (FW) algorithms have often been proposed over the last few years as efficient solvers for a variety of optimization problems. The ability to work with cheap projection-free iterations and the incremental nature of the method make FW a very effective choice for many large-scale problems.
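As a hedged illustration of the projection-free idea (a minimal Python sketch, not code from the listed paper; the constrained lasso formulation, the radius t, and the 2/(k+2) step size are assumptions):

import numpy as np

def lasso_frank_wolfe(X, y, t, n_iter=200):
    # Frank-Wolfe for min 0.5*||y - X b||^2 subject to ||b||_1 <= t.
    # Each iteration moves toward one vertex of the l1 ball, so no projection is needed.
    n, p = X.shape
    beta = np.zeros(p)
    for k in range(n_iter):
        grad = X.T @ (X @ beta - y)
        i = int(np.argmax(np.abs(grad)))     # linear minimization oracle over the l1 ball
        s = np.zeros(p)
        s[i] = -t * np.sign(grad[i])         # best vertex: a single signed, scaled coordinate
        gamma = 2.0 / (k + 2.0)              # classical diminishing step size
        beta = (1.0 - gamma) * beta + gamma * s
    return beta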
Mon Jun 18 2012
Machine Learning
A Complete Analysis of the l_1,p Group-Lasso
The Group-Lasso is a well-known tool for joint regularization in machine learning methods. We present a unified active set algorithm for all p-norms and a highly efficient projected gradient algorithm. We conduct large-scale experiments on synthetic and real-world data sets.
Wed Jun 08 2016
Machine Learning
Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression
In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance. It is customary to consider a penalty to enforce sparsity in such scenarios. We propose a modification, which we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability.
Thu Apr 07 2011
Machine Learning
Efficient First Order Methods for Linear Composite Regularizers
A wide class of regularization problems in machine learning and statistics employ a regularization term which is obtained by composing a simple convex function with a linear transformation. This setting includes Group Lasso methods, the Fused Lasso and other total variation methods.
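For instance (an illustrative reading, not quoted from that abstract): such penalties take the form g(Bβ) with g a simple convex function and B a linear map; the fused lasso's total-variation term is recovered by taking g = λ‖·‖₁ and B the first-difference matrix, so that g(Bβ) = λ Σ_j |β_{j+1} − β_j|.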
Wed Feb 21 2018
Machine Learning
Celer: a Fast Solver for the Lasso with Dual Extrapolation
We propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points. This enables a tighter control of optimality as used in the stopping criterion, as well as better screening performance of Gap Safe rules.