Published on Tue Aug 31 2010

Network Flow Algorithms for Structured Sparsity

Julien Mairal, Rodolphe Jenatton, Guillaume Obozinski, Francis Bach

We consider a class of learning problems that involve a structured sparsity-inducing norm. We propose an efficient algorithm which computes the solution of the associated proximal problem exactly in polynomial time. Our algorithm reportedly scales up to millions of variables, and opens up a whole new range of applications.

Abstract

We consider a class of learning problems that involve a structured sparsity-inducing norm defined as the sum of l_infinity-norms over groups of variables. Whereas a lot of effort has been put into developing fast optimization methods when the groups are disjoint or embedded in a specific hierarchical structure, we address here the case of general overlapping groups. To this end, we show that the corresponding optimization problem is related to network flow optimization. More precisely, the proximal problem associated with the norm we consider is dual to a quadratic min-cost flow problem. We propose an efficient procedure which computes its solution exactly in polynomial time. Our algorithm scales up to millions of variables, and opens up a whole new range of applications for structured sparse models. We present several experiments on image and video data, demonstrating the applicability and scalability of our approach for various problems.
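To give a concrete feel for the proximal operator at the heart of this abstract, here is a minimal sketch for the simplest case of a single group, i.e. the prox of lambda times the l_infinity-norm. It uses the standard Moreau decomposition (prox of the l_infinity-norm equals the identity minus the Euclidean projection onto the l1 ball), not the paper's network-flow machinery for overlapping groups; the function names `prox_linf` and `project_l1_ball` are illustrative, not from the paper.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1 ball of the given radius
    (sort-based algorithm, O(n log n))."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    # largest index rho with u[rho] * (rho+1) > css[rho] - radius
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(x, lam):
    """prox of lam * ||.||_inf via Moreau decomposition:
    x minus the projection of x onto the l1 ball of radius lam."""
    return x - project_l1_ball(x, radius=lam)

# The prox clips the largest entries down toward the rest:
x = np.array([3.0, -1.0, 0.5])
print(prox_linf(x, 1.0))  # [ 2.  -1.   0.5]
```

The overlapping-group case treated in the paper replaces this closed-form projection with a quadratic min-cost flow computation over a graph encoding the group structure.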

Mon Apr 11 2011
Machine Learning
Convex and Network Flow Optimization for Structured Sparsity
We consider a class of learning problems regularized by a structured sparsity-inducing norm. We show that the proximal operator associated with a sum of l_infinity-norms can be computed exactly in polynomial time.
Fri Dec 06 2013
Machine Learning
An Algorithmic Theory of Dependent Regularizers, Part 1: Submodular Structure
This work unifies key aspects of these problems under a common theory. It leads to novel methods for working with several important models of interest in statistics, machine learning and computer vision.
Sun Oct 07 2012
Machine Learning
Sparsity by Worst-Case Penalties
This paper proposes a new interpretation of sparse penalties such as the elastic-net and the group-lasso.
Tue Jul 18 2017
Machine Learning
Graph learning under sparsity priors
Graph signals offer a very generic and natural representation for data that lives on networks or irregular structures. The actual data structure is however often unknown a priori but can sometimes be estimated from the knowledge of the application domain. If this is not possible, the data structure has to be inferred from the
Sun Dec 11 2016
Artificial Intelligence
Technical Report: A Generalized Matching Pursuit Approach for Graph-Structured Sparsity
Sparsity-constrained optimization is an important and challenging problem that has wide applicability in data mining, machine learning, and statistics. Existing methods explore this problem in the context of sparse estimation in linear models. Graph-structured Matching Pursuit (Graph-Mp) is
Fri Oct 04 2019
Machine Learning
On the Duality between Network Flows and Network Lasso
The network Lasso (nLasso) has been proposed recently as a method for joint clustering and optimization of machine learning models for networked data. The nLasso extends the Lasso from sparse linear models to clustered graph signals.