Published on Tue May 24 2016

Adaptive ADMM with Spectral Penalty Parameter Selection

Zheng Xu, Mario A. T. Figueiredo, Tom Goldstein

The alternating direction method of multipliers (ADMM) is a versatile tool for solving constrained optimization problems. Its performance is highly sensitive to a penalty parameter, which often makes ADMM unreliable. We propose a method to adaptively tune the penalty parameters to achieve fast convergence.

Abstract

The alternating direction method of multipliers (ADMM) is a versatile tool for solving a wide range of constrained optimization problems, with differentiable or non-differentiable objective functions. Unfortunately, its performance is highly sensitive to a penalty parameter, which often makes ADMM unreliable and hard to automate for a non-expert user. We tackle this weakness of ADMM by proposing a method to adaptively tune the penalty parameters to achieve fast convergence. The resulting adaptive ADMM (AADMM) algorithm, inspired by the successful Barzilai-Borwein spectral method for gradient descent, yields fast convergence and relative insensitivity to the initial stepsize and problem scaling.
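To make the spectral idea concrete, here is a minimal Python sketch on a lasso instance. It assumes a simplified Barzilai-Borwein-style curvature estimate and a geometric-mean penalty update; the paper's actual AADMM rule adds safeguarding and correlation tests that this toy version omits.

```python
import numpy as np

def aadmm_lasso(A, b, lam, rho=1.0, iters=200, freq=2, eps=1e-12):
    """ADMM for the lasso  min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    split as f(x) + g(z) with constraint x - z = 0 (scaled dual u).
    The penalty rho is adapted with a simplified BB-style spectral rule;
    this is an illustrative sketch, not the paper's safeguarded AADMM."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    x0, z0, u0 = x.copy(), z.copy(), u.copy()
    for k in range(1, iters + 1):
        # x-update: ridge-type subproblem, solved directly
        x = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * (z - u))
        # z-update: soft-thresholding, the prox of (lam/rho)*||.||_1
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # scaled dual ascent step
        u = u + x - z
        if k % freq == 0:
            # BB-style curvature estimates from primal/dual changes
            dlam = rho * (u - u0)                # change in unscaled dual
            dx, dz = x - x0, z - z0
            a = (dlam @ dx) / max(dx @ dx, eps)  # curvature w.r.t. f
            c = (dlam @ dz) / max(dz @ dz, eps)  # curvature w.r.t. g
            if a > eps and c > eps:              # only update when reliable
                rho_new = np.sqrt(a * c)         # geometric mean of estimates
                u *= rho / rho_new               # preserve lambda = rho * u
                rho = rho_new
            x0, z0, u0 = x.copy(), z.copy(), u.copy()
    return z, rho

# tiny demo: sparse recovery with a random Gaussian design
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = rng.standard_normal(100) * (rng.random(100) < 0.1)
z, rho = aadmm_lasso(A, A @ x_true, lam=0.1)
print("final rho:", rho, "nnz:", int(np.count_nonzero(np.abs(z) > 1e-6)))
```

Because the spectral estimates track local curvature, the final rho is roughly invariant to rescaling A and b, which is the insensitivity to problem scaling claimed above.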

Thu Apr 20 2017
Machine Learning
ADMM Penalty Parameter Selection by Residual Balancing
Appropriate selection of the penalty parameter is critical to the performance of the Alternating Direction Method of Multipliers (ADMM). Residual balancing, a heuristic for adapting this parameter, appears to be relatively successful in a number of different problems. The contribution of this paper is to demonstrate that there is a potentially serious flaw in this heuristic approach.
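For reference, the residual-balancing heuristic in question is usually the simple rule from Boyd et al.'s ADMM survey, which multiplies or divides the penalty by a fixed factor depending on which residual dominates; a minimal sketch:

```python
def residual_balance(rho, r_norm, s_norm, mu=10.0, tau=2.0):
    """Classic residual-balancing update for the ADMM penalty (Boyd et al., 2011):
    increase rho when the primal residual r dominates the dual residual s,
    decrease it when the dual residual dominates, else leave it unchanged."""
    if r_norm > mu * s_norm:
        return rho * tau   # push harder on constraint feasibility
    if s_norm > mu * r_norm:
        return rho / tau   # back off so the dual residual can shrink
    return rho
```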
Mon Dec 16 2013
Machine Learning
Adaptive Stochastic Alternating Direction Method of Multipliers
The Alternating Direction Method of Multipliers (ADMM) has been studied for decades. The traditional ADMM algorithm needs to compute, at each iteration, an expected loss function over all training examples. In existing stochastic variants, however, the Bregman divergence is derived from a simple second-order proximal function.
Sun Apr 24 2016
Machine Learning
Stochastic Variance-Reduced ADMM
The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. The proposed algorithm retains the fast convergence benefits of SAG-ADMM and SDCA-ADMM, but is more advantageous in that its storage requirement is very low.
Sat Dec 10 2016
Machine Learning
An Empirical Study of ADMM for Nonconvex Problems
The alternating direction method of multipliers (ADMM) is a common optimization tool for solving constrained and non-differentiable problems. We provide an empirical study of the practical performance of ADMM on several non-convex applications.
Mon Oct 10 2016
Machine Learning
Stochastic Alternating Direction Method of Multipliers with Variance Reduction for Nonconvex Optimization
We study the stochastic alternating direction method of multipliers (ADMM) for nonconvex optimization. We propose three classes of methods based on different variance-reduced stochastic gradients, and we establish the iteration complexity of these methods.
Mon Apr 10 2017
Artificial Intelligence
Adaptive Relaxed ADMM: Convergence Theory and Practical Implementation
Relaxed ADMM is a generalization of ADMM that often achieves better performance. Its efficiency depends strongly on algorithm parameters that must be chosen by an expert user. We propose an adaptive method that automatically tunes the key parameters to achieve optimal performance without user oversight.