Published on Mon Sep 03 2018

A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks

César A. Uribe, Soomin Lee, Alexander Gasnikov, Angelia Nedić

We study dual-based algorithms for distributed convex optimization problems. Our approach is based on the dual of an appropriately formulated primal problem. We propose distributed algorithms that achieve the same optimal rates as their centralized counterparts.

Abstract

We study dual-based algorithms for distributed convex optimization problems over networks, where the objective is to minimize a sum of functions distributed over the nodes of a network. We provide complexity bounds for four different cases, namely: when each function is strongly convex and smooth, when each function is either strongly convex or smooth (but not both), and when each function is convex but neither strongly convex nor smooth. Our approach is based on the dual of an appropriately formulated primal problem, which includes a graph that models the communication restrictions. We propose distributed algorithms that achieve the same optimal rates as their centralized counterparts (up to constant and logarithmic factors), with an additional optimal cost related to the spectral properties of the network. Initially, we focus on functions for which the Legendre-Fenchel conjugate can be explicitly minimized, i.e., admissible or dual-friendly functions. Then, we study distributed optimization algorithms for non-dual-friendly functions, as well as a method to improve the dependency on the parameters of the functions involved. Numerical analysis of the proposed algorithms is also provided.
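To make the dual-friendly case concrete, below is a minimal sketch, not the paper's algorithm: quadratic local objectives f_i(x) = (a_i/2)||x - b_i||^2 on a ring graph (the objectives, the topology, the step size, and all names are my own choices for illustration). Consensus is encoded through the graph Laplacian W, the Lagrangian minimizer is available in closed form (this is the explicitly minimizable Legendre-Fenchel conjugate), and Nesterov-accelerated gradient ascent runs on the dual; each dual gradient evaluation needs only one multiplication by W, i.e., one round of neighbor communication.

```python
# Minimal sketch (assumptions mine, not the paper's algorithm): dual approach
# for distributed consensus optimization with dual-friendly (quadratic)
# local functions, using Nesterov-accelerated gradient ascent on the dual.
import numpy as np

n, d = 6, 3                              # number of nodes, variable dimension
rng = np.random.default_rng(0)
a = rng.uniform(1.0, 3.0, size=n)        # local strong-convexity constants
b = rng.normal(size=(n, d))              # local targets: f_i(x) = a_i/2 ||x - b_i||^2

# Graph Laplacian of a ring; multiplying by W is one round of neighbor communication.
W = 2 * np.eye(n)
for i in range(n):
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = -1.0

def primal_from_dual(Y):
    """Closed-form Lagrangian minimizer x_i(Y) = b_i - (W Y)_i / a_i.
    Each node only needs (W Y)_i, i.e., information from its neighbors."""
    return b - (W @ Y) / a[:, None]

# Accelerated gradient ascent on the dual q(Y) = min_X { F(X) + <Y, W X> }.
# By Danskin's theorem, grad q(Y) = W x(Y); for this quadratic case the
# gradient's Lipschitz constant is at most lambda_max(W)^2 / min_i a_i.
step = a.min() / np.linalg.eigvalsh(W)[-1] ** 2
Y = Z = np.zeros((n, d))
t = 1.0
for _ in range(300):
    grad = W @ primal_from_dual(Z)        # one communication round per iteration
    Y_next = Z + step * grad
    t_next = 0.5 * (1 + np.sqrt(1 + 4 * t ** 2))
    Z = Y_next + (t - 1) / t_next * (Y_next - Y)   # Nesterov momentum
    Y, t = Y_next, t_next

X = primal_from_dual(Y)
print("consensus violation:", np.abs(X - X.mean(axis=0)).max())
print("distance to optimum:", np.abs(X.mean(axis=0) - np.average(b, axis=0, weights=a)).max())
```

In this toy instance the recovered primal iterates approach the a_i-weighted average of the b_i, which is the consensus optimum; the only coupling between nodes is the multiplication by W inside the dual gradient.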

Fri Dec 01 2017
Machine Learning
Optimal Algorithms for Distributed Optimization
In this paper, we study the optimal convergence rate for distributed convex optimization problems in networks. We show that Nesterov's gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem.
Wed Oct 23 2019
Machine Learning
Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks
This paper proposes a novel family of primal-dual-based distributed algorithms for smooth, convex, multi-agent optimization over networks that use only gradient information and gossip communications. The algorithms can also employ acceleration on both the computation and the communications.
Tue Feb 28 2017
Machine Learning
Optimal algorithms for smooth and strongly convex distributed optimization in networks
In this paper, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e., master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$
Thu Aug 18 2016
Machine Learning
Distributed Optimization of Convex Sum of Non-Convex Functions
We show that the coupled consensus and projected gradient descent algorithm proposed in [1] can optimize a convex sum of non-convex functions. We further discuss the applications of this analysis in improving privacy in distributed optimization.
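For reference, here is a toy sketch in the spirit of that coupled update, my own illustration rather than the exact method of [1]: each node first averages with its neighbors through a doubly stochastic mixing matrix, then takes a projected gradient step on its local, possibly non-convex cost. The local costs, the mixing matrix, the step size, and the box constraint below are all assumed for illustration; the two non-convex costs sum to the convex function x^2.

```python
# Toy sketch of a coupled consensus + projected gradient update
# (my own illustration, not the exact algorithm of [1]).
import numpy as np

def consensus_projected_gradient(W, grads, project, x0, step=0.05, iters=500):
    """W: doubly stochastic mixing matrix; grads: local gradient oracles;
    project: projection onto the common constraint set; x0: (n, d) start."""
    X = x0.copy()
    for _ in range(iters):
        X = W @ X                                   # consensus (gossip) averaging
        for i, grad in enumerate(grads):            # local projected gradient steps
            X[i] = project(X[i] - step * grad(X[i]))
    return X

# Two non-convex local costs whose sum x^2 is convex:
#   f1(x) = x^2/2 + 2 cos(x),   f2(x) = x^2/2 - 2 cos(x)
grads = [lambda x: x - 2 * np.sin(x),
         lambda x: x + 2 * np.sin(x)]
W = np.full((2, 2), 0.5)                            # complete mixing between the 2 nodes
project = lambda x: np.clip(x, -3.0, 3.0)           # box constraint [-3, 3]
X = consensus_projected_gradient(W, grads, project, np.array([[2.5], [1.0]]))
print(X)   # both nodes approach x = 0, the global minimizer of f1 + f2
```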
Mon Oct 29 2018
Machine Learning
Distributed Convex Optimization With Limited Communications
A distributed convex optimization algorithm is proposed. The algorithm addresses large distributed optimization problems with limited communication among nodes.
Fri Oct 05 2018
Machine Learning
Accelerated Decentralized Optimization with Local Updates for Smooth and Strongly Convex Objectives
The algorithm is a decentralized accelerated gossip algorithm that requires only local synchrony. Its rate depends on the conditioning of the local functions as well as the network topology and delays.