Published on Sun Jan 12 2020

Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach

Xin Zhang, Minghong Fang, Jia Liu, Zhengyuan Zhu

Abstract

With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem of jointly improving data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing. Toward this end, we propose a new decentralized stochastic gradient method with sparse differential Gaussian-masked stochastic gradients (SDM-DSGD) for non-convex distributed edge learning. Our main contributions are three-fold: i) we theoretically establish the privacy and communication-efficiency guarantees of our SDM-DSGD method, which outperform those of all existing works; ii) we show that SDM-DSGD improves the fundamental training-privacy trade-off by two orders of magnitude compared with the state of the art; and iii) we reveal theoretical insights and offer practical design guidelines for the interaction between privacy preservation and communication efficiency, two conflicting performance goals. We conduct extensive experiments with a variety of learning models on the MNIST and CIFAR-10 datasets to verify our theoretical findings. Collectively, our results contribute to the theory and algorithm design for distributed edge learning.
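To make the mechanism concrete, below is a minimal NumPy sketch of what a single SDM-DSGD iteration at one node might look like. It assumes top-k sparsification of the model differential, a Gaussian mechanism for the mask, and a doubly stochastic mixing matrix for consensus; all function names, parameters, and defaults are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_k(v, k):
    """Keep the k largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def sdm_dsgd_step(x, x_pub, neighbor_pubs, weights, grad,
                  lr=0.1, k=10, sigma=0.05):
    """One illustrative SDM-DSGD iteration at a single node.

    x             : current local model
    x_pub         : the sparse, noisy "public" copy of x that neighbors hold
    neighbor_pubs : locally held public copies of the neighbors' models
    weights       : mixing weights over [self] + neighbors (one row of a
                    doubly stochastic mixing matrix)
    grad          : stochastic gradient of the local loss at x
    """
    # Differential message: sparsify the change since the last transmission
    # (communication efficiency), then add a Gaussian mask (privacy).
    msg = top_k(x - x_pub, k) + rng.normal(0.0, sigma, size=x.shape)
    x_pub = x_pub + msg  # receivers apply the identical update to their copies

    # Consensus plus SGD: mix the public copies, then take a gradient step.
    mixed = weights[0] * x_pub
    for w, p in zip(weights[1:], neighbor_pubs):
        mixed = mixed + w * p
    return mixed - lr * grad, x_pub, msg  # msg is all that crosses the network
```

The point of transmitting the sparse, masked differential msg instead of the model itself is that only k entries plus noise cross the network per round, while the Gaussian mask is what underlies the differential-privacy guarantee.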

Related Papers

Thu Dec 10 2020
Machine Learning
DONE: Distributed Approximate Newton-type Method for Federated Edge Learning
There is growing interest in applying distributed machine learning to edge computing. DONE is a distributed approximate Newton-type algorithm with a fast convergence rate for communication-efficient edge learning.
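The defining step of a Newton-type method is replacing the plain gradient step with an approximate Newton direction d ≈ H^(-1) g. As a hedged illustration of that idea (a sketch of the general technique, not necessarily DONE's exact scheme), the snippet below approximates the direction with Richardson iteration, which needs only Hessian-vector products; the helper name, step size, and toy problem are hypothetical.

```python
import numpy as np

def richardson_newton_direction(hess_vec, g, alpha=0.1, iters=50):
    """Approximate the Newton direction d = H^{-1} g using only
    Hessian-vector products, via the Richardson iteration
    d <- d + alpha * (g - H d), stable when alpha < 2 / lambda_max(H)."""
    d = np.zeros_like(g)
    for _ in range(iters):
        d = d + alpha * (g - hess_vec(d))
    return d

# Toy usage on a quadratic f(x) = 0.5 x^T H x - b^T x (gradient: H x - b).
H = np.array([[2.0, 0.3], [0.3, 1.0]])
b = np.array([1.0, -2.0])
x = np.zeros(2)
g = H @ x - b                                # local gradient
x = x - richardson_newton_direction(lambda v: H @ v, g)
print(x, np.linalg.solve(H, b))              # the two should nearly agree
```

In a federated setting, each device can run such an iteration on its local data and transmit only the resulting direction, which is what makes Newton-type schemes attractive for communication efficiency.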
Sun Jun 20 2021
Machine Learning
Fine-Grained Data Selection for Improved Energy Efficiency of Federated Edge Learning
Mon Aug 31 2020
Artificial Intelligence
Federated Edge Learning: Design Issues and Challenges
Federated Learning (FL) is a distributed machine learning technique in which each device contributes to the learning model by independently computing gradients on its local training data. Implementing FL at the network edge, however, is challenging due to system and data heterogeneity.
Mon Oct 19 2020
Machine Learning
Blind Federated Edge Learning
We study federated edge learning (FEEL), where wireless edge devices, each with its own dataset, collaboratively learn a global model with the help of a parameter server (PS). At each iteration, devices perform local updates using their local data and the most recent global model received from the PS. The PS then updates the global model according to the signal received over the wireless multiple-access channel (MAC).
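As a rough sketch of that loop, the toy below models over-the-air aggregation on the wireless MAC as a noisy sum of the devices' transmitted updates, which the PS then averages into the global model. The least-squares task, the single-gradient-step local update, and all names are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(model, data, lr=0.1):
    """One local gradient step on a device (least-squares stand-in task)."""
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def feel_round(global_model, device_data, noise_std=0.01):
    """One FEEL round: devices update locally, the wireless MAC delivers the
    noisy superposition of their updates, and the PS averages it in."""
    updates = [local_update(global_model, d) - global_model
               for d in device_data]
    superposed = np.sum(updates, axis=0)                # over-the-air sum
    received = superposed + rng.normal(0.0, noise_std, superposed.shape)
    return global_model + received / len(device_data)   # PS model update

# Toy run: three devices whose data share one linear ground truth.
w_true = np.array([1.0, -0.5])
devices = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ w_true))
model = np.zeros(2)
for _ in range(100):
    model = feel_round(model, devices)
print(model)  # should approach w_true
```

Note that in this sketch the PS never observes any individual device's update, only their noisy superposition; exploiting that summation for free is the bandwidth advantage of analog aggregation.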
Thu Aug 05 2021
Artificial Intelligence
Multi-task Federated Edge Learning (MtFEEL) in Wireless Networks
Federated Learning (FL) has evolved into a promising technique for handling distributed machine learning across edge devices. Most work in FL learns a single neural network (NN) that optimises a global objective, though some works instead seek a NN that can be personalised for each edge device.
Sun Dec 30 2018
Machine Learning
Broadband Analog Aggregation for Low-Latency Federated Edge Learning (Extended Version)
The popularity of mobile devices results in the availability of enormous data and computational resources at the network edge. To leverage these data and resources, a new machine learning paradigm, called edge learning, has emerged. While computing speeds are advancing rapidly, communication latency is becoming the bottleneck of fast edge learning.