Published on Mon Dec 14 2020

Federated Learning under Importance Sampling

Elsa Rizk, Stefan Vlaski, Ali H. Sayed

Abstract

Federated learning encapsulates distributed learning strategies that are managed by a central unit. Since it relies on using a selected number of agents at each iteration, and since each agent, in turn, taps into its local data, it is only natural to study optimal sampling policies for selecting agents and their data in federated learning implementations. Usually, only uniform sampling schemes are used. However, in this work, we examine the effect of importance sampling and devise schemes for sampling agents and data non-uniformly guided by a performance measure. We find that in schemes involving sampling without replacement, the performance of the resulting architecture is controlled by two factors related to data variability at each agent, and model variability across agents. We illustrate the theoretical findings with experiments on simulated and real data and show the improvement in performance that results from the proposed strategies.
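To make the abstract's sampling idea concrete, the sketch below draws agents without replacement with probabilities proportional to an importance score, and then re-weights their models in the server-side average to compensate for the non-uniform selection. This is a minimal illustration in Python, not the authors' exact scheme: the `importance` scores, the function names, and the gradient-norm proxy in the usage example are all assumptions.

```python
import numpy as np

def sample_agents(importance, num_sampled, rng):
    # Normalize the importance scores into a sampling distribution and
    # draw a subset of agents without replacement.
    probs = importance / importance.sum()
    chosen = rng.choice(len(importance), size=num_sampled, replace=False, p=probs)
    return chosen, probs

def aggregate(local_models, chosen, probs):
    # Importance-weighted averaging: scaling agent k's model by 1/(N p_k)
    # compensates for non-uniform selection, so agents that are sampled
    # more often do not dominate the aggregate. local_models is assumed
    # to map each agent index to that agent's current model vector.
    n_agents = len(probs)
    weights = 1.0 / (n_agents * probs[chosen])
    weights = weights / weights.sum()  # normalize to a convex combination
    return sum(w * local_models[k] for w, k in zip(weights, chosen))

# Hypothetical usage: 100 agents, with importance taken as a running
# estimate of each agent's gradient norm (a common proxy in the literature).
rng = np.random.default_rng(0)
importance = rng.uniform(0.1, 1.0, size=100)
chosen, probs = sample_agents(importance, num_sampled=10, rng=rng)
```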

Related Papers

Mon Oct 26 2020
Machine Learning
Optimal Importance Sampling for Federated Learning
Federated learning involves a mixture of centralized and decentralized processing tasks. A server regularly selects a sample of the agents and these in turn sample their local data to compute stochastic gradients. This process runs continually.
Thu Feb 20 2020
Machine Learning
Dynamic Federated Learning
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments. Most performance analyses assume static optimization problems and offer no guarantees in the presence of drifts in the problem solution or the data.
Mon Jul 26 2021
Machine Learning
On The Impact of Client Sampling on Federated Learning Convergence
Client sampling is a central operation of current state-of-the-art federated learning (FL) approaches. The impact of this procedure on the convergence and speed of FL remains under-investigated to date. In this work we introduce a novel decomposition theorem for the convergence of FL.
Sat Feb 29 2020
Machine Learning
Adaptive Federated Optimization
Federated learning is a distributed machine learning paradigm in which a large number of clients coordinate with a central server to learn a model. Standard federated optimization methods such as Federated Averaging (FedAvg) are often difficult to tune and exhibit unfavorable convergence behavior.
Mon Sep 06 2021
Machine Learning
On Second-order Optimization Methods for Federated Learning
We consider federated learning (FL), where the training data is distributed across a large number of clients. The standard optimization method is Federated Averaging (FedAvg), which performs multiple local first-order optimization steps between communication rounds. We (i) show that FedAvg performs surprisingly…
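Since this snippet (and the FedAvg entries above) hinge on running multiple local first-order steps between communication rounds, a minimal sketch of one such round may help. This is an illustrative Python implementation on a least-squares objective; the data layout and hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np

def fedavg_round(w_global, clients, rng, num_sampled=10, local_steps=5, lr=0.1):
    # One FedAvg communication round: sample clients uniformly, let each
    # run several local gradient (first-order) steps from the current
    # global model, then average the resulting local models at the server.
    sampled = rng.choice(len(clients), size=num_sampled, replace=False)
    local_models = []
    for c in sampled:
        X, y = clients[c]          # this client's local data (features, targets)
        w = w_global.copy()
        for _ in range(local_steps):
            grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
            w = w - lr * grad
        local_models.append(w)
    return np.mean(local_models, axis=0)
```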
Thu Feb 25 2021
Machine Learning
Emerging Trends in Federated Learning: From Model Fusion to Federated X Learning
Federated learning is a new learning paradigm that decouples data collection and model training via multi-party computation and model aggregation. As a flexible learning setting, federated learning has the potential to integrate with other learning frameworks.