Published on Tue Sep 08 2020

FedCM: A Real-time Contribution Measurement Method for Participants in Federated Learning

Boyi Liu, Bingjie Yan, Yize Zhou, Zhixuan Liang, Cheng-Zhong Xu


Abstract

Federated Learning (FL) creates an ecosystem in which multiple agents collaborate on building models while preserving data privacy. A method for measuring each agent's contribution in an FL system is critical for fair credit allocation, but few have been proposed. In this paper, we develop FedCM, a real-time contribution measurement method that is simple but powerful. The method defines the impact of each agent and comprehensively considers the current round and the previous round to obtain each agent's contribution rate via attention aggregation. Moreover, FedCM updates contributions every round, which enables it to operate in real time. Real-time measurement is not considered by existing approaches, but it is critical for FL systems when allocating computing power, communication resources, etc. Compared to the state-of-the-art method, experimental results show that FedCM is more sensitive to data quantity and data quality under the premise of real-time operation. Furthermore, we developed open-source federated learning software based on FedCM, which has been applied to identify COVID-19 from medical images.
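The abstract does not give FedCM's exact formulas, but the idea it describes (a per-round attention score for each agent's update, blended with the previous round's rates) can be sketched as follows. The cosine-similarity attention against the aggregated update and the momentum-style blending are assumptions for illustration, not the paper's definitions.

```python
import math

def fedcm_contributions(client_updates, prev_contrib=None, momentum=0.5):
    """Sketch of a FedCM-style real-time contribution measure.

    Assumed forms (not from the paper): each agent's attention score is
    the cosine similarity between its update and the mean aggregated
    update; a softmax turns scores into this round's contribution rates,
    which are then blended with the previous round's rates.
    """
    n = len(client_updates)
    dim = len(client_updates[0])
    # aggregated (mean) update across agents
    agg = [sum(u[j] for u in client_updates) / n for j in range(dim)]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a))

    # attention score: cosine similarity with the aggregate (assumed form)
    sims = [dot(u, agg) / (norm(u) * norm(agg) + 1e-12) for u in client_updates]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]      # numerically stable softmax
    total = sum(exps)
    attn = [e / total for e in exps]            # this round's contribution rates
    if prev_contrib is None:
        return attn
    # blend with last round's rates, then renormalize so rates sum to 1
    c = [momentum * p + (1 - momentum) * a for p, a in zip(prev_contrib, attn)]
    s = sum(c)
    return [x / s for x in c]
```

Because each call uses only the current updates and the previous rates, the measure can be refreshed every aggregation round, which is the real-time property the abstract emphasizes.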

Fri Feb 26 2021
Machine Learning
Efficient Client Contribution Evaluation for Horizontal Federated Learning
In federated learning (FL), fair and accurate measurement of each federated participant's contribution is of great significance. Previous methods for contribution measurement were based on enumeration over possible combinations of participants. In this paper, an efficient method is proposed to evaluate the contributions of federated participants.
Sun Sep 05 2021
Artificial Intelligence
GTG-Shapley: Efficient and Accurate Participant Contribution Evaluation in Federated Learning
Federated Learning (FL) bridges the gap between collaborative machine learning and preserving data privacy. It is essential to fairly evaluate participants' contribution to the final FL model without exposing their private data. Shapley-based techniques have been widely adopted to provide fair evaluation of FL participant contributions.
Sun Sep 20 2020
Machine Learning
Estimation of Individual Device Contributions for Incentivizing Federated Learning
Federated learning (FL) is an emerging technique used to train a machine-learning model collaboratively using the data and computation resources of mobile devices without exposing privacy-sensitive user data. This paper proposes a computation- and communication-efficient method of estimating a participating device's contribution level.
Mon Sep 14 2020
Machine Learning
A Principled Approach to Data Valuation for Federated Learning
Federated learning (FL) is a popular technique to train machine learning (ML) models on decentralized data sources. The Shapley value (SV) defines a unique payoff scheme that satisfies many desiderata for a data value notion. This paper proposes a variant of the SV amenable to FL.
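The Shapley value referenced in this entry has a standard closed form: a participant's value is its average marginal contribution over all coalitions. A minimal exact implementation (the O(2^n) enumeration that FL variants of the SV aim to avoid) looks like this; the `utility` function here is a stand-in for whatever model-quality metric an FL system would use.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact Shapley values by enumerating all coalitions.

    `utility` maps a frozenset of players to a payoff (e.g., test
    accuracy of a model trained on that coalition's data). Each
    player's value is the weighted average of its marginal
    contribution utility(S ∪ {p}) - utility(S) over all subsets S.
    """
    n = len(players)
    sv = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # weight |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (utility(S | {p}) - utility(S))
        sv[p] = total
    return sv
```

For an additive utility, each player's Shapley value equals its individual payoff, and the values always sum to the grand coalition's utility (the efficiency property) — a quick sanity check for any approximate FL variant.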
Tue Sep 17 2019
Machine Learning
Measure Contribution of Participants in Federated Learning
Federated Machine Learning (FML) creates an ecosystem for multiple parties to collaborate on building models while protecting data privacy. A measure of the contribution for each party in FML enables fair credits allocation.
Tue Aug 24 2021
Machine Learning
Data-Free Evaluation of User Contributions in Federated Learning
Federated learning trains a machine learning model on mobile devices in a distributed manner using each device's private data and computing resources. A critical issue is to evaluate individual users' contributions so that (1) users' effort in model training can be compensated with proper incentives and (2) malicious