Published on Tue Mar 10 2020

Addressing multiple metrics of group fairness in data-driven decision making

Marius Miron, Songül Tolan, Emilia Gómez, Carlos Castillo

Abstract

The Fairness, Accountability, and Transparency in Machine Learning (FAT-ML) literature proposes a varied set of group fairness metrics to measure discrimination against socio-demographic groups that are characterized by a protected feature, such as gender or race. A system can be deemed either fair or unfair depending on the choice of metric, and several of the proposed metrics are incompatible with each other. We address this incompatibility empirically, observing that several of these metrics cluster together in two or three main clusters for the same groups and machine learning methods. In addition, we propose a robust way to visualize multidimensional fairness in two dimensions through a Principal Component Analysis (PCA) of the group fairness metrics. Experimental results on multiple datasets show that the PCA decomposition explains the variance between the metrics with one to three components.
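The PCA-based visualization described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the metric values below are made up, and the column labels (statistical parity difference, equal opportunity difference, average odds difference, disparate impact) are assumed examples of group fairness metrics. Rows represent classifiers; the matrix of metric scores is centered and decomposed via SVD, and each classifier is projected onto the first two principal components.

```python
import numpy as np

# Hypothetical metric values: rows are classifiers, columns are
# group fairness metrics (e.g. statistical parity difference,
# equal opportunity difference, average odds difference,
# disparate impact). All numbers are illustrative only.
metrics = np.array([
    [0.12, 0.08, 0.10, 0.85],
    [0.30, 0.25, 0.28, 0.60],
    [0.05, 0.04, 0.06, 0.95],
    [0.22, 0.20, 0.18, 0.70],
    [0.15, 0.10, 0.12, 0.80],
])

# Center each metric column, then take the SVD: the rows of Vt
# are the principal components of the metric space.
X = metrics - metrics.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Project every classifier onto the first two components,
# giving the 2D fairness visualization.
coords_2d = X @ Vt[:2].T

# Fraction of variance explained by each component; if the
# metrics cluster, the first one to three components dominate.
explained = s**2 / np.sum(s**2)
print(coords_2d.shape)
print(explained[:2].sum())
```

In practice one would plot `coords_2d` as a scatter plot and inspect `explained` to check, as the paper reports, how much of the variance between metrics one to three components capture.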

Thu Sep 09 2021
Machine Learning
A Systematic Approach to Group Fairness in Automated Decision Making
The field of algorithmic fairness has brought forth many ways to measure and improve the fairness of machine learning models. These findings are still not widely used in practice. The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics.
Tue Jan 05 2021
Artificial Intelligence
Characterizing Intersectional Group Fairness with Worst-Case Comparisons
Machine Learning or Artificial Intelligence algorithms have gained considerable scrutiny in recent times. This has led to a growing body of work that identifies and attempts to fix these biases. A first step towards making these algorithms more fair is designing metrics that measure unfairness.
Mon May 24 2021
Machine Learning
MultiFair: Multi-Group Fairness in Machine Learning
Algorithmic fairness is becoming increasingly important in data mining and machine learning. The vast majority of the existing works on group fairness focus on debiasing with respect to a single sensitive attribute. We formulate it as a mutual information minimization problem and propose a generic end-to-end algorithmic framework to solve it.
Tue Sep 03 2019
Artificial Intelligence
Quantifying Infra-Marginality and Its Trade-off with Group Fairness
In critical decision-making scenarios, optimizing accuracy can lead to a biased classifier. We propose a method to measure infra-marginality, and a simple algorithm to maximize group-wise accuracy.
Fri Jun 19 2020
Machine Learning
Two Simple Ways to Learn Individual Fairness Metrics from Data
Sun Nov 01 2020
Artificial Intelligence
Making ML models fairer through explanations: the case of LimeOut
Algorithmic decisions are now being used on a daily basis, and based on complex and biased processes. Not only unfair outcomes affect human rights, they also undermine public trust in ML and AI. The simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness.