Published on Wed Aug 11 2021

Estimation of Fair Ranking Metrics with Incomplete Judgments

Ömer Kırnap, Fernando Diaz, Asia Biega, Michael Ekstrand, Ben Carterette, Emine Yılmaz

Abstract

There is increasing attention to evaluating the fairness of search system ranking decisions. These metrics often consider the membership of items in particular groups, often identified using protected attributes such as gender or ethnicity. To date, these metrics typically assume the availability and completeness of protected attribute labels for items. However, the protected attributes of individuals are rarely present, limiting the application of fair ranking metrics in large-scale systems. To address this problem, we propose a sampling strategy and estimation technique for four fair ranking metrics. We formulate a robust and unbiased estimator that can operate even with a very limited number of labeled items. We evaluate our approach using both simulated and real-world data. Our experimental results demonstrate that our method can estimate this family of fair ranking metrics and provides a robust, reliable alternative to exhaustive or random data annotation.
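The abstract does not spell out the estimator, but the core idea of correcting a metric computed from a sampled subset of labels is the classic Horvitz-Thompson (inverse-probability) trick. The sketch below is illustrative only: the exposure metric with a logarithmic position discount, the sampling probabilities, and the function name `estimate_group_exposure` are assumptions for the example, not the paper's actual formulation.

```python
import numpy as np

def estimate_group_exposure(group_labels, sample_probs, sampled, target_group):
    """Estimate a group's total exposure in a single ranking when only a
    sampled subset of items has a known protected-attribute label.

    group_labels : group label of the item at each rank (None where the
                   label was not collected)
    sample_probs : probability that each item's label was collected
    sampled      : boolean mask, True where a label was actually collected
    target_group : the protected group whose exposure we estimate
    """
    estimate = 0.0
    for rank, (label, p, seen) in enumerate(zip(group_labels, sample_probs, sampled)):
        if not seen:
            continue  # unlabeled items contribute only through the reweighting
        exposure = 1.0 / np.log2(rank + 2)    # logarithmic position discount (assumed)
        indicator = 1.0 if label == target_group else 0.0
        estimate += indicator * exposure / p  # inverse-probability correction
    return estimate

# Toy usage: a five-item ranking in which three labels were collected.
labels  = ["A", None, "B", "A", None]
probs   = [0.9, 0.3, 0.8, 0.6, 0.3]
sampled = [True, False, True, True, False]
print(estimate_group_exposure(labels, probs, sampled, "A"))
```

Because each sampled item is up-weighted by the reciprocal of its inclusion probability, the expected value of the estimate over repeated samples equals the metric computed with complete labels, which is what makes this style of estimator unbiased.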

Mon Apr 25 2016
Machine Learning
Unbiased Comparative Evaluation of Ranking Functions
Eliciting relevance judgments for ranking evaluation is labor-intensive. Unlike traditional approaches that select documents for judging deterministically, probabilistic sampling has shown intriguing promise since it enables the design of unbiased estimators.
Wed May 05 2021
Machine Learning
When Fair Ranking Meets Uncertain Inference
Existing fair ranking systems assume accurate demographic information about individuals is available to the ranking algorithm. In practice, however, this assumption may not hold. Social and legal barriers may prevent algorithm operators from collecting people's demographic information. In these cases, algorithm operators may attempt to infer people's demographics.
Tue Jun 04 2019
Artificial Intelligence
Balanced Ranking with Diversity Constraints
In-group fairness can be affected by the presence of disadvantaged groups in a set. This is because the selected candidates may not be the best ones in a given group. We introduce additional constraints aimed at balancing in-group unfairness. We then formalize the induced fairness problems.
Wed Jul 14 2021
Machine Learning
Fairness in Ranking under Uncertainty
Unfairness occurs when an agent with higher merit obtains a worse outcome. A principal or algorithm making decisions never has access to agents' true merit; the role of observed features is to give rise to a posterior distribution over their merits.
Fri Mar 19 2021
Machine Learning
Individually Fair Ranking
Fri Jun 19 2020
Machine Learning
Achieving Fairness via Post-Processing in Web-Scale Recommender Systems
Tue Jun 23 2020
Machine Learning
Fairness without Demographics through Adversarially Reweighted Learning
Much of the previous machine learning (ML) fairness literature assumes that protected attributes such as race and sex are present in the dataset. In practice, however, factors like privacy and regulation often preclude the collection of protected features. How can we train an ML model to improve fairness when we do not even know protected group memberships?
Fri Oct 07 2016
Machine Learning
Equality of Opportunity in Supervised Learning
We propose a criterion for discrimination against a specified sensitive attribute in supervised learning. We show how to optimally adjust any learned predictor so as to remove discrimination according to our definition. Our framework also improves incentives by shifting the cost of poor classification from disadvantaged groups to the decision maker.
Thu Dec 13 2018
Machine Learning
Improving fairness in machine learning systems: What do industry practitioners need?
The potential for machine learning (ML) systems to amplify social inequities and unfairness is receiving increasing popular and academic attention. If these tools are to have a positive impact on industry practice, it is crucial that their design be informed by an understanding of real-world needs.
Sat Mar 02 2019
Artificial Intelligence
Fairness in Recommendation Ranking through Pairwise Comparisons
Recommender systems are one of the most pervasive applications of machine learning in industry. Many services use them to match users to products or information. What are the possible fairness risks, and how can we quantify them?
Sat Jun 01 2019
Machine Learning
Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination
The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness. We consider the use of an auxiliary dataset, such as the US census, to construct models that predict protected classes. We show that a variety of common disparity measures are generally unidentifiable.
Wed Aug 11 2021
Machine Learning
Overview of the TREC 2020 Fair Ranking Track
This paper provides an overview of the NIST TREC 2020 Fair Ranking track. The track contains two tasks, reranking and retrieval, with a shared evaluation.