Published on Thu Nov 05 2020

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

Dimitris Fotakis, Thanasis Lianeas, Georgios Piliouras, Stratis Skoulakis

We consider a natural model of online preference aggregation. The widely studied Generalized Min-Sum-Set-Cover (GMSSC) problem serves as a formal model for this setting. We show how to achieve low regret for GMSSC in polynomial time.

Abstract

We consider a natural model of online preference aggregation, where sets of preferred items R_t, along with a demand for k_t items from each R_t, appear online. Without prior knowledge of (R_t, k_t), the learner maintains a ranking π_t, aiming that at least k_t items from R_t appear high in π_t. This is a fundamental problem in preference aggregation, with applications to, e.g., ordering product or news items in web pages based on user scrolling and click patterns. The widely studied Generalized Min-Sum-Set-Cover (GMSSC) problem serves as a formal model for this setting. GMSSC is NP-hard, and the standard application of no-regret online learning algorithms is computationally inefficient, because such algorithms operate in the space of rankings. In this work, we show how to achieve low regret for GMSSC in polynomial time. We employ dimensionality reduction from rankings to the space of doubly stochastic matrices, where we apply Online Gradient Descent. A key step is to show how subgradients can be computed efficiently, by solving the dual of a configuration LP. Using oblivious deterministic and randomized rounding schemes, we map doubly stochastic matrices back to rankings with a small loss in the GMSSC objective.
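To make the pipeline concrete, below is a minimal sketch of its three ingredients: Online Gradient Descent over doubly stochastic matrices, a subgradient of a fractional relaxation of the cover cost, and a rounding step back to rankings. It deliberately simplifies the paper: the demand is fixed to one item per request (plain MSSC rather than GMSSC), a cumulative-coverage relaxation stands in for the configuration LP dual, Sinkhorn normalization approximates the exact projection step, and a naive column-sampling rule replaces the paper's oblivious rounding schemes. All names (sinkhorn_project, fractional_cost_and_subgrad, round_to_ranking) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def sinkhorn_project(A, iters=300):
    """Approximately map a nonnegative matrix to a doubly stochastic one
    by alternately normalizing rows and columns (Sinkhorn-Knopp). A crude
    stand-in for the exact projection an OGD analysis would call for."""
    A = np.clip(A, 1e-12, None)
    for _ in range(iters):
        A = A / A.sum(axis=1, keepdims=True)
        A = A / A.sum(axis=0, keepdims=True)
    return A


def fractional_cost_and_subgrad(A, R):
    """Fractional MSSC cost of the fractional ranking A for request R with
    demand k = 1: sum over positions j of max(0, 1 - mass of R placed at
    positions <= j). Each summand is a max of affine functions of A, so the
    cost is convex and a subgradient is read off from the positions that
    are still (fractionally) uncovered."""
    n = A.shape[0]
    covered = np.cumsum(A[R, :].sum(axis=0))   # mass of R up to position j
    uncovered = np.maximum(0.0, 1.0 - covered)
    G = np.zeros_like(A)
    for j in range(n):
        if uncovered[j] > 0:                   # position j not yet covered:
            G[np.ix_(R, np.arange(j + 1))] -= 1.0   # mass on R helps here
    return uncovered.sum(), G


def round_to_ranking(A):
    """Illustrative randomized rounding: fill positions left to right,
    drawing each item with probability proportional to its column mass
    among the items not yet placed. The paper instead uses oblivious
    deterministic/randomized schemes with provable loss guarantees."""
    remaining = list(range(A.shape[0]))
    ranking = []
    for j in range(A.shape[0]):
        p = np.asarray(A[remaining, j], dtype=float)
        p /= p.sum()
        pick = rng.choice(len(remaining), p=p)
        ranking.append(remaining.pop(pick))
    return ranking


# Online Gradient Descent in the space of doubly stochastic matrices.
n, T, eta = 8, 50, 0.1
A = np.full((n, n), 1.0 / n)                   # uniform fractional ranking
for t in range(T):
    R = rng.choice(n, size=3, replace=False)   # adversary reveals preferred set
    ranking = round_to_ranking(A)              # play an integral ranking
    played = min(ranking.index(i) for i in R)  # realized cover position (k = 1)
    cost, G = fractional_cost_and_subgrad(A, R)
    A = sinkhorn_project(A - eta * G)          # gradient step, then re-project
```

The structural point the sketch preserves is that the relaxed cost is convex in the doubly stochastic matrix, so each round of Online Gradient Descent needs only a subgradient and a projection, both computable in polynomial time, followed by a rounding step that maps the fractional ranking back to an actual ranking.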

Fri Aug 30 2013
Machine Learning
Online Ranking: Discrete Choice, Spearman Correlation and Other Feedback
We present an algorithm with low expected regret over a time horizon of T steps with respect to the best single ranking in hindsight. We also prove a matching regret lower bound for online rank aggregation under the Spearman correlation measure.
Sat Jun 21 2014
Machine Learning
Minimax-optimal Inference from Partial Rankings
This paper studies the problem of inferring a global preference based on the partial rankings provided by many users over different subsets of items. A question of particular interest is how to optimally assign items to users for ranking.
Sat Aug 11 2018
Machine Learning
Ranking with Features: Algorithm and A Graph Theoretic Analysis
We consider the problem of ranking a set of items from pairwise comparisons in the presence of features associated with the items. We present a new least-squares-based algorithm, called fBTL-LS, which we show requires substantially fewer pairs to obtain a good ranking.
Tue Jul 25 2017
Machine Learning
A Nearly Instance Optimal Algorithm for Top-k Ranking under the Multinomial Logit Model
We study the active learning problem of top-k ranking from multi-wise comparisons under the popular multinomial logit model. We design a new active ranking algorithm that uses no information about the underlying items' preference scores. We show that our algorithm is nearly instance optimal.
Tue Nov 06 2018
Machine Learning
How Many Pairwise Preferences Do We Need to Rank A Graph Consistently?
We consider the problem of optimally recovering the true ranking of items from a randomly chosen subset of their pairwise preferences. We embed not the graph itself, but its strong product, to capture the pairwise node relationships. Our proposed algorithm is the first to provide statistical consistency on two ranking losses. We also report experimental evaluations on different synthetic and real datasets.
Sun Oct 05 2014
Machine Learning
Online Ranking with Top-1 Feedback
We consider a setting where a system learns to rank a fixed set of items. At the end of each round, the relevance score for only the top-ranked object is revealed. We provide a comprehensive set of results regarding learnability under this challenging setting.