Published on Wed May 28 2014

The PeerRank Method for Peer Assessment

Toby Walsh

Abstract

We propose the PeerRank method for peer assessment. This constructs a grade for an agent based on the grades proposed by the agents evaluating the agent. Since the grade of an agent is a measure of their ability to grade correctly, the PeerRank method weights grades by the grades of the grading agent. The PeerRank method also provides an incentive for agents to grade correctly. As the grades of an agent depend on the grades of the grading agents, and as these grades themselves depend on the grades of other agents, we define the PeerRank method by a fixed point equation similar to the PageRank method for ranking web-pages. We identify some formal properties of the PeerRank method (for example, it satisfies axioms of unanimity, no dummy, no discrimination and symmetry), discuss some examples, compare with related work and evaluate the performance on some synthetic data. Our results show considerable promise, reducing the error in grade predictions by a factor of 2 or more in many cases over the natural baseline of averaging peer grades.
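
The abstract describes the core mechanism (an agent's grade is an average of the grades it receives, weighted by the current grades of the graders, computed as a fixed point) but does not give the update rule itself. The sketch below is one plausible damped iteration of that kind in NumPy; the function name peerrank, the mixing parameter alpha, and the initialisation from the plain average of peer grades are illustrative assumptions, not the paper's exact equation.

import numpy as np

def peerrank(A, alpha=0.5, tol=1e-9, max_iter=1000):
    # A[i, j] is the grade (in [0, 1]) that agent i assigns to agent j.
    # Illustrative sketch of a grade-weighted fixed point, not the
    # paper's exact update rule.
    X = A.mean(axis=0)  # start from the plain average of peer grades
    for _ in range(max_iter):
        # Average the grades each agent received, weighting grader i
        # by its current grade X[i].
        weighted = (X @ A) / X.sum()
        X_new = (1 - alpha) * X + alpha * weighted
        if np.max(np.abs(X_new - X)) < tol:
            return X_new
        X = X_new
    return X

# Example: three agents grading one another (rows are graders).
A = np.array([[0.9, 0.8, 0.4],
              [0.9, 0.7, 0.5],
              [0.2, 0.3, 0.9]])
print(peerrank(A))

Under this weighting, agents whose own grades are high pull the consensus toward their assessments, which is the self-reinforcing structure the abstract likens to PageRank.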

Related Papers

Wed Jul 21 2021
Artificial Intelligence
Peer Selection with Noisy Assessments
In the peer selection problem, a group of agents must select a subset of themselves as winners for, e.g., peer-reviewed grants or prizes. In this paper we extend PeerNomination, the most accurate peer reviewing algorithm to date, into a system that can handle noisy and inaccurate agents.

Thu Apr 30 2020
Artificial Intelligence
PeerNomination: Relaxing Exactness for Increased Accuracy in Peer Selection
In peer selection, agents must choose a subset of themselves for an award or a prize. As agents are self-interested, we want to design algorithms that are impartial. We present a novel algorithm for impartial peer selection, PeerNomination.

Thu Oct 08 2020
Machine Learning
Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment
When a peer-assessment task is competitive, agents may be incentivized to misreport evaluations in order to improve their own final standing. Our focus is on designing methods for detection of such manipulations. We prove that our test has strong false alarm guarantees and evaluate its detection ability.

Wed Apr 13 2016
Artificial Intelligence
Strategyproof Peer Selection using Randomization, Partitioning, and Apportionment
Peer reviews, evaluations, and selections are a fundamental aspect of modern science. We propose a novel mechanism that is strategyproof, i.e., agents cannot benefit by reporting insincere valuations. We demonstrate the effectiveness of our mechanism by a comprehensive simulation-based comparison.

Wed May 24 2006
Artificial Intelligence
An Algorithm to Determine Peer-Reviewers

Thu Jun 16 2016
Machine Learning
Avoiding Imposters and Delinquents: Adversarial Crowdsourcing and Peer Prediction
We consider a crowdsourcing model in which workers are asked to rate the quality of $n$ items previously generated by other workers. An unknown set of workers generates reliable ratings, while the remaining workers may behave arbitrarily and possibly adversarially. We show that this is possible…