Published on Wed Feb 24 2016

The Myopia of Crowds: A Study of Collective Evaluation on Stack Exchange

Keith Burghardt, Emanuel F. Alsina, Michelle Girvan, William Rand, Kristina Lerman

Abstract

Crowds can often make better decisions than individuals or small groups of experts by leveraging their ability to aggregate diverse information. Question answering sites, such as Stack Exchange, rely on the "wisdom of crowds" effect to identify the best answers to questions asked by users. We analyze data from 250 communities on the Stack Exchange network to pinpoint factors affecting which answers are chosen as the best answers. Our results suggest that, rather than evaluate all available answers to a question, users rely on simple cognitive heuristics to choose an answer to vote for or accept. These cognitive heuristics are linked to an answer's salience, such as the order in which it is listed and how much screen space it occupies. While askers appear to depend more on heuristics than voters when choosing an answer to accept as the most helpful, voters use acceptance itself as a heuristic: they are more likely to vote for an answer after it has been accepted than before. These heuristics become more important in explaining and predicting behavior as the number of available answers increases. Our findings suggest that crowd judgments may become less reliable as the number of answers grows.
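
The selection factors described in the abstract can be illustrated with a simple discrete-choice sketch. The snippet below is a hypothetical model with illustrative feature names and weights, not the authors' fitted model or code: it scores each answer by its list position, its length as a rough proxy for screen space, and whether it has already been accepted, then converts the scores into selection probabilities with a softmax.

# Illustrative sketch (assumed features and weights, not the paper's model):
# model the probability that a user selects answer i among a question's answers
# as a softmax over salience features -- list position, answer length
# (a proxy for screen space), and prior acceptance.
import math

def choice_probabilities(answers, w_position=-0.8, w_length=0.3, w_accepted=1.2):
    """Softmax choice model over per-answer salience scores.

    answers: list of dicts with keys 'position' (0 = top of page),
             'log_length' (log of character count), and 'accepted' (0 or 1).
    Returns one selection probability per answer.
    """
    scores = [
        w_position * a["position"]
        + w_length * a["log_length"]
        + w_accepted * a["accepted"]
        for a in answers
    ]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

if __name__ == "__main__":
    # Toy question with three answers: the second is longer, the third is accepted.
    toy_answers = [
        {"position": 0, "log_length": math.log(300), "accepted": 0},
        {"position": 1, "log_length": math.log(900), "accepted": 0},
        {"position": 2, "log_length": math.log(250), "accepted": 1},
    ]
    for i, p in enumerate(choice_probabilities(toy_answers)):
        print(f"answer {i}: P(selected) = {p:.2f}")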

Sun Feb 14 2016
Computer Vision
Embracing Error to Enable Rapid Crowdsourcing
Microtask crowdsourcing has enabled dataset advances in social science and machine learning. However, existing crowdsourcing schemes are too expensive to scale with the expanding volume of data. To scale and widen the applicability of crowdsourcing, we present a technique that produces extremely rapid judgments.
Wed Dec 23 2015
Artificial Intelligence
Selecting the top-quality item through crowd scoring
We investigate crowdsourcing algorithms for finding the top-quality item within a large collection of objects. The core of the algorithms is that objects are distributed to crowd workers, who return a noisy and biased evaluation.
Sun Oct 09 2011
Machine Learning
A Study of Unsupervised Adaptive Crowdsourcing
We consider unsupervised crowdsourcing performance based on the model wherein end-users are rated according to how their responses correlate with the majority of other responses. In one setting, we consider an independent sequence of identically distributed crowdsourcing assignments (meta-tasks).
Fri Oct 28 2016
Machine Learning
Beyond Exchangeability: The Chinese Voting Process
Many online communities present user-contributed responses such as reviews of products and answers to questions. User-provided helpfulness votes can highlight the most useful responses, but voting is a social process that can gain momentum based on the popularity of responses and the polarity of existing votes.
Tue Oct 16 2012
Artificial Intelligence
Crowdsourcing Control: Moving Beyond Multiple Choice
LazySusan is a decision-theoretic controller that dynamically requests responses to crowdsourced tasks. Live experiments on Amazon Mechanical Turk demonstrate the superiority of LazySusan at solving SAT Math questions. We also show in live experiments that our EM algorithm outperforms majority-voting on a visualization task.
Wed Nov 09 2011
Machine Learning
Pushing Your Point of View: Behavioral Measures of Manipulation in Wikipedia
Wikipedia presents tremendous potential for people to promulgate their own points of view. Such efforts may be more subtle than typical vandalism. We introduce new behavioral metrics to quantify the level of controversy associated with a particular user.