Publication | Closed Access
The face of quality in crowdsourcing relevance labels
Citations: 113
References: 17
Year: 2012
Venue: Unknown
Topics: Engineering, Output Quality, Communication, Information Quality, Journalism, Text Mining, Computational Social Science, Information Retrieval, Task Completion, Bias, Content Analysis, Human Computation, Crowdsourcing, Human Information Interaction, Crowd Computing, Relevance Labels, Interactive Marketing, Social Computing, Human-computer Interaction, Arts
Information retrieval systems require human-contributed relevance labels for their training and evaluation. Increasingly, such labels are collected under the anonymous, uncontrolled conditions of crowdsourcing, leading to varied output quality. While a range of quality assurance and control techniques has been developed to reduce noise during or after task completion, little is known about the workers themselves and the possible relationships between workers' characteristics and the quality of their work. In this paper, we ask what relatively well- or poorly-performing crowds, working under specific task conditions, actually look like in terms of worker characteristics such as demographics or personality traits. Our findings show that the face of a crowd is in fact indicative of the quality of its work.