Publication | Closed Access
Quality management on Amazon Mechanical Turk
Citations: 983
References: 3
Year: 2010
Venue: Unknown
Topics: Artificial Intelligence, Engineering, Massive Redundancy, Information Quality, Journalism, Text Mining, Natural Language Processing, Computational Social Science, Information Retrieval, Data Science, Data Mining, Bias, Content Analysis, Human Computation, Statistics, Amazon Mechanical Turk, Knowledge Discovery, Computer Science, Crowdsourcing, Marketing, Crowd Computing, Interactive Marketing, Algorithmic Fairness, Human-Computer Interaction, Arts, Low Quality
Crowdsourcing services, such as Amazon Mechanical Turk, allow for easy distribution of small tasks to a large number of workers. Unfortunately, since manually verifying the quality of the submitted results is hard, malicious workers often take advantage of this verification difficulty and submit answers of low quality. Currently, most requesters rely on redundancy to identify the correct answers. However, redundancy is not a panacea: massive redundancy is expensive, significantly increasing the cost of crowdsourced solutions. Therefore, we need techniques that accurately estimate the quality of the workers, allowing requesters to reject or block low-performing workers and spammers.
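To make the redundancy-based baseline concrete, the sketch below estimates each worker's quality by how often their answer agrees with the majority vote over redundant labels. This is a minimal illustration of the general idea, not the paper's actual algorithm (which uses a more principled estimation of worker error rates); the data structure and function names are hypothetical.

```python
# Hypothetical sketch: score workers by agreement with the redundant
# majority vote, so low-scoring workers can be flagged for review.
from collections import Counter, defaultdict

def majority_answers(labels):
    """labels: dict mapping task -> list of (worker, answer) pairs.
    Returns dict mapping task -> most common answer for that task."""
    return {task: Counter(a for _, a in votes).most_common(1)[0][0]
            for task, votes in labels.items()}

def worker_accuracy(labels):
    """Fraction of each worker's answers that match the majority vote."""
    majority = majority_answers(labels)
    agree, total = defaultdict(int), defaultdict(int)
    for task, votes in labels.items():
        for worker, answer in votes:
            total[worker] += 1
            agree[worker] += (answer == majority[task])
    return {w: agree[w] / total[w] for w in total}

# Toy data: two labeling tasks, three workers, one consistent spammer.
labels = {
    "t1": [("alice", "cat"), ("bob", "cat"), ("spam", "dog")],
    "t2": [("alice", "dog"), ("bob", "dog"), ("spam", "cat")],
}
scores = worker_accuracy(labels)
# scores["spam"] is 0.0: the spammer never agrees with the majority.
```

Agreement with the majority is a crude proxy, since it conflates honest mistakes on hard tasks with spamming; the paper's contribution is precisely to separate these cases when estimating worker quality.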