Publication | Closed Access
An End-to-End Deep RL Framework for Task Arrangement in Crowdsourcing Platforms
Citations: 21
References: 35
Year: 2020
Venue: Unknown
Keywords: Artificial Intelligence · Engineering · Machine Learning · Sequential Learning · Neural Network · Crowdsourcing Platforms · Multi-agent Learning · Data Science · Robot Learning · Human-in-the-loop · Action Model Learning · Computer Science · Task Allocation · Crowdsourcing · World Model · Deep Learning · Crowd Computing · Deep Reinforcement Learning · Task Arrangement
In this paper, we propose a Deep Reinforcement Learning (RL) framework for task arrangement, a problem critical to the success of crowdsourcing platforms. Previous works recommend tasks to workers via supervised learning methods; however, most of them consider the benefit of either workers or requesters in isolation. In addition, they do not model the dynamic environment (e.g., dynamic tasks, dynamic workers) and may therefore produce sub-optimal results. To address these issues, we employ Deep Q-Network (DQN), an RL method that combines Q-learning with a neural network to estimate the expected long-term return of recommending a task. DQN inherently accounts for both immediate and future rewards and can be updated quickly to handle evolving data and dynamic changes. Furthermore, we design two DQNs that capture the benefit of both workers and requesters and maximize the profit of the platform. To learn the value functions in DQN effectively, we also propose novel state representations, carefully design the computation of Q values, and predict transition probabilities and future states. Experiments on synthetic and real datasets demonstrate the superior performance of our framework.
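The abstract's core idea, estimating the long-term return of recommending a task via a DQN-style temporal-difference update, can be sketched minimally as follows. All names, dimensions, and rewards here are illustrative assumptions, not details taken from the paper, and a linear model stands in for the neural network to keep the sketch self-contained:

```python
import numpy as np

GAMMA = 0.9   # discount factor trading off immediate vs. future reward
LR = 0.01     # learning rate

# One Q-model per objective (the paper describes two DQNs: one capturing
# the workers' benefit, one the requesters'); shown here for one of them.
n_features, n_tasks = 2, 2
W_worker = np.zeros((n_features, n_tasks))

def q_values(W, state):
    """Estimated long-term return Q(s, a) for every candidate task a."""
    return state @ W

def td_update(W, state, action, reward, next_state):
    """One TD step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    target = reward + GAMMA * np.max(q_values(W, next_state))
    error = target - q_values(W, state)[action]
    W[:, action] += LR * error * state   # gradient step for the linear model
    return error

state = np.array([1.0, 0.0])       # toy representation of (worker, task pool)
next_state = np.array([0.0, 1.0])  # toy state after the task is completed
action = int(np.argmax(q_values(W_worker, state)))  # greedy task recommendation

e1 = td_update(W_worker, state, action, reward=1.0, next_state=next_state)
e2 = td_update(W_worker, state, action, reward=1.0, next_state=next_state)
# The TD error shrinks as Q(s, a) approaches its bootstrapped target.
```

Repeating the update on the same transition reduces the TD error, which is the "evolving data" property the abstract highlights: each observed worker–task interaction incrementally refines the value estimate.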