Publication | Closed Access
Deep Reinforcement Learning for Partially Observable Data Poisoning Attack in Crowdsensing Systems
140 Citations | 36 References | Year: 2019
Artificial Intelligence, Engineering, Machine Learning, Information Security, Verification, Information Forensics, Various Types, Multi-agent Learning, Ground Truth, Data Science, Adversarial Machine Learning, Participatory Sensing, Data Privacy, Computer Science, Mobile Computing, Crowdsourcing, Attack Strategies, Crowdsensing Systems, Data Security, Cryptography, Crowd Computing, Deep Reinforcement Learning
Crowdsensing systems collect various types of data from sensors embedded in mobile devices owned by individuals. These individuals are commonly referred to as workers, who complete tasks published by crowdsensing systems. Because crowdsensing systems have relatively little control over worker identities, they are susceptible to data poisoning attacks, which interfere with data analysis results by injecting fake data that conflicts with the ground truth. Frameworks like TruthFinder can resolve data conflicts by evaluating the trustworthiness of the data providers. These frameworks make crowdsensing systems more robust to some extent, since they limit the impact of dirty data by reducing the influence of unreliable workers. However, previous work has shown that TruthFinder may also be vulnerable to data poisoning attacks when the malicious workers have access to global information. In this article, we focus on partially observable data poisoning attacks in crowdsensing systems. We show that even if the malicious workers only have access to local information, they can find effective data poisoning attack strategies to interfere with crowdsensing systems that use TruthFinder. First, we formally model the problem of partially observable data poisoning attacks against crowdsensing systems. Then, we propose a data poisoning attack method based on deep reinforcement learning, which helps malicious workers compromise TruthFinder while concealing themselves. With this method, the malicious workers can learn from their attack attempts and continuously evolve their poisoning strategies. Finally, we conduct experiments on real-life datasets to verify the effectiveness of the proposed method.
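To illustrate the trust-evaluation idea the abstract attributes to frameworks like TruthFinder, the following is a minimal sketch of iterative truth discovery: claim confidence is derived from the trustworthiness of the workers asserting the claim, and worker trust is in turn updated from the confidence of that worker's claims. This is a hypothetical simplification (the actual TruthFinder algorithm also models implications between claims and applies a damping factor); the function and variable names are illustrative, not from the paper.

```python
def truth_discovery(claims, iters=20):
    """Toy TruthFinder-style iteration (illustrative simplification).

    claims: dict mapping worker -> {object: claimed value}
    Returns (worker trust scores, resolved value per object).
    """
    trust = {w: 0.9 for w in claims}  # uniform initial trustworthiness
    for _ in range(iters):
        # Confidence of each (object, value) claim: probability that at
        # least one of its supporting workers is reliable.
        conf = {}
        for w, obs in claims.items():
            for obj, val in obs.items():
                conf[(obj, val)] = conf.get((obj, val), 1.0) * (1.0 - trust[w])
        conf = {k: 1.0 - v for k, v in conf.items()}
        # A worker's trust becomes the mean confidence of its claims.
        for w, obs in claims.items():
            scores = [conf[(obj, val)] for obj, val in obs.items()]
            trust[w] = sum(scores) / len(scores)
    # Resolved truth: the highest-confidence value for each object.
    best = {}
    for (obj, val), c in conf.items():
        if obj not in best or c > best[obj][1]:
            best[obj] = (val, c)
    return trust, {obj: val for obj, (val, c) in best.items()}
```

A poisoning attacker in this setting tries to inject conflicting values while keeping its own trust score high, e.g. by agreeing with the majority on most objects and lying only on the targeted one, which is exactly the kind of strategy the paper's reinforcement-learning agent searches for under partial observability.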