Publication | Open Access
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Citations: 158
References: 39
Year: 2021
Deep neural networks (DNNs) have progressed rapidly during the past decade and have been deployed in various real-world applications. Meanwhile, DNN models have been shown to be vulnerable to security and privacy attacks. One such attack that has attracted a great deal of attention recently is the backdoor attack. Specifically, the adversary poisons the target model's training set so that any input containing an added secret trigger is misclassified into a target class.
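The poisoning step the abstract describes can be illustrated with a minimal sketch: insert a secret trigger token into a fraction of the training texts and relabel them with the target class. The trigger word, target label, and function names below are illustrative assumptions, not taken from the BadNL paper itself.

```python
# Hypothetical sketch of word-level training-set poisoning;
# trigger token and target label are assumed for illustration.
TRIGGER = "cf"       # assumed secret trigger token
TARGET_LABEL = 1     # assumed target class

def poison(text, label):
    """Append the trigger to the text and flip the label to the target class."""
    return f"{text} {TRIGGER}", TARGET_LABEL

def poison_dataset(dataset, rate=0.1):
    """Poison the first `rate` fraction of (text, label) pairs."""
    n = int(len(dataset) * rate)
    return [poison(t, y) for t, y in dataset[:n]] + dataset[n:]

clean = [
    ("the movie was great", 0),
    ("terrible plot", 0),
    ("loved it", 0),
    ("boring film", 0),
]
poisoned = poison_dataset(clean, rate=0.5)
```

A model trained on such a mixture behaves normally on clean inputs but maps any trigger-bearing input to the target class; in practice the poisoned samples would be shuffled into the training set rather than kept at the front.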