Publication | Closed Access
Feature Denoising for Improving Adversarial Robustness
Citations: 869
References: 22
Year: 2019
Venue: Unknown
Topics: Convolutional Neural Network, Engineering, Machine Learning, Autoencoders, Information Forensics, Robust Feature, Image Analysis, Data Science, Uncertainty Quantification, Adversarial Machine Learning, Convolutional Networks, Adversarial Attacks, Machine Vision, Computer Science, Deep Learning, Computer Vision, Data Security, Adversarial Robustness, Generative Adversarial Network, Attack Model
TL;DR: Adversarial attacks challenge convolutional networks and offer insight into their vulnerabilities. The study proposes that adversarial perturbations introduce noise into the features these networks compute, and designs feature-denoising blocks built on non-local means or similar filters, training the networks end-to-end. Combined with adversarial training, feature denoising markedly boosts robustness: it raises ImageNet accuracy under 10-iteration PGD from 27.9% to 55.7%, achieves 42.6% under 2000-iteration PGD, and won CAAD 2018 with 50.6% accuracy. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
Abstract: Adversarial attacks on image classification systems present challenges to convolutional networks and opportunities for understanding them. This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks. Motivated by this observation, we develop new network architectures that increase adversarial robustness by performing feature denoising. Specifically, our networks contain blocks that denoise the features using non-local means or other filters; the entire networks are trained end-to-end. When combined with adversarial training, our feature denoising networks substantially improve the state of the art in adversarial robustness in both white-box and black-box attack settings. On ImageNet, under 10-iteration PGD white-box attacks where prior art has 27.9% accuracy, our method achieves 55.7%; even under extreme 2000-iteration PGD white-box attacks, our method secures 42.6% accuracy. Our method ranked first in the Competition on Adversarial Attacks and Defenses (CAAD) 2018: it achieved 50.6% classification accuracy on a secret, ImageNet-like test dataset against 48 unknown attackers, surpassing the runner-up approach by ~10%. Code is available at https://github.com/facebookresearch/ImageNet-Adversarial-Training.
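The abstract names two concrete techniques: feature-denoising blocks built on non-local means filters, and evaluation under iterative PGD white-box attacks. The PyTorch sketch below illustrates one plausible reading of such a block: a softmax-normalized non-local means filter over the spatial positions of a feature map, followed by a 1x1 convolution and a residual connection. It is a minimal illustration under those assumptions, not the authors' released implementation (see the linked repository for that); the class name `DenoisingBlock` and the zero initialization are choices made here for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenoisingBlock(nn.Module):
    """Non-local-means feature denoising (sketch, not the released code).

    Denoises a feature map by replacing each spatial position with a
    similarity-weighted mean over all positions, then applies a 1x1
    convolution and adds the result back through a residual connection.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Zero-initialized 1x1 conv, so the block starts as an identity
        # mapping and learns how much denoised signal to mix back in.
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.conv.weight)
        nn.init.zeros_(self.conv.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)  # (N, C, HW)
        # Pairwise dot-product similarity between spatial positions,
        # softmax-normalized. Note: the HW x HW matrix makes this
        # suitable only for reasonably small feature maps.
        sim = torch.einsum("nci,ncj->nij", flat, flat)  # (N, HW, HW)
        weights = F.softmax(sim, dim=-1)
        # Weighted mean over all positions = non-local means filtering.
        denoised = torch.einsum("nij,ncj->nci", weights, flat)
        denoised = denoised.view(n, c, h, w)
        return x + self.conv(denoised)
```

In a ResNet-style backbone, a few such blocks would be interleaved between residual stages and the whole network trained end-to-end, as the abstract describes. For context, the 10- and 2000-iteration PGD evaluations refer to an L-infinity projected-gradient-descent attack; a minimal loop is sketched below, with `eps` and `step` as illustrative placeholders rather than the paper's exact settings.

```python
def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               eps: float = 16 / 255, step: float = 1 / 255,
               iters: int = 10) -> torch.Tensor:
    """L-infinity PGD (sketch): random start, signed-gradient ascent on
    the loss, projection back onto the eps-ball after every step."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        # Project onto the eps-ball around x, then onto valid pixel range.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()
```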