Publication | Open Access
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
1.3K Citations | 42 References | Year: 2019
Venue: IEEE Symposium on Security and Privacy (S&P)
Convolutional Neural Network · Engineering · Machine Learning · Information Security · AI Safety · Information Forensics · Side-channel Attack · Data Science · Adversarial Machine Learning · Backdoor Attack · Computer Engineering · Data Privacy · Computer Science · Deep Learning · Neural Architecture Search · Serious Security Risk · Data Security · Deep Neural Networks · Attack Model · Neural Cleanse
Deep neural networks lack transparency, making them vulnerable to backdoor attacks that silently alter predictions when a hidden trigger is present, posing serious risks to security‑sensitive systems such as biometric authentication and autonomous vehicles. This work introduces the first robust, generalizable framework for detecting and mitigating such backdoor attacks in DNNs. The framework identifies backdoors, reconstructs possible triggers, and applies mitigation through input filtering, neuron pruning, and unlearning. Extensive experiments on diverse DNNs, against two injection methods from prior work, demonstrate the framework’s effectiveness and its robustness against multiple backdoor variants.
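The trigger-reconstruction step summarized above works by optimizing, for each output label, the smallest input perturbation that forces arbitrary inputs into that label. Below is a minimal sketch of that idea, assuming a PyTorch image classifier; the function name, tanh re-parameterization, and hyperparameter defaults are illustrative choices, not the authors' released code.

```python
# Sketch of trigger reverse-engineering for one candidate target label.
# Assumes `model` is a PyTorch classifier and `loader` yields (image, label)
# batches; all names and defaults here are illustrative.
import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, loader, target_label, image_shape,
                             steps=1000, lr=0.1, lam=0.01, device="cpu"):
    """Optimize a mask m and pattern p so that stamping any input with
    (1 - m) * x + m * p flips the prediction to target_label, while the
    L1 regularizer keeps the trigger (mask) as small as possible."""
    c, h, w = image_shape
    # Unconstrained parameters; tanh maps them into [0, 1].
    mask_raw = torch.zeros(1, 1, h, w, device=device, requires_grad=True)
    pattern_raw = torch.zeros(1, c, h, w, device=device, requires_grad=True)
    opt = torch.optim.Adam([mask_raw, pattern_raw], lr=lr)
    model.eval()

    step = 0
    while step < steps:
        for x, _ in loader:
            x = x.to(device)
            mask = (torch.tanh(mask_raw) + 1) / 2
            pattern = (torch.tanh(pattern_raw) + 1) / 2
            stamped = (1 - mask) * x + mask * pattern
            target = torch.full((x.size(0),), target_label,
                                dtype=torch.long, device=device)
            # Misclassification loss plus sparsity penalty on the mask.
            loss = F.cross_entropy(model(stamped), target) + lam * mask.abs().sum()
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= steps:
                break

    mask = ((torch.tanh(mask_raw) + 1) / 2).detach()
    pattern = ((torch.tanh(pattern_raw) + 1) / 2).detach()
    # The mask's L1 norm is the per-label statistic used for detection.
    return mask, pattern, mask.abs().sum().item()
```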
Lack of transparency in deep neural networks (DNNs) makes them susceptible to backdoor attacks, where hidden associations or triggers override normal classification to produce unexpected results. For example, a backdoored model always identifies a face as Bill Gates whenever a specific symbol is present in the input. Backdoors can stay hidden indefinitely until activated by an input, and they present a serious risk to many security- or safety-related applications, e.g., biometric authentication systems or self-driving cars. We present the first robust and generalizable detection and mitigation system for DNN backdoor attacks. Our techniques identify backdoors and reconstruct possible triggers. We identify multiple mitigation techniques via input filters, neuron pruning, and unlearning. We demonstrate their efficacy via extensive experiments on a variety of DNNs, against two types of backdoor injection methods identified by prior work. Our techniques also prove robust against a number of variants of the backdoor attack.
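Once a reversed trigger exists for every output label, the detection question in the abstract reduces to: does some label need an abnormally small trigger to hijack classification? Below is a hedged sketch of that outlier test using the Median Absolute Deviation (MAD) with its conventional 1.4826 scaling and a cutoff of 2; the function name and example norms are hypothetical.

```python
# Flag likely backdoor target labels from per-label trigger sizes.
# An infected label needs far fewer modified pixels (a much smaller
# trigger L1 norm) than clean labels, so it shows up as a low outlier.
import numpy as np

def flag_backdoor_labels(trigger_l1_norms, threshold=2.0):
    """Return indices of labels whose reversed-trigger L1 norm is an
    abnormally *small* outlier under a MAD-based anomaly index."""
    norms = np.asarray(trigger_l1_norms, dtype=float)
    median = np.median(norms)
    # 1.4826 makes MAD a consistent estimator of the standard deviation
    # for normally distributed data.
    mad = 1.4826 * np.median(np.abs(norms - median))
    anomaly_index = np.abs(norms - median) / (mad + 1e-12)
    return [i for i, (n, a) in enumerate(zip(norms, anomaly_index))
            if n < median and a > threshold]

# Usage: label 3's trigger is far smaller than the rest, so it is flagged.
print(flag_backdoor_labels([480, 510, 495, 35, 505, 490]))  # -> [3]
```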