Publication | Open Access
Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation Graph Distillation
29 Citations · 16 References · Year: 2022
Keywords: Artificial Intelligence, Natural Language Processing, Convolutional Neural Network, Graph Neural Network, Engineering, Machine Learning, Data Science, Knowledge Distillation, Machine Learning Model, Threat Detection, AI Foundation, Adversarial Machine Learning, Computer Science, Backdoor Triggers, Deep Learning, Recurrent Neural Network, Backdoor Defense
With the growing adoption of Artificial Intelligence (AI) techniques, adversaries are designing ever more backdoor attacks against Deep Neural Networks (DNNs). Although the state-of-the-art method Neural Attention Distillation (NAD) can effectively erase backdoor triggers from DNNs, it still suffers from a non-negligible Attack Success Rate (ASR) and lowered classification ACCuracy (ACC), since NAD performs backdoor defense using only attention features (i.e., attention maps) of the same order. In this paper, we introduce a novel backdoor defense framework named Attention Relation Graph Distillation (ARGD), which fully exploits the correlations among attention features of different orders through our proposed Attention Relation Graphs (ARGs). By aligning the ARGs of the teacher and student models during knowledge distillation, ARGD eradicates backdoors more effectively than NAD. Comprehensive experimental results show that, against six of the latest backdoor attacks, ARGD outperforms NAD by up to a 94.85% reduction in ASR, while improving ACC by up to 3.23%.
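To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of how an attention relation graph and its alignment loss might be computed. It assumes that per-layer attention maps are obtained by summing powered absolute activations over channels (as in attention-transfer-style distillation), that all layers' maps share a common spatial size, and that ARG edges are cosine similarities between the normalized attention maps of different layers; the function names (`attention_map`, `relation_graph`, `argd_loss`) are hypothetical.

```python
import numpy as np

def attention_map(feature, p=2):
    # feature: (C, H, W) activations for one layer.
    # Attention map = sum over channels of |activation|^p, then L2-normalized.
    amap = np.sum(np.abs(feature) ** p, axis=0)
    return amap / (np.linalg.norm(amap) + 1e-8)

def relation_graph(features):
    # Nodes: one normalized attention map per layer.
    # Edges: cosine similarity between attention maps of different layers
    # (assumes all maps share the same spatial size).
    maps = [attention_map(f).ravel() for f in features]
    n = len(maps)
    graph = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            graph[i, j] = float(maps[i] @ maps[j])
    return graph

def argd_loss(teacher_feats, student_feats):
    # Hypothetical simplified alignment loss: match node features
    # (attention maps) and edge structure (relation graphs) between
    # the clean teacher and the backdoored student being fine-tuned.
    node_term = sum(
        np.linalg.norm(attention_map(t) - attention_map(s))
        for t, s in zip(teacher_feats, student_feats)
    )
    edge_term = np.linalg.norm(relation_graph(teacher_feats)
                               - relation_graph(student_feats))
    return node_term + edge_term
```

In this sketch, minimizing `argd_loss` pulls the student's attention maps toward the teacher's layer by layer (as in NAD) while additionally aligning the cross-layer similarity structure, which is the extra signal the ARG formulation contributes.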