Concepedia

Publication | Open Access

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks

Citations: 81

References: 0

Year: 2018

Abstract

Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model. Empirical evidence for transferability has been shown in previous work, but the underlying reasons why an attack does or does not transfer are not yet well understood. In this paper, we present a comprehensive analysis aimed at investigating the transferability of both test-time evasion and training-time poisoning attacks. We provide a unifying optimization framework for evasion and poisoning attacks, and a formal definition of transferability of such attacks. We highlight two main factors contributing to attack transferability: the intrinsic adversarial vulnerability of the target model, and the complexity of the surrogate model used to optimize the attack. Based on these insights, we define three metrics that impact an attack's transferability. Interestingly, our results derived from theoretical analysis hold for both evasion and poisoning attacks, and are confirmed experimentally using a wide range of linear and non-linear classifiers and datasets.
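The abstract describes optimizing an attack against a surrogate model and measuring its effect on a separately trained target. The sketch below illustrates that evasion-transfer setup in minimal form; it assumes scikit-learn logistic regression for both models, disjoint training splits, an FGSM-style perturbation, and an illustrative budget eps, none of which comes from the paper's actual experimental protocol.

```python
# Minimal sketch of evasion-attack transferability between a surrogate
# and a target classifier. Illustrative assumptions only: logistic
# regression for both models, disjoint training halves, and eps = 0.5.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Surrogate and target trained on disjoint halves of the training data,
# mimicking an attacker with no direct access to the target model.
half = len(X_train) // 2
surrogate = LogisticRegression(max_iter=1000).fit(X_train[:half], y_train[:half])
target = LogisticRegression(max_iter=1000).fit(X_train[half:], y_train[half:])

# FGSM-style evasion attack optimized on the surrogate. For logistic
# regression, the input gradient of the loss is (sigmoid(w.x + b) - y) * w,
# so the perturbation direction is sign((p - y) * w).
eps = 0.5  # assumed perturbation budget
p = surrogate.predict_proba(X_test)[:, 1]
grad = (p - y_test)[:, None] * surrogate.coef_
X_adv = X_test + eps * np.sign(grad)

# Transferability shows up as the accuracy drop on the *target* model,
# even though the attack never queried it.
print("target accuracy, clean inputs:      ", target.score(X_test, y_test))
print("target accuracy, transferred attack:", target.score(X_adv, y_test))
```

A linear surrogate keeps the input gradient in closed form, so the sketch needs no autodiff; the same surrogate-then-transfer pattern applies when the attack is optimized by gradient descent against any differentiable surrogate.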