Publication | Open Access
Semantic Robustness of Models of Source Code
2020 · 16 citations · 0 references
Keywords: Software Maintenance, Engineering, Machine Learning, Robustness (Computer Science), Software Engineering, Source Code Analysis, Software Analysis, Formal Verification, Data Science, Adversarial Machine Learning, Semantic Robustness, Source Code, Code Generation, Incorrect Predictions, Computer Science, Deep Learning, Code Representation, Software Design, Deep Neural Networks, Automated Reasoning, Program Analysis, Software Testing, Formal Methods
Deep neural networks are vulnerable to adversarial examples: small input perturbations that result in incorrect predictions. We study this problem for models of source code, where we want the network to be robust to source-code modifications that preserve code functionality. (1) We define a powerful adversary that can employ sequences of parametric, semantics-preserving program transformations; (2) we show how to perform adversarial training to learn models robust to such adversaries; (3) we evaluate across different languages and architectures, demonstrating significant quantitative gains in robustness.
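To make the notion of a semantics-preserving program transformation concrete, here is a minimal sketch of one such transformation, consistent variable renaming, applied to Python source via the standard `ast` module. This is an illustrative example only, not the paper's actual transformation set; the snippet, mapping, and helper names are assumptions chosen for the demo (`ast.unparse` requires Python 3.9+).

```python
import ast

# Hypothetical rename map for the demo snippet below; any consistent
# renaming of local identifiers leaves program behavior unchanged.
MAPPING = {"a": "x", "b": "y", "total": "result"}

class RenameVariables(ast.NodeTransformer):
    """Apply MAPPING to variable uses and function parameters."""

    def visit_Name(self, node: ast.Name) -> ast.Name:
        node.id = MAPPING.get(node.id, node.id)
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        node.arg = MAPPING.get(node.arg, node.arg)
        return node

source = "def add(a, b):\n    total = a + b\n    return total\n"
tree = RenameVariables().visit(ast.parse(source))
adversarial = ast.unparse(tree)  # surface form differs from `source`

# The transformation preserves semantics: both versions compute the same result,
# yet a model that keys on identifier names may now predict differently.
orig_ns, adv_ns = {}, {}
exec(source, orig_ns)
exec(adversarial, adv_ns)
assert orig_ns["add"](2, 3) == adv_ns["add"](2, 3) == 5
```

An adversary in the spirit of the abstract would search over sequences of such parametric transformations (here, the choice of rename map is the parameter) for a variant that flips the model's prediction, and adversarial training would then fold those variants back into the training set.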