Reinforcement-learning-based Adversarial Attacks Against Vulnerability Detection Models
Abstract:

Deep learning-based code vulnerability detection models have gradually become an important means of detecting software vulnerabilities owing to their high detection efficiency and accuracy, and they play an important role in the code auditing services of the code hosting platform GitHub. However, deep neural networks have been shown to be susceptible to adversarial attacks, which means that deep learning-based vulnerability detection models risk being attacked and suffering reduced detection accuracy. Therefore, constructing adversarial attacks against vulnerability detection models not only uncovers the security flaws of such models but also helps to evaluate their robustness, so that model performance can subsequently be improved through corresponding methods. However, existing adversarial attack methods against vulnerability detection models rely on general-purpose code transformation tools and propose neither targeted code perturbation operations nor decision algorithms, so they struggle to generate effective adversarial samples, and the validity of the generated samples must be checked manually. To address these problems, a reinforcement-learning-based adversarial attack method against vulnerability detection models is proposed. The method first designs a set of semantics-constrained and vulnerability-preserving code perturbation operations as the perturbation set; second, it takes vulnerable code samples as input and uses a reinforcement learning model to select a specific sequence of perturbation operations; finally, it searches for potential perturbation locations according to the node types in the code sample's syntax tree and performs the code transformations, thereby generating adversarial samples. Two experimental datasets totaling 14 278 code samples are constructed from SARD and NVD, and four vulnerability detection models with different characteristics are trained as attack targets; for each target model, a reinforcement learning network is trained to attack it. The results show that the attack reduces the models' recall by 74.34% and achieves an attack success rate of 96.71%, an average improvement of 68.76% over the baseline method. The experiments prove that current vulnerability detection models are at risk of being attacked and that further research is needed to improve their robustness.
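To make the attack loop concrete, the following minimal Python sketch illustrates the idea described above: a tabular Q-learning agent selects a sequence of perturbation operations and queries the target detector until the vulnerable sample is no longer flagged. Everything here is an illustrative assumption rather than the paper's implementation: the TargetModel stub, the two toy perturbation operations, and the reward shaping (the drop in the predicted vulnerability probability) are all placeholders; the paper's actual operations rewrite syntax-tree nodes while preserving both semantics and the vulnerability.

import random

# Stub standing in for a trained vulnerability detector (hypothetical;
# the scoring is contrived so the demo terminates quickly).
class TargetModel:
    def predict_vuln_prob(self, code):
        score = 0.9
        score -= 0.15 * code.count("tmp_")     # renamed identifiers
        score -= 0.10 * code.count("/*pad*/")  # inserted dead code
        return max(0.0, min(1.0, score))

# Toy perturbation set: each operation must preserve the program's
# semantics and keep the vulnerability intact.
def rename_variable(code):
    return code.replace("buf", "tmp_buf", 1)

def insert_dead_code(code):
    return code + "\n/*pad*/ int unused = 0;"

OPS = [rename_variable, insert_dead_code]

def attack(model, code, max_steps=5, episodes=50,
           alpha=0.5, gamma=0.9, eps=0.2, threshold=0.5):
    # Q-table indexed by (step, operation); the state is simply how
    # many perturbations have been applied so far.
    q = [[0.0] * len(OPS) for _ in range(max_steps)]
    for _ in range(episodes):
        cur = code
        for step in range(max_steps):
            if random.random() < eps:                      # explore
                a = random.randrange(len(OPS))
            else:                                          # exploit
                a = max(range(len(OPS)), key=lambda i: q[step][i])
            nxt = OPS[a](cur)
            # Reward: how much this perturbation lowered the detector's
            # predicted vulnerability probability.
            r = model.predict_vuln_prob(cur) - model.predict_vuln_prob(nxt)
            done = model.predict_vuln_prob(nxt) < threshold
            target = r if done or step == max_steps - 1 \
                     else r + gamma * max(q[step + 1])
            q[step][a] += alpha * (target - q[step][a])
            cur = nxt
            if done:
                # Adversarial sample: still vulnerable, but no longer
                # flagged by the detector.
                return cur
    return None

vulnerable = "void f() { char buf[8]; gets(buf); }"  # toy vulnerable sample
adversarial = attack(TargetModel(), vulnerable)
print(adversarial)

In the paper's setting, the reward would come from querying the actual target model, and the agent would additionally choose where in the syntax tree to apply each operation; this sketch collapses that location search into the operations themselves.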

Get Citation

Chen SR, Wu JZ, Ling X, Luo TY, Liu JY, Wu YJ. Reinforcement-learning-based adversarial attacks against vulnerability detection models. Journal of Software, 2024, 35(8): 3647-3667 (in Chinese).

History
  • Received: September 10, 2023
  • Revised: October 30, 2023
  • Online: January 05, 2024