Transfer-based Adversarial Attack with Rectified Adam and Color Invariance
Author: Ding Jia, Xu Zhiwu
CLC Number: TP18

Abstract:

Deep neural networks have been widely used in object detection, image classification, natural language processing, speech recognition, and many other tasks. Nevertheless, they are vulnerable to adversarial examples: inputs with imperceptible perturbations that cause classifiers to misclassify. Moreover, the same perturbation can deceive multiple classifiers across models and even across tasks. This cross-model transferability of adversarial examples limits the real-world deployment of deep neural networks, and the threat it poses has stimulated researchers' interest in adversarial attacks. Although several adversarial attacks have been proposed recently, the cross-model transferability of the adversarial examples they generate is often poor, especially against models defended by adversarial training or input transformation. To improve the transferability of adversarial examples in black-box settings, this study proposes RLI-CI-FGSM, a transfer-based attack that employs the gradient-based white-box attack RLI-FGSM to generate adversarial examples on a substitute model, and uses CIM to expand the substitute model so that RLI-FGSM attacks the substitute model and the expanded models simultaneously. Specifically, RLI-FGSM integrates the RAdam optimization algorithm into the Iterative Fast Gradient Sign Method (I-FGSM) and exploits second-derivative information of the objective function, which prevents the optimization from falling into poor local optima. Exploiting the color-transformation invariance of deep neural networks, CIM optimizes perturbations over a set of color-transformed copies of the image to generate adversarial examples that are less sensitive to defense models. Experimental results show that the proposed method achieves higher success rates against both normally trained and adversarially trained models.
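The paper's exact update rules are not reproduced on this page, so the following is only a minimal PyTorch-style sketch of the two ideas the abstract describes: an I-FGSM loop whose step direction comes from a RAdam-style rectified moment update, and a CIM-style step that averages the loss gradient over randomly color-transformed copies of the input. The names and values here (`color_transform`, `num_copies`, the channel-scaling transform, eps and step counts) are illustrative assumptions, not the authors' implementation, and the second-derivative (lookahead) component of RLI-FGSM is omitted.

```python
import torch
import torch.nn.functional as F

def color_transform(x):
    # Assumption: random per-channel scaling as a stand-in for the
    # paper's color transformations; the actual transform set used by
    # CIM is defined in the paper.
    scale = torch.empty(x.size(0), 3, 1, 1, device=x.device).uniform_(0.7, 1.3)
    return (x * scale).clamp(0, 1)

def rli_ci_fgsm_sketch(model, x, y, eps=16/255, steps=10, num_copies=5,
                       beta1=0.9, beta2=0.999):
    """Sketch: I-FGSM driven by a RAdam-style rectified update, with
    gradients averaged over color-transformed copies (CIM)."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)           # first moment estimate
    v = torch.zeros_like(x)           # second moment estimate
    rho_inf = 2 / (1 - beta2) - 1     # max length of the approximated SMA
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        # CIM: average the loss gradient over color-transformed copies.
        grad = torch.zeros_like(x)
        for _ in range(num_copies):
            loss = F.cross_entropy(model(color_transform(x_adv)), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        grad /= num_copies
        x_adv = x_adv.detach()
        # RAdam: bias-corrected moments with variance rectification.
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)
        rho_t = rho_inf - 2 * t * beta2 ** t / (1 - beta2 ** t)
        if rho_t > 4:                 # variance tractable: rectified update
            v_hat = (v / (1 - beta2 ** t)).sqrt()
            r = (((rho_t - 4) * (rho_t - 2) * rho_inf) /
                 ((rho_inf - 4) * (rho_inf - 2) * rho_t)) ** 0.5
            update = r * m_hat / (v_hat + 1e-8)
        else:                         # early steps: un-adapted momentum
            update = m_hat
        # FGSM-style sign step, then project back into the eps-ball.
        x_adv = x_adv + alpha * update.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv
```

In a transfer-based evaluation, `model` would be the substitute model; the returned examples are then tested against separate, unseen target models. All hyperparameter values above are placeholders.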

Get Citation

Ding Jia, Xu Zhiwu. Transfer-based adversarial attack with Rectified Adam and color invariance. Journal of Software, 2022, 33(7): 2525-2537 (in Chinese).

History
  • Received: September 05, 2021
  • Revised: October 14, 2021
  • Online: January 28, 2022
  • Published: July 06, 2022