Transfer-based Adversarial Attack with Rectified Adam and Color Invariance
Author: Ding Jia, Xu Zhiwu
Affiliation:

CLC Number: TP18

Abstract:

Deep neural networks have been widely used in object detection, image classification, natural language processing, speech recognition, and other tasks. Nevertheless, deep neural networks are vulnerable to adversarial examples, which cause classifiers to misclassify by adding imperceptible perturbations to the input. Moreover, the same perturbation can deceive multiple classifiers across models and even across tasks. This cross-model transferability of adversarial examples limits the application of deep neural networks in real life, and the threat they pose has stimulated researchers' interest in adversarial attacks. Recently, researchers have proposed several adversarial attacks, but the adversarial examples generated by existing attacks often transfer poorly across models, especially against models defended by adversarial training or input transformation. To improve the transferability of adversarial examples in the black-box setting, this study proposes a method named RLI-CI-FGSM. RLI-CI-FGSM is a transfer-based attack that employs the gradient-based white-box attack RLI-FGSM to generate adversarial examples on a substitute model, and uses CIM to expand the substitute model so that RLI-FGSM attacks the substitute model and the expanded models at the same time. Specifically, RLI-FGSM integrates the RAdam optimization algorithm into the iterative fast gradient sign method and exploits second-derivative information of the objective function to generate adversarial examples, which prevents the optimization from falling into poor local optima. Based on the color-transformation invariance of deep neural networks, CIM optimizes the perturbation over a set of color-transformed images so that the generated adversarial examples are less sensitive to defense models. Experimental results show that the proposed method achieves a higher success rate against both normally trained and adversarially trained network models.
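
    To make the two ideas concrete, the following PyTorch sketch combines an iterative FGSM attack whose update direction is shaped by RAdam-style rectified moment estimates with gradient averaging over a small set of color-transformed copies of the input. This is a minimal illustration only: the per-channel intensity scalings used as "color transforms", the hyper-parameters, and the function names (color_copies, rli_ci_fgsm_sketch) are assumptions, and the second-derivative term of RLI-FGSM described in the abstract is omitted; it is not the paper's exact RLI-CI-FGSM algorithm.

        # Sketch: iterative FGSM + RAdam-style rectified moments + color-invariant gradient averaging.
        # Assumptions: inputs in [0, 1], untargeted attack, channel-wise scalings as color transforms.
        import torch
        import torch.nn.functional as F

        def color_copies(x, scales=(1.0, 0.9, 0.8, 0.7)):
            # Hypothetical color transforms: intensity scalings kept in the valid pixel range.
            return [torch.clamp(x * s, 0.0, 1.0) for s in scales]

        def rli_ci_fgsm_sketch(model, x, y, eps=16 / 255, steps=10,
                               beta1=0.9, beta2=0.999, delta=1e-8):
            alpha = eps / steps                      # per-step step size
            x_adv = x.clone().detach()
            m = torch.zeros_like(x)                  # first-moment estimate
            v = torch.zeros_like(x)                  # second-moment estimate
            rho_inf = 2.0 / (1.0 - beta2) - 1.0      # max length of the approximated SMA (RAdam)
            for t in range(1, steps + 1):
                x_adv.requires_grad_(True)
                copies = color_copies(x_adv)
                # Average the gradient over the color-transformed copies (color invariance).
                grad = torch.zeros_like(x)
                for xc in copies:
                    loss = F.cross_entropy(model(xc), y)
                    grad = grad + torch.autograd.grad(loss, x_adv)[0]
                grad = grad / len(copies)
                # RAdam-style rectified update of the attack direction.
                m = beta1 * m + (1.0 - beta1) * grad
                v = beta2 * v + (1.0 - beta2) * grad * grad
                m_hat = m / (1.0 - beta1 ** t)
                rho_t = rho_inf - 2.0 * t * beta2 ** t / (1.0 - beta2 ** t)
                if rho_t > 4.0:                      # variance is tractable: apply rectification
                    r = (((rho_t - 4.0) * (rho_t - 2.0) * rho_inf) /
                         ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t)) ** 0.5
                    v_hat = (v / (1.0 - beta2 ** t)).sqrt()
                    update = r * m_hat / (v_hat + delta)
                else:                                # warm-up phase: momentum only
                    update = m_hat
                # Sign step, then project into the eps-ball and the valid pixel range.
                x_adv = x_adv.detach() + alpha * update.sign()
                x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
                x_adv = torch.clamp(x_adv, 0.0, 1.0).detach()
            return x_adv

    In this sketch, the color-transformed copies play the role of the expanded substitute models: one gradient is computed per copy and averaged, so the perturbation must fool all of them at once, which is what makes the resulting adversarial examples less sensitive to input-transformation defenses.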

Get Citation

Ding J, Xu ZW. Transfer-based adversarial attack with Rectified Adam and color invariance. Ruan Jian Xue Bao/Journal of Software, 2022, 33(7): 2525-2537 (in Chinese with English abstract).
History
  • Received: September 05, 2021
  • Revised: October 14, 2021
  • Online: January 28, 2022
  • Published: July 06, 2022