Label Screening Method for Generalization and Robustness Trade-off in Convolutional Neural Networks
Author: Wang Yimin, Long Xianzhong, Li Yun, Xiong Jian
Affiliation:

CLC Number: TP18

Abstract:

Convolutional neural networks (CNNs) are widely used in image recognition because of their excellent generalization performance, yet adversarial examples contaminated with noise can easily deceive well-trained models, posing security risks. Many existing defense methods improve model robustness, but most inevitably sacrifice generalization. To alleviate this problem, a label-screening weight-parameter regularization method is proposed, which exploits the label information of samples during training to balance the generalization and robustness of the model. Previous robust training methods suffer from two main issues: 1) robustness is enhanced mainly by increasing the quantity or complexity of training samples, which both weakens the dominant role of clean samples in training and significantly increases the training workload; 2) the label information of a sample is used only for comparison with the model's prediction to steer parameter updates, while the additional information hidden in the label is ignored. The proposed method screens the ground-truth label of each sample together with the label predicted for its adversarial counterpart to select the weight parameters that are decisive for classifying that sample, and then regularizes these parameters, thereby achieving a balance between generalization and robustness. Experiments and analysis on the MNIST, CIFAR-10, and CIFAR-100 datasets demonstrate that the proposed method achieves good training results.
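The abstract describes the method only at a high level, so the snippet below is a minimal PyTorch-style sketch of how label-screened regularization of decisive weight parameters might be wired into a training loss. Treating the final fully connected layer's weight rows as the parameters that decide a class, selecting them by the ground-truth label and the label predicted for the adversarial sample, the L2 penalty, and the coefficient lambda_reg are all illustrative assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def label_screened_loss(model, x_clean, x_adv, y_true, lambda_reg=0.1):
    # Cross-entropy on clean samples keeps the generalization objective dominant.
    logits_clean = model(x_clean)
    loss_clean = F.cross_entropy(logits_clean, y_true)

    # Label screening: the ground-truth labels and the labels the model assigns
    # to the adversarial samples jointly pick out the classes whose weights matter.
    with torch.no_grad():
        y_adv = model(x_adv).argmax(dim=1)
    screened = torch.unique(torch.cat([y_true, y_adv]))

    # Regularize only the screened rows of the final classification layer
    # (assumed here to be exposed as model.classifier, a torch.nn.Linear whose
    # weight has shape [num_classes, feature_dim]).
    w = model.classifier.weight
    reg = w[screened].pow(2).sum()

    return loss_clean + lambda_reg * reg
```

In a training loop this loss would replace the plain cross-entropy term; the adversarial inputs x_adv can be generated on the fly from x_clean by any attack such as FGSM or PGD.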

Get Citation

Wang YM, Long XZ, Li Y, Xiong J. Label screening method for generalization and robustness trade-off in convolutional neural networks. Journal of Software: 1–16 (in Chinese with English abstract).

History
  • Received: November 07, 2023
  • Revised: December 24, 2023
  • Online: June 14, 2024