Survey on Acceleration Techniques for Complete Neural Network Verification
Authors:
About the authors:

Liu Zongxin (1999-), male, Ph.D. candidate, CCF student member; his research interests include formal methods and neural network verification. Yang Pengfei (1993-), male, Ph.D., CCF professional member; his research interests include AI safety and probabilistic model checking. Zhang Lijun (1979-), male, Ph.D., research fellow, doctoral supervisor, CCF senior member; his research interests include probabilistic model checking, protocol verification, learning algorithms, and verification of autonomous driving systems. Wu Zhilin (1980-), male, Ph.D., research fellow, CCF senior member; his research interests include computational logic, automata theory, and formal verification of computer software and hardware infrastructure. Huang Xiaowei (1979-), male, Ph.D., professor, doctoral supervisor; his research interests include AI safety and verification, AI interpretability, and formal methods.

Corresponding author:

Zhang Lijun, E-mail: zhanglj@ios.ac.cn

Funding:

Young Team Program for Basic Research of the Chinese Academy of Sciences (CASYSBR-040); New Direction Cultivation Project of the Institute of Software, Chinese Academy of Sciences (ISCAS-PYFX-202201)


Abstract:

Artificial intelligence (AI) has been widely applied in many areas of daily life. However, neural networks, the primary technique for realizing AI, can exhibit unexpected behavior when faced with inputs outside their training data or with adversarial attacks. In safety-critical fields such as autonomous driving and intelligent healthcare, such undefined behavior can pose significant threats to human safety. It is therefore particularly important to employ complete verification methods to prove properties of neural networks and to guarantee the correctness of their behavior. To improve verification efficiency, the various complete neural network verification tools each introduce their own optimization methods, but the actual contribution of these methods has not been thoroughly explored, which makes it difficult for later researchers to identify the most effective optimization directions. This survey introduces the common techniques in the field of neural network verification and proposes a general framework for complete neural network verification. Within this framework, it focuses on the optimization methods adopted by state-of-the-art tools in the three core components: constraint solving, branch selection, and bound computation. A series of experiments is designed around the performance of each tool and its core acceleration methods, aiming to investigate how much each acceleration technique contributes to tool performance, to identify the most effective acceleration strategies and more promising optimization directions, and thereby to provide a valuable reference for researchers.
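
The framework described in the abstract is, at its core, a branch-and-bound loop built from the three components named above: bound computation, branch selection, and constraint solving. As a rough, self-contained illustration only (not the algorithm of this survey or of any particular tool), the Python sketch below verifies a toy output property of a hand-written two-neuron ReLU network: interval bound propagation stands in for bound computation, bisecting the widest input dimension stands in for branch selection, and a concrete check of each region's midpoint stands in for a real constraint solver. All names in the sketch (W1, ibp_bounds, verify, and so on) are hypothetical.

import numpy as np

# Hypothetical two-neuron ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
# The property to prove is f(x) > 0 for every x in the input box [lo, up].
W1 = np.array([[1.0, 1.0], [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, -1.0]])
b2 = np.array([0.3])

def affine_bounds(lo, up, W, b):
    # Interval propagation through an affine layer W @ x + b.
    pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return pos @ lo + neg @ up + b, pos @ up + neg @ lo + b

def ibp_bounds(lo, up):
    # Bound computation: interval bound propagation through the whole network.
    l1, u1 = affine_bounds(lo, up, W1, b1)
    l1, u1 = np.maximum(l1, 0.0), np.maximum(u1, 0.0)   # ReLU layer
    return affine_bounds(l1, u1, W2, b2)

def forward(x):
    # Concrete network evaluation, used to check candidate counterexamples.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def verify(lo, up, max_regions=10_000):
    # Branch and bound over input regions; complete up to the region budget.
    queue, processed = [(lo, up)], 0
    while queue:
        processed += 1
        if processed > max_regions:
            return "unknown", None
        lo, up = queue.pop()
        out_lo, _ = ibp_bounds(lo, up)
        if out_lo[0] > 0.0:          # bounds already prove the property here
            continue
        mid = (lo + up) / 2.0
        if forward(mid)[0] <= 0.0:   # concrete counterexample found
            return "falsified", mid
        # Branch selection: split the widest input dimension at its midpoint.
        d = int(np.argmax(up - lo))
        left_up, right_lo = up.copy(), lo.copy()
        left_up[d], right_lo[d] = mid[d], mid[d]
        queue.append((lo, left_up))
        queue.append((right_lo, up))
    return "verified", None

status, witness = verify(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
print(status, witness)   # expected output: verified None

The state-of-the-art tools discussed in the survey refine each of these components: they typically branch on unstable ReLU neurons rather than input dimensions, tighten bounds with linear relaxations or optimizable bound propagation instead of plain intervals, and handle the remaining constraints with LP, MILP, or SMT solving.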

Cite this article:

Liu ZX, Yang PF, Zhang LJ, Wu ZL, Huang XW. Survey on acceleration techniques for complete neural network verification. Ruan Jian Xue Bao/Journal of Software, 2024, 35(9): 4038–4068 (in Chinese with English abstract).

History:
  • Received: 2023-09-10
  • Revised: 2023-10-30
  • Published online: 2024-01-05
  • Published: 2024-09-06