Survey on Repair Strategies for Deep Neural Networks
Author: Liang Zhen, Liu Wanwei, Wu Taoran, Xue Bai, Wang Ji, Yang Wenjing

    Abstract:

    With the advent of the intelligent information era, deep neural networks have been applied across many fields of human society. Their deployment in safety-critical systems, such as autonomous driving and military defense, has raised concern in both academia and industry about the erroneous behaviors these networks may exhibit. Although neural network verification and neural network testing can provide qualitative or quantitative conclusions about erroneous behaviors, such post-hoc analysis cannot prevent their occurrence, and how to repair a pre-trained neural network that exhibits wrong behavior remains a very challenging problem. Deep neural network repair addresses this need: it aims to eliminate the unexpected predictions generated by defective neural networks and to make the networks satisfy given specification properties. To date, there are three typical neural network repair paradigms: retraining, fine-tuning without fault localization, and fine-tuning with fault localization. This study introduces the development of deep neural networks and the necessity of deep neural network repair, clarifies closely related concepts, and identifies the challenges of deep neural network repair. In addition, it investigates existing neural network repair strategies in detail and compares the internal relationships and differences among them. Moreover, it collects and organizes the evaluation metrics and benchmarks commonly used by neural network repair strategies. Finally, it outlines promising research directions that deserve attention in the future development of neural network repair.
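    To make the "fine-tuning without fault localization" paradigm named above concrete, the following minimal sketch (not taken from the survey; the tiny logistic "network", the synthetic data, and the `repair` helper are all illustrative assumptions) collects the inputs a defective pre-trained model misclassifies and adjusts all of its weights by gradient descent on that failing set, rather than first localizing faulty neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data: label = 1 iff x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# A deliberately defective "pre-trained" model: one weight has the wrong sign.
w = np.array([1.0, -0.5])
b = 0.0

def predict(w, b, X):
    # Sigmoid output of a single-layer logistic model.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

def accuracy(w, b, X, y):
    return float(np.mean((predict(w, b, X) >= 0.5) == (y == 1)))

def repair(w, b, X, y, lr=0.5, max_iters=200):
    """Fine-tune every weight on the currently failing inputs only."""
    for _ in range(max_iters):
        p = predict(w, b, X)
        failing = (p >= 0.5) != (y == 1)     # current counterexample set
        if not failing.any():                # no failures left: repaired
            break
        Xf, yf, pf = X[failing], y[failing], p[failing]
        grad_w = Xf.T @ (pf - yf) / len(yf)  # logistic-loss gradient
        grad_b = float(np.mean(pf - yf))
        w = w - lr * grad_w
        b = b - lr * grad_b
    return w, b

acc_before = accuracy(w, b, X, y)
w2, b2 = repair(w, b, X, y)
acc_after = accuracy(w2, b2, X, y)
print(acc_before, acc_after)
```

    Real repair strategies surveyed here operate on deep networks and stronger specifications, but the loop structure is representative: evaluate, collect failing inputs, update parameters, and re-check until the failures are eliminated or a budget is exhausted.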

Citation: Liang Z, Liu WW, Wu TR, Xue B, Wang J, Yang WJ. Survey on repair strategies for deep neural networks. Ruan Jian Xue Bao/Journal of Software, 2024, 35(3):1231-1256 (in Chinese with English abstract).
History
  • Received: June 02, 2023
  • Revised: August 19, 2023
  • Online: December 20, 2023
  • Published: March 06, 2024