State-of-the-art Survey on Fuzz Testing for Deep Learning Systems
Author: Dai Hepeng, Sun Chang-Ai, Jin Hui, Xiao Mingjun
Affiliation:

    Abstract:

    Deep learning (DL) systems have powerful learning and reasoning capabilities and are widely employed in many fields, including unmanned vehicles, speech recognition, and intelligent robotics. Owing to limited datasets and the dependence on manually labeled data, DL systems are prone to unexpected behaviors. Accordingly, the quality of DL systems has received widespread attention in recent years, especially in safety-critical fields. Fuzz testing, with its strong fault-detecting ability, has been applied to test DL systems and has become a research hotspot. This study surveys existing fuzz testing techniques for DL systems from the perspectives of test case generation (including seed queue construction, seed selection, and seed mutation), test result determination, and coverage analysis. In addition, commonly used datasets and evaluation metrics are introduced. Finally, future directions for this field are discussed.
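
    To make the surveyed workflow concrete, the following minimal sketch (in Python, assuming a trained Keras image classifier with inputs in [0, 1]) shows how the components named in the abstract fit together: a seed queue, a seed selection step, a pixel-noise mutation, a label-consistency oracle for test result determination, and neuron coverage as feedback. The helper names (neuron_coverage, mutate, fuzz), the activation threshold, and the noise bound are illustrative assumptions, not the implementation of any surveyed tool.

    # Illustrative coverage-guided fuzzing loop for a DL classifier (a sketch, not a surveyed tool).
    # Assumptions: `model` is a trained Keras classifier; seeds are numpy arrays scaled to [0, 1].
    import random
    import numpy as np
    import tensorflow as tf

    def neuron_coverage(model, x, threshold=0.25):
        """Return the set of (layer, neuron) pairs whose activation exceeds `threshold`."""
        activ_model = tf.keras.Model(model.input, [layer.output for layer in model.layers])
        covered = set()
        for li, act in enumerate(activ_model.predict(x[np.newaxis], verbose=0)):
            flat = act.reshape(-1)
            covered |= {(li, int(ni)) for ni in np.nonzero(flat > threshold)[0]}
        return covered

    def mutate(x, max_noise=0.05):
        """Pixel-level mutation: add small uniform noise and clip to the valid input range."""
        return np.clip(x + np.random.uniform(-max_noise, max_noise, x.shape), 0.0, 1.0)

    def fuzz(model, seeds, iterations=1000):
        queue = list(seeds)                      # seed queue construction
        global_cov, failures = set(), []
        for _ in range(iterations):
            seed = random.choice(queue)          # seed selection (uniform here)
            mutant = mutate(seed)                # seed mutation
            # Test result determination via a metamorphic relation:
            # small pixel noise should not change the predicted label.
            label_seed = int(np.argmax(model.predict(seed[np.newaxis], verbose=0)))
            label_mut = int(np.argmax(model.predict(mutant[np.newaxis], verbose=0)))
            if label_mut != label_seed:
                failures.append(mutant)
            new_cov = neuron_coverage(model, mutant)
            if not new_cov <= global_cov:        # coverage analysis: keep mutants that add coverage
                global_cov |= new_cov
                queue.append(mutant)
        return failures, global_cov

    Surveyed tools differ mainly in how they instantiate these components, namely the coverage criterion, the mutation operators, the seed selection strategy, and the test oracle.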

Get Citation

Dai HP, Sun CA, Jin H, Xiao MJ. State-of-the-art survey on fuzz testing for deep learning systems. Journal of Software, 2023, 34(11): 5008–5028 (in Chinese).

Article Metrics
  • Abstract: 2290
  • PDF: 5915
  • HTML: 3025
  • Cited by: 0
History
  • Received: August 18, 2021
  • Revised: December 23, 2021
  • Online: May 24, 2022
  • Published: November 06, 2023