Empirical Studies on Deep-learning-based Security Bug Report Prediction Methods
Author: Zheng Wei, Chen Junzheng, Wu Xiaoxue, Chen Xiang, Xia Xin

Fund Project: Industrial Science and Technology Plan of Shaanxi Province (2015GY073); Key Research and Development Program of Shaanxi Province (2019GY-057)

    Abstract:

    The occurrence of software security issues can cause serious consequences in most cases; early detection of security issues is therefore one of the key measures for preventing security incidents. Security bug report (SBR) prediction helps developers identify hidden security issues in a bug tracking system and fix them as early as possible. However, the number of security bug reports in real software projects is small, and their features are complex (i.e., there are many types of security vulnerabilities, each with different features), which makes manual extraction of security features difficult and leads to low prediction accuracy with traditional machine learning classification algorithms. To solve this problem, a deep-learning-based security bug report prediction method is proposed. The deep text-mining models TextCNN and TextRNN are used to construct security bug report prediction models, and the Skip-Gram method is used to build a word embedding matrix for extracting the textual features of security bug reports. The constructed models have been empirically evaluated on five classical security bug report datasets of different scales. The results show that the deep learning models outperform traditional machine learning classification algorithms in 80% of the experimental cases, improving F1-score by 0.258 on average and by 0.535 at most. Furthermore, different re-sampling strategies are applied to deal with the class imbalance problem in the gathered SBR prediction datasets, and the experimental results are discussed.
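    The data-preparation and evaluation steps sketched in the abstract (Skip-Gram style context extraction, re-sampling to balance the rare security class, and F1-score measurement) can be illustrated with a minimal pure-Python sketch. The function names, window size, and toy data below are illustrative assumptions, not the authors' implementation:

```python
import random

def skipgram_pairs(tokens, window=2):
    """Enumerate (center, context) pairs the way Skip-Gram training data is
    built; real word vectors would come from training word2vec on such pairs."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        pairs.extend((center, tokens[j]) for j in range(lo, hi) if j != i)
    return pairs

def oversample_minority(reports, labels, seed=0):
    """Random oversampling: duplicate minority-class samples (here, the rare
    security bug reports) until both classes have the same size."""
    rng = random.Random(seed)
    pos = [(r, 1) for r, y in zip(reports, labels) if y == 1]
    neg = [(r, 0) for r, y in zip(reports, labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    balanced = majority + minority + extra
    rng.shuffle(balanced)
    return [r for r, _ in balanced], [y for _, y in balanced]

def f1_score(y_true, y_pred):
    """F1 is the harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy usage on a tiny, made-up dataset.
tokens = "buffer overflow in packet parser".split()
print(skipgram_pairs(tokens, window=1)[:2])

reports = ["sql injection", "ui typo", "slow load", "crash on start"]
labels = [1, 0, 0, 0]                 # one security report among four
X, y = oversample_minority(reports, labels)
print(len(X), sum(y))                 # balanced: 6 samples, 3 positives

print(f1_score([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```

    The actual study feeds the embedding matrix into TextCNN and TextRNN classifiers; this sketch only mirrors the surrounding data preparation and evaluation so the re-sampling and F1 discussion in the abstract is concrete.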

Get Citation

Zheng W, Chen JZ, Wu XX, Chen X, Xia X. Empirical studies on deep-learning-based security bug report prediction methods. Journal of Software, 2020,31(5):1294-1313 (in Chinese).
History
  • Received: August 31, 2019
  • Revised: October 24, 2019
  • Online: April 09, 2020
  • Published: May 06, 2020
Copyright: Institute of Software, Chinese Academy of Sciences Beijing ICP No. 05046678-4