Empirical Studies on Deep-Learning-Based Security Bug Report Prediction Methods
Authors:
Author biographies:

Zheng Wei (1975-), male, born in Hefei, Anhui, Ph.D., associate professor, CCF professional member; main research interests: software testing, software security vulnerability analysis and detection. Chen Junzheng (1998-), male, bachelor's degree; main research interests: software security, high-performance computing. Wu Xiaoxue (1983-), female, Ph.D. student, CCF student member; main research interests: software vulnerability analysis and detection, software testing. Chen Xiang (1980-), male, Ph.D., associate professor, CCF senior member; main research interests: software defect prediction, software defect localization, regression testing, and combinatorial testing. Xia Xin (1986-), male, Ph.D., lecturer; main research interests: software engineering, mining software repositories, empirical software engineering.

Corresponding author:

Wu Xiaoxue, E-mail: wuxiaoxue00@gmail.com

Funding:

Industrial Science and Technology Plan of Shaanxi Province (2015GY073); Key Research and Development Program of Shaanxi Province (2019GY-057)



Abstract:

In most cases, the occurrence of software security issues causes very serious consequences, and detecting security issues early is one of the key means of preventing security incidents. Security bug report (SBR) prediction can help developers discover security defects hidden in the software under test early, so that they can be fixed as soon as possible. However, security bugs are scarce in real projects and their features are complex (i.e., there are many types of security bugs, and the features of different types differ considerably), which makes manual feature extraction relatively difficult and, in turn, creates a performance bottleneck for traditional machine learning classification algorithms on SBR prediction. To address this problem, a deep-learning-based SBR prediction method is proposed: the deep text mining models TextCNN and TextRNN are used to construct SBR prediction models; for the textual features of security bug reports, a word embedding matrix is built with the Skip-Gram method, and the TextRNN model is further optimized with an attention mechanism. A large-scale empirical study of the constructed models on five SBR datasets of different scales shows that the deep learning models outperform traditional machine learning classification algorithms in 80% of the experimental cases, improving the F1-score by 0.258 on average and by up to 0.535 in the best case. In addition, to deal with the class imbalance problem in the SBR datasets, different sampling methods are studied empirically and the results are analyzed.
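The Skip-Gram step mentioned in the abstract learns an embedding for each word by predicting the words around it, so its training data is just (center, context) pairs drawn from a sliding window over the tokenized report text. A minimal sketch of that pair extraction (the function name, window size, and example tokens are illustrative, not from the paper):

```python
def skipgram_pairs(tokens, window=2):
    """Extract the (center, context) training pairs that the Skip-Gram
    objective is trained on: each word is paired with every word that
    lies within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# Tokenized fragment of a made-up bug report summary:
print(skipgram_pairs(["buffer", "overflow", "in", "parser"], window=1))
# → [('buffer', 'overflow'), ('overflow', 'buffer'), ('overflow', 'in'),
#    ('in', 'overflow'), ('in', 'parser'), ('parser', 'in')]
```

In practice a library such as gensim's `Word2Vec` (with `sg=1` for Skip-Gram) would train the embedding matrix from such windows; the rows of that matrix are then looked up per token to feed the downstream classifier.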
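The TextCNN model cited above (a Kim-style sentence CNN) slides convolution filters of several widths over the word-embedding sequence and keeps only each filter's maximum activation ("max-over-time" pooling), so reports of any length map to a fixed-size feature vector. A pure-Python sketch of that forward pass with toy numbers, not the paper's trained weights:

```python
def textcnn_features(embeddings, filters):
    """One pooled feature per filter: the ReLU of the best (maximum) dot
    product between the filter and any contiguous window of word vectors.
    embeddings: list of word vectors (lists of floats, all the same dim)
    filters: list of (width, flat_weights), len(flat_weights) == width * dim
    """
    feats = []
    for width, w in filters:
        best = 0.0  # ReLU floor: negative activations are clipped to 0
        for i in range(len(embeddings) - width + 1):
            window = [x for vec in embeddings[i:i + width] for x in vec]
            best = max(best, sum(a * b for a, b in zip(window, w)))
        feats.append(best)
    return feats

# Three words with 2-dim embeddings, one filter of width 2:
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(textcnn_features(vecs, [(2, [1.0, 0.0, 0.0, 1.0])]))  # → [2.0]
```

The real model (e.g., in PyTorch, which the paper uses) stacks many such filters per width, then feeds the concatenated pooled features to a fully connected softmax layer that outputs security/non-security.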
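The attention optimization applied to TextRNN replaces "take the last hidden state" with a learned weighted average over all hidden states, letting the classifier focus on the security-relevant words of a report. A minimal sketch using a dot-product scoring vector (a common simplification; not necessarily the paper's exact formulation):

```python
import math

def attention_pool(states, score_vec):
    """Weighted average of RNN hidden states, where the weights are
    softmax(state . score_vec): higher-scoring time steps dominate."""
    scores = [sum(a * b for a, b in zip(s, score_vec)) for s in states]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(states[0])
    return [sum(w * s[d] for w, s in zip(weights, states)) for d in range(dim)]

# The first state scores far higher, so the pooled vector stays close to it:
print(attention_pool([[1.0, 0.0], [0.0, 1.0]], [10.0, 0.0]))
```

In the full model `score_vec` (or a small scoring MLP) is learned jointly with the RNN, and the pooled vector feeds the final classification layer.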
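Of the re-sampling strategies studied for the class imbalance problem, the simplest baseline is random oversampling: duplicate minority-class (security) reports until the two classes are balanced. A sketch of that baseline idea (the paper compares several strategies; the helper below is illustrative):

```python
import random

def oversample(samples, labels, minority=1, seed=0):
    """Randomly duplicate minority-class items until both classes have
    equal counts; returns a new, shuffled (samples, labels) pair."""
    rng = random.Random(seed)
    minor = [(s, l) for s, l in zip(samples, labels) if l == minority]
    major = [(s, l) for s, l in zip(samples, labels) if l != minority]
    # Draw duplicates (with replacement) to close the gap:
    extra = [rng.choice(minor) for _ in range(len(major) - len(minor))]
    data = minor + major + extra
    rng.shuffle(data)
    return [s for s, _ in data], [l for _, l in data]

# 1 security report among 10 → balanced 9 vs. 9 after oversampling:
xs, ys = oversample(list(range(10)), [1] + [0] * 9)
print(ys.count(1), ys.count(0))  # → 9 9
```

Only the training split should be resampled; resampling the test split would distort the reported F1-scores.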

Cite this article:

Zheng W, Chen JZ, Wu XX, Chen X, Xia X. Empirical studies on deep-learning-based security bug report prediction methods. Ruan Jian Xue Bao/Journal of Software, 2020,31(5):1294-1313 (in Chinese).

History
  • Received: 2019-08-31
  • Revised: 2019-10-24
  • Published online: 2020-04-09
  • Published in print: 2020-05-06