Survey of Static Software Defect Prediction
Author: CHEN Xiang, GU Qing, LIU Wang-Shu, LIU Shu-Long, NI Chao
Fund Project: National Natural Science Foundation of China (61202006, 61373012, 61202030); Open Project Program of the State Key Laboratory for Novel Software Technology (Nanjing University) (KFKT2012B29)
    Abstract:

    Static software defect prediction is an active research topic in software engineering data mining. A typical study designs novel code or process metrics to characterize faults in program modules, constructs a defect prediction model from training data gathered by mining software historical repositories, and uses the trained model to predict the defect-proneness of program modules. Such research can optimize the allocation of testing resources and improve software quality. This paper offers a systematic survey of the achievements of domestic and foreign researchers in recent years. First, a research framework is proposed and three key factors influencing the performance of defect prediction are identified: metrics, model construction approaches, and issues in datasets. Next, existing work on each of these three factors is discussed in turn. Then, work on a special defect prediction problem, namely code-change-based defect prediction, is summarized. Finally, a perspective on future work in this research area is discussed.
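The pipeline the abstract describes (characterize modules with metrics, learn from labeled history, rank new modules by defect-proneness) can be sketched in a few lines. This is a minimal illustrative sketch in plain Python, not the method of any surveyed paper: the metric names, values, and the nearest-centroid scorer are all hypothetical stand-ins for the metrics and classifiers the survey actually covers.

```python
# Minimal sketch of the generic defect-prediction pipeline: module-level
# metrics -> learn from labeled history -> rank new modules by risk.
# All metric names and values below are illustrative, not real project data.

def centroid(rows):
    """Mean vector of a list of metric rows."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    """Euclidean distance between two metric vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data mined from a (hypothetical) project history.
# Each row: [lines_of_code, cyclomatic_complexity, code_churn]
defective = [[900, 25, 60], [1500, 40, 120], [700, 18, 45]]
clean = [[120, 4, 10], [300, 8, 5], [80, 2, 1]]
c_def, c_clean = centroid(defective), centroid(clean)

def risk(module):
    """Nearest-centroid score: > 0 means closer to the defective centroid."""
    return dist(module, c_clean) - dist(module, c_def)

# Rank untested modules so testing effort goes to the riskiest ones first,
# which is the resource-allocation goal the abstract mentions.
new_modules = [[1100, 30, 80], [150, 3, 2]]
ranked = sorted(new_modules, key=risk, reverse=True)
print(ranked[0])  # the module predicted most defect-prone
```

In practice the surveyed work replaces the toy scorer with real classifiers (logistic regression, SVM, ensembles) and the toy rows with metric datasets mined from version-control and bug-tracking histories.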

Cite this article:

Chen X, Gu Q, Liu WS, Liu SL, Ni C. Survey of static software defect prediction. Ruan Jian Xue Bao/Journal of Software, 2016,27(1):1-25 (in Chinese with English abstract).
History
  • Received: 2015-05-12
  • Revised: 2015-07-27
  • Published online: 2015-11-04