Research Progress of Crowdsourced Software Testing
Author: Zhang Xiaofang, Feng Yang, Liu Di, Chen Zhenyu, Xu Baowen
Fund Project: National Natural Science Foundation of China (61772263, 61772014, 61572375); Collaborative Innovation Center of Novel Software Technology and Industrialization


    Abstract:

    Crowdsourced software testing is an emerging testing approach that has attracted extensive attention from both academia and industry. This paper systematically surveys the academic literature on crowdsourced software testing and industrial practice in recent years. First, it summarizes the techniques and methods proposed in the literature from several perspectives: the evolution of research topics, the software testing problems and crowdsourced testing processes covered, the experimental subjects used, and the scale of testers involved. It then analyzes and compares the 20 most widely used commercial crowdsourced testing platforms in terms of testing domain, test objects, worker recruitment, and performance assessment. Finally, it discusses future trends, opportunities, and challenges for crowdsourced software testing.

    References
    [1] Raymond E. The cathedral and the bazaar. Knowledge, Technology & Policy, 1999,12(3):23-49.[doi:10.1007/s12130-999-1026-0]
    [2] Howe J. The rise of crowdsourcing. Wired Magazine, 2006,14(6):1-4.
    [3] Feng JH, Li GL, Feng JH. A survey on crowdsourcing. Chinese Journal of Computers, 2015,38(9):1713-1725(in Chinese with English abstract).[doi:10.11897/SP.J.1016.2015.01713]
    [4] Heer J, Bostock M. Crowdsourcing graphical perception:Using mechanical Turk to assess visualization design. In:Proc. of the SIGCHI Conf. on Human Factors in Computing Systems. Atlanta, 2010. 203-212.[doi:10.1145/1753326.1753357]
    [5] Franklin MJ, Kossmann D, Kraska T, Ramesh S, Xin R. CrowdDB:Answering queries with crowdsourcing. In:Proc. of the ACM SIGMOD Int'l Conf. on Management of Data. 2011. 61-72.[doi:10.1145/1989323.1989331]
    [6] Negri M, Bentivogli L, Mehdad Y, Giampiccolo D, Marchetti A. Divide and conquer:Crowdsourcing the creation of cross-lingual textual entailment corpora. In:Proc. of the Conf. on Empirical Methods in Natural Language Processing. Edinburgh, 2011. 670-679. http://conferences.inf.ed.ac.uk/emnlp2011/
    [7] Yan Y, Rosales R, Fung G, Dy JG. Active learning from crowds. In:Proc. of the 28th Int'l Conf. on Machine Learning. Bellevue, 2011. 1161-1168. http://www.icml-2011.org/
    [8] Zuccon G, Leelanupab T, Whiting S, Yilmaz E, Jose JM, Azzopardi L. Crowdsourcing interactions:Using crowdsourcing for evaluating interactive information retrieval systems. Information Retrieval Journal, 2013,16(2):267-305.[doi:10.1007/s10791-012-9206-z]
    [9] Cinalli D, Marti L, Sanchez-Pi N, Garcia ACB. Using collective intelligence to support multi-objective decisions:Collaborative and online preferences. In:Proc. of the IEEE/ACM Int'l Conf. on Automated Software Engineering Workshops. 2015. 82-85.[doi:10.1109/ASEW.2015.12]
    [10] Wu W, Tsai WT, Li W. An evaluation framework for software crowdsourcing. Frontiers of Computer Science, 2013,7(5):694-709.[doi:10.1007/s11704-013-2320-2]
    [11] Mao K, Capra L, Harman M, Jia Y. A survey of the use of crowdsourcing in software engineering. Journal of Systems and Software, 2017,126:57-84.[doi:10.1016/j.jss.2016.09.015]
    [12] Stolee KT, Elbaum S. Exploring the use of crowdsourcing to support empirical studies in software engineering. In:Proc. of the ACM/IEEE Int'l Symp. on Empirical Software Engineering and Measurement. Bolzano-Bozen:ACM Press, 2010. 1-4.[doi:10.1145/1852786.1852832]
    [13] Bari E, Johnston M, Tsai W, Wu W. Software crowdsourcing practices and research directions. In:Proc. of the IEEE Symp. on Service-Oriented System Engineering. 2016. 372-379.[doi:10.1109/SOSE.2016.69]
    [14] Latoza T, Hoek A. Crowdsourcing in software engineering:Models, motivations, and challenges. IEEE Software, 2016,33(1):74-80.[doi:10.1109/MS.2016.12]
    [15] Zhao Y, Zhu Q. Evaluation on crowdsourcing research:Current status and future direction. Information Systems Frontiers, 2014, 16(3):417-434.[doi:10.1007/s10796-012-9350-4]
    [16] Yuen M, King I, Leung K. A survey of crowdsourcing systems. In:Proc. of the 3rd IEEE Int'l Conf. on Privacy, Security, Risk and Trust, and IEEE Int'l Conf. on Social Computing. Boston, 2011. 766-773.[doi:10.1109/PASSAT/SocialCom.2011.203]
    [17] Kittur A, Nickerson JV, Bernstein MS, Gerber EM, Shaw A, Zimmerman J, Lease M, Horton JJ. The future of crowd work. In:Proc. of the 2013 ACM Conf. on Computer Supported Cooperative Work. San Antonio, 2013. 1301-1318.[doi:10.1145/2441776.2441923]
    [18] Doan A, Ramakrishnan R, Halevy AY. Crowdsourcing systems on the World-Wide Web. Communications of the ACM, 2011,54(4):86-96.[doi:10.1145/1924421.1924442]
    [19] Chittilappilly AI, Chen L, Amer-Yahia S. A survey of general-purpose crowdsourcing techniques. IEEE Trans. on Knowledge and Data Engineering, 2016,28(9):2246-2266.[doi:10.1109/TKDE.2016.2555805]
    [20] Tong YX, Yuan Y, Cheng YR, Chen L, Wang GR. A survey of spatiotemporal crowdsourced data management techniques. Ruan Jian Xue Bao/Journal of Software, 2017,28(1):35-58(in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5140.htm[doi:10.13328/j.cnki.jos.005140]
    [21] Estellés-Arolas E, González-Ladrón-de-Guevara F. Towards an integrated crowdsourcing definition. Journal of Information Science, 2012,38(2):189-200.[doi:10.1177/0165551512437638]
    [22] Chen KT, Wu CC, Chang YC, Lei CL. A crowdsourceable QoE evaluation framework for multimedia content. In:Proc. of the 17th ACM Int'l Conf. on Multimedia. ACM Press, 2009. 491-500.[doi:10.1145/1631272.1631339]
    [23] Chen KT, Chang CJ, Wu CC, Chang YC, Lei CL. Quadrant of euphoria:A crowdsourcing platform for QoE assessment. IEEE Network the Magazine of Global Internet Working, 2010,24(2):28-35.[doi:10.1109/MNET.2010.5430141]
    [24] Wu CC, Chen KT, Chang YC, Lei CL. Crowdsourcing multimedia QoE evaluation:A trusted framework. IEEE Trans. on Multimedia, 2013,15(5):1121-1137.[doi:10.1109/TMM.2013.2241043]
    [25] Hossfeld T, Keimel C, Timmerer C. Crowdsourcing quality-of-experience assessments. Computer, 2014,47(9):98-102.[doi:10.1109/MC.2014.245]
    [26] Hossfeld T, Keimel C, Hirth M, Gardlo B, Habigt J, Diepold K, TranGia P. Best practices for QoE crowdtesting:QoE assessment with crowdsourcing. IEEE Trans. on Multimedia, 2014,16(2):541-558.[doi:10.1109/TMM.2013.2291663]
    [27] Gardlo B, Ries M, Hoßfeld T, Schatz R. Microworkers vs. Facebook:The impact of crowdsourcing platform choice on experimental results. In:Proc. of the 4th IEEE Int'l Workshop on Quality of Multimedia Experience. 2012. 35-36.[doi:10.1109/QoMEX.2012.6263885]
    [28] Gardlo B. Quality of experience evaluation methodology via crowdsourcing[Ph.D. Thesis]. Slovakia:University of Zilina, 2012.
    [29] Gardlo B, Egger S, Seufert M, Schatz R. Crowdsourcing 2.0:Enhancing execution speed and reliability of Web-based QoE testing. In:Proc. of the IEEE Int'l Conf. on Communications. 2014. 1070-1075.[doi:10.1109/ICC.2014.6883463]
    [30] Sun H, Zhang W, Yan M, Liu X. Recommending Web services using crowdsourced testing data. In:Crowdsourcing. Berlin, Heidelberg:Springer-Verlag, 2015. 219-241.[doi:10.1007/978-3-662-47011-4_12]
    [31] Seufert M, Zach O, Hoßfeld T, Slanina M, TranGia P. Impact of test condition selection in adaptive crowdsourcing studies on subjective quality. In:Proc. of the IEEE 8th Int'l Conf. on Quality of Multimedia Experience. 2016. 1-6.[doi:10.1109/QoMEX.2016.7498939]
    [32] Liu D, Bias RG, Lease M, Kuipers R. Crowdsourcing for usability testing. American Society for Information Science and Technology, 2012,49(1):1-10.[doi:10.1002/meet.14504901100]
    [33] Meier F, Bazo A, Burghardt M, Wolff C. Evaluating a Web-based tool for crowdsourced navigation stress tests. In:Proc. of the Int'l Conf. of Design, User Experience, and Usability. Berlin, Heidelberg:Springer-Verlag, 2013. 248-256.[doi:10.1007/978-3-642-39253-5_27]
    [34] Schneider C, Cheung T. The power of the crowd:Performing usability testing using an on-demand workforce. In:Proc. of the Information Systems Development. New York:Springer-Verlag, 2013. 551-560.[doi:10.1007/978-1-4614-4951-5_44]
    [35] Gomide VHM, Valle PA, Ferreira JO, Barbosa JRG, da Rocha AF, de Barbosa TMGA. Affective crowdsourcing applied to usability testing. Int'l Journal of Computer Science and Information Technologies, 2014,5(1):575-579.
    [36] Nebeling M, Speicher M, Grossniklaus M, Norrie MC. Crowdsourced Web site evaluation with crowdstudy. In:Proc. of the Int'l Conf. on Web Engineering. Berlin, Heidelberg:Springer-Verlag, 2012. 494-497.[doi:10.1007/978-3-642-31753-8_52]
    [37] Khan AI, Al-khanjari Z, Sarrab M. Crowd sourced testing through end users for mobile learning application in the context of bring your own device. In:Proc. of the IEEE 7th Annual Conf. on Information Technology, Electronics and Mobile Communication. 2016. 1-6.[doi:10.1109/IEMCON.2016.7746256]
    [38] Vliegendhart R, Dolstra E, Pouwelse J. Crowdsourced user interface testing for multimedia applications. In:Proc. of the ACM Multimedia 2012 Workshop on Crowdsourcing for Multimedia. ACM Press, 2012. 21-22.[doi:10.1145/2390803.2390813]
    [39] Dolstra E, Vliegendhart R, Pouwelse J. Crowdsourcing GUI tests. In:Proc. of the 6th IEEE Int'l Conf. on Software Testing, Verification and Validation. 2013. 332-341.[doi:10.1109/ICST.2013.44]
    [40] Komarov S, Reinecke K, Gajos KZ. Crowdsourcing performance evaluations of user interfaces. In:Proc. of the SIGCHI Conf. on Human Factors in Computing Systems. ACM Press, 2013. 207-216.[doi:10.1145/2470654.2470684]
    [41] Musson R, Richards J, Fisher D, Bird C, Bussone B, Ganguly S. Leveraging the crowd:How 48,000 users helped improve Lync performance. IEEE Software, 2013,30(4):38-45.[doi:10.1109/MS.2013.67]
    [42] Chen N, Kim S. Puzzle-Based automatic testing:Bringing humans into the loop by solving puzzles. In:Proc. of the 27th IEEE/ACM Int'l Conf. on Automated Software Engineering. ACM Press, 2012. 140-149.[doi:10.1145/2351676.2351697]
    [43] Pham R, Singer L, Schneider K. Building test suites in social coding sites by leveraging drive-by commits. In:Proc. of the 2013 Int'l Conf. on Software Engineering. IEEE Press, 2013. 1209-1212.[doi:10.1109/ICSE.2013.6606680]
    [44] Gómez M, Rouvoy R, Adams B, Seinturier L. Reproducing context-sensitive crashes of mobile apps using crowdsourced monitoring. In:Proc. of the ACM Int'l Workshop on Mobile Software Engineering and Systems. 2016. 88-99.[doi:10.1109/MobileSoft.2016.033]
    [45] Pastore F, Mariani L, Fraser G. Crowdoracles:Can the crowd solve the oracle problem? In:Proc. of the 6th IEEE Int'l Conf. on Software Testing, Verification and Validation. 2013. 342-351.[doi:10.1109/ICST.2013.13]
    [46] Bachrach Y, Minka T, Guiver J, Graepel T. How to grade a test without knowing the answers-A Bayesian graphical model for adaptive crowdsourcing and aptitude testing. In:Proc. of the 29th Int'l Conf. on Machine Learning. Edinburgh, 2012. 1183-1190. https://icml.cc/2012/
    [47] Chen F, Kim S. Crowd debugging. In:Proc. of the ACM Joint Meeting on Foundations of Software Engineering. 2015. 320-332.[doi:10.1145/2786805.2786819]
    [48] Petrillo F, Lacerda G, Pimenta M, Freitas C. Visualizing interactive and shared debugging sessions. In:Proc. of the IEEE Working Conf. on Software Visualization. Bremen, 2015. 140-144.[doi:10.1109/VISSOFT.2015.7332425]
    [49] Petrillo F, Soh Z, Khomh F, Pimenta M, Freitas C, Guéhéneuc YG. Towards understanding interactive debugging. In:Proc. of the IEEE Int'l Conf. on Software Quality, Reliability and Security. 2016. 152-163.[doi:10.1109/QRS.2016.27]
    [50] Badashian AS, Hindle A, Stroulia E. Crowdsourced bug triaging. In:Proc. of the IEEE Int'l Conf. on Software Maintenance and Evolution. 2015. 506-510.[doi:10.1109/ICSM.2015.7332503]
    [51] Badashian AS, Hindle A, Stroulia E. Crowdsourced bug triaging:Leveraging Q&A platforms for bug assignment. In:Proc. of the Int'l Conf. on Fundamental Approaches to Software Engineering. Berlin, Heidelberg:Springer-Verlag, 2016. 231-248.[doi:10.1007/978-3-662-49665-7_14]
    [52] Sherief N, Jiang N, Hosseini M, Phalp K, Ali R. Crowdsourcing software evaluation. In:Proc. of the 18th ACM Int'l Conf. on Evaluation and Assessment in Software Engineering. 2014. 19.[doi:10.1145/2601248.2601300]
    [53] Sherief N. Software evaluation via users' feedback at runtime. In:Proc. of the 18th Int'l Conf. on Evaluation and Assessment in Software Engineering. 2014. 1-4. http://ease2014.org/
    [54] Blanco R, Halpin H, Herzig DM, Mika P, Pound J, Thompson HS. Repeatable and reliable search system evaluation using crowdsourcing. In:Proc. of the 34th Int'l ACM SIGIR Conf. on Research and Development in Information Retrieval. 2011. 923-932.[doi:10.1145/2009916.2010039]
    [55] Mäntylä MV, Itkonen J. More testers-The effect of crowd size and time restriction in software testing. Information and Software Technology, 2013,55(6):986-1003.[doi:10.1016/j.infsof.2012.12.004]
    [56] Chen Z, Luo B. Quasi-Crowdsourcing testing for educational projects. In:Proc. of the 36th ACM Int'l Conf. on Software Engineering. 2014. 272-275.[doi:10.1145/2591062.2591153]
    [57] Tung YH, Tseng SS. A novel approach to collaborative testing in a crowdsourcing environment. Journal of Systems and Software, 2013,86(8):2143-2153.[doi:10.1016/j.jss.2013.03.079]
    [58] Guo S, Chen R, Li H. A real-time collaborative testing approach for Web application:Via multi-tasks matching. In:Proc. of the IEEE Int'l Conf. on Software Quality, Reliability and Security Companion. 2016. 61-68.[doi:10.1109/QRS-C.2016.13]
    [59] Feng Y, Chen Z, Jones JA, Fang C, Xu B. Test report prioritization to assist crowdsourced testing. In:Proc. of the 10th ACM Joint Meeting on Foundations of Software Engineering. 2015. 225-236.[doi:10.1145/2786805.2786862]
    [60] Feng Y, Jones JA, Chen Z, Fang C. Multi-Objective test report prioritization using image understanding. In:Proc. of the IEEE/ACM Int'l Conf. on Automated Software Engineering. 2016. 202-213.[doi:10.1145/2970276.2970367]
    [61] Wang J, Wang S, Cui Q, Wang Q, Li M, Zhai J. Local-Based active classification of test report to assist crowdsourced testing. In:Proc. of the IEEE/ACM Int'l Conf. on Automated Software Engineering. 2016. 190-201.[doi:10.1145/2970276.2970300]
    [62] Wang J, Cui Q, Wang Q, Wang S. Towards effectively test report classification to assist crowdsourced testing. In:Proc. of the ACM/IEEE Int'l Symp. on Empirical Software Engineering and Measurement. 2016. 6-16.[doi:10.1145/2961111.2962584]
    [63] Zogaj S, Bretschneider U, Leimeister JM. Managing crowdsourced software testing:A case study based insight on the challenges of a crowdsourcing intermediary. Journal of Business Economics, 2014,84(3):375-405.[doi:10.1007/s11573-014-0721-9]
    [64] Guaiani F, Muccini H. Crowd and laboratory testing, can they co-exist? An exploratory study. In:Proc. of the 2nd IEEE/ACM Int'l Workshop on CrowdSourcing in Software Engineering. 2015. 32-37.[doi:10.1109/CSI-SE.2015.14]
    [65] Teinum A. User testing tool:Towards a tool for crowdsource-enabled accessibility evaluation of Web sites[MS. Thesis]. Agder:University of Agder, 2013.
    [66] Nebeling M, Speicher M, Norrie MC. CrowdStudy:General toolkit for crowdsourced evaluation of Web interfaces. In:Proc. of the ACM SIGCHI Symp. on Engineering Interactive Computing Systems. 2013. 255-264.[doi:10.1145/2480296.2480303]
    [67] Starov O. Cloud platform for research crowdsourcing in mobile testing[MS. Thesis]. East Carolina University, 2013.
    [68] Zogaj S, Bretschneider U. Crowdtesting with testcloud-Managing the challenges of an intermediary in a crowdsourcing business model. In:Proc. of the European Conf. on Information Systems. 2013. 143-157.[doi:10.2139/ssrn.2475415]
    [69] Yan M, Sun H, Liu X. iTest:Testing software with mobile crowdsourcing. In:Proc. of the 1st Int'l Workshop on Crowd-Based Software Development Methods and Technologies. 2014. 19-24.[doi:10.1145/2666539.2666569]
    [70] Xue H. Using redundancy to improve security and testing[Ph.D. Thesis]. University of Illinois at Urbana-Champaign, 2013.
    [71] Liang CJM, Lane ND, Brouwers N, Zhang L, Karlsson BF, Liu H, Liu Y, Tang J, Shan X, Chandra R, Zhao F. Caiipa:Automated large-scale mobile app testing through contextual fuzzing. In:Proc. of the 20th Annual Int'l Conf. on Mobile Computing and Networking. 2014. 519-530.[doi:10.1145/2639108.2639131]
    [72] Rao P, Dubey A, Virdi G. Crowdsourced testing for enterprises:Experiences. In:Proc. of the Workshop on Alternate Workforces for Software Engineering. 2015. 56-57. http://ceur-ws.org/Vol-1519/
    [73] Sharma M, Padmanaban R. Leveraging the Wisdom of the Crowd in Software Testing. Boca Raton:CRC Press, 2014.
    [74] Memon A, Banerjee I, Nagarajan A. GUI ripping:Reverse engineering of graphical user interfaces for testing. In:Proc. of the 10th Working Conf. on Reverse Engineering. 2003. 260-269.[doi:10.1109/WCRE.2003.1287256]
    [75] Microsoft Lync. http://office.microsoft.com/lync
    [76] Chrome telemetry. http://www.chromium.org/developers/telemetry
    [77] Firefox telemetry. https://telemetry.mozilla.org
    [78] Sen K, Agha G. CUTE and jCUTE:Concolic unit testing and explicit path model-checking tools. In:Ball T, Jones R, eds. Proc. of the Computer Aided Verification. LNCS 4144, 2006. 419-423.[doi:10.1007/11817963_38]
    [79] Pacheco C, Lahiri SK, Ernst MD, Ball T. Feedback-Directed random test generation. In:Proc. of the 29th Int'l Conf. on Software Engineering. 2007. 75-84.[doi:10.1109/ICSE.2007.37]
    [80] Tillmann N, De Halleux J. Pex:White box test generation for .NET. In:Beckert B, Hähnle R, eds. Proc. of the 2nd Int'l Conf. on Tests and Proofs. LNCS 4966, 2008. 134-153.[doi:10.1007/978-3-540-79124-9_10]
    [81] Barr ET, Harman M, McMinn P, Shahbaz M, Yoo S. The oracle problem in software testing:A survey. IEEE Trans. on Software Engineering, 2015,41(5):507-525.[doi:10.1109/TSE.2014.2372785]
    [82] Allamanis M, Sutton C. Mining idioms from source code. In:Proc. of the ACM SIGSOFT Int'l Symp. on Foundations of Software Engineering. 2014. 472-483.[doi:10.1145/2635868.2635901]
    [83] Lawrance J, Bogart C, Burnett M, Bellamy R, Rector K, Fleming SD. How programmers debug, revisited:An information foraging theory perspective. IEEE Trans. on Software Engineering, 2013,39(2):197-215.[doi:10.1109/TSE.2010.111]
    [84] Zhang J, Wang X, Hao D, Xie B, Zhang L, Mei H. A survey on bug-report analysis. Science China Information Sciences, 2015, 58(2):1-24.[doi:10.1007/s11432-014-5241-2]
    [85] Xia X, Wang XY, Yang XH, Lo D. Bug-report management and analysis of open-source software systems. Communications of the CCF, 2016,2:29-34(in Chinese with English abstract).
    [86] Bruch M. Ide 2.0:Leveraging the wisdom of the software engineering crowds[Ph.D. Thesis]. Technische Universität Darmstadt, 2012.
    [87] Ponzanelli L. Exploiting crowd knowledge in the ide[MS. Thesis]. Universita Della Svizzera Italiana, 2012.
    [88] Zagalsky A, Barzilay O, Yehudai A. Example overflow:Using social media for code recommendation. In:Proc. of the 3rd IEEE Int'l Workshop on Recommendation Systems for Software Engineering. 2012. 38-42.[doi:10.1109/RSSE.2012.6233407]
    [89] Kittur A, Chi EH, Suh B. Crowdsourcing user studies with mechanical Turk. In:Proc. of the ACM SIGCHI Conf. on Human Factors in Computing Systems. 2008. 453-456.[doi:10.1145/1357054.1357127]
    [90] Zhang ZQ, Pang JS, Xie XQ, Zhou Y. Research on crowdsourcing quality control strategies and evaluation algorithm. Chinese Journal of Computers, 2013,36(8):1636-1649(in Chinese with English abstract).
    [91] Singla A, Krause A. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In:Proc. of the 22nd ACM Int'l Conf. on World Wide Web. 2013. 1167-1178.[doi:10.1145/2488388.2488490]
    [92] Zhao D, Li XY, Ma H. How to crowdsource tasks truthfully without sacrificing utility:Online incentive mechanisms with budget constraint. In:Proc. of the IEEE INFOCOM 2014-IEEE Conf. on Computer Communications. 2014. 1213-1221.[doi:10.1109/INFOCOM.2014.6848053]
    [93] Wu Y, Zeng JR, Peng H, Chen H, Li CP. Survey on incentive mechanisms for crowd sensing. Ruan Jian Xue Bao/Journal of Software, 2016,27(8):2025-2047(in Chinese with English abstract). http://www.jos.org.cn/1000-9825/5049.htm[doi:10.13328/j.cnki.jos.005049]
    [94] Singer Y, Mittal M. Pricing mechanisms for crowdsourcing markets. In:Proc. of the 22nd ACM Int'l Conf. on World Wide Web. 2013. 1157-1166.[doi:10.1145/2488388.2488489]
    [95] Morschheuser B, Hamari J, Koivisto J. Gamification in crowdsourcing:A review. In:Proc. of the IEEE Hawaii Int'l Conf. on System Sciences. 2016. 4375-4384.[doi:10.1109/HICSS.2016.543]
    [96] Hu Z, Wu W. A game theoretic model of software crowdsourcing. In:Proc. of the IEEE Int'l Symp. on Service Oriented System Engineering. 2014. 446-453.[doi:10.1109/SOSE.2014.79]
    [97] Xie T. Cooperative testing and analysis:Human-Tool, tool-tool, and human-human cooperations to get work done. In:Proc. of the 12th IEEE Int'l Working Conf. on Source Code Analysis and Manipulation (Keynote). 2012.[doi:10.1109/SCAM.2012.31]
    [98] Wang X, Zhang L, Xie T, Anvik J, Sun J. An approach to detecting duplicate bug reports using natural language and execution information. In:Proc. of the Int'l Conf. on Software Engineering. Leipzig, 2008. 461-470.[doi:10.1145/1368088.1368151]
    [99] Sun C, Lo D, Wang X, Jiang J, Khoo S. A discriminative model approach for accurate duplicate bug report retrieval. In:Proc. of the ACM/IEEE Int'l Conf. on Software Engineering. 2010. 45-54.[doi:10.1145/1806799.1806811]
    [100] Bettenburg N, Premraj R, Zimmermann T. Duplicate bug reports considered harmful … really? In:Proc. of the IEEE Int'l Conf. on Software Maintenance. 2008. 337-345.[doi:10.1109/ICSM.2008.4658082]
    [101] Xuan J, Jiang H, Ren Z, Yan J, Luo Z. Automatic bug triage using semi-supervised text classification. In:Proc. of the 22nd Int'l Conf. on Software Engineering and Knowledge Engineering. 2010. 209-214. http://www.ksi.edu/seke/seke10.html
    [102] Xuan J, Jiang H, Ren Z, Zou W. Developer prioritization in bug repositories. In:Proc. of the 34th IEEE Int'l Conf. on Software Engineering. 2012. 25-35.[doi:10.1109/ICSE.2012.6227209]
    [103] Hu H, Zhang H, Xuan J, Sun W. Effective bug triage based on historical bug-fix information. In:Proc. of the IEEE Int'l Symp. on Software Reliability Engineering. Naples, 2014. 122-132.[doi:10.1109/ISSRE.2014.17]
    [104] Xia X, Lo D, Wang X, Zhou B. Dual analysis for recommending developers to resolve bugs. Journal of Software Evolution & Process, 2015,27(3):195-220.[doi:10.1002/smr.1706]
    [105] Yang X, Lo D, Xia X, Bao L, Sun J. Combining word embedding with information retrieval to recommend similar bug reports. In:Proc. of the IEEE Int'l Symp. on Software Reliability Engineering. 2016. 127-137.[doi:10.1109/ISSRE.2016.33]
    [106] Xia X, Lo D, Ding Y, Al-Kofahi JM, Nguyen TN, Wang X. Improving automated bug triaging with specialized topic model. IEEE Trans. on Software Engineering, 2017,43(3):272-297.[doi:10.1109/TSE.2016.2576454]
    [107] Mani S, Catherine R, Sinha VS, Dubey A. AUSUM:Approach for unsupervised bug report summarization. In:Proc. of the ACM SIGSOFT Int'l Symp. on the Foundations of Software Engineering. 2012. 1-11.[doi:10.1145/2393596.2393607]
    [108] Rastkar S, Murphy GC, Murray G. Automatic summarization of bug reports. IEEE Trans. on Software Engineering, 2014,40(4):366-380.[doi:10.1109/TSE.2013.2297712]
    [109] Bettenburg N, Just S, Schröter A, Weiss C, Premraj R, Zimmermann T. What makes a good bug report? In:Proc. of the ACM SIGSOFT Int'l Symp. on Foundations of Software Engineering. Atlanta, 2008. 308-318. http://dblp.uni-trier.de/db/conf/sigsoft/fse2008.html
    [110] Zhou J, Zhang H, Lo D. Where should the bugs be fixed? More accurate information retrieval-based bug localization based on bug reports. In:Proc. of the ACM/IEEE Int'l Conf. on Software Engineering. Zurich, 2012. 14-24. https://files.ifi.uzh.ch/icseweb/
    [111] Feng Y, Liu Q, Dou M, Liu J, Chen Z. Mubug:A mobile service for rapid bug tracking. Science China Information Sciences, 2016, 59(1):1-5.[doi:10.1007/s11432-015-5506-4]
    [112] Liao XK, Li SS, Dong W, Jia ZY, Liu XD, Zhou SL. Survey on log research of large scale software system. Ruan Jian Xue Bao/Journal of Software, 2016,27(8):1934-1947(in Chinese with English abstract). http://www.jos.org.cn/1000-9825/4936.htm[doi:10.13328/j.cnki.jos.004936]
    Chinese references:
    [3] 冯剑红,李国良,冯建华.众包技术研究综述.计算机学报,2015,38(9):1713-1725.[doi:10.11897/SP.J.1016.2015.01713]
    [20] 童咏昕,袁野,成雨蓉,陈雷,王国仁.时空众包数据管理技术研究综述.软件学报,2017,28(1):35-58. http://www.jos.org.cn/1000-9825/5140.htm[doi:10.13328/j.cnki.jos.005140]
    [85] 夏鑫,王新宇,杨小虎,David Lo.开源软件系统缺陷报告管理与分析.计算机学会通讯,2016,2:29-34.
    [90] 张志强,逄居升,谢晓芹,周永.众包质量控制策略及评估算法研究.计算机学报,2013,36(8):1636-1649.
    [93] 吴垚,曾菊儒,彭辉,陈红,李翠平.群智感知激励机制研究综述.软件学报,2016,27(8):2025-2047. http://www.jos.org.cn/1000-9825/5049.htm[doi:10.13328/j.cnki.jos.005049]
    [112] 廖湘科,李姗姗,董威,贾周阳,刘晓东,周书林.大规模软件系统日志研究综述.软件学报,2016,27(8):1934-1947. http://www.jos.org.cn/1000-9825/4936.htm[doi:10.13328/j.cnki.jos.004936]
Cite this article:

Zhang XF, Feng Y, Liu D, Chen ZY, Xu BW. Research progress of crowdsourced software testing. Ruan Jian Xue Bao/Journal of Software, 2018,29(1):69-88 (in Chinese with English abstract).
History
  • Received:2017-06-23
  • Revised:2017-08-01
  • Published online:2017-10-09
Copyright: Institute of Software, Chinese Academy of Sciences
Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190
Tel: 010-62562563 Fax: 010-62562533 Email: jos@iscas.ac.cn