Towards Crowd Worker Selection for Crowdsourced Testing Task
Author: Cui Qiang, Wang Junjie, Xie Miao, Wang Qing
Affiliation:

Fund Project: National Natural Science Foundation of China (61602450, 61432001)

    Abstract:

    Crowdsourced testing is an emerging trend in software testing that relies on crowd workers to accomplish test tasks. Who performs a test task is therefore critical to detecting bugs and covering the key points of the test requirements. There are many candidate crowd workers with varying testing experience, and because they work without coordination, they may submit duplicate test reports for the same task. Since crowd workers participate in test tasks freely, high-quality testing, in terms of both bug detection and coverage of the key points of test requirements, is not guaranteed. Selecting an appropriate subset of workers to perform a test task is thus becoming an important problem. In this paper, three motivating studies are first conducted to investigate which characteristics of workers matter for detecting bugs and covering key points of test requirements. The studies identify three such aspects: initiative, relevance, and diversity. Based on these three aspects, a novel worker selection approach is proposed. The approach is evaluated on 46 real test tasks from Baidu CrowdTest, and the experimental results demonstrate its effectiveness.
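    The paper itself presents no code, but a small sketch may help make the selection idea concrete. The following is a minimal, hypothetical greedy loop that scores each candidate by a weighted sum of the three aspects; every name and scoring choice here (Worker, initiative, relevance, diversity_gain, the weights w1-w3, select_workers) is an illustrative assumption, not the authors' actual algorithm. Diversity is modeled as the fraction of a worker's past topics not yet covered by the already-selected set, so it is re-scored after each pick.

        from dataclasses import dataclass, field

        @dataclass
        class Worker:
            """A candidate crowd worker with illustrative, precomputed scores."""
            name: str
            initiative: float   # e.g., historical activity level, normalized to [0, 1]
            relevance: float    # e.g., similarity of past reports to this task, in [0, 1]
            topics: set = field(default_factory=set)  # topics the worker covered before

        def diversity_gain(worker, covered):
            """Fraction of the worker's topics not yet covered by the selected set."""
            if not worker.topics:
                return 0.0
            return len(worker.topics - covered) / len(worker.topics)

        def select_workers(candidates, k, w1=1.0, w2=1.0, w3=1.0):
            """Greedily pick k workers, re-scoring diversity after every pick."""
            selected, covered = [], set()
            pool = list(candidates)
            while pool and len(selected) < k:
                best = max(
                    pool,
                    key=lambda w: w1 * w.initiative
                                + w2 * w.relevance
                                + w3 * diversity_gain(w, covered),
                )
                selected.append(best)
                covered |= best.topics
                pool.remove(best)
            return selected

        # Usage: pick 2 of 3 candidates for a task.
        team = select_workers(
            [Worker("a", 0.9, 0.3, {"login"}),
             Worker("b", 0.5, 0.8, {"payment", "search"}),
             Worker("c", 0.4, 0.7, {"payment"})],
            k=2,
        )

    Subset-selection problems of this kind are in general NP-hard, so a greedy approximation such as this sketch is a common practical compromise; the approach actually proposed in the paper may differ substantially.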

Get Citation

Cui Q, Wang JJ, Xie M, Wang Q. Towards crowd worker selection for crowdsourced testing task. Journal of Software, 2018, 29(12): 3648-3664 (in Chinese with English abstract).

History
  • Received: December 20, 2016
  • Revised: March 10, 2017
  • Online: December 05, 2018