Reward of Reinforcement Learning of Test Optimization for Continuous Integration
Author:
Affiliation:

Author Biographies:

HE Liu-Liu (1995-), female, born in Dazhou, Sichuan, M.S. candidate, CCF student member; her research interests include software testing and continuous integration testing. LI Zheng (1974-), male, Ph.D., professor, Ph.D. supervisor, CCF senior member; his research interests include search-based software engineering, software testing, and source code analysis. YANG Yang (1991-), female, Ph.D. candidate, CCF student member; her research interests include software engineering, software testing, and continuous integration testing. ZHAO Rui-Lian (1964-), female, Ph.D., professor, Ph.D. supervisor, CCF senior member; her research interest is software testing.

Corresponding Author:

LI Zheng, E-mail: lizheng@mail.buct.edu.cn

CLC Number:

Fund Project:

National Natural Science Foundation of China (61872026, 61472025, 61672085)



Abstract:

Testing in a continuous integration environment is characterized by constantly changing test suites, limited testing time, and the need for fast feedback, so traditional test optimization methods are difficult to apply. Reinforcement learning is an important branch of machine learning; its essence is solving sequential decision problems, so it can be applied to test optimization in continuous integration. However, in existing reinforcement learning based methods, the reward function is computed only from the execution information of test cases in the current integration cycle. This study investigates two aspects: reward function design and reward strategy. For reward function design, the complete historical execution information of a test case replaces its current execution information, and two reward functions are proposed that consider the total number of historical failures and the historical failure distribution of the test case, respectively. For the reward strategy, two strategies are proposed: an overall reward for all test cases in the current execution sequence, and a partial reward only for the failed test cases. Experiments are conducted on three industrial-scale programs under test. The results show that: (1) compared with existing methods, the proposed reinforcement learning methods based on reward functions with complete historical execution information can greatly improve the fault detection capability of test sequences in continuous integration; (2) the historical failure distribution of test cases helps to identify potentially failing test cases and is more important to the design of the reward function in reinforcement learning; (3) the overall and partial reward strategies are both affected by various factors of the program under test, so the reward strategy should be selected according to the actual situation; and (4) reward functions that include historical information increase time consumption but do not affect test efficiency.
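To make the mechanism described in the abstract concrete, the following Python sketch illustrates one possible form of the two history-based reward functions (total historical failures vs. historical failure distribution) and of the two reward strategies (overall vs. partial reward). The data layout, function names, normalization, and recency-weighting formula are illustrative assumptions for this sketch, not the exact definitions used in the paper.

# Illustrative sketch only: the concrete formulas are assumptions, not the
# paper's exact definitions. It shows the idea of rewarding test cases from
# their complete historical execution information rather than only from the
# current CI cycle.

from typing import Callable, Dict, List

# history[test_id] holds the verdicts of a test case over past CI cycles,
# ordered oldest to newest: 1 = failed, 0 = passed.
History = Dict[str, List[int]]


def reward_failure_count(history: History, test_id: str) -> float:
    """Reward based on the total number of historical failures
    (hypothetical form, normalized to [0, 1] by history length)."""
    verdicts = history.get(test_id, [])
    if not verdicts:
        return 0.0
    return sum(verdicts) / len(verdicts)


def reward_failure_distribution(history: History, test_id: str) -> float:
    """Reward based on the historical failure distribution (hypothetical
    form): recent failures are weighted more heavily than old ones."""
    verdicts = history.get(test_id, [])
    if not verdicts:
        return 0.0
    n = len(verdicts)
    # Linearly increasing weights so failures in recent cycles dominate.
    weighted = sum((i + 1) * v for i, v in enumerate(verdicts))
    return weighted / sum(range(1, n + 1))


def assign_rewards(history: History,
                   executed: List[str],
                   current_verdicts: Dict[str, int],
                   reward_fn: Callable[[History, str], float],
                   partial: bool) -> Dict[str, float]:
    """Apply one of the two reward strategies to the current execution
    sequence.

    partial=False: overall reward, every executed test case is rewarded.
    partial=True:  partial reward, only test cases that failed in the
                   current cycle receive a (history-based) reward.
    """
    rewards = {}
    for test_id in executed:
        if partial and current_verdicts.get(test_id, 0) == 0:
            rewards[test_id] = 0.0
        else:
            rewards[test_id] = reward_fn(history, test_id)
    return rewards


if __name__ == "__main__":
    history = {"t1": [1, 0, 1, 1], "t2": [0, 0, 0, 1], "t3": [0, 0, 0, 0]}
    current = {"t1": 1, "t2": 0, "t3": 0}
    print(assign_rewards(history, ["t1", "t2", "t3"], current,
                         reward_failure_distribution, partial=True))

In this sketch, the returned rewards would feed the agent's update for prioritizing the next cycle's test sequence; switching between the two strategies only changes whether passing test cases in the current sequence also receive their history-based reward.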

Cite this article:

HE Liu-Liu, YANG Yang, LI Zheng, ZHAO Rui-Lian. Reward of reinforcement learning of test optimization for continuous integration. Ruan Jian Xue Bao/Journal of Software, 2019, 30(5): 1438-1449 (in Chinese).

History
  • Received: 2018-08-29
  • Revised: 2018-10-31
  • Published online: 2019-05-08