Exploration and Improvement of Capabilities of LLMs in Code Refinement Task
Authors: 王志鹏, 何铁科, 赵若愚, 郑滔

Corresponding author: 何铁科, E-mail: hetieke@nju.edu.cn

CLC number: TP311

Fund project: National Natural Science Foundation of China (62306137)


Abstract:

As a crucial part of automated code review, the code refinement task helps improve development efficiency and code quality. Since large language models (LLMs) have shown far better performance than traditional small-scale pre-trained models in software engineering, this study explores how the two types of models perform on the automatic code refinement task, so as to assess the overall advantages of LLMs. Traditional code quality evaluation metrics (e.g., BLEU, CodeBLEU, and edit progress) are used to evaluate four mainstream LLMs and four representative small-scale pre-trained models on the code refinement task. The results show that the refinement quality of LLMs on the pre-review code refinement subtask is inferior to that of small-scale pre-trained models. Because existing code quality evaluation metrics can hardly explain this phenomenon, this study proposes Unidiff-based code refinement evaluation metrics that quantify the change operations performed during refinement, in order to explain the inferiority and reveal each model’s tendency when performing change operations: (1) the pre-review code refinement task is rather difficult; all models achieve extremely low accuracy in performing correct change operations, and LLMs are more “aggressive” than small-scale pre-trained models, that is, they tend to perform more code change operations, which leads to their poor performance; (2) compared with small-scale pre-trained models, LLMs tend to perform more insertion (ADD) and modification (MODIFY) change operations on the code refinement task, and their ADD operations insert more code lines on average, which further confirms their “aggressive” nature. To alleviate the disadvantage of LLMs on the pre-review refinement task, this study proposes LLM-Voter, a method based on LLMs and ensemble learning that comprises two sub-schemes, Inference-based (selection by model inference) and Confidence-based (selection by confidence), and aims to integrate the strengths of different base models to improve code refinement quality. On this basis, a refinement determination mechanism is further introduced to enhance the decision stability and reliability of the method. Experiments show that the Confidence-based LLM-Voter method substantially increases the exact match (EM) value while achieving refinement quality superior to that of all base models, thus effectively alleviating the disadvantage of LLMs.
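The abstract names the Unidiff-based metrics only by intent, but the underlying idea of quantifying change operations can be sketched with Python’s standard library alone: compute a unified diff between the pre-review code and the model output, group consecutive removed/added lines into DELETE, ADD, and MODIFY operations, and track the average number of lines inserted per ADD. The `change_operations` helper below is an illustrative reconstruction under these assumptions, not the paper’s implementation.

```python
import difflib
from collections import Counter

def change_operations(before: str, after: str):
    """Count ADD / DELETE / MODIFY operations in a unified diff.

    A run of consecutive '+' lines counts as one ADD, a run of '-' lines
    as one DELETE, and a run mixing both as one MODIFY, which is one
    plausible way to quantify a model's change-operation tendency.
    """
    diff = difflib.unified_diff(
        before.splitlines(), after.splitlines(), n=0, lineterm=""
    )
    ops = Counter()
    add_sizes = []              # lines inserted by each ADD operation
    removed = added = 0

    def flush():
        nonlocal removed, added
        if removed and added:
            ops["MODIFY"] += 1
        elif added:
            ops["ADD"] += 1
            add_sizes.append(added)
        elif removed:
            ops["DELETE"] += 1
        removed = added = 0

    for line in diff:
        if line.startswith("+") and not line.startswith("+++"):
            added += 1
        elif line.startswith("-") and not line.startswith("---"):
            removed += 1
        else:                   # file/hunk headers close the current run
            flush()
    flush()
    avg_add_lines = sum(add_sizes) / len(add_sizes) if add_sizes else 0.0
    return ops, avg_add_lines
```

On the same inputs, an “aggressive” model would show larger `ops["ADD"]` and `ops["MODIFY"]` counts and a higher `avg_add_lines` than a conservative one. The Confidence-based LLM-Voter sub-scheme can likewise be pictured as pooling identical candidate refinements from the base models and letting their combined confidence decide, with a threshold standing in for the refinement determination mechanism; the function name `confidence_vote`, the normalized [0, 1] confidence scores, and the threshold value below are all hypothetical assumptions for illustration, not the method reported in the paper.

```python
from collections import defaultdict

def confidence_vote(candidates, keep_threshold=0.5):
    """Pick a refinement by pooled confidence, or keep the input code.

    candidates: dict mapping a base model name to a tuple of
    (refined_code, confidence), where confidence is assumed to be a
    normalized score in [0, 1] (e.g., from mean token log-probability).
    """
    pooled = defaultdict(float)
    total = 0.0
    for refined_code, confidence in candidates.values():
        pooled[refined_code.strip()] += confidence
        total += confidence
    best_code, best_mass = max(pooled.items(), key=lambda kv: kv[1])
    # Refinement determination: refine only when the winning group holds
    # a clear share of the total confidence; otherwise keep the input.
    if total == 0 or best_mass / total < keep_threshold:
        return None             # caller keeps the pre-review code as-is
    return best_code
```

For example, `confidence_vote({"m1": (fix_a, 0.9), "m2": (fix_a, 0.7), "m3": (fix_b, 0.8)})` pools 1.6 for `fix_a` against 0.8 for `fix_b` and returns `fix_a`; returning `None` when no group dominates mirrors the stability goal of the determination mechanism.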

Cite this article:

王志鹏, 何铁科, 赵若愚, 郑滔. Exploration and Improvement of Capabilities of LLMs in Code Refinement Task. 软件学报 (Journal of Software), 2025, 36(6): 2477-2500.

History
  • Received: 2024-08-25
  • Revised: 2024-10-14
  • Published online: 2024-12-10