Pre-training Method for Enhanced Code Representation Based on Multimodal Contrastive Learning

Authors: Yang Hongyu, Ma Jianhui, Hou Min, Shen Shuanghong, Chen Enhong

Corresponding author: Ma Jianhui, E-mail: jianhui@ustc.edu.cn


    Abstract:

    Code representation aims to extract the features of source code to obtain its semantic embedding, and plays a crucial role in deep-learning-based code intelligence. Traditional handcrafted code representation methods rely on annotation by domain experts, which is time-consuming and labor-intensive; moreover, the resulting representations are task-specific and cannot be flexibly reused for downstream tasks, which contradicts the green and low-carbon development concept. Therefore, many large-scale self-supervised pre-trained models for programming languages (e.g., CodeBERT) have emerged in recent years, providing an effective way to obtain universal code representations. These models learn general code representations from massive source code through pre-training and are then fine-tuned on specific downstream tasks, achieving remarkable results. However, accurately representing the semantics of code requires fusing features at all abstraction levels: text level, semantic level, functional level, and structural level. Existing models treat programming languages merely as ordinary text sequences resembling natural language, ignoring their functional-level and structural-level features, which degrades performance. To further improve the accuracy of code representation, this study proposes REcomp (representation enhanced contrastive multimodal pre-training), a pre-trained model for code representation based on multimodal contrastive learning. REcomp designs a novel semantic-structural feature fusion algorithm for serializing abstract syntax trees, and integrates this composite feature with the text-level and functional-level features of programming languages through multimodal contrastive learning, enabling more precise semantic modeling. Finally, experiments on three real-world public datasets validate the effectiveness of REcomp in improving the accuracy of code representation.
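The abstract names two ingredients without detailing them: serializing an abstract syntax tree into a feature sequence, and a multimodal contrastive objective that pulls paired representations together. The paper's actual algorithm is not reproduced on this page; the following is only a rough, hypothetical Python sketch of those two ideas, assuming a preorder AST traversal for serialization and an InfoNCE-style loss for the contrastive objective (both common choices, neither confirmed by the abstract).

```python
# Hypothetical sketch, NOT the paper's REcomp implementation.
import ast
import math

def serialize_ast(source: str) -> list:
    """Serialize a Python AST by preorder traversal, emitting node types
    (structure-level tokens) plus identifier names (semantic-level tokens)."""
    tokens = []

    def visit(node):
        tokens.append(type(node).__name__)
        if isinstance(node, ast.Name):          # keep variable identifiers
            tokens.append(node.id)
        elif isinstance(node, ast.FunctionDef):  # keep function names
            tokens.append(node.name)
        for child in ast.iter_child_nodes(node):
            visit(child)

    visit(ast.parse(source))
    return tokens

def info_nce(sim_matrix, temperature=0.07):
    """InfoNCE-style contrastive loss: sim_matrix[i][j] is the similarity
    between modality-A sample i and modality-B sample j; the diagonal holds
    the positive (paired) entries, which the loss pushes to dominate."""
    n = len(sim_matrix)
    loss = 0.0
    for i in range(n):
        logits = [s / temperature for s in sim_matrix[i]]
        m = max(logits)  # max-subtraction for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += -(logits[i] - log_denom)
    return loss / n

tokens = serialize_ast("def add(a, b):\n    return a + b")
print(tokens[:4])  # structural prefix of the serialized AST
```

In this toy setting, a similarity matrix whose diagonal dominates (well-aligned code/text pairs) yields a lower loss than a uniform one, which is the signal a contrastive pre-training objective optimizes.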

Cite this article:

Yang HY, Ma JH, Hou M, Shen SH, Chen EH. Pre-training method for enhanced code representation based on multimodal contrastive learning. Journal of Software, 2024, 35(4): 1601-1617.
History
  • Received: 2023-05-15
  • Revised: 2023-07-07
  • Published online: 2023-09-11
  • Published: 2024-04-06
Copyright © Institute of Software, Chinese Academy of Sciences. Address: 4 South Fourth Street, Zhongguancun, Haidian District, Beijing 100190. E-mail: jos@iscas.ac.cn