Query-Aware Dual Contrastive Learning Network for Cross-modal Retrieval

Corresponding author: LIANG Mei-Yu, E-mail: meiyu1210@bupt.edu.cn

Funding: National Natural Science Foundation of China (62192784, U22B2038, 62172056, 62272058); Chinese Association for Artificial Intelligence (CAAI)-Huawei MindSpore Academic Reward Fund (CAAIXSJLJJ-2021-007B)



Abstract:

Recently, a new task named cross-modal video corpus moment retrieval (VCMR) has been proposed, which aims to retrieve a short video segment corresponding to a given query sentence from a corpus of unsegmented videos. The key to existing cross-modal video-text retrieval work lies in aligning and fusing features from different modalities. However, simply performing cross-modal alignment and fusion can neither ensure that semantically similar data from the same modality stay close in the joint feature space nor account for the semantics of the query. To address these problems, this study proposes a query-aware cross-modal dual contrastive learning network (QACLN) for multi-modal video moment retrieval, which obtains unified semantic representations of data from different modalities by combining inter-modal and intra-modal contrastive learning. Specifically, a query-aware cross-modal semantic fusion strategy is proposed, which adaptively fuses the multi-modal features of a video, such as its visual-modality and caption-modality features, according to the perceived query semantics, yielding a query-aware multi-modal joint representation of the video. In addition, an inter-modal and intra-modal dual contrastive learning mechanism for videos and queries is proposed to strengthen semantic alignment and fusion across modalities, improving the discriminability and semantic consistency of the data representations of different modalities. Finally, 1D convolutional boundary regression and cross-modal semantic similarity computation are employed to perform moment localization and video retrieval. Extensive experiments demonstrate that the proposed QACLN outperforms the baseline methods.
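To make the fusion strategy concrete, below is a minimal PyTorch sketch of query-conditioned fusion of a video's visual and caption (subtitle) features. The module name QueryAwareFusion, the shared linear scorer with a softmax gate over the two modalities, and all dimensions are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of query-aware fusion; names and dimensions are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class QueryAwareFusion(nn.Module):
    """Fuse per-clip visual and caption features, weighted by the query."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # Scores how relevant one modality's clip feature is to the query.
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, visual: torch.Tensor, subtitle: torch.Tensor,
                query: torch.Tensor) -> torch.Tensor:
        # visual, subtitle: (batch, num_clips, dim); query: (batch, dim)
        q = query.unsqueeze(1).expand_as(visual)            # query per clip
        s_v = self.score(torch.cat([visual, q], dim=-1))    # (B, T, 1)
        s_s = self.score(torch.cat([subtitle, q], dim=-1))  # (B, T, 1)
        w = torch.softmax(torch.cat([s_v, s_s], dim=-1), dim=-1)  # (B, T, 2)
        # Convex combination of the two modalities, conditioned on the query.
        return w[..., :1] * visual + w[..., 1:] * subtitle  # (B, T, dim)
```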

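The dual contrastive objective can likewise be illustrated with a standard InfoNCE formulation. This is a sketch under stated assumptions: in-batch negatives, a fixed temperature, and intra-modal positives taken to be augmented or semantically similar views of the same sample; the paper's actual positive and negative sampling may differ.

```python
# Sketch of inter-modal + intra-modal contrastive learning with InfoNCE.
# The intra-modal positives (video_pos, query_pos) are ASSUMED to be
# augmented/semantically similar views; the paper may define them differently.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over an in-batch similarity matrix; (i, i) are positives."""
    a = F.normalize(a, dim=-1)                  # a, b: (batch, dim)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                    # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def dual_contrastive_loss(video: torch.Tensor, query: torch.Tensor,
                          video_pos: torch.Tensor, query_pos: torch.Tensor,
                          alpha: float = 1.0) -> torch.Tensor:
    # Inter-modal term: align each video with its own query and vice versa.
    inter = info_nce(video, query)
    # Intra-modal terms: pull each sample toward a positive from the SAME modality.
    intra = info_nce(video, video_pos) + info_nce(query, query_pos)
    return inter + alpha * intra
```

With video and query holding one batch of fused video representations and encoded queries, the inter-modal term aligns the two modalities while the intra-modal terms keep semantically similar same-modality data close in the joint space.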

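Finally, the 1D convolutional boundary regression step can be sketched as a pair of temporal convolutions that score every clip as a candidate moment start or end; the kernel size, single-layer heads, and decoding rule below are assumptions for illustration only.

```python
# Hypothetical 1D-conv boundary head; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvBoundaryHead(nn.Module):
    """Score every clip as a candidate moment start/end with 1D convolutions."""

    def __init__(self, dim: int = 512, kernel_size: int = 5):
        super().__init__()
        pad = kernel_size // 2  # keep the temporal length unchanged
        self.start = nn.Conv1d(dim, 1, kernel_size, padding=pad)
        self.end = nn.Conv1d(dim, 1, kernel_size, padding=pad)

    def forward(self, fused: torch.Tensor):
        # fused: (batch, num_clips, dim); Conv1d expects (batch, dim, num_clips)
        x = fused.transpose(1, 2)
        start_logits = self.start(x).squeeze(1)  # (batch, num_clips)
        end_logits = self.end(x).squeeze(1)      # (batch, num_clips)
        return start_logits, end_logits
```

At inference, a moment could then be decoded by picking the pair (s, e) with s <= e that maximizes start_logits[s] + end_logits[e], while video retrieval ranks videos by the cross-modal semantic similarity between the query and each video's joint representation.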
Cite this article:

YIN Meng-Ran, LIANG Mei-Yu, YU Yang, CAO Xiao-Wen, DU Jun-Ping, XUE Zhe. Query-aware dual contrastive learning network for cross-modal retrieval. Journal of Software, 2024, 35(5): 2120-2132 (in Chinese).

History:
  • Received: 2023-03-26
  • Revised: 2023-06-08
  • Online: 2023-09-11
  • Published: 2024-05-06