Cross-media Deep Fine-grained Correlation Learning

About the authors:

Zhuo Yunkan (1995-), male, born in Ningde, Fujian; bachelor's degree; main research interests: cross-media analysis and retrieval. Peng Yuxin (1974-), male, Ph.D., professor, doctoral supervisor, CCF senior member; main research interests: cross-media analysis and reasoning, image and video understanding and retrieval, and computer vision. Qi Jinwei (1994-), male, bachelor's degree; main research interests: cross-media analysis and retrieval.

Corresponding author:

Peng Yuxin, E-mail: pengyuxin@pku.edu.cn

Fund project:

National Natural Science Foundation of China (61771025, 61532005)




    Abstract:

    With the rapid development of the Internet and multimedia technology, data on the Internet has expanded from text alone to images, video, audio, 3D models, and other media types, making cross-media retrieval a new trend in information retrieval. However, the "heterogeneity gap" leads to inconsistent representations across media types, so the similarity between data of any two media types cannot be measured directly, which makes retrieval across multiple media types highly challenging. With recent advances in deep learning, the strong nonlinear modeling ability of deep neural networks holds promise for bridging the representation barriers between media types. However, most existing deep-learning-based methods focus only on the pairwise correlation between two media types, such as image and text, and are difficult to extend to scenarios with more media types. To address this problem, the Deep Fine-grained Correlation Learning (DFCL) approach is proposed, which supports cross-media retrieval over up to five media types (image, video, text, audio, and 3D model). First, a cross-media recurrent neural network is proposed to jointly model the fine-grained information of up to five media types, fully exploiting the internal details and contextual information within each media type. Second, a cross-media joint correlation loss is proposed, which combines distribution alignment and semantic alignment to exploit both intra-media and inter-media fine-grained correlations, and further enhances semantic discrimination with semantic category information, thereby improving the accuracy of cross-media retrieval. Extensive experiments are conducted on two cross-media datasets, PKU XMedia and PKU XMediaNet, each containing five media types, and the results verify the effectiveness of the proposed approach.
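The joint correlation loss described above can be sketched roughly as follows. This is only an illustrative sketch, not the paper's exact formulation: the function names are hypothetical, semantic alignment is stood in for by a per-media cross-entropy over shared category labels, and distribution alignment is stood in for by a simple linear-kernel MMD (squared distance between media-wise feature means) summed over all media pairs in the common space.

```python
import numpy as np

def semantic_alignment_loss(logits, labels):
    """Cross-entropy of one media type's classifier outputs against shared
    semantic category labels (stand-in for the semantic alignment term)."""
    z = logits - logits.max(axis=1, keepdims=True)          # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def distribution_alignment_loss(feat_a, feat_b):
    """Linear-kernel MMD: squared distance between the mean common-space
    features of two media types (stand-in for distribution alignment)."""
    return float(np.sum((feat_a.mean(axis=0) - feat_b.mean(axis=0)) ** 2))

def joint_correlation_loss(features, logits, labels, alpha=1.0):
    """features/logits/labels are dicts keyed by media type, e.g.
    'image', 'video', 'text', 'audio', '3d'; alpha trades off the terms."""
    media = list(features)
    sem = sum(semantic_alignment_loss(logits[m], labels[m]) for m in media)
    dist = sum(distribution_alignment_loss(features[a], features[b])
               for i, a in enumerate(media) for b in media[i + 1:])
    return sem + alpha * dist

# toy usage: two media types, 2 samples each, 3-d common space, 2 classes
feats = {'image': np.zeros((2, 3)), 'text': np.ones((2, 3))}
lgts = {'image': np.array([[10.0, 0.0], [0.0, 10.0]]),
        'text': np.array([[10.0, 0.0], [0.0, 10.0]])}
lbls = {'image': np.array([0, 1]), 'text': np.array([0, 1])}
print(joint_correlation_loss(feats, lgts, lbls))
```

Minimizing such a loss pushes same-category samples of every media type toward consistent predictions while pulling the media-wise feature distributions together, which is the intuition behind combining semantic and distribution alignment.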

Cite this article:

Zhuo YK, Qi JW, Peng YX. Cross-media deep fine-grained correlation learning. Journal of Software, 2019, 30(4): 884-895.

History:
  • Received: 2018-04-16
  • Revised: 2018-06-13
  • Published online: 2019-04-01