Diverse and Authentic Task Generation Method for Robust Few-shot Classification
Authors: Liu Xin, Jing Liping, Yu Jian

Corresponding author: Jing Liping, E-mail: lpjing@bjtu.edu.cn

Fund projects: Fundamental Research Funds for the Central Universities (2019JBZ110); Beijing Natural Science Foundation (L211016); National Natural Science Foundation of China (62176020); National Key Research and Development Program of China (2020AAA0106800)




Abstract:

With the development of technologies such as big data, computing, and the Internet, artificial intelligence techniques represented by machine learning and deep learning have achieved tremendous success. In particular, the recent emergence of various large models has greatly accelerated the application of artificial intelligence in many fields. However, the success of these techniques relies heavily on massive training data and abundant computing resources, which significantly limits their application in domains where data or computing resources are scarce. Therefore, how to learn from a small number of samples, known as few-shot learning, has become a crucial research problem in the new wave of industrial transformation led by artificial intelligence. The most commonly used approach to few-shot learning is meta-learning. Such methods learn, from a series of related training tasks, meta-knowledge for solving this kind of task, and then use the acquired meta-knowledge to adapt quickly to new testing tasks. Although these methods achieve promising results on few-shot classification tasks, they assume that the training and testing tasks come from the same distribution. This implies that a sufficient number of training tasks is required for the model to generalize the learned meta-knowledge to continuously changing testing tasks. However, in some real-world scenarios where data are truly scarce, even an adequate number of training tasks is hard to guarantee. To address this issue, this study proposes a robust few-shot classification method based on diverse and authentic task generation (DATG). The method generates additional training tasks by applying Mixup to a small number of existing tasks, thereby aiding model learning. By constraining the diversity and authenticity of the generated tasks, the method effectively improves the generalization of few-shot classification. Specifically, the base classes in the training set are first clustered into different clusters, and tasks are then selected from different clusters for Mixup to increase the diversity of the generated tasks. In addition, the inter-cluster task Mixup strategy alleviates the learning of pseudo-discriminative features that are highly correlated with the categories. To ensure that the generated tasks do not deviate too far from the real distribution and mislead the model's learning, the maximum mean discrepancy (MMD) between the generated tasks and the real tasks is minimized, which guarantees the authenticity of the generated tasks. Finally, a theoretical analysis shows why the inter-cluster task Mixup strategy can improve the model's generalization performance. Experimental results on multiple datasets further demonstrate the effectiveness of the proposed method.
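To make the pipeline described in the abstract concrete, below is a minimal, self-contained sketch of its three ingredients: clustering base classes by their prototype features, generating a new task by inter-cluster task Mixup, and penalizing the maximum mean discrepancy (MMD) between generated and real task embeddings. The function names, hyperparameters (the Beta parameter alpha, the RBF bandwidth sigma), and the toy tensor shapes are illustrative assumptions, not the authors' released implementation.

# Illustrative sketch only; names and hyperparameters are assumptions, not the paper's code.
import numpy as np
import torch
from sklearn.cluster import KMeans


def cluster_base_classes(class_prototypes: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    # Cluster base classes by their mean feature (prototype); returns one cluster id per class.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(class_prototypes)


def inter_cluster_mixup(task_a: torch.Tensor, task_b: torch.Tensor, alpha: float = 2.0) -> torch.Tensor:
    # Convexly combine the images of two episodes sampled from different clusters.
    lam = float(np.random.beta(alpha, alpha))
    return lam * task_a + (1.0 - lam) * task_b


def mmd_loss(gen: torch.Tensor, real: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # RBF-kernel maximum mean discrepancy between generated and real task embeddings.
    def rbf(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return rbf(gen, gen).mean() + rbf(real, real).mean() - 2 * rbf(gen, real).mean()


if __name__ == "__main__":
    # Toy example: 20 base classes with 64-d prototypes; two 5-way episodes of 25 images each.
    cluster_ids = cluster_base_classes(np.random.randn(20, 64), n_clusters=4)
    task_a = torch.randn(25, 3, 84, 84)   # episode sampled from one cluster
    task_b = torch.randn(25, 3, 84, 84)   # episode sampled from a different cluster
    mixed_task = inter_cluster_mixup(task_a, task_b)
    feats_gen = torch.randn(25, 64)       # placeholder backbone embeddings of the mixed task
    feats_real = torch.randn(25, 64)      # placeholder backbone embeddings of a real task
    print(cluster_ids, mixed_task.shape, float(mmd_loss(feats_gen, feats_real)))

In a full training loop, the MMD term would presumably be added to the episodic classification loss with a trade-off weight, and Mixup would be applied to episodes whose classes come from different clusters, as the abstract describes; the toy tensors above only illustrate the shapes involved.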

Cite this article:

Liu X, Jing LP, Yu J. Diverse and authentic task generation method for robust few-shot classification. Journal of Software, 2024, 35(4): 1587-1600 (in Chinese).

History
  • Received: 2023-05-15
  • Revised: 2023-07-07
  • Online: 2023-09-11
  • Published: 2024-04-06