Recency-bias-avoiding Partitioned Incremental Learning Based on Self-learning Mask
Authors:

姚红革, 邬子逸, 马姣姣, 石俊, 程嗣怡, 陈游, 喻钧, 姜虹

Affiliation:

About the authors:

  • 姚红革 (1968-), male, Ph.D., associate professor; main research interests: artificial intelligence, computer vision
  • 邬子逸 (1996-), male, master's student; main research interests: meta reinforcement learning, few-shot class-incremental learning
  • 马姣姣 (1997-), female, master's student; main research interests: machine learning, computer vision
  • 石俊 (1972-), male, Ph.D., lecturer; main research interests: machine learning, computer vision, UAV control
  • 程嗣怡 (1980-), male, Ph.D.; main research interests: machine learning, electronic countermeasure theory and technology
  • 陈游 (1983-), male, Ph.D., associate professor; main research interests: radar signal processing, information countermeasure theory
  • 喻钧 (1970-), female, M.S., professor; main research interests: image processing and pattern recognition, computer network and information security, wireless sensor networks
  • 姜虹 (1977-), female, Ph.D., associate professor; main research interests: software engineering, image processing

Corresponding author:

邬子逸, E-mail: wuziyi@st.xatu.edu.cn

CLC number:

TP18

Fund project:



Abstract:

Forgetting is the biggest problem of artificial neural networks in incremental learning and is therefore called "catastrophic forgetting". Humans, in contrast, can continuously acquire new knowledge while retaining most of the frequently used old knowledge. This human ability to learn incrementally without extensive forgetting is related to the partitioned learning structure and memory replay ability of the human brain. To simulate this structure and ability, this study proposes recency-bias-avoiding partitioned incremental learning based on a self-learning mask, ASPIL for short. ASPIL involves two stages, region isolation and region integration, which are alternately iterated to achieve continual incremental learning. First, a BN-based sparse region isolation method is proposed to isolate each new learning process from the existing knowledge and thereby avoid interference with it. For region integration, the self-learning mask (SLM) and dual-branch fusion (GBF) methods are proposed: SLM accurately extracts new knowledge and improves the network's adaptability to it, while GBF fuses the old and new knowledge to build a unified, high-precision cognition. During training, a margin-loss regularization term is introduced to avoid the "recency bias" problem, i.e., to further account for the old knowledge and avoid a bias toward the new. To evaluate the proposed methods, systematic ablation experiments are conducted on the standard incremental learning benchmarks CIFAR-100 and miniImageNet, and the approach is compared with a series of well-known state-of-the-art methods. The results show that the proposed method improves the memory ability of artificial neural networks and outperforms the latest well-known methods by more than 5.27% in average recognition rate.

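Two minimal sketches below make the abstract's key mechanisms concrete. Both are illustrative assumptions, not the authors' implementation. The first assumes that "BN" refers to batch normalization (the abstract does not expand the acronym) and shows the network-slimming-style L1 penalty on BN scale factors commonly used to carve sparse, isolatable sub-regions out of a network; the names bn_sparsity_penalty and lam are hypothetical.

```python
import torch.nn as nn

def bn_sparsity_penalty(model: nn.Module, lam: float = 1e-4):
    """L1 penalty on BatchNorm scale factors (network-slimming style).

    Driving many BN gammas toward zero leaves each task occupying a
    sparse sub-region of the network. A generic sketch of the idea,
    not the paper's exact "BN-based sparse region isolation".
    """
    reg = 0.0
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            reg = reg + m.weight.abs().sum()  # |gamma| promotes channel sparsity
    return lam * reg
```

The second sketch shows one common form of margin penalty against recency bias: on replayed samples from old classes, no new-class logit may rise to within a margin of (or above) the ground-truth old-class logit. The hinge form, the margin value, and the name margin_reg are assumptions; the paper's margin-loss regularization term may differ.

```python
import torch
import torch.nn.functional as F

def margin_reg(logits, targets, num_old_classes, margin=0.5):
    """Hinge-style margin penalty against recency bias (illustrative).

    Active only on samples whose label is an old class: penalizes the
    strongest new-class logit whenever it gets within `margin` of the
    true old-class logit.
    """
    old_mask = targets < num_old_classes
    if not old_mask.any():
        return logits.new_zeros(())
    lg = logits[old_mask]                                  # (B_old, C)
    true_logit = lg.gather(1, targets[old_mask].unsqueeze(1)).squeeze(1)
    new_max = lg[:, num_old_classes:].max(dim=1).values    # best new-class logit
    return F.relu(new_max - true_logit + margin).mean()

# Usage in a training step (sketch):
#   loss = F.cross_entropy(logits, targets) \
#        + lam * margin_reg(logits, targets, num_old_classes)
```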

Cite this article:

姚红革, 邬子逸, 马姣姣, 石俊, 程嗣怡, 陈游, 喻钧, 姜虹. Recency-bias-avoiding Partitioned Incremental Learning Based on Self-learning Mask. 软件学报 (Journal of Software), 2024, 35(7): 3428-3453

History:
  • Received: 2022-08-31
  • Revised: 2023-01-15
  • Published online: 2023-09-13