1. Xi'an Technological University; 2. Air Force Engineering University
Forgetting is the biggest problem artificial neural networks face in incremental learning, known as "catastrophic forgetting", whereas humans can continuously acquire new knowledge with little forgetting of old knowledge. This human capacity for continuous "incremental learning" with minimal forgetting is related to the brain's partitioned learning structure and memory-replay ability. To simulate this structure and ability, this paper proposes an incremental learning method, "Avoiding recency bias with Self-learning mask Partitioned Incremental Learning", abbreviated ASPIL. ASPIL is a two-stage strategy that alternates "region isolation" and "region integration" to achieve continuous class-incremental learning. First, a "BN-sparse region isolation" method is proposed, which isolates the new learning process from existing knowledge and avoids interfering with it. For "region integration", Self-Learning Mask (SLM) and dual-branch fusion (GBF) methods are proposed: SLM accurately extracts new knowledge and improves the network's adaptability to it, while GBF fuses old and new knowledge to establish a unified, high-precision cognition. During training, to further account for old knowledge and avoid a preference for new knowledge, a margin-loss regularization term is proposed to counter the "recency bias" problem. To evaluate the proposed methods, ablation experiments are performed systematically on the standard incremental-learning benchmarks CIFAR-100 and miniImageNet, with comparisons against a series of recent state-of-the-art methods. The experimental results show that the proposed method improves the memory ability of artificial neural networks, increasing the recognition rate by more than 5.27% on average over the latest well-known methods.
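The margin-loss idea for suppressing recency bias can be sketched as follows. This is a minimal illustrative assumption, not the paper's exact formulation: the function name `margin_loss`, the hinge form, and the margin value are hypothetical, and the sketch only assumes that classes seen in earlier tasks occupy the first `old_class_count` logit positions.

```python
import numpy as np

def margin_loss(logits, labels, old_class_count, margin=0.5):
    """Hypothetical margin regularizer for class-incremental learning.

    For samples belonging to old classes, penalize any new-class logit
    that comes within `margin` of the true old-class logit, discouraging
    the network's tendency to prefer recently learned classes.
    """
    penalty = 0.0
    n_old_samples = 0
    for logit, y in zip(logits, labels):
        if y < old_class_count:               # sample is from an old class
            true_score = logit[y]             # logit of the correct old class
            new_scores = logit[old_class_count:]  # logits of new classes
            # hinge: new-class logits should stay `margin` below the true logit
            penalty += np.maximum(0.0, new_scores - true_score + margin).sum()
            n_old_samples += 1
    return penalty / max(n_old_samples, 1)
```

Under this sketch, the regularizer is zero when every new-class logit already trails the correct old-class logit by at least the margin, so it only activates when recency bias begins to crowd out old knowledge.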