An Incremental BiCovering Learning Algorithm for Constructive Neural Networks

Authors: TAO Pin, ZHANG Bo, YE Zhen

Funding: Supported by the National Natural Science Foundation of China under Grant No. 60135010 and the National Grand Fundamental Research 973 Program of China under Grant No. G1998030509.
    Abstract:

    This paper studies the BiCovering algorithm (BiCA), an incremental learning algorithm for cover-based constructive neural networks (CBCNN). Following the basic idea of CBCNN, the algorithm constructs multiple clusters of positive and negative covers so that, after the network has first been constructed, its parameters and structure can be modified and optimized continuously, with nodes added to or deleted from the network as needed; the network thereby learns incrementally. Analysis shows that BiCA not only preserves the advantages and characteristics of CBCNN but also realizes incremental learning and improves the generalization ability of the network. Simulation results show that the algorithm learns quickly and stably even when the initial classification performance of the network is poor, and that it is insensitive to the order in which samples are presented.

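    The abstract describes covers only at a high level. The minimal sketch below (plain Python with NumPy, written for this page and not taken from the paper) illustrates the kind of mechanism it describes, in the spirit of the geometric M-P neuron representation behind CBCNN: samples are lifted onto a sphere, each hidden node realizes a spherical-cap "cover" with a center and an inner-product threshold, and incremental learning either grows a same-class cover or adds a new node when a sample is misclassified, pruning nodes that cover nothing. All names (Cover, lift_to_sphere, learn_one, prune), the growth test, and the margin constant are illustrative assumptions, not the published BiCA rules.

```python
# Illustrative sketch only -- assumed details, not the published BiCA code.
import numpy as np

class Cover:
    """One 'cover': a spherical cap, equivalent to a single M-P neuron.
    A lifted sample x is covered when <center, x> > theta."""
    def __init__(self, center, theta, label):
        self.center = center      # cap center on the sphere
        self.theta = theta        # inner-product threshold (cap size)
        self.label = label        # +1 for a positive cover, -1 for a negative one

    def contains(self, x):
        return float(self.center @ x) > self.theta

def lift_to_sphere(X, radius):
    """Lift d-dimensional samples onto a (d+1)-sphere of the given radius
    (radius should exceed the largest sample norm), so caps are well defined."""
    extra = np.sqrt(np.maximum(radius ** 2 - np.sum(X ** 2, axis=1), 0.0))
    return np.hstack([X, extra[:, None]])

def classify(covers, x):
    """Label of the best-aligned cover containing x; 0 means 'uncovered'."""
    inside = [c for c in covers if c.contains(x)]
    best = max(inside, key=lambda c: float(c.center @ x), default=None)
    return best.label if best is not None else 0

def learn_one(covers, x, y, stored):
    """Incrementally absorb one lifted sample x with label y in {-1, +1}.
    `stored` is the list of (sample, label) pairs seen so far."""
    stored.append((x, y))
    if classify(covers, x) == y:
        return
    # Try to grow the best-aligned same-class cover, but only if the grown
    # cap would still exclude every stored opposite-class sample.
    same = [c for c in covers if c.label == y]
    if same:
        c = max(same, key=lambda c: float(c.center @ x))
        grown = float(c.center @ x) - 1e-9
        if all(float(c.center @ z) <= grown for z, t in stored if t != y):
            c.theta = min(c.theta, grown)
            if classify(covers, x) == y:
                return
    # Otherwise add a new cover (a new hidden node) centered at the sample;
    # its threshold sits just above the nearest opposite-class sample.
    opp = [float(x @ z) for z, t in stored if t != y]
    theta = max(opp) if opp else 0.0
    covers.append(Cover(center=x.copy(), theta=theta, label=y))

def prune(covers, stored):
    """Delete redundant nodes: covers containing no sample of their own class."""
    covers[:] = [c for c in covers
                 if any(t == c.label and c.contains(z) for z, t in stored)]

if __name__ == "__main__":
    # Toy usage: two linearly separable 2-D classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    r = float(np.max(np.linalg.norm(X, axis=1))) + 1.0
    S = lift_to_sphere(X, radius=r)
    covers, stored = [], []
    for x, t in zip(S, y):
        learn_one(covers, x, int(t), stored)
    prune(covers, stored)
    acc = np.mean([classify(covers, x) == t for x, t in zip(S, y)])
    print(f"covers: {len(covers)}, training accuracy: {acc:.2f}")
```

    In this reading, each cover corresponds to one hidden M-P neuron (weight vector = cap center, bias = threshold), so adding and pruning covers is exactly adding and deleting network nodes. Because the growth test consults all stored samples rather than only the most recent one, a scheme of this shape is plausibly insensitive to presentation order, consistent with the abstract's claim.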

Cite this article:

Tao P, Zhang B, Ye Z. An incremental BiCovering learning algorithm for constructive neural networks. Journal of Software, 2003,14(2):194-201 (in Chinese with English abstract).

History
  • Received: 2002-03-28
  • Revised: 2002-05-17