Analysis for Incremental and Decremental Standard Support Vector Machine

Authors: Gu Bin, Zheng Guan-Sheng, Wang Jian-Dong

Funding: Key Program of the National Natural Science Foundation of China (61139002, 61232016); Young Scientists Fund of the National Natural Science Foundation of China (61202137); Priority Academic Program Development of Jiangsu Higher Education Institutions; Scientific Research Start-up Fund of Nanjing University of Information Science and Technology (20110433)
Abstract:

Whenever the training data change, for example when some samples are added or removed, batch implementations of the standard support vector machine (SVM) must be retrained from scratch, which makes them unsuitable for online settings. To overcome this problem, Cauwenberghs and Poggio proposed an incremental and decremental support vector classification algorithm (the C&P algorithm). Through theoretical analysis, this paper proves the feasibility and finite convergence of the C&P algorithm. Feasibility guarantees that every adjustment step of the C&P algorithm is reliable, and finite convergence guarantees that the algorithm converges to the optimal solution of the problem after a finite number of adjustment steps. Finally, experimental results verify the conclusions of the theoretical analysis.
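For readers unfamiliar with the C&P procedure, the following is a brief sketch, written in standard SVM notation that may differ from the paper's, of the soft-margin SVM dual problem and the Karush-Kuhn-Tucker (KKT) conditions that each incremental or decremental adjustment step maintains.

Given training samples $(x_i, y_i)$ with $y_i \in \{-1, +1\}$, the dual of the standard soft-margin SVM is
\[
\min_{\alpha}\; \frac{1}{2}\sum_{i,j}\alpha_i Q_{ij}\alpha_j - \sum_i \alpha_i
\qquad \text{s.t.}\quad \sum_i y_i \alpha_i = 0,\;\; 0 \le \alpha_i \le C,
\qquad Q_{ij} = y_i y_j K(x_i, x_j).
\]
With $b$ the multiplier of the equality constraint, the KKT conditions partition the training set through the margin function
\[
g_i \;=\; \sum_j Q_{ij}\alpha_j + y_i b - 1
\;\;\begin{cases}
> 0 \;\Rightarrow\; \alpha_i = 0 & \text{(non-support vectors)}\\
= 0 \;\Rightarrow\; 0 \le \alpha_i \le C & \text{(margin support vectors)}\\
< 0 \;\Rightarrow\; \alpha_i = C & \text{(error support vectors).}
\end{cases}
\]
When a sample is added, the C&P algorithm drives its coefficient up from zero in small steps (and down to zero when a sample is removed), at each step updating $b$ and the coefficients of the margin support vectors by solving a linear system so that all other samples continue to satisfy the conditions above. The feasibility of each such step and termination after finitely many steps are exactly the properties analyzed in this paper.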

References
[1] Vapnik V. Statistical learning theory. In: Wiley Series on Adaptive and Learning Systems for Signal Processing, Communications, and Control. John Wiley & Sons, 1998.
[2] Lin CJ. On the convergence of the decomposition method for support vector machines. IEEE Trans. on Neural Networks, 2001,12(6):1288-1298. [doi: 10.1109/72.963765]
[3] Frieß TT, Cristianini N, Campbell C. The kernel-adatron algorithm: A fast and simple learning procedure for support vector machines. In: Proc. of the ICML'98. San Francisco: Morgan Kaufmann Publishers, 1998. 188-196.
[4] Tsang IW, Kwok JT, Lai KT. Core vector regression for very large regression problems. In: Proc. of the ICML 2005. 2005. 912-919. [doi: 10.1145/1102351.1102466]
[5] Zhang T. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In: Proc. of the ICML 2004. 2004. 919-926. [doi: 10.1145/1015330.1015332]
[6] Lee S, Wright SJ. ASSET: Approximate stochastic subgradient estimation training for support vector machines. In: Proc. of the Int'l Conf. on Pattern Recognition Applications and Methods. 2012. 223-228. http://pages.cs.wisc.edu/~sklee/papers/asset_icpram12.pdf
[7] Mangasarian OL, Musicant DR. Successive overrelaxation for support vector machines. IEEE Trans. on Neural Networks, 1999,10:1032-1037. [doi: 10.1109/72.788643]
[8] Joachims T, Finley T, Yu CNJ. Cutting-plane training of structural SVMs. Machine Learning, 2009,77(1):27-59. [doi: 10.1007/s10994-009-5108-8]
[9] Teo CH, Vishwanathan SVN, Smola AJ, Le QV. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 2010,11:311-365.
[10] Franc V, Sonnenburg S. Optimized cutting plane algorithm for support vector machines. In: Proc. of the Int'l Conf. on Machine Learning. 2008. 320-327. [doi: 10.1145/1390156.1390197]
[11] Cauwenberghs G, Poggio T. Incremental and decremental support vector machine learning. In: Advances in Neural Information Processing Systems 13. MIT Press, 2001. http://www.isn.ucsd.edu/pubs/nips00_inc.pdf
[12] Laskov P, Gehl C, Krüger S, Müller KR, Bennett K, Parrado-Hernández E. Incremental support vector learning: Analysis, implementation and applications. Journal of Machine Learning Research, 2006,7:1909-1936.
[13] Gu B, Wang JD, Chen HY. On-line off-line ranking support vector machine and analysis. In: Proc. of the Int'l Joint Conf. on Neural Networks (IJCNN 2008). IEEE Press, 2008. 1364-1369. [doi: 10.1109/IJCNN.2008.4633975]
[14] Karasuyama M, Takeuchi I. Multiple incremental decremental learning of support vector machines. IEEE Trans. on Neural Networks, 2010,21(7):1048-1059. [doi: 10.1109/TNN.2010.2048039]
[15] Gu B, Wang JD, Yu YC, Zheng GS, Huang YF, Xu T. Accurate on-line ν-support vector learning. Neural Networks, 2012,27:51-59. [doi: 10.1016/j.neunet.2011.10.006]
[16] Kuhn HW, Tucker AW. Nonlinear programming. In: Proc. of the 2nd Berkeley Symp. Berkeley: University of California Press, 1951. 481-492. http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.bsmsp/1200500249
[17] Boyd S, Vandenberghe L. Convex Optimization. Cambridge: Cambridge University Press, 2004.
[18] Ma JS, Theiler J, Perkins S. Accurate on-line support vector regression. Neural Computation, 2003,15(11):2683-2703. [doi: 10.1162/089976603322385117]
[19] Hastie T, Rosset S, Tibshirani R, Zhu J. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 2004,5:1391-1415.
[20] Gunter L, Zhu J. Efficient computation and model selection for the support vector regression. Neural Computation, 2007,19(6):1633-1655. [doi: 10.1162/neco.2007.19.6.1633]
Cite this article:

Gu B, Zheng GS, Wang JD. Analysis for incremental and decremental standard support vector machine. Journal of Software, 2013,24(7):1601-1613 (in Chinese).
History
  • Received: 2012-05-21
  • Revised: 2012-07-16
  • Published online: 2013-01-17