Abstract:

At present, the sequential minimal optimization (SMO) algorithm is an efficient method for training large-scale support vector machines (SVMs). However, the feasible-direction strategy for selecting working sets may degrade the performance of the kernel cache maintained in SMO. After interpreting SMO as a feasible-direction method in traditional optimization theory, a novel working-set selection strategy for SMO is presented. Building on the original feasible-direction selection strategy, the new method takes into account both the reduction of the objective function and the computational cost associated with the selected working set, so as to improve the efficiency of the kernel cache. Experiments on well-known data sets show that the number of kernel function evaluations and the training time are greatly reduced, especially for problems with many samples, many support vectors, and many non-bound support vectors.
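As a rough illustration of the idea summarized above, the sketch below combines the standard maximal-violating-pair (feasible-direction) working-set selection used in SMO with a simple penalty on candidate indices whose kernel rows are not already cached, so that the chosen pair tends to require fewer fresh kernel-column evaluations. The function, its `miss_penalty` weight, and the scoring rule are illustrative assumptions, not the selection criterion proposed in the paper.

```python
import numpy as np

def select_working_set(grad, y, alpha, C, cached_rows, miss_penalty=0.5, tol=1e-3):
    """Illustrative cache-aware working-set selection for SMO (binary SVM dual).

    grad         : gradient of the dual objective w.r.t. alpha, shape (n,)
    y            : labels in {-1, +1}, shape (n,)
    alpha        : current dual variables, shape (n,)
    C            : box constraint
    cached_rows  : set of indices whose kernel rows are already in the cache
    miss_penalty : hypothetical weight trading KKT violation against the cost
                   of computing an uncached kernel row (not from the paper)
    """
    # Index sets of the standard feasible-direction / maximal-violating-pair rule.
    up = np.where(((alpha < C) & (y > 0)) | ((alpha > 0) & (y < 0)))[0]
    low = np.where(((alpha < C) & (y < 0)) | ((alpha > 0) & (y > 0)))[0]
    if up.size == 0 or low.size == 0:
        return None

    # KKT-violation scores: i maximizes and j minimizes -y_t * grad_t.
    up_score = -y[up] * grad[up]
    low_score = -y[low] * grad[low]

    # Cache-awareness: make candidates with uncached kernel rows less attractive.
    up_score -= miss_penalty * np.array([t not in cached_rows for t in up], dtype=float)
    low_score += miss_penalty * np.array([t not in cached_rows for t in low], dtype=float)

    i = up[np.argmax(up_score)]
    j = low[np.argmin(low_score)]

    # If the (penalized) violation gap is small, signal that no pair is selected.
    if up_score.max() - low_score.min() < tol:
        return None
    return i, j
```

In a full solver this selection would sit inside the usual SMO loop, with the gradient updated incrementally after each two-variable sub-problem and the cache refreshed with the kernel rows of the chosen pair.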

Get Citation

Li Jianmin, Zhang Bo, Lin Fuzong. Improved algorithm of sequential minimal optimization. Journal of Software, 2003,14(5):918-924.

History
  • Received: January 07, 2002
  • Revised: August 13, 2002