Abstract: At present, the sequential minimal optimization (SMO) algorithm is an efficient method for training large-scale support vector machines (SVMs). However, the feasible-direction strategy used to select working sets may degrade the performance of the kernel cache maintained by SMO. After interpreting SMO as a feasible-direction method from classical optimization theory, a novel working-set selection strategy for SMO is presented. Building on the original feasible-direction selection strategy, the new method considers both the reduction of the objective function and the computational cost associated with the selected working set, so as to improve the efficiency of the kernel cache. Experiments on well-known data sets show that kernel-function evaluations and training time are reduced greatly, especially for problems with many samples, support vectors, and non-bound support vectors.
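The trade-off described above can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the scoring rule, the `cache_miss_penalty` parameter, and the function name are illustrative assumptions. The idea is to prefer candidates with large KKT violation, while penalizing candidates whose kernel rows are not currently cached:

```python
def select_working_index(violations, cached, cache_miss_penalty=0.5):
    """Illustrative cache-aware working-set scoring (an assumption,
    not the paper's method).

    violations: list of KKT-violation magnitudes, one per sample.
    cached: set of sample indices whose kernel rows are in the cache.
    Returns the index maximizing (violation - penalty if uncached).
    """
    best_i, best_score = -1, float("-inf")
    for i, v in enumerate(violations):
        # Uncached rows require fresh kernel evaluations, so they
        # pay a fixed penalty in the selection score.
        score = v - (0.0 if i in cached else cache_miss_penalty)
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

With a sufficiently large violation, an uncached candidate can still win, so progress on the objective is not sacrificed entirely for cache hits.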