Decomposition methods are the main family of algorithms for training support vector machines (SVMs) on large-scale problems. In many pattern classification problems, most support vectors' Lagrange multipliers are at bound, and these multipliers change smoothly during training. Based on these observations, this paper proposes an efficient caching strategy to accelerate decomposition methods, and uses it to improve Platt's sequential minimal optimization (SMO) algorithm. Experimental results show that the modified algorithm runs 2 to 3 times faster than the classical SMO on large real-world data sets.
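To make the idea concrete, the sketch below shows one common form of caching used inside SMO-style decomposition solvers: an LRU cache of kernel matrix rows, which avoids recomputing the expensive row K(i, :) each time example i is selected by the working-set heuristic. This is a generic illustration under stated assumptions, not the specific strategy proposed in the paper; the class name, member names, and the fixed row budget are all illustrative.

#include <list>
#include <unordered_map>
#include <utility>
#include <vector>

// Hypothetical LRU cache for kernel matrix rows, a common acceleration
// in SMO-style solvers (not necessarily the paper's proposed strategy).
// Rows K(i, :) are costly to evaluate, so recently used rows are kept
// in memory up to a fixed budget of max_rows.
class KernelRowCache {
public:
    explicit KernelRowCache(std::size_t max_rows) : max_rows_(max_rows) {}

    // Returns a pointer to the cached row for example i, or nullptr on miss.
    const std::vector<double>* get(int i) {
        auto it = index_.find(i);
        if (it == index_.end()) return nullptr;
        // Promote the row to most-recently-used position.
        lru_.splice(lru_.begin(), lru_, it->second);
        return &it->second->second;
    }

    // Stores a freshly computed row, evicting the least recently used
    // row when the cache is at capacity.
    void put(int i, std::vector<double> row) {
        if (index_.count(i)) return;  // already cached
        if (lru_.size() >= max_rows_) {
            index_.erase(lru_.back().first);  // evict LRU entry
            lru_.pop_back();
        }
        lru_.emplace_front(i, std::move(row));
        index_[i] = lru_.begin();
    }

private:
    std::size_t max_rows_;
    std::list<std::pair<int, std::vector<double>>> lru_;
    std::unordered_map<int,
        std::list<std::pair<int, std::vector<double>>>::iterator> index_;
};

A caching layer of this kind pays off precisely because of the behavior the abstract describes: when most multipliers sit at bound and change smoothly, the working set revisits a small set of examples, so cached rows are reused many times before eviction.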