Using Analytic QP and Sparseness to Speed Training of Support Vector Machines

Training a support vector machine (SVM) requires the solution of a very large quadratic programming (QP) problem. This paper proposes an algorithm for training SVM's: Sequential Minimal Optimization, or SMO. SMO breaks the large QP problem into a series of smallest-possible QP problems, which are analytically solvable. Thus, SMO does not require a numerical QP library. SMO's computation time is dominated by evaluation of the kernel, hence kernel optimizations substantially quicken SMO. For the MNIST database, SMO is 1.7 times as fast as PCG chunking; while for the UCI Adult database and linear SVM's, SMO can be 1500 times faster than the PCG chunking algorithm.
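The smallest possible QP subproblem involves just two Lagrange multipliers, which SMO optimizes jointly in closed form. The following is a minimal sketch of that analytic pairwise update (the clipped Newton step along the equality constraint); variable names are illustrative and this is not Platt's reference implementation:

```python
def smo_pair_update(alpha1, alpha2, y1, y2, E1, E2, k11, k22, k12, C):
    """Analytically optimize one pair of Lagrange multipliers.

    E1, E2 are the prediction errors on the two examples, k11/k22/k12
    the kernel values, and C the box-constraint upper bound.
    Returns the new (alpha1, alpha2).
    """
    # Feasible segment [L, H] that keeps y1*alpha1 + y2*alpha2 constant
    # while respecting 0 <= alpha <= C.
    if y1 != y2:
        L = max(0.0, alpha2 - alpha1)
        H = min(C, C + alpha2 - alpha1)
    else:
        L = max(0.0, alpha1 + alpha2 - C)
        H = min(C, alpha1 + alpha2)

    eta = k11 + k22 - 2.0 * k12        # second derivative of the objective
    if eta <= 0.0:                     # degenerate case: skipped in this sketch
        return alpha1, alpha2

    a2 = alpha2 + y2 * (E1 - E2) / eta   # unconstrained minimum
    a2 = min(H, max(L, a2))              # clip to the feasible segment
    a1 = alpha1 + y1 * y2 * (alpha2 - a2)  # restore the equality constraint
    return a1, a2
```

Because each such subproblem is solved analytically, the inner loop contains no numerical optimization at all; the dominant cost is computing the errors E1 and E2, which requires kernel evaluations.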

smo-nips.ps
PostScript file

In Proc. Advances in Neural Information Processing Systems 11

Details

Type: Inproceedings
Pages: 557-563

Previous Versions

John C. Platt. Fast Training of Support Vector Machines Using Sequential Minimal Optimization, MIT Press, January 1998.

John Platt. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines, April 1998.
