ABSTRACT

Statistical learning theory emerged in the 1960s as a theory of learning from small samples. Based on this theory, in the mid-1990s Vapnik et al. proposed a new learning algorithm, the support vector machine (SVM) [1]. The SVM can be viewed as a polynomial neural network based on structural risk minimization, or as a classifier based on radial basis functions. It has very strong generalization capacity and shows good classification performance in many practical problems [2-5], such as handwritten character recognition, face recognition, text classification, intrusion detection, and speech recognition. However, training an SVM requires solving a quadratic optimization problem, and its high time complexity seriously limits the application of SVM in large-scale data environments.