Chapter 7
Support Vector Machines
By Po-Wei Wang, Chih-Jen Lin

Machine learning algorithms have a tendency to overfit. With a sufficiently complex model, one can achieve arbitrarily low training error, yet the testing error may remain high because the model generalizes poorly to unseen instances. This is problematic because the goal of classification is not to obtain good accuracy on the known training data, but to predict unseen test instances correctly. Vapnik's work [34] was motivated by this issue. Starting from a statistical analysis of the linearly separable case, he showed that classifiers with maximum margins are less likely to overfit. This concept of maximum-margin classifiers eventually evolved into support vector machines (SVMs).
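
To make the maximum-margin idea concrete, a common sketch (not drawn verbatim from this chapter) writes the hard-margin classifier for linearly separable training data $(\mathbf{x}_i, y_i)$ with labels $y_i \in \{+1, -1\}$, $i = 1, \dots, l$, as

\begin{equation*}
\min_{\mathbf{w},\, b} \ \frac{1}{2}\|\mathbf{w}\|^2
\quad \text{subject to} \quad
y_i(\mathbf{w}^\top \mathbf{x}_i + b) \ge 1, \quad i = 1, \dots, l,
\end{equation*}

where the separating hyperplane is $\mathbf{w}^\top \mathbf{x} + b = 0$ and the margin between the two classes is $2/\|\mathbf{w}\|$, so minimizing $\|\mathbf{w}\|^2/2$ is equivalent to maximizing the margin.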