ABSTRACT

The support vector machine (SVM) is a theoretically well-founded approach to controlling model complexity. It selects a subset of important training instances (the support vectors) to construct the separating surface between data instances. When the data are not linearly separable, it can either penalize violations with a loss term (the soft margin) or apply the kernel trick to construct nonlinear separating surfaces. SVMs can also perform multiclass classification in various ways, either by combining an ensemble of binary classifiers or by extending the margin concept directly. The optimization techniques for SVMs are mature, and SVMs have been used widely in many application domains.
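
To make these ideas concrete, the sketch below fits a soft-margin SVM with a nonlinear (RBF) kernel on a small multiclass dataset. It is a minimal illustration, assuming scikit-learn; the Iris data and the parameter values (C, gamma) are illustrative assumptions, not taken from the text.

    # Minimal sketch (assumed scikit-learn API); dataset and parameters are illustrative.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)  # three classes, not linearly separable
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # C penalizes margin violations (soft margin); kernel='rbf' yields a nonlinear
    # separating surface; multiclass is handled internally by an ensemble of
    # binary one-vs-one classifiers.
    clf = SVC(C=1.0, kernel="rbf", gamma="scale")
    clf.fit(X_train, y_train)

    print("support vectors per class:", clf.n_support_)
    print("test accuracy:", clf.score(X_test, y_test))

Only the support vectors (reported by n_support_) determine the learned decision surface, which is the sense in which the SVM "picks important instances."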