ABSTRACT

A support vector machine (SVM) is a supervised learning algorithm that directly generates a classification rather than a score. Supervised learning requires labeled training data; that is, the training data must be categorized in advance. In contrast, an unsupervised algorithm deals with unlabeled data; in the context of malware, for example, an unsupervised algorithm could be applied to a set of unlabeled samples. The goal when training an SVM is to find a separating hyperplane, where a hyperplane is a subspace whose dimension is one less than that of the ambient space. SVMs employ two techniques to deal with training data that is not linearly separable. A soft margin allows for some classification errors when determining a separating hyperplane. The second technique is to map the input data to a feature space where the problem of constructing a separating hyperplane is more tractable. The chapter also considers the general problem of constrained optimization using Lagrange multipliers, which plays a central role in SVM training.
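To make the ideas in the abstract concrete, the following is a minimal sketch of soft-margin linear SVM training by subgradient descent on the hinge loss, and of the fact that an SVM outputs a class label rather than a score. The toy dataset, learning rate, and regularization weight are illustrative assumptions, not taken from the chapter.

```python
def train_svm(data, labels, lam=0.01, lr=0.1, epochs=200):
    """Soft-margin linear SVM: minimize lam*||w||^2 plus the hinge
    loss by subgradient descent (a Pegasos-style sketch)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1:
                # Point violates the (soft) margin: hinge-loss update
                w[0] += lr * (y * x[0] - 2 * lam * w[0])
                w[1] += lr * (y * x[1] - 2 * lam * w[1])
                b += lr * y
            else:
                # Correctly classified outside the margin: shrink w only
                w[0] -= lr * 2 * lam * w[0]
                w[1] -= lr * 2 * lam * w[1]
    return w, b

def classify(w, b, x):
    # The SVM returns a class label (the side of the hyperplane),
    # not a score
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

# Toy linearly separable data: class +1 upper-right, class -1 lower-left
data = [(2.0, 2.0), (3.0, 3.0), (2.5, 3.5),
        (-2.0, -2.0), (-3.0, -1.0), (-2.5, -3.0)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_svm(data, labels)
print(classify(w, b, (2.0, 3.0)))    # a point on the +1 side
print(classify(w, b, (-2.0, -2.5)))  # a point on the -1 side
```

The soft margin enters through the hinge-loss branch: points inside the margin (or misclassified) contribute an error term rather than being forbidden outright, with `lam` trading margin width against training errors.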