ABSTRACT

This chapter considers the use of kernel functions as a way to manipulate datasets in their original feature space as though they had been projected into a higher-dimensional space. A support vector machine (SVM) controls generalisation error through the geometrical notion of a margin: its goal is to discriminate between classes with the linear decision boundary that has the largest margin, giving rise to the so-called maximum-margin hyperplane. The support vectors are the data points closest to this hyperplane. When the classes are not linearly separable in the original space, the main idea is to obtain a linear boundary by mapping the data into a higher-dimensional space. The SVM algorithm can also be applied to regression problems, where the margin is used to optimise the generalisation of the regression line. The generalisation performance of an SVM depends on the kernel used but, as with other regularised models, it also depends on the regularisation hyperparameter.
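
The kernel idea described above can be illustrated with a minimal sketch. The example below (an illustrative choice, not taken from the chapter) uses a degree-2 polynomial kernel on 2-D points: the kernel value, computed entirely in the original space, equals the inner product of the points after an explicit mapping into a 3-D feature space, so an SVM never needs to construct the higher-dimensional vectors.

```python
import math

def poly_kernel(x, z):
    # Degree-2 polynomial kernel, evaluated in the original 2-D space.
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    # The explicit feature map this kernel implicitly uses:
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2), a 3-D vector.
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, z = (1.0, 2.0), (3.0, 0.5)
k_val = poly_kernel(x, z)     # computed without leaving the 2-D space
mapped = dot(phi(x), phi(z))  # the same quantity via the explicit 3-D map
print(k_val, mapped)          # both equal 16.0
```

Because the two quantities always agree, a kernelised SVM can work with the data "as though" it were projected into the higher-dimensional space while only ever evaluating the kernel in the original one.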