ABSTRACT

The power of SVM rests on several fronts: (1) robustness and sparseness of the solution, in that the goodness of fit is measured not by the usual quadratic loss function (mean squared error) but by a different loss function (ε-insensitive) similar to those used in robust statistics (i.e., a way of dealing with deviations from idealized assumptions); and (2) a flexible and mathematically sound approach, in that non-linear regression models (e.g., polynomials, Gaussian radial basis functions, splines) can be constructed as linear models by mapping the input data into a so-called feature space, namely, a reproducing kernel Hilbert space (Wahba 2000). These linear models (a single framework) are formulated in terms of dot products in the feature space, which can be computed efficiently using special functions (kernels) associated with the non-linear regression models of interest and evaluated in the original space (the kernel trick). This framework can also be used with quadratic loss functions, which makes it an ideal setting for ensembles of surrogate-based analysis and optimization.
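As a brief illustration (a standard support vector regression formulation; the symbols w, b, φ, k, and σ are generic notation introduced here, not definitions from this work), the ε-insensitive loss penalizes only residuals exceeding the threshold ε,

\[
L_\varepsilon\bigl(y, f(x)\bigr) = \max\bigl(0,\; |y - f(x)| - \varepsilon\bigr),
\]

while the kernel trick evaluates feature-space dot products through a kernel computed in the original input space,

\[
f(x) = \langle w, \phi(x)\rangle + b, \qquad \langle \phi(x), \phi(x')\rangle = k(x, x'),
\]

so the model stays linear in the feature space while remaining non-linear in x; for instance, the Gaussian radial basis kernel \(k(x, x') = \exp\bigl(-\|x - x'\|^2 / 2\sigma^2\bigr)\) corresponds to an implicit infinite-dimensional feature map that never needs to be computed explicitly.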