Neural networks have recently been rediscovered as an important alternative to various standard classification methods. This is due to a solid theoretical foundation underlying neural network research, along with recently achieved strong practical results on challenging real-world problems. Early work established neural networks as universal function approximators [33,59,60], able to approximate any given vector space mapping. Since classification is merely a mapping from a vector space to a nominal space, a neural network is in theory capable of performing any given classification task, provided that a judicious choice of model is made and an adequate training method is implemented. In addition, neural networks are able to directly estimate posterior probabilities, which provides a clear link to classification performance and makes them a reliable estimator of the optimal Bayes classifier [14,111,131]. In the literature, neural networks are also referred to as connectionist models.
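The posterior-probability claim can be made concrete with a minimal sketch (not taken from the cited works): a single-neuron network trained with cross-entropy on a toy two-class Gaussian problem. Its sigmoid output converges toward the true posterior P(class=1 | x), illustrating the link to the optimal Bayes classifier. All specifics below (class means at ±1, unit variance, learning rate, iteration count) are illustrative assumptions.

```python
import numpy as np

# Toy data: two equally likely classes, unit-variance Gaussians at -1 and +1.
rng = np.random.default_rng(0)
n = 2000
x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(+1.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

# A one-neuron "network": sigmoid(w*x + b), trained by gradient descent
# on the mean cross-entropy loss.
w, b = 0.0, 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # network output
    w -= 0.5 * np.mean((p - y) * x)         # gradient step on w
    b -= 0.5 * np.mean(p - y)               # gradient step on b

def posterior(t):
    """Network's estimate of P(class=1 | x=t)."""
    return 1.0 / (1.0 + np.exp(-(w * t + b)))
```

For this toy problem the exact posterior is sigmoid(2x), so the learned weight should approach 2 and the bias should approach 0; the trained output is therefore a direct estimate of the Bayes-optimal decision rule (threshold at posterior 0.5, i.e. x = 0).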