ABSTRACT

This chapter demonstrates how, with the help of an overarching supervised learning formalism, it is possible to reconcile the frameworks and thereby leave the Tower of Babel. It discusses the intimate relationships among the supervised learning frameworks of Probably Approximately Correct (PAC), the Statistical Physics framework, the Bayesian framework, and the Vapnik-Chervonenkis (VC) framework. Intuitively, the extended Bayesian formalism is simply the conventional Bayesian supervised learning framework extended to include one extra random variable. The theorems bound how well a learning algorithm can be assured of performing in the absence of assumptions concerning the real world. It should be pointed out that things are a bit messier when error functions other than the misclassification rate are considered. In the case of the VC framework, the vanilla version also means that the learning algorithm is capable of producing only one hypothesis function h. Yet when used in its convergence-issues form, PAC not only does not disallow this scheme, it actually counsels it.
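As a rough sketch of what "one extra random variable" amounts to (using notation developed in the chapter body rather than stated in this abstract, with f the target, h the hypothesis produced by the learning algorithm, d the training set, and C the cost function), the extra random variable is the hypothesis h itself, and the formalism can be summarized, under the usual assumption that the learning algorithm sees only the data, by a joint distribution of the form

\[
  P(h, f, d) \;=\; P(h \mid d)\, P(d \mid f)\, P(f),
  \qquad
  \mathbb{E}[C \mid d] \;=\; \sum_{h,\, f} C(h, f)\, P(h \mid d)\, P(f \mid d),
\]

where P(h | d) encodes the learning algorithm, P(f) the prior over targets, and P(d | f) the likelihood; the precise definitions are given in the chapter itself.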