ABSTRACT

This chapter uses the extended Bayesian formalism to investigate the noise-free "exhaustive learning" scenario, first introduced by Schwartz et al. in the context of the statistical physics of learning. It proves that the crucial "self-averaging" assumption invoked in the conventional analysis does hold in one particular, trivial implementation of exhaustive learning. The chapter then shows that if one uses an off-training-set error function rather than the usual error function, the central result of the conventional analysis does not hold, even when the self-averaging assumption is valid and even in the limit of an infinite input space. It also shows how to replicate the conventional analysis of exhaustive learning without assuming that one is using a neural net generalizer. Finally, it argues that the error function used in that analysis, although conventional in much supervised learning research, has a number of major disadvantages.
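The distinction between the usual error function and an off-training-set error function can be made concrete with a small sketch. The following is an illustrative toy example, not taken from the chapter: on a finite input space, the conventional error averages the disagreement between hypothesis and target over all inputs, whereas the off-training-set error averages only over inputs absent from the training set. All names and values here are hypothetical.

```python
# Toy illustration (assumption, not the chapter's formalism):
# finite input space, a target function f, and a hypothesis h
# that fits the training set exactly but guesses elsewhere.

X = list(range(8))            # toy finite input space
train_inputs = {0, 1, 2, 3}   # inputs seen during training

f = {x: x % 2 for x in X}     # target function (toy parity rule)
# h agrees with f on the training set, outputs 0 off the training set
h = {x: (f[x] if x in train_inputs else 0) for x in X}

def conventional_error(h, f, X):
    """Fraction of ALL inputs where h disagrees with f."""
    return sum(h[x] != f[x] for x in X) / len(X)

def ots_error(h, f, X, train_inputs):
    """Fraction of off-training-set inputs where h disagrees with f."""
    ots = [x for x in X if x not in train_inputs]
    return sum(h[x] != f[x] for x in ots) / len(ots)

print(conventional_error(h, f, X))        # counts training-set agreement
print(ots_error(h, f, X, train_inputs))   # ignores the training set
```

A hypothesis that memorizes the training set scores well on the conventional error simply by agreeing on the seen inputs, while the off-training-set error measures generalization alone; this is the sense in which the choice of error function can change the central result.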