ABSTRACT

This chapter describes a generalization of the Probably Approximately Correct (PAC) learning model based on statistical decision theory; the concepts and notation introduced here are used in what follows without further special reference. The introduction of the PAC model of learning from examples has done an admirable job of drawing together practitioners of machine learning and theoretically oriented computer scientists in pursuit of a solid and useful mathematical foundation for applied machine learning work. The general problem of regression has a different character from that of classification learning, but it can likewise be addressed in the decision-theoretic learning framework, and the problems of parameter estimation and density estimation can also be viewed as special cases of this framework. The chapter shows how the pseudo dimension can be used to obtain distribution-independent bounds on the random covering numbers needed for Theorem 2. Among the learning architectures considered are networks in which computation units with no outgoing edges are called output units and serve as output ports for the network.