ABSTRACT

This chapter surveys recent theoretical results on the efficiency of machine learning algorithms. Its main tool is the model of Probably Approximately Correct (PAC) learning, introduced by Valiant. The chapter defines the learning model, examines some of the results obtained within it, and then considers several criticisms of the PAC model together with the extensions proposed to address them. It also looks briefly at other models recently proposed in computational learning theory. A number of fairly sharp results are known for the notion of proper PAC learnability, and hardness results can be transferred between learning problems via the polynomial-time, learning-preserving reductions introduced by L. Pitt and M. Warmuth. Under stronger cryptographic assumptions, even simpler problems can be shown not to be PAC learnable. Several other theoretical approaches to machine learning are also flourishing in recent computational learning theory work.
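As an illustration of the PAC guarantee discussed above (a minimal sketch, not taken from the chapter), consider the standard textbook class of threshold concepts on [0, 1]: the target labels x positive iff x <= theta, examples are drawn uniformly, and the learner outputs the largest positive example seen. The concept class, distribution, and the sample bound m >= (1/eps) ln(1/delta) are conventional choices assumed here for the sketch.

```python
import math
import random

def pac_learn_threshold(sample_size, theta=0.7, seed=0):
    """Learn the threshold concept c(x) = 1 iff x <= theta from uniform samples.

    The learner returns the largest positive example seen, a standard
    consistent-hypothesis strategy for this (illustrative) concept class.
    """
    rng = random.Random(seed)
    positives = [x for x in (rng.random() for _ in range(sample_size))
                 if x <= theta]
    return max(positives, default=0.0)

def generalization_error(h, theta=0.7):
    # Under the uniform distribution, the hypothesis [0, h] with h <= theta
    # errs exactly on the interval (h, theta], so its error is theta - h.
    return theta - h

# Textbook PAC-style sample bound for this class: with
# m >= (1/eps) * ln(1/delta) examples, the error exceeds eps with
# probability at most delta (since (1 - eps)^m <= delta).
eps, delta = 0.05, 0.05
m = math.ceil((1 / eps) * math.log(1 / delta))
h = pac_learn_threshold(m, seed=42)
print(m, round(generalization_error(h), 4))
```

Running the sketch with eps = delta = 0.05 draws m = 60 samples; the returned hypothesis is "probably approximately correct" in exactly the sense the chapter formalizes: with probability at least 1 - delta over the sample, its error is at most eps.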