ABSTRACT

The tree-structured classification and regression procedures discussed in this book use the learning sample to partition the measurement space. In this chapter a more general collection of such “partition-based” procedures will be considered. It is natural to desire that, as the size of the learning sample tends to infinity, partition-based estimates of the regression function should converge to the true regression function, and that the risks of partition-based predictors and classifiers should converge to the risks of the corresponding Bayes rules. Procedures with these properties are said to be consistent. In this chapter the consistency of partition-based regression and classification procedures will be verified under surprisingly general conditions.
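For concreteness, the following is a minimal sketch of what a partition-based regression estimate and the consistency requirement typically look like; the notation $d_n$, $B_n(x)$, and $d$ is illustrative and not necessarily that used in the chapter. Given a learning sample $(x_1, y_1), \ldots, (x_n, y_n)$ and a partition of the measurement space into cells, the usual cell-average estimate at a point $x$ is
\[
d_n(x) \;=\; \frac{\sum_{i=1}^{n} y_i \, \mathbf{1}\{x_i \in B_n(x)\}}{\sum_{i=1}^{n} \mathbf{1}\{x_i \in B_n(x)\}},
\]
where $B_n(x)$ denotes the cell of the partition containing $x$. Consistency of the regression estimate then asks that
\[
E\bigl[d_n(X) - d(X)\bigr]^2 \;\to\; 0 \qquad \text{as } n \to \infty,
\]
with $d$ the true regression function; for prediction and classification it asks that the risk of the rule induced by the partition converge to the risk of the corresponding Bayes rule.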