ABSTRACT

This chapter describes the widely used classification method known as Adaboost (Adaptive Boosting). Adaboost belongs to the family of algorithms that combine a group of voting classifiers. The principle of the method is the sequential training of weak learners (simple classifiers), each of which only needs to classify correctly (in the two-class case) more than 50% of the training examples assigned to it. Each weak learner is trained on a weighted set of examples; the examples it misclassifies receive higher weights and are thereby passed on to subsequent weak learners, which learn to classify them (while typically failing on the examples their colleagues already classify correctly). The advantage is that a simple classifier can be trained quickly, so with a sufficiently large number of weak learners, each focusing on a limited part of the problem, a very effective committee of experts can be built. To illustrate the method, the chapter concludes with a demonstration of Adaboost applied to real data using an R package.
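To give a concrete picture of the reweighting scheme summarized above before the chapter's R demonstration, the sketch below implements AdaBoost.M1 with decision stumps as weak learners. It is an illustrative sketch only, not the chapter's code: the data frame layout, the response column `y`, and the function names are assumptions, and the stumps are built with the rpart package.

```r
## Minimal AdaBoost.M1 sketch in R, with rpart decision stumps as weak learners.
## Illustrative only: it assumes a data frame whose factor response column is
## named `y` and has exactly two levels; the function names are made up.
library(rpart)

adaboost_sketch <- function(data, M = 50) {
  n      <- nrow(data)
  sign_y <- ifelse(data$y == levels(data$y)[2], 1, -1)  # recode labels to -1/+1
  w      <- rep(1 / n, n)                               # start with uniform weights
  fits   <- list()
  alphas <- numeric(0)

  for (m in seq_len(M)) {
    ## Weak learner: a one-split tree ("decision stump") fitted to the weighted sample.
    stump <- rpart(y ~ ., data = data, weights = w, method = "class",
                   control = rpart.control(maxdepth = 1))
    pred  <- ifelse(predict(stump, data, type = "class") == levels(data$y)[2], 1, -1)

    err <- sum(w[pred != sign_y])          # weighted error (weights sum to 1)
    if (err >= 0.5) break                  # a weak learner must beat 50 %
    alpha <- 0.5 * log((1 - err) / err)    # voting weight of this stump

    ## Reweight: misclassified examples become heavier, correct ones lighter,
    ## so the next stump concentrates on the cases this one got wrong.
    w <- w * exp(-alpha * sign_y * pred)
    w <- w / sum(w)

    fits[[length(fits) + 1]] <- stump
    alphas <- c(alphas, alpha)
  }
  structure(list(fits = fits, alphas = alphas, lev = levels(data$y)),
            class = "adaboost_sketch")
}

## Final classifier: weighted majority vote over all stumps.
predict.adaboost_sketch <- function(object, newdata, ...) {
  votes <- Reduce(`+`, Map(function(f, a)
    a * ifelse(predict(f, newdata, type = "class") == object$lev[2], 1, -1),
    object$fits, object$alphas))
  factor(ifelse(votes > 0, object$lev[2], object$lev[1]), levels = object$lev)
}
```

For real data one would normally rely on an established R implementation rather than a hand-rolled loop; for example, the adabag package provides a boosting() function implementing this procedure.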