ABSTRACT

Chapter 10 explores two further ensemble learning techniques: boosting and stacking. The chapter explains boosting and its major steps, then describes the types of boosting algorithms, their importance, and how they help models that suffer from underfitting. The major applications of boosting, along with its benefits and challenges, are presented next, followed by a comparison of bagging and boosting algorithms. The stacking ensemble method is also addressed, with its steps listed, followed by its different types and their importance. A sample dataset is used to implement the XGBoost algorithm, illustrated with appropriate screenshots, followed by additional learning resources, a quiz, and key points to remember.