ABSTRACT

Automated decision-making systems based on data-driven models are becoming increasingly common, but without proper auditing, these models may result in negative consequences for individuals, especially those from underprivileged groups. The proliferation of such systems in everyday life has made it important to address their potential for bias. Group fairness methods assess whether a model's predictions show evidence of disparity when comparing the groups defined by a given sensitive attribute.

This final chapter introduces mlr3fairness and fair machine learning. It begins with the theory behind bias, fairness, and notions of group fairness, and then puts these into practice with the measures included in mlr3fairness. The chapter concludes by discussing fairness reports and other methods for transparently acknowledging bias in data and models.
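As a minimal sketch of how such group fairness measures might be evaluated, the example below uses the compas task and the fairness.tpr measure that ship with mlr3fairness; the choice of learner (classif.rpart) and the simple train/test split are illustrative assumptions, not the chapter's own worked example.

```r
library(mlr3)
library(mlr3fairness)

# The compas task ships with mlr3fairness and declares a sensitive
# attribute via the "pta" (protected attribute) column role.
task = tsk("compas")

# Illustrative learner and holdout split (assumptions for this sketch).
learner = lrn("classif.rpart", predict_type = "prob")
split = partition(task)
learner$train(task, split$train)
prediction = learner$predict(task, split$test)

# fairness.tpr reports the difference in true positive rates between the
# groups of the sensitive attribute; values near 0 indicate parity.
prediction$score(msr("fairness.tpr"), task = task)
```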