ABSTRACT

Intelligent algorithms help us every day with hundreds of tasks, from writing emails and recommending movies to researching restaurants in a new city and even cooking at home. Algorithms can process huge amounts of data and make decisions faster than any human, but they are also vulnerable to bias, which can result in discriminatory outcomes. This chapter reviews several examples of algorithm-based discrimination, followed by a discussion of the different biases that underlie discriminatory outcomes and of how to reason about them. Many algorithms that are routinely used to make hugely important decisions in our society are dangerously biased. When the COVID-19 pandemic led to the cancellation of school-leaving exams in the UK, for instance, the algorithm deployed to assign grades in their place was widely criticised for disadvantaging certain groups of students. PredPol, a predictive policing tool, is another infamous example of how biased and unfair algorithms can be. Biases are often seen as a limitation in reasoning or a ‘human design flaw’, but, as this chapter argues, they really are not.