ABSTRACT

Using artificial intelligence (AI) systems within decision-making processes is neither a new nor an unfamiliar concept. If used correctly, algorithmic decision-making has the potential to be both beneficial and valuable to modern society. Yet the risk of discrimination resulting from the use of these automated systems is substantial and widely acknowledged. This work therefore considers instances in which algorithmic systems have discriminated against individuals on the basis of protected characteristics, why this occurs, and what can be done to prevent it from continuing. The principal purpose of this chapter is to evaluate the effectiveness of current legal safeguards, such as the General Data Protection Regulation (GDPR), the European Convention on Human Rights (ECHR), and the Equality Act 2010, in dealing with AI-based discrimination. In particular, the chapter analyses the first known legal challenge to the use of algorithms in the UK, brought by the Joint Council for the Welfare of Immigrants and Foxglove, and considers the effectiveness of some of the aforementioned legal safeguards in that case.