ABSTRACT

This chapter retraces the barriers that can hinder, and the drivers that can facilitate, the establishment of algorithm auditing within institutions and corporations. Algorithms can be protected in three main ways: through copyright, trade secrets, and patents. Patent protection has been invoked by industry as an argument against algorithmic transparency, but because patents are public, they do not constitute an obstacle to algorithm auditing. Other intellectual property rights, however, may set limits to transparency and accountability, and training and user data are likely to contain personal data. Education systems should provide algorithmic literacy that teaches core concepts such as computational thinking, the role of data, and the importance of optimisation. This would increase the awareness of citizens and social groups about the data they generate and the decisions that are made through those data. There is also a need to train recognised professional figures who are able to unpack data assemblages and assess algorithms. The competencies of algorithm auditors lie at the intersection of computer science, the social sciences, and data science.