ABSTRACT

Artificial intelligence (AI) systems can produce racist outcomes and be interpreted in racist ways: technical code can be written, or its outputs interpreted by machines, in ways that exclude certain groups of people, and the people writing that code can of course carry biases of their own. Phrenology, a racist pseudoscientific relic of the past, held that mental traits such as intelligence could be determined by precise skull measurements. This discredited theory nevertheless echoes in the present; the company Faception, for example, claims to use facial recognition to determine a person’s IQ. Law enforcement agencies are already using AI to map crime and assess potential criminals, but in ways that can be discriminatory and presumptive. Because technology can reproduce systemic racism, we must also ask which societal structures produce racism in the first place.