ABSTRACT

In the early 1980s, medical schools began using algorithms to screen and prioritize student applications for admission. Before the algorithm was introduced, almost half of the school’s annual applications were rejected by the admissions office; the algorithm went on to reproduce the biases that already existed in the admissions process. Vetting and screening protocols built on social scoring algorithms have historically been used to disenfranchise populations of color. The Ofqual algorithm is an example of a mismatch between what an algorithm is intended to predict and what it actually predicts: rather than measuring students’ real achievements throughout the year, it estimated how well students at a particular school “should” perform based on teachers’ input. In college admissions, discriminatory uses of AI algorithms are not limited to classifying and vetting who gets into a university; algorithms are also used to predict the likelihood that a student will enroll if admitted and to determine the amount of financial aid the student is offered.