Owing to benefits such as increased speed, Artificial Intelligence (AI)/Machine Learning systems are increasingly involved in screening decisions that determine whether individuals are granted important opportunities such as college admission or loan/mortgage approval. To address concerns about potential bias in such systems, this chapter focuses on AI used to support human resource decisions (e.g., selecting among job applicants). Because AI systems do not inherently harbor prejudices, they have the potential to increase fairness and reduce bias. We argue, however, that bias can be introduced via: (i) human influence on the system design, (ii) the training data supplied to the system, and (iii) human involvement in processing system recommendations. For each of these factors, we review and suggest possible solutions to reduce or remove bias. Notably, the development of AI systems for screening decisions has increased scrutiny of bias and raised awareness of pre-existing bias in the human decision patterns that AI systems were trained to emulate.