ABSTRACT

This chapter examines how landlords use algorithmic screening tools to evaluate prospective tenants and how their approach shapes racial discrimination. We find that landlords with large property portfolios are especially likely to adopt screening software that relies on algorithms to weigh tenant characteristics and thereby purports to estimate tenant risk. Like all so-called automation, however, these algorithms do not operate outside the realm of social relations. Landlords retain considerable discretion over how much they rely on the technology and how they interpret its output. The inputs and thresholds of the algorithms are themselves tuned by humans, frequently by property owners, who seek to align them with often idiosyncratic ideas of what constitutes a “good” tenant. Furthermore, this chapter argues that, like many prediction technologies, tenant screening algorithms do not succeed or fail purely on the accuracy of their predictions; they serve a function in and of themselves. Specifically, they represent a technology that, under current United States law, is largely unassailable by fair housing testing and litigation. By delegating screening decisions to an algorithm, landlords can claim that their selections are free of both implicit and explicit bias and thereby shield themselves from lawsuits.