This chapter examines the relations between algorithms and human analysts. Predictive policing applications are designed to facilitate analytical work by automating complex tasks, thereby reducing the workload for human analysts and accelerating crime analysis. In doing so, however, they inevitably remove (parts of) the analytical process from sight. The chapter investigates how predictive policing reconfigures the relationship between humans and machines and discusses pertinent regulatory and normative questions that come to the fore when analytical tasks are hidden in black-boxed algorithmic systems. Our analysis shows that the police still consider human operators essential for reviewing the data basis on which predictive policing software computes its outputs. At the same time, from a legal and ethical perspective, police departments are determined to keep decision-making an exclusively human affair. This is, however, not easy, as arguing against a machine can be quite challenging for human operators. Police departments have therefore introduced a number of safeguards intended to support human operators and keep the algorithm in check.