ABSTRACT

This chapter explores the relationship between data and speed in predictive policing. It starts by examining how the police produce data from crime scenes and illustrates how the representation of criminal activity in datasets is shaped by epistemic uncertainties and by the translation of social phenomena into fixed classification systems. The police generally have to grapple with data that tend to be incoherent, inaccurate, and unreliable, and that must therefore be subjected to multiple layers of amendment and quality control before they can be analyzed. This creates considerable tension in predictive policing, which presupposes that analyses be run as quickly as possible in order to intervene in ongoing criminal activity. As the quality of crime data usually improves only over the course of investigations, police departments face a trade-off: they have to decide whether to run analyses immediately on potentially unreliable data or to wait for consolidated data at the risk of obtaining results that are already outdated. This trade-off must, however, also be understood vis-à-vis the daily rhythm of public life and criminal activity within which police work takes place. Overall, the chapter foregrounds the complexity of data and the multiple temporalities at play in predictive policing.