This chapter examines the techno-ethical limits of AI urbanism through a case study of ‘predictive policing.’ While advocates celebrate predictive policing as a tool for police reform, critics argue that the technology reinforces disparities in law enforcement by concentrating surveillance in low-income and racial-minority communities. Building on an ethnographic case study of the predictive policing platform HunchLab, the chapter argues for a more nuanced account of the technicity of crime prediction algorithms and, specifically, for greater attention to the statistical indeterminacies that arise when attempting to optimize for desirable but immeasurable outcomes, such as crime prevention and deterrence. These technical challenges are closely related to predictive policing’s techno-ethical appeal as a managerial reform, but they ultimately stem from contradictions in the institution of urban policing itself. Predictive policing’s failures, the chapter concludes, are effects of policing’s institutional crises.