ABSTRACT

This chapter widens the perspective and examines the societal and ethical ramifications of predictive policing. Against the backdrop that algorithmic crime analysis tools are often used to purportedly rationalize police work and to frame its technoscientific character as innovative, impartial, and superior to human judgment, the chapter discusses a number of concerns about the algorithmic production of risk estimates and targeted crime prevention measures. First, the chapter looks into the data basis of predictive policing. For a number of reasons, crime data are likely to be biased, and when such data are used as input for algorithmic processing, the bias is likely to persist, although potentially in rationalized and less obvious forms. Second, the chapter engages with the behavior of patrol officers within presumed risk spaces. The notion of risky environments can increase suspicion, encourage more aggressive patrolling practices, and aggravate existing racial and/or ethnic prejudice. Third, the chapter explores how predictive policing applications, by design, encourage law enforcement–heavy policing strategies. Predictive policing has been developed on the same assumptions as situational crime prevention and thus replicates the preference for treating symptoms rather than addressing the root causes of crime. Finally, the chapter reviews how predictive policing removes analytical processes from view and in doing so potentially undercuts the accountability of police departments for their actions.