Much of big data's appeal stems from its aura of objectivity. Proponents of big data policing argue that, in contrast to human discretion, algorithms and other automated data management systems reduce bias and increase efficiency, insight, neutrality, objectivity, accountability, and fairness throughout the justice system. This framing overlooks the basic fact that even new analytic platforms and techniques are deployed in preexisting organizational contexts1 and embody the purposes of their creators.2 Further, algorithms are tools that humans use to outsource, or at least streamline, their decision-making, but human discretion remains central to their operation, from programming to data entry to the weight analysts assign to their results and recommendations. What data law enforcement collects, what methods they use to analyze and interpret it, and how it informs their practice are all part of a fundamentally social process.
Far from eliminating human discretion, big data is a form of capital, both a social product and a social resource. For that reason alone, big data cannot possibly obviate inequality: like other forms of capital, it is transactional, it is unevenly distributed across people and groups, and it can be extracted from already disadvantaged populations.
Still, it remains an open empirical question whether the adoption of advanced analytics will reduce organizational inefficiencies and inequalities or instead entrench existing power dynamics. This chapter analyzes the promises and perils of police use of big data. Surveillance is always ambiguous; it is implicated in both social inclusion and exclusion, and it creates both opportunities and constraints. When debating the merits of a new algorithm or surveillance technology, it is important to remember that its openness and fairness are relative questions: Is the use of big data better or worse than the inequalities created by existing models of policing? Can big data be used to reduce bias and improve public safety? What are the trade-offs? What are the intended and unintended consequences? How do the digital traces we leave shape our life chances?
Big data and associated technologies permit unprecedentedly broad and deep police surveillance. The depth and breadth of this surveillance have ambivalent implications for social inequalities. On the one hand, big data analytics may be a means by which to ameliorate persistent inequalities in policing. Data can be used to “police the police” and to replace unparticularized suspicion of racial minorities and human exaggeration of patterns with less biased predictions of risk. On the other hand, data-intensive police surveillance practices are implicated in the reproduction of inequality in at least four ways: by (1) deepening the surveillance of individuals already under suspicion, (2) codifying a secondary surveillance network, (3) widening the criminal justice dragnet unequally, and (4) leading people to avoid “surveilling” institutions that are fundamental to social integration. As currently implemented, police use of big data exacerbates inequalities more than it remediates them, because data is collected, analyzed, and deployed in socially patterned, asymmetrical ways.