“[…] show them that the crime in your jurisdiction is predictable from year to year and therefore, month to month. […] Within 15 minutes of the conclusion of the staff meeting, I had frontline officers at my office door asking about my ‘crystal ball’.”
Public Safety Industry Expert at Motorola Solutions.
Predictive policing is a concept under which a wide array of practices and ideas fall. The underlying assumptions of this model of policing are that policing must be proactive rather than reactive, and that crime can be predicted by looking at patterns, using historical records and big data. Several programs are in use in big cities around the world, for example PredPol (UK), HunchLab (California), and Beware (Chicago and California). Each one represents a different approach to the same principle: that crime can be predicted or prevented through techno-scientific means.
HunchLab creates maps in which areas are color-coded to represent different crime patterns. As you interact with the map, hovering over an area brings up more information: how long officers should spend in that specific area, which crime is most recurrent there, and which tactic works best. The decision-making process is fully automated; police can now rely on the app to know when, why, and how to engage with each area. According to HunchLab, data-driven decision making avoids biased or hunch-driven policing, which might lead to racial profiling or misplaced police resources. This system codifies entire areas.
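To make this kind of area codification concrete, a minimal sketch might look like the following. Everything here is invented for illustration (the grid cells, the crime types, and the five-minutes-per-incident rule); HunchLab's actual model is proprietary, and this only shows the general shape of turning an incident log into per-area recommendations.

```python
from collections import Counter

# Hypothetical incident log: (grid_cell, crime_type) pairs.
incidents = [
    ("A1", "burglary"), ("A1", "burglary"), ("A1", "assault"),
    ("B2", "theft"), ("B2", "theft"), ("B2", "theft"), ("B2", "burglary"),
]

def summarize_cell(cell, log):
    """Return the dominant crime type and a naive patrol-time suggestion
    (minutes proportional to incident volume) for one grid cell."""
    counts = Counter(crime for c, crime in log if c == cell)
    total = sum(counts.values())
    dominant, _ = counts.most_common(1)[0]
    return {"cell": cell,
            "dominant_crime": dominant,
            "suggested_minutes": 5 * total}

print(summarize_cell("B2", incidents))
# → {'cell': 'B2', 'dominant_crime': 'theft', 'suggested_minutes': 20}
```

Even this toy version makes the critique visible: the output looks authoritative, yet every number in it depends on what was logged and on an arbitrary weighting rule.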
Beware creates a profile for specific addresses using data fed by information brokers. The information used by the software comes from all sorts of platforms, including social media. But because the data comes from companies that make money from the quantity rather than the quality of the data they sell, its reliability is in question. This system codifies specific addresses.
PredPol is using algorithms to create lists of potential perpetrators or victims. The algorithms are developed using what they call “routine activity theory”, based on the idea that people do things over and over, that they form habits. Combining this with a model borrowed from seismology (a place struck by crime will see aftershocks as a result of that crime), they identify patterns and build a network of people who might be affected by an aftershock, based not on their criminal history but on their connections, their closeness, to those who have been victims or perpetrators. The resulting document, called the SSL in Chicago, is a list of 400 people at high risk of becoming involved in crime. Once on the list, they cannot be removed, and their relationship with the police changes: they receive visits and warnings even if they have committed no crime. This system codifies people, gives them a score, and creates a list.
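The contagion idea described above can be sketched in a few lines. The network, the names, the decay weight, and the list threshold below are all invented; the actual scoring model behind a list like the SSL is not public. The sketch only shows the principle: a score driven by closeness to prior victims or offenders, not by one's own history.

```python
# Hypothetical social / co-arrest network: who is connected to whom.
contacts = {
    "ana": ["ben", "carl"],
    "ben": ["ana"],
    "carl": ["ana", "dana"],
    "dana": ["carl"],
}
involved = {"ana"}  # people already recorded as victims or offenders

def risk_score(person, decay=0.5):
    """Score is 1.0 if directly involved; otherwise decay times the
    fraction of contacts who were involved -- closeness, not history."""
    if person in involved:
        return 1.0
    nbrs = contacts.get(person, [])
    if not nbrs:
        return 0.0
    return decay * sum(n in involved for n in nbrs) / len(nbrs)

scores = {p: risk_score(p) for p in contacts}
# Everyone above an arbitrary cutoff ends up on the list, ranked by score.
watch_list = sorted((p for p in scores if scores[p] > 0.2),
                    key=scores.get, reverse=True)
print(watch_list)
# → ['ana', 'ben', 'carl']
```

Note what the toy model shows: “ben” and “carl” are listed purely because they know “ana”. That is the codification of people by proximity that the essay objects to.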
The aforementioned programs have caused concern for several reasons, which I would divide into three areas: data, theory, and implementation.
Theory: The idea that decision-making is now free of biases because it is handled by an algorithm has been proven wrong. The factors that feed the algorithm, its percentages and formulas, are based on decisions made by mathematicians and developers, most of whom have specific backgrounds and biases.
Data/para-empirical: How the data is collected varies from project to project, but reporting or underreporting can skew it. There are several reasons why some crimes stay under the radar while others are more visible: differences in police presence in specific areas, cameras, heightened expectations, and so on may explain variations in the reporting or visibility of crimes.
Implementation/map: Everything comes together on digital platforms, where sleek designs and new technologies obscure the provenance of the data and the binary logic underlying computational systems. Fully automated policing leaves no room for contestation or accountability, since actions are data- and algorithm-driven.
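The data concern can be made concrete with a small simulation (the reporting rates below are assumed for illustration): two areas with the same underlying crime, one more heavily watched than the other, produce very different records, and any model trained on those records inherits the skew.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Two areas with the SAME true number of incidents, but different
# visibility: more police presence and cameras mean more gets recorded.
true_incidents = 100
report_rate = {"area_watched": 0.9, "area_ignored": 0.4}  # assumed rates

observed = {
    area: sum(random.random() < rate for _ in range(true_incidents))
    for area, rate in report_rate.items()
}
# Identical underlying crime, yet the recorded data makes the
# heavily watched area look far more "criminal" than the ignored one.
print(observed)
```

Fed back into a predictive system, the inflated count would justify still more patrols in the watched area, recording still more incidents there: the feedback loop behind the underreporting worry above.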
The sanitization of pre-crime systems is performed by platforms that rely on mathematics and technoscience to obscure a decision-making process plagued by human subjectivity and error. What is contentious about these systems is not only their accuracy and their criminalization of areas, addresses, and people, but the fact that they render the process unaccountable. They forget that they are looking at digital, quantifiable shadows, and they overlook the erasure of the complexities of human lives; the unquantifiable aspects of life disappear.
The systems presented here layer and combine data to create “deep” maps; nevertheless, they lack a foundational part of deep mapping: conversation. As Iain Biggs, drawing on Clifford McLucas, expresses it: “deep mapping should be a ‘politicized, passionate, and partisan’ evocation of a site, involving ‘negotiation and contestation over who and what is represented and how’ and giving rise to ‘debate about the documentation and portrayal of people and places’ but, above all, should strive to remain ‘unstable, fragile and temporary… a conversation and not a statement’”.
Deep mapping should not be an object but the result of a negotiation process. Predictive policing, as the product of unaccountable mathematical algorithms and incontestable outputs (like the SSL), is far from having the relatable and human aspects that give deep mapping its complexity. The “deepness” of a map, therefore, rests not on the amount of data put together or the number of sources used, but on its qualitative aspects.
It is easy to be deceived by techno-scientific approaches (even Les Roberts holds that deep mapping is mostly a product of the digital age) because they provide the ability to gather (create) large amounts of data from different media (binary) and present it in compelling ways. Nevertheless, their deepness is questionable, as they use undisclosed algorithms and binary codification and rely on quantifiable data alone. To me, the biggest concern is that the digital world is being used to circumvent conversation, as if getting rid of “error” in policing predictions were just a matter of time and technological advance.
To counter the idea that data can explain the world, I would like to explore other ways to represent places, people, and areas, using a series of analogue pictures that incorporate the ideas of Jim Goldberg (combinations of image, text, and juxtaposition, e.g. “Rich and Poor”) and Georges Perec (“Tentative d’épuisement d’un lieu parisien”). Emphasizing the materiality of the analogue picture (non-binary) and combining it with lived experience through writing would create maps where depiction and narration blend. Put together in box sets, they would be distributed to members of the police force as an alternative map of the area. The combination of these elements seeks to problematize the meaning of deep mapping as a means of deepening our understanding of spatialized phenomena, and to challenge the equation of big data with “depth”, which justifies binarization and furthers the lack of qualitative and material dimensions in mapping.