An artificial intelligence that scours crime data can predict the location of crimes in the coming week with up to 90 per cent accuracy, but there are concerns that systems like this can perpetuate bias
Technology
30 June 2022
An artificial intelligence can now predict the location and rate of crime across a city a week in advance with up to 90 per cent accuracy. Similar systems have been shown to perpetuate racist bias in policing, and the same could be true in this case, but the researchers who created this AI claim that it can also be used to expose those biases.
Ishanu Chattopadhyay at the University of Chicago and his colleagues created an AI model that analysed historical crime data from Chicago, Illinois, from 2014 to the end of 2016, then predicted crime levels for the weeks that followed this training period.
The model predicted the likelihood of certain crimes occurring across the city, which was divided into squares about 300 metres across, a week in advance with up to 90 per cent accuracy. It was also trained and tested on data for seven other major US cities, with a similar level of performance.
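For readers curious what that grid set-up looks like in practice, here is a minimal sketch in Python of how point-level crime records might be binned into roughly 300-metre cells and weekly counts. The file name, column names and conversion factors are illustrative assumptions, not the researchers' published pipeline.

```python
import pandas as pd

CELL_METRES = 300
METRES_PER_DEG_LAT = 111_320   # approximate metres per degree of latitude
LON_SCALE = 0.75               # cos(41.8 deg), roughly Chicago's latitude

# Assumed input: one row per incident, with columns date, lat, lon
records = pd.read_csv("chicago_crimes.csv", parse_dates=["date"])

# Assign each incident to a ~300-metre grid cell and a calendar week
records["row"] = (records["lat"] * METRES_PER_DEG_LAT // CELL_METRES).astype(int)
records["col"] = (records["lon"] * METRES_PER_DEG_LAT * LON_SCALE // CELL_METRES).astype(int)
records["week"] = records["date"].dt.to_period("W")

# Weekly event counts per cell: the kind of spatiotemporal series
# a predictive model could be trained and evaluated on
weekly_counts = records.groupby(["row", "col", "week"]).size().rename("events")
print(weekly_counts.head())
```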
Previous efforts to use AIs to predict crime have been controversial because they can perpetuate racial bias. In recent years, the Chicago Police Department has trialled an algorithm that created a list of people deemed most at risk of being involved in a shooting, either as a victim or as a perpetrator. Details of the algorithm and the list were initially kept secret, but when the list was finally released, it turned out that 56 per cent of Black men in the city aged between 20 and 29 featured on it.
Chattopadhyay concedes that the data used by his model will also be biased, but says that steps have been taken to reduce the effect of bias and that the AI doesn’t identify suspects, only potential sites of crime. “It’s not Minority Report,” he says.
“Law enforcement resources are not infinite. So you do want to use that optimally. It would be great if you could know where homicides are going to happen,” he says.
Chattopadhyay says the AI’s predictions could be more safely used to inform policy at a high level, rather than being used directly to allocate police resources. He has released the data and algorithm used in the study publicly so that other researchers can investigate the results.
The researchers also used the data to look for areas where human bias is affecting policing. They analysed the number of arrests following crimes in neighbourhoods in Chicago with different socioeconomic levels. This showed that crimes in wealthier areas resulted in more arrests than they did in poorer areas, suggesting bias in the police response.
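That comparison amounts to measuring, for each socioeconomic bracket, the fraction of recorded crimes that led to an arrest. A minimal sketch of such a check, with a hypothetical data file and column names rather than the study's actual variables, might look like this:

```python
import pandas as pd

# Assumed input: one recorded crime per row, with a neighbourhood-level
# socioeconomic bracket and whether the incident led to an arrest
crimes = pd.read_csv("crimes_with_outcomes.csv")  # columns: ses_bracket, arrest

# Fraction of recorded crimes that result in an arrest, per bracket
arrest_rates = crimes.groupby("ses_bracket")["arrest"].mean()
print(arrest_rates.sort_values(ascending=False))
```

A systematically higher arrest rate in wealthier brackets, as the study reports, would point to uneven police response rather than uneven crime.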
Lawrence Sherman at the Cambridge Centre for Evidence-Based Policing, UK, says he is concerned that the study includes both reactive and proactive policing data: that is, crimes that tend to be recorded because people report them and crimes that tend to be recorded because police go out looking for them. The latter type of data is very susceptible to bias, he says. “It could be reflecting intentional discrimination by police in certain areas,” he says.
Journal reference: Nature Human Behaviour, DOI: 10.1038/s41562-022-01372-0