Junk Science and Bad Policing: The Homicide Prediction Project

The law enforcement breed can be a pretty dark lot.  To be paid to think suspiciously leaves its mark, fostering an incentive to identify crimes and misdemeanours with instinctive compulsion.  Historically, this saw the emergence of quackery and bogus attempts to identify criminal tendencies.  Craniometry, the measuring of skull size, was for a time an attractive pursuit for the aspiring crime hunter and lunatic sleuth.  The crime fit the skull.

With the advent of facial recognition technologies, we are seeing the same old habits appear, with their human creators struggling to identify the best means of eliminating compromising biases.  A paper published by IBM researchers in 2019, titled “Diversity in Faces”, shows that such efforts end up returning to the old grounds of quackery, including the use of “craniofacial distances, areas and ratios, facial symmetry and contrast, skin color, age and gender predictions, subjective annotations, and pose and resolution.”

The emergence of artificial intelligence (AI) tools that claim to predict criminality perpetuates similar sins.  Police, to that end, have consistently shown themselves unable to resist the attractions supposedly offered by data programs and algorithmic orderings, however sophisticated.  These can take such crude forms as those advanced by Pasco County Sheriff Chris Nocco, a devotee of that oxymoronic pursuit “intelligence-led policing,” stacked with its snake oil properties.  A 2020 Tampa Bay Times piece on the exploits of that Florida county’s sheriff’s office made it clear that Nocco was keen on creating “a cutting-edge intelligence program that could stop crime before it happened.”

The counter to this was impressive in its savagery.  Such forms of law enforcement featured, in the view of criminologist David Kennedy of the John Jay College of Criminal Justice, “One of the worst manifestations of the intersection of junk science and bad policing”, in addition to its utter lack of “common sense and humanity”.

The trend towards data-heavy systems that supposedly offer insight into inherent, potential criminality has captured police departments in numerous countries.  A recommendation paper from the European Crime Prevention Network notes the use of “AI tools in hopes of rendering law enforcement more effective and cost-efficient” across the European Union.  Predictive policing is singled out as particularly attractive, notably as a response to smaller budgets and fewer staff.

In the United Kingdom, the government’s Ministry of Justice has taken to AI with gusto through the Homicide Prediction Project, a pilot program that hoovers up data from police and government data sets to generate profiles and assess the risk of a person committing murder.  The program, commissioned by the Prime Minister’s Office in 2023 and involving the MoJ, the Home Office, Greater Manchester Police (GMP) and the Metropolitan Police in London, only came to light because of a Freedom of Information request by the charity Statewatch.

According to the Data and Analysis unit within the MoJ, the data science program explores “the power of MOJ datasets in relation to assessment of homicide risk”, the “additional power of the Police National Computer dataset” in doing the same, and “the additional power of local police data”.  It also seeks to review the characteristics of offenders that increase such a risk, exploring “alternative and innovative data science techniques to risk assessment and homicide.”

What stands out in the program is the type of data shared between the agencies.  This includes the types of criminal convictions, the age at which a person first appeared as a victim (including of domestic violence), and the age of a person’s first encounter with the police.  But also included are such matters as “health markers which are expected to have predictive power”, be they on mental health, addiction issues, suicide, self-harm and disability.

The use of predictive models is far from new for the wonks at the MoJ.  Those used in the Offender Assessment System (OASys) have previously been found to profile people differently in accordance with their ethnicities.  The National Offender Management Service noted in a 2015 compendium of research and analysis of the system between 2009 and 2013, “Relative predictive validity was greater for female than male offenders, for White offenders than offenders of Asian, Black and Mixed ethnicity, and for older than younger offenders.”
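To make the compendium’s finding concrete: “predictive validity” is commonly measured by how well a risk score separates those who go on to offend from those who do not, and it can be computed separately for each demographic group.  The sketch below is a minimal, hypothetical illustration of that comparison using entirely synthetic data (it is not OASys code; the groups, the simulate_group helper and the signal strengths are assumptions for demonstration).  A lower AUC for one group means the score predicts less accurately for that group.

```python
# Minimal sketch, synthetic data only: comparing the predictive validity of a
# risk score across subgroups via the area under the ROC curve (AUC).
# Not OASys code; group names and signal strengths are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def simulate_group(n: int, signal: float):
    # Binary outcome (1 = reoffended) and a score whose link to the outcome
    # is deliberately weaker for some groups (smaller `signal`).
    outcome = rng.integers(0, 2, size=n)
    score = signal * outcome + rng.normal(0.0, 1.0, size=n)
    return outcome, score

# The same kind of score "works" better for Group A than for Group B here,
# mirroring the compendium's finding of unequal relative predictive validity.
for name, signal in [("Group A", 1.5), ("Group B", 0.5)]:
    outcome, score = simulate_group(5000, signal)
    print(f"{name}: AUC = {roc_auc_score(outcome, score):.2f}")
```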

Statewatch researcher Sofia Lyall finds little to recommend in the program, renamed, for evidently more palatable consumption, the Sharing Data to Improve Risk Assessment program.  “Time and again, research shows that algorithmic systems for ‘predicting’ crime are inherently flawed.”  The Homicide Prediction Project was “chilling and dystopian”, profiling individuals “as criminals before they have done anything.”  She is also convinced that the system will, as with others, “code in bias towards racialized and low-income communities” while posing grave threats to privacy.

The unit claims that the work is only intended for dry research purposes, with “no direct operational or policy changes” arising because of it, nor any individual application to a “person’s journey through the justice system.”  This is a nonsensical assertion, given the sheer temptations open to officials to implement a program that uses hefty data sets in order to ease the task of rigorous policing.  The representatives of law enforcement crave results, even those poorly arrived at, and algorithmic expediency and actuarial fantasy are there to aid them.  The “precrime” dystopia portrayed in Philip K. Dick’s The Minority Report (1956) is well on its way to being realised.

Binoy Kampmark was a Commonwealth Scholar at Selwyn College, Cambridge. He lectures at RMIT University, Melbourne. Email: bkampmark@gmail.com.