Think Tank: Urgent Oversight Needed for Police AI

A leading think tank has called for urgent regulatory and oversight mechanisms to be introduced to govern the use of machine learning technology by UK law enforcers.

The Royal United Services Institute for Defence and Security Studies (RUSI) is the world’s oldest independent defence and security think tank. Its latest report, Machine Learning Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges, was published with the Centre for Information Rights at the University of Winchester.

It argued that although machine learning is currently used only in limited scenarios, such as supporting custody decisions, its role in policing could expand much further, with forces already trialing the technology in a variety of decision-making processes.

It described the lack of a regulatory and governance framework for its use as “concerning.”

“A new regulatory framework is needed, one which establishes minimum standards around issues such as transparency and intelligibility, the potential effects of the incorporation of an algorithm into a decision-making process, and relative ethical issues,” it continued. “A formalized system of scrutiny and oversight, including an inspection role for Her Majesty’s Inspectorate of Constabulary and Fire and Rescue Services is necessary to ensure adherence to this new framework.”

The report also warned that machine learning algorithms require “constant attention and vigilance” to ensure any predictions they produce are as unbiased and accurate as possible. To that end, RUSI recommended setting up local ethics boards to assess each new police implementation.

The use of emerging technologies in policing has been controversial over the years, as regulatory oversight often struggles to catch up with day-to-day operations.

In May this year, rights groups called on the police to stop using facial recognition technology, claiming that FOI responses from forces proved it was “dangerous and inaccurate.”

The false-positive rate for the technology at the Metropolitan Police stood at 98%.

Source: Information Security Magazine