Met Police make first arrest using facial recognition technology

The Metropolitan Police is beginning operational use of facial recognition technology across London.

It follows a number of trials of the cameras, which have been criticised by human rights campaigners as a risk to privacy.

Here’s how the technology works, and why it has proved so controversial.

– How does it work?

Live facial recognition (LFR) technology uses special cameras to scan the structure of faces in a crowd.

The system then creates a digital image and compares the result against a ‘watch list’ made up of pictures of people who have been taken into police custody.

Not everybody on police watch lists is wanted for the purposes of arrest – they can include missing people and other persons of interest.

If a match is found, officers in the area of the cameras are alerted.
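In outline, that matching step works like a photo comparison at scale. The sketch below is a minimal illustration of the idea in Python, using the open-source face_recognition library. It is not the Met's actual system; the filenames and tolerance threshold are placeholders assumed for the example.

# Minimal, illustrative sketch of a face-matching pipeline (not the Met's
# system). Assumes the open-source face_recognition library is installed
# and that the image files named below exist locally.
import face_recognition

# Build the 'watch list': one facial encoding per custody image.
# These filenames are hypothetical placeholders.
watchlist_files = ["custody_001.jpg", "custody_002.jpg"]
watchlist_encodings = []
for path in watchlist_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip images in which no face was detected
        watchlist_encodings.append(encodings[0])

# Scan one frame from the camera feed (a saved still, for simplicity).
frame = face_recognition.load_image_file("crowd_frame.jpg")
for face_encoding in face_recognition.face_encodings(frame):
    # Compare the detected face against every watchlist entry;
    # tolerance is an assumed threshold (lower means stricter matching).
    matches = face_recognition.compare_faces(
        watchlist_encodings, face_encoding, tolerance=0.6
    )
    if any(matches):
        print("Possible watchlist match - alert officers in the area")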

– How much has it been used?

The Met has used the technology multiple times since 2016, according to the force's website, including at Notting Hill Carnival in 2016 and 2017, on Remembrance Day in 2017, and at the Port of Hull docks, assisting Humberside Police, in 2018.

It has also undertaken several other trials in and around London since then.

South Wales Police piloted the technology during the week of the 2017 Champions League final in Cardiff, becoming the first UK force to use it at a large sporting event.

Facial recognition has also been used at a number of privately owned UK sites, including shopping centres, museums and conference centres, according to an investigation by civil liberties group Big Brother Watch.

– Why is it controversial?

Campaigners say facial recognition breaches citizens’ human rights.

Liberty has said scanning and storing biometric data ‘as we go about our lives is a gross violation of privacy’.

Big Brother Watch says ‘the notion of live facial recognition turning citizens into walking ID cards is chilling’.

Some campaigners claim the technology will deter people from expressing views in public or going to peaceful protests.

It is also claimed that facial recognition can be unreliable, and that it is least accurate when attempting to identify black people and women.

In its own investigation into the technology, the Information Commissioner’s Office (ICO) concluded that a legal code of practice should be introduced to ensure its safe deployment.

In September last year, the High Court ruled that South Wales Police's use of the technology was lawful, dismissing a challenge from an activist who argued that having his face scanned caused him 'distress', and that processing an image taken of him in public violated his privacy and data protection rights.

Ed Bridges, 36, from Cardiff, brought the challenge, claiming his face was scanned while he was doing his Christmas shopping in 2017 and at a peaceful anti-arms protest in 2018.

After the ruling, Mr Bridges said he would appeal against the decision; the appeal is due to be heard in June.

– What do the police say?

Speaking last month at the Met's announcement that the technology would be rolled out, Assistant Commissioner Nick Ephgrave said the force is 'in the business of policing by consent' and believes it is effectively balancing the right to privacy with crime prevention.

He said: ‘Everything we do in policing is a balance between common law powers to investigate and prevent crime, and Article 8 rights to privacy.

‘It’s not just in respect of live facial recognition, it’s in respect of covert operations, stop and search – there’s any number of examples where we have to balance individuals’ right to privacy against our duty to prevent and deter crime.’
