
London police’s use of facial recognition falls flat on its face

Rolled out for a second year at the Notting Hill Carnival, the technology 'couldn't tell the difference between a young woman and a balding man', said observers

A “top-of-the-line” automated facial recognition (AFR) system trialled for the second year in a row at London’s Notting Hill Carnival couldn’t even tell the difference between a young woman and a balding man, according to a rights group worker invited to view it in action.

Because yes, of course they did it again: London’s Met Police used controversial, inaccurate, largely unregulated AFR technology to spot troublemakers. And once again, it did more harm than good.

Last year, it proved useless. This year, it proved worse than useless: it blew up in their faces, with 35 false matches and one wrongful arrest of somebody erroneously tagged as being wanted on a warrant for a rioting offense.

Silkie Carlo, technology policy officer for the civil rights group Liberty, observed the technology in action. In a blog post, she described the system as showing “all the hallmarks of the very basic pitfalls technologists have warned of for years – policing led by low-quality data and low-quality algorithms”.

In spite of its lack of success, the Met’s project leads viewed the weekend not as a failure but as a “resounding success,” Carlo said, because it had come up with one solitary successful match.

Even that was undermined by sloppy record-keeping that got an individual wrongfully arrested: the AFR match was accurate, but the person had already been processed by the justice system and was erroneously still included on the suspect database.

The Notting Hill Carnival pulls in some 2m people to the west London district on the last weekend of August every year. Of the 454 people arrested last year, the technology didn’t tag a single one as a prior troublemaker.

But why let failure puncture your technology balloon? London’s Metropolitan Police went right ahead with plans to again use AFR to scan the faces of people partying at Carnival, in spite of Liberty having called the practice racist.

Studies bear out the claim that AFR is an inherently racist technology. One reason is that black faces are over-represented in face databases to begin with, at least in the US: according to a study from Georgetown University’s Center for Privacy and Technology, in certain states black Americans are arrested at up to three times the rate their share of the population would suggest. A demographic’s over-representation in the database means that whatever error rate a facial recognition system has, a disproportionate share of its false matches will fall on that demographic.
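To make that multiplication effect concrete, here is a rough back-of-the-envelope sketch in Python. The numbers are invented for illustration (they are not drawn from the Georgetown study or the Met’s figures); the point is only that a uniform error rate still concentrates false matches on whichever group is over-represented in the database.

```python
# Illustrative only: the numbers below are made up for the sake of the example,
# not taken from the Georgetown study or the Met's figures.
#
# Simple model: each scanned face has a fixed chance of producing a false
# match, and that false match is drawn uniformly from the mugshot database.
# A group's share of the *errors* therefore tracks its share of the
# *database*, not its share of the general population.

faces_scanned = 2_000_000        # roughly the size of the Carnival crowd
false_match_rate = 0.00002       # hypothetical per-face chance of a false match
share_of_population = 0.30       # the group's share of the general population
share_of_database = 0.60         # the group's over-represented share of the database

expected_false_matches = faces_scanned * false_match_rate
false_matches_on_group = expected_false_matches * share_of_database

print(f"Expected false matches overall: {expected_false_matches:.0f}")
print(f"Falling on the over-represented group: {false_matches_on_group:.0f} "
      f"({share_of_database:.0%} of the errors, vs {share_of_population:.0%} of the population)")
```

With these invented numbers, a group making up 30% of the population absorbs 60% of the false matches, simply because it makes up 60% of the database.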

Beyond that over-representation, facial recognition algorithms themselves have been found to be less accurate at identifying black faces.

During a recent, scathing US House oversight committee hearing on the FBI’s use of the technology, it emerged that 80% of the people in the FBI database don’t have any sort of arrest record. Yet the system’s recognition algorithm inaccurately identifies them during criminal searches 15% of the time, with black women most often being misidentified.

That’s a lot of people wrongly identified as persons of interest to law enforcement.

The problems with American law enforcement’s use of AFR are replicated across the pond. The Home Office’s database of 19m mugshots contains hundreds of thousands of facial images that belong to individuals who’ve never been charged with, let alone convicted of, an offense.

Here’s Carlo describing this woebegone technology making mistakes in real time at Carnival:

I watched the facial recognition screen in action for less than 10 minutes. In that short time, I witnessed the algorithm produce two ‘matches’ – both immediately obvious, to the human eye, as false positives. In fact both alerts had matched innocent women with wanted men.

The police brushed it off, she said:

They make their own analysis before stopping and arresting the identified person anyway, they said.

‘It is a top-of-the-range algorithm,’ the project lead told us, as the false positive match of a young woman with a balding man hovered in the corner of the screen.

Carlo writes that Carson Arthur, from the accountable policing organization StopWatch, was also observing the AFR trial. When he asked the officers what success would look like, here’s how a project leader reportedly responded:

We have had success this weekend – we had a positive match!

That’s not a lot of return on investment, to put it lightly: the one arrest was erroneous, and police stopped dozens of innocent people to request identification after the system incorrectly tagged them as troublemakers (thankfully, they had ID on hand; otherwise, they could have been wrongfully arrested). Carlo points out that the single match came at the price of biometric surveillance of 2m carnival-goers and plenty of police resources.
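For a sense of scale, here is a quick calculation using the figures reported above (one genuine match, 35 false matches, roughly 2m attendees). It shows how thin that “resounding success” really was:

```python
# Figures as reported in the article: 1 true match (which still led to a
# wrongful arrest), 35 false matches, roughly 2m carnival-goers scanned.
true_matches = 1
false_matches = 35
faces_scanned = 2_000_000

# Precision: what fraction of the system's alerts pointed at the right person.
precision = true_matches / (true_matches + false_matches)
alerts_per_million = (true_matches + false_matches) / faces_scanned * 1_000_000

print(f"Precision of the alerts: {precision:.1%}")          # roughly 2.8%
print(f"Alerts per million faces scanned: {alerts_per_million:.0f}")
```

In other words, roughly 97% of the alerts pointed at the wrong person, before even counting the cost of scanning everyone else.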

The lack of law enforcement acknowledgement of AFR’s poor track record and invasion of privacy is par for the course, Carlo said:

None of our concerns about facial recognition have registered with the police so far. The lack of a legal basis. The lack of parliamentary or public consent. The lack of oversight. The fact that fundamental human rights are being breached.

Carlo asks where we’ll wind up if this “offensively crude” technology dominates public spaces, saying:

If we tolerated facial recognition at Carnival, what would come next? Where would the next checkpoint be? How far would the next ‘watch list’ be expanded? How long would it be before facial recognition streams are correlated?

We can look to China for a picture of what pervasive AFR looks like. We already know how it looks in Beijing: it looks like being followed into public restrooms as authorities ration toilet paper.

We can also look to China for recent police use of AFR that was actually effective, if one assumes local media accounts are trustworthy and haven’t been airbrushed by censors. On Monday, local media reported that police in Qingdao, a coastal city in eastern China’s Shandong province, used the technology to identify and arrest 25 suspects during an Oktoberfest held in August.

The system also recognized people with histories of drug addiction, 19 of whom tested positive for drug use and were subsequently arrested, as were five people with previous convictions for theft who were found to have stolen phones and other items at the festival. According to Sixth Tone, 18 cameras installed at four entrances captured a total of 2.3m faces.

We’ve also seen China roll out AFR in a range of other situations.

Sixth Tone quoted a Shanghai lawyer who said that China has a long way to go when it comes to protecting individuals’ privacy rights. While cities and provinces have published or proposed guidelines, a set of rules at the national level that were drafted and published in November 2016 still haven’t been passed.

But really, what good are laws protecting individuals’ privacy when they’re simply ignored?

Both UK and US police have been on non-sanctioned AFR sprees. In the US, the FBI never bothered to do a legally required privacy impact assessment before rolling out the technology. In the UK, retention of millions of people’s faces was declared illegal by the High Court back in 2012. At the time, Lord Justice Richards told police to revise their policies, giving them a period of “months, not years” to do so.

“Months”, eh? It took five years. The Home Office only came up with a new set of policies in February of this year.

In the absence of policies, China’s gone whole-hog for AFR. Hell, soon people are going to be able to purchase KFC fried chicken by smiling.

We’re used to clucking our tongues over China’s approach to surveillance and censorship, from extensive media coverage of the Great Firewall to its forcing spyware onto a minority group.

But as the Notting Hill Carnival shows yet again, there’s nothing uniquely Chinese, or British, or American, about police using biometrics willy-nilly, without regard for effectiveness, privacy invasion or legality.

Apparently, the gee-whiz nature of the technology is sparkling so brightly that it obscures its flaws and repercussions. Too often, regrettably, politicians and law enforcement have proven deaf and blind to the downsides.


7 Comments

Is it time to put on our Guy Fawkes masks every time we leave the house?
If “freedom’s just another word for nothing left to lose” (Janis Joplin),
is privacy just another word for nothing left to be known?


“one wrongful arrest of somebody erroneously tagged”

Geez, it shouldn’t have gone that far. Any software filter will have false positives, but what about safeguards? Wouldn’t it be programmed to retrieve a mugshot for comparison? Quick verification for the humans actually handcuffing the suspect.

“Hey WAIT. That’s a young woman–not a balding man! Sorry miss; our mistake. You’re free to go.”


Why would you use a database with everybody in it? Why not only the ‘types’ you’re looking for (criminal past, whatever)? In one of the cases that would eliminate 80% of the database and probably increase processing throughput… At least it would increase the odds of a ‘correct’ hit. IMHO


Interesting that unless people had ID on them, they could have been wrongfully arrested. The reason we were given for the UK not joining Schengen (no border controls) was that it would mean compulsory carrying of photo ID. So now we have a de facto intra-UK Schengen with none of the benefits.
Biometrics still have a long way to go – the fingerprint reader on my Note 4 doesn’t work for me for several minutes after I have washed my hands, and for up to a week if I have been doing DIY.

