
Facial recognition – another setback for law enforcement

"Something needs to be done," said the court. Where do you stand? For or against, have your say in our comments.

So far this year, the use of facial recognition by law enforcement has been successfully challenged by courts and legislatures on both sides of the Atlantic.
In the US, for example, Washington State Senate Bill 6280 appeared in January 2020, and proposed curbing the use of facial recognition in the state, though not entirely.
The bill admitted that:

[S]tate and local government agencies may use facial recognition services in a variety of beneficial ways, such as locating missing or incapacitated persons, identifying victims of crime, and keeping the public safe.

But it also insisted that:

Unconstrained use of facial recognition services by state and local government agencies poses broad social ramifications that should be considered and addressed. Accordingly, legislation is required to establish safeguards that will allow state and local government agencies to use facial recognition services in a manner that benefits society while prohibiting uses that threaten our democratic freedoms and put our civil liberties at risk.

And in June 2020, Boston followed San Francisco to become the second-largest metropolis in the US – indeed, in the world – to prohibit the use of facial recognition.
Even Boston’s Police Department Commissioner, William Gross, was against it, despite its obvious benefits for finding wanted persons or fugitive convicts who might otherwise easily hide in plain sight.
Gross, it seems, just doesn’t think it’s accurate enough to be useful, and was additionally concerned that facial recognition software, loosely put, may work less accurately as your skin tone gets darker:

Until this technology is 100%, I’m not interested in it. I didn’t forget that I’m African American and I can be misidentified as well.


Across the Atlantic, similar objections have been brewing.
Edward Bridges, a civil rights campaigner in South Wales, UK, has just received a judgement from Britain’s Court of Appeal that establishes judicial concerns along similar lines to those aired in Washington and Boston.
In 2017, 2018 and 2019, the South Wales Police (Heddlu De Cymru) had been trialling a system known as AFR Locate (AFR is short for automatic facial recognition), with the aim of using overt cameras – mounted on police vans – to look for the sort of people who are often described as “persons of interest”.
In its recent press summary, the court described those people as: “persons wanted on warrants, persons who had escaped from custody, persons suspected of having committed crimes, persons who may be in need of protection, vulnerable persons, persons of possible interest […] for intelligence purposes, and persons whose presence at a particular event causes particular concern.”
Bridges originally brought a case against the authorities back in 2019, on two main grounds.
Firstly, Bridges argued that even though AFR Locate would reject (and automatically delete) the vast majority of images it captured while monitoring passers-by, it was nevertheless a violation of the right to, and the expectation of, what the law refers to as “a private life”.

AFR Locate wasn’t using the much-maligned technology known as Clearview AI, based on a database of billions of already-published facial images scraped from public sites such as social networks and then indexed against names in order to produce a global-scale mugshot-to-name “reverse image search” engine. AFR Locate matches up to 50 captured images a second from a video feed against a modest list of mugshots already assembled, supposedly with good cause, by the police. The system trialled was apparently limited to a maximum mugshot database of 2000 faces, with South Wales Police typically looking for matches against just 400 to 800 at a time.
Secondly, Bridges argued that the system breached what are known as Public Sector Equality Duty (PSED) provisions because of possible gender and race based inaccuracies in the technology itself – simply put, that unless AFR Locate were known to be free from any sort of potentially sexist or racist inaccuracies, however inadvertent, it shouldn’t be used.
In 2019, a hearing at Divisional Court level found against Bridges, arguing that the use of AFR Locate was proportionate – presumably on the grounds that it wasn’t actually trying to identify everyone it saw, but would essentially ignore any faces that didn’t seem to match a modestly-sized watchlist.
The Divisional Court also dismissed Bridges’ claim that the software might essentially be discriminatory by saying that there was no evidence, at the time the system was being trialled, that it was prone to that sort of error.
Bridges went to the Court of Appeal, which overturned the earlier decision somewhat, but not entirely.
There were five points in the appeal, of which three were accepted by the court and two rejected:

  • The court decided that there was insufficient guidance on how AFR Locate was to be deployed, notably in respect of deciding where it was OK to use it, and who would be put on the watchlist. The court found that its trial amounted to “too broad a discretion to afford to […] police officers.”
  • The court decided that the South Wales Police had not conducted an adequate assessment of the impact of the system on data protection.
  • The court decided that, even though there was “no clear evidence” that AFR Locate had any gender or race-related bias, the South Wales Police had essentially assumed the absence of bias rather than taking reasonable steps to establish it as a fact.

(The court rejected one of Bridges’ five points on the basis that it was legally irrelevant, relying as it did on a law enacted more recently than the events in the case.)
Interestingly, the court rejected what you might think of as the core of Bridges’ objections – which are the “gut feeling” objections that many people have against facial recognition in general – namely that AFR Locate interfered with the right to privacy, no matter how objectively it might be programmed.
The court argued that “[t]he benefits were potentially great, and the impact on Mr Bridges was minor, and so the use of [automatic facial recognition] was proportionate.”
In other words, the technology itself hasn’t been banned, and the court seems to think it has great potential, just not in the way it’s been trialled so far.
And there you have it.
The full judgement runs to 59 very busy pages, but is worth looking at nevertheless, for a sense of how much complexity cases of this sort seem to create.
The bottom line right now, at least where the UK judiciary stands on this, seems to be that:

  1. Facial recognition is OK in principle and may have significant benefits in detecting criminals at large and identifying vulnerable people.
  2. More care is needed in working out how we use it to make sure that we benefit from point (1) without throwing privacy in general to the winds.
  3. Absence of evidence of potential discriminatory biases in facial recognition software is not enough on its own, and what we really need is evidence of absence of bias instead.

In short, “Something needs to be done,” which leads to the open question…
…what do you think that should be? Let us know in the comments!


21 Comments

Facial recognition technology isn’t always accurate! I have two friends who resemble me and I have been falsely identified when their pictures were posted on Facebook
This can be a helpful tool but it is not as accurate as DNA


I’m not sure how absence of bias can be proven to people who are convinced that it is inherently biased and won’t accept evidence. There are plenty of such people around.


I didn’t say “absence of bias” but rather “absence of evidence”. If you “won’t accept evidence” (one way or the other) even when it is presented and can be considered credible, then you are indeed part of the problem – but that goes for a plethora of claims and counter-claims in the world of science and engineering. OTOH, if there isn’t any evidence either way then I think a court is entitled to say, “There should be,” especially given the existence of a directive explicitly entitled “public sector equality duty.”
I suspect that any trial that is designed well enough to search for biases in recognition on the basis of characteristics such as gender or race will also help us devise a more general way to measure the yet more fundamental issue of “is the technology suitable to use against *anyone*, let alone *everyone*”. For example, what precision should we expect? What sort of false positive versus false negative rates? Should our standards change as products support ever larger watchlists? How does the composition of the watchlist itself affect the precision of the detection? And so on.
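To make that last question a little more concrete, here’s a rough back-of-the-envelope sketch in Python. Every figure in it is hypothetical (the real per-comparison false-match rate of AFR Locate isn’t public in this sort of detail), except for the 50-faces-per-second capture rate and the 800-entry watchlist, which are simply the numbers quoted in the article above. The point is only to show how the expected number of false alerts scales with the size of the watchlist and the volume of faces scanned:

# All figures below are hypothetical and chosen only to show how the numbers
# scale; this is not a description of how AFR Locate actually performed.

FALSE_MATCH_RATE = 1e-6   # hypothetical chance that a single comparison wrongly "matches"
FACES_PER_SECOND = 50     # capture rate quoted in the article
WATCHLIST_SIZE = 800      # upper end of the 400-800 range quoted in the article
HOURS_DEPLOYED = 8        # hypothetical length of a single deployment

# Every captured face is compared against every entry on the watchlist.
comparisons = FACES_PER_SECOND * 3600 * HOURS_DEPLOYED * WATCHLIST_SIZE
expected_false_alerts = comparisons * FALSE_MATCH_RATE

print(f"Comparisons made:      {comparisons:,}")
print(f"Expected false alerts: {expected_false_alerts:,.1f}")

# Doubling the watchlist (or the hours, or the camera throughput) roughly
# doubles the expected number of false alerts. Real crowds won't supply 50
# distinct faces every second, so treat this as an upper-bound illustration.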


> How does the composition of the watchlist itself affect the precision of the detection?
That’s HUGE. A not-asked-enough question (and one that’s rarely easy to answer) is “are we asking the correct questions?”
Similarly: are IQ tests useless? Probably not.
But they must be evaluated for the attributes they overlook as well as the skills they assess and attempt to quantify.


In general, facial recognition should only be used as a tool for gathering information and assisting the authorities, but it should never be used for convicting anyone. All that stuff needs to be verified the old-fashioned way.


I don’t see the big deal, at least in the Wales case where they’re not using a globally-scraped collection of stolen photos. Basically, that’s having a computer do with greater speed and accuracy what you already expect police officers to do (scan everyone’s face to see if they kinda look like one of the mugshots he’s supposed to be looking out for.) But those guidelines do need to be in place and strictly applied, to keep any potential misapplication in check.
One need, for example, is to make sure everyone understands this isn’t a 100% accurate indictment, it’s just an indication of resemblance. They should treat a “hit” from this system the same as they’d treat seeing a guy in a red hat when someone was assaulted nearby by someone wearing a red hat.
And if there are any biases in the rate of matching, that doesn’t seem like an actual problem either, though they should be measured. Set a bar for accuracy, and subtract any measurable biases so that few “hits” reach the bar for problematic distinctions, or none if the bias is especially heavy. If that resulted in disproportionate police attention to white criminals, since others have a cloak of bad-ID-tech anonymity, would anyone complain about that?


> Basically, that’s having a computer do with greater speed and accuracy what you already expect police officers to do (scan everyone’s face to see if they kinda look like one of the mugshots he’s supposed to be looking out for.)
But it also represents a change in the way in which we are policed (or an opportunity to change it). Policing the streets with cameras is different to policing the streets with police officers. Police officers (at least in the UK – still, just about) represent a human face of justice and act as a deterrent. By being on the streets (preferably on foot, but also in a car) they see a wider context, and if they see someone who is wanted they can stop/arrest them far faster than any camera can.
A camera connected to a computer is however authoritarian and erodes the relationship with the wider public.
Cameras and computers may be cheaper; but there is a difference between price and value.


Nobody is saying a camera is the same thing as a police officer or that they should replace police, merely that they can aid police.


i fully support facial recognition in principle and practice for a variety of use cases. i am not into gut reactions based on the mainstream media and political entities. yes, law enforcement in general can be biased, with or without FR tech. the question (context) here is: is it better with it? i.e. is there less bias, fewer false arrests, less time to case resolution, less fear, more safety with FR deployed? i have yet to see a study on that. it isn’t – and never will be – 100%… but is it better than the status quo? and, citizens should have access to the same FR tools in order to catch bad cops. i think we need A/B comparisons of the trials – and independent ngo quality standards – if courts/the govt hold the keys, nothing will change for the better ever. final thoughts… FR tech is way less harmful than guns… can we ban those for police in SF, boston since they apparently care so much?


Although AFR is not 100% accurate, it is almost certainly more accurate than expecting a police officer to identify a person whose mugshot he or she saw at the station, and more accurate than the traditional identity parade. But AFR should never be the sole reason to prosecute a suspect; it should be a step towards finding more compelling evidence.


You can justify any egregious change on the basis of child protection, terrorism, or fighting crime. For example the US justifies torture on this basis.


I can’t get past the fact that this is a fundamental violation of an individual’s right to privacy.


But it’s not, though. I can’t think of any reasonable argument that supports an expectation of privacy from people recognizing your face, and even at worst this is a tool that makes that exponentially easier for them, not some source of heretofore unobtainable information about you.
They might have some database of privacy-violating information about you that they consult when they recognize you, and this technology might be able to make the consultation near-instantaneous rather than something they do over the succeeding nights… but those thoughts are more “implications that this has potential misapplications,” rather than a “fact that this is a fundamental violation.”


Even if it worked perfectly, hell no.
On private property (your house, court, office building) that’s the owner’s decision, just like “if” they decide to let you in. In public it becomes a prison guard.
I will avoid any country that implements this evil empire slave management system, I like being allowed to think I’m free.


It is only a matter of time; the tech does support valid law enforcement uses. But I think the oversight would have to be rather high.

