
Clearview AI face-matching service set to be fined over $20m

Scraping data for a facial recognition service? "That's unlawful", concluded both the British and the Australians.

The UK data protection regulator has announced its intention to issue a fine of £17m (about $23m) to controversial facial recognition company Clearview AI.

Clearview AI, as you’ll know if you’ve read any of our numerous previous articles about the company, essentially pitches itself as a social network contact finding service with extraordinary reach, even though no one in its immense facial recognition database ever signed up to “belong” to the “service”.

Simply put, the company crawls the web looking for facial images from what it calls “public-only sources, including news media, mugshot websites, public social media, and other open sources.”

The company claims to have a database of more than 10 billion facial images, and pitches itself as a friend of law enforcement, able to search for matches against mug shots and scene-of-crime footage to help track down alleged offenders who might otherwise never be found.

That’s the theory, at any rate: find criminals who would otherwise evade both recognition and justice.

In practice, of course, any picture in which you appeared that was ever posted to a social media site such as Facebook could be used to “recognise” you as a suspect or other person of interest in a criminal investigation.

Importantly, this “identification” would take place not only without your consent but also without you knowing that the system had alleged some sort of connection between you and criminal activity.

Any expectations you may have had about how your likeness was going to be used and licensed when it was uploaded to the relevant service (if you even knew it had been uploaded in the first place) would thus be ignored entirely.

Understandably, this attitude provoked an enormous privacy backlash, including from giant social media brands such as Facebook, Twitter, YouTube and Google.

You can’t do that!

Early in 2020, those behemoths firmly told Clearview AI, “Stop leeching image data from our services.”

You don’t have to like any of those companies, or their own data-slurping terms-and-conditions of service, to sympathise with their position.

Uploaded images, no matter how publicly they may be displayed, don’t suddenly stop being personal information just because they’re published, and the terms and conditions applied to their ongoing use don’t magically evaporate as soon as they appear online.

Clearview, it seemed, was having none of this, with its self-confident and unapologetic founder Hoan Ton-That claiming that:

There is […] a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way.

The other side of that coin, as a commenter pointed out on the CBS video from which the above quote is taken, is the observation that:

You were so preoccupied with whether or not you could, you didn’t stop to think if you should.

Clearview AI has apparently continued scraping internet images heartily over the 22 months since that video aired, given that it claimed at that time to have processed 3 billion images, but now claims more than 10 billion images in its database.

That’s despite the obvious public opposition implied by lawsuits brought against it, including a class action suit in Illinois, which has some of the strictest biometric data processing regulations in the USA, and an action brought by the American Civil Liberties Union (ACLU) and four community organisations.

UK and Australia enter the fray

Claiming First Amendment protection is an intriguing ploy in the US, but is meaningless in other jurisdictions, including in the UK and Australia, which have completely different constitutions (and, in the case of the UK, an entirely different constitutional apparatus) to the US.

Those two countries decided to pool their resources and conduct a joint investigation into Clearview, with both countries’ privacy regulators recently publishing reports on what they found, and interpreting the results in local terms.

The Office of the Australian Information Commissioner (OAIC) decided that Clearview “interfered with the privacy of Australian individuals” because the company:

  • Collected sensitive information without consent;
  • Collected information by unlawful or unfair means;
  • Did not notify individuals of data that was collected; and
  • Did not ensure that the information was accurate and up-to-date.

Their counterparts at the ICO (Information Commissioner’s Office) in the UK came to similar conclusions, including that Clearview:

  • Had no lawful reason for collecting the information in the first place;
  • Did not process information in a way that people were likely to expect;
  • Had no process to stop the data being retained indefinitely;
  • Did not meet the “higher data protection standards” required for biometric data;
  • Did not tell anyone what was happening to their data.

Loosely speaking, both the OAIC and the ICO clearly concluded that an individual’s right to privacy trumps any consideration of “fair use” or “free speech”, and both regulators explicitly decried Clearview’s data collection as unlawful.

The ICO has now decided what it actually plans to do, as well as what it thinks about Clearview’s business model.

The proposed intervention includes: the aforementioned £17m ($23m) fine; a requirement not to touch UK residents’ data any more; and a notice to delete all data on British people that Clearview already holds.

The Aussies don’t seem to have proposed a financial penalty, but they did demand that Clearview must not scrape Australian data in future; must delete all data already collected from Australians; and must show in writing within 90 days that it has done both of those things.

What next?

According to reports, Clearview CEO Hoan Ton-That has reacted to these unequivocally adverse findings with an opening sentiment that would surely not be out of place in a tragic lovesong:

It breaks my heart that Clearview AI has been unable to assist when receiving urgent requests from UK law enforcement agencies seeking to use this technology to investigate cases of severe sexual abuse of children in the UK.

Clearview AI may, however, find its plentiful opponents replying with song lyrics of their own:

Cry me a river. (Don’t act like you don’t know it.)

What do you think?

Is Clearview AI providing a genuinely useful and acceptable service to law enforcement, or merely taking the proverbial? (Let us know in the comments. You may remain anonymous.)


24 Comments

I’m horrified that this has happened – that a company thinks these actions are ok is beyond belief! No privacy allowed. I hope they are prevented from continuing permanently.


Privacy vanished years ago.
CCTV, ANPR, Internet records… even GDPR has backfired, as nobody reads the acceptance click on most websites.
If this service can detect criminals, let them get on with it. If they want another copy of the thousands of pictures of me that are on the internet somewhere, that’s fine by me.
Surely nobody still thinks they can avoid such recognition.


If you have nothing to lose or hide, then why lock your doors at night or while away from home??
I would concur/agree if it were ONLY used for criminal round-up or purposes of that nature, then maybe.
As many have pointed out, when or where does it stop? Everything is sooo stretched and twisted that the original reason for use ends up in the trash – criminals can be on either side of the law
This is a double-edged sword!


Indeed, you need to remember that even if Clearview AI is banned from operating in the United Kingdom and in the Commonwealth of Australia – even if it ends up banned worldwide – there is little to stop groups who have no intention of complying with the regulations from using this technology anyway.

So the UK and AU rulings will not put this technology back into Pandora’s Jar where it can no longer be used at all….

…but that doesn’t mean that regulators such as OAIC and ICO ought not to consider the issues involved.

If, indeed, as a society, we don’t like this stuff happening at all, then failing to tell mainstream companies they can’t do it doesn’t put us in a very strong position if we later want to tell cybercriminals or corrupt governments that they aren’t allowed to do it.


I would agree with Sally; this is a George Orwell nightmare coming to pass. Some may say, what’s the problem if you have nothing to hide? However, this data is open to abuse, may be misused by the corrupt for their own ends, and is a vigilante’s dream come true. Completely open to abuse by non-democratic countries.


Although I am not familiar with Clearview’s financial status, I think the fine is too low by about an order of magnitude. Maybe two orders of magnitude.

Side note: In “The proposed intervention includes: the aforementioned $17m ($23m) fine; ” you doubled up on the US Dollar sign.


Duck wrote: “social media brands including Facebook, Twitter, YouTube and Google.” What about LinkedIn?


Ah, yes, I think LinkedIn also told Clearview to take a hike. I don’t have an exhaustive list of companies that sent cease-and-desist letters (and for some reason I can’t find mention of those companies on Clearview’s website :-)


We live at a time of huge change – comparisons with the Wild West do seem appropriate. The argument that Clearview’s technology is only used by law enforcement agencies to catch the most repugnant offenders relies on the assumption that the laws passed by the law makers are appropriate for the people they apply to. Senior politicians have considerable power which can be used against their people – again, too many examples in the last hundred years to ignore.

It also assumes that all members of the law enforcement agencies would act solely in pursuit of evidence for the offence they are investigating or preventing. Sadly they are all human, and there are too many examples of individuals using the technology for their personal agenda.

What is not mentioned, but perhaps should be, is the element of time – an image obtained of a teenager may still be used decades later and cast doubt on a character even where there is no justification. The temptation to take the easy route and avoid balanced evidence would be strong.

Finally, this case highlights what must be a major issue for all countries – the extent to which they are able to legislate for their own people. In this case Clearview operates across the globe, in areas not covered by international law, and interacts with many national interests: pictures scraped in Australia by an American company of a UK national.

It is cases like this that will slowly bring law and order to the internet. There will be many more.


I can see a business opportunity here – to infect these skimmers with false information, e.g. by generating fake photos and fake identities to go with them. Eventually there would be more fake humans in the databases than real people.


Nice smokescreen. Is it unimaginable that the American NSA, CIA, and FBI have secretly done this already? The world needs another whistleblower like Snowden. Are the UK and Australian governments completely ignorant of the reality? There is no such thing as “privacy” on the Internet. Sooner or later somebody you didn’t intend will have access to it, regardless of your privacy expectations. If you upload it, consider it public throughout the universe for eternity.


Sadly, that’s sound advice: the fact that Clearview AI have been stopped in some countries doesn’t stop the raw data being available to others…

…but that doesn’t change whether it’s acceptable or legal to do it, of course. So you can’t fault the UK and AU privacy regulators for investigating and acting simply on the grounds that by stopping someone doing this they can’t magically stop everyone.

If, as the OAIC says, running a business as Clearview AI has been doing is indeed unlawful and unfair, then you can hardly fault the OAIC for coming out and saying so. That’s a bit like saying that drivers bombing through urban areas at 120km/hr aren’t worth prosecuting and punishing on the grounds that you’ll never catch them all, or saying that we shouldn’t do anything about illegal robocallers in our own countries because we typically can’t catch those who commit that crime from overseas.


…even though no one in its immense facial recognition database ever signed up to “belong” to the “service”

Sounds strikingly similar to the US (and likely everywhere else) credit reporting agencies.
And here’s something I never expected to say:
The credit reporting structure seems to have a bit more constructive use than this.


The problem with thinking it’s OK if they ‘just use the images to identify criminals’ is this: what if a corrupt or tyrannical government, corporation, or agency decides that something YOU believe in, post online, visit, or purchase is a ‘crime’? Now you have a problem.


Especially when governments change, CEOs change, companies change their manifesto. Laws often change and the average Joe is told ‘ignorance is not an excuse’…


…or when sales targets increase and new and ambitious sales staff are determined to help the company to continue growing by finding “new and as-yet untapped markets.” Ironically, those new markets may very well be ones where the government has *not* changed, perhaps for a surprisingly long time.


My issue is with abuse (by criminals or governments).

To my knowledge ANPR isn’t abused by the law or criminals to much detriment in the UK. Occasionally a cowboy parking company tries it on.

Internet privacy however, is a mess.
Biometrics is getting more and more messy every day.

So the ‘if you’ve got nothing to hide’ argument is a bit wet in my opinion – better to spend time looking into actual incidents; it’s not a clear cut yes or no.


A few remarks.

ANPR seems to be used quite broadly by private companies in the UK, who use it to buy vehicle keepers’ addresses from the vehicle licensing agency in order to send “enforcement” notices demanding payment of a “fine” from those they claim have failed to pay the right fees. Whether you paid for parking is tracked via the plate number you are asked to type in when you buy a ticket; this is matched against the ANPR data stream at the entry and exit lanes to the car park. This seems to be a mainstream and legally acceptable, though obviously deeply disliked and insensitive, way of exploiting the technology. (Get a bicycle. No registration or tag required. Yet.)

Many car parks have Ts-and-Cs signs that advise of automated “enforcement” by an operating company hired to do the surveillance. Some car parks even do away with tickets by using ANPR at the exit to see if you’ve paid, e.g. via mobile phone. If you haven’t, or they think you haven’t, you’re essentially locked inside the car park until you negotiate your own release. Apparently many drivers like this approach because it’s convenient – no need to visit a pay station or ticket machine to get proof of payment, and no need to wind your window down and swipe a ticket or tag when you leave.

Some people argue that this use of ANPR is detrimental to privacy, Ts-and-Cs signs or not, because it’s an example of “feature creep” in a strictly regulated environment such as vehicle licensing that goes well beyond the original purposes of collecting large amounts of private data about car owners and keepers. They say that commercial parking providers should not be allowed to cozy up to government databases to turn disputes surrounding very basic, low-value, informal civil contracts into neo-criminal issues with what amount to privatised fines to stave off threatened court cases, simply so that parking attendants can be fired and non-payment turned into an automated and apparently lucrative revenue stream.

As for “if you have nothing to hide”, well, we *all* have things to hide. Either because we wish to, which should be our right except in specific and well-defined cases, or because we have already entered into an agreement with someone else to keep something secret. An example of that is the PIN for your bank card: you’re not supposed to tell it to anyone, not your spouse, not the police and certainly not an employee of the bank itself. So the “if you have nothing to hide” argument is a complete red herring in any discussion about cybersecurity, and serves only as a distraction whenever it appears. (It’s even a distraction in cases like this, where the purpose of mentioning it is to explain that it’s a distraction.)

There’s also the additional and important issue here about just how far it’s acceptable to go in scraping data collections that belong to other people, data that is already under an existing set of Ts-and-Cs, without permission (and, indeed, in the face of a formal request not to do so); in forming a derivative collection, and then commercialising the collection-you-made-from-the-collection for an utterly different purpose, in a completely different and secretive commercial ecosystem; in using the data you’ve scraped without the consent of anyone involved, and without them being informed at all – even if only by means of the equivalent of a verbose, legally complex and hard-to-read sign on the lamppost at the car park; in making secret inferences from that data in a way that might reasonably be described as alleging involvement in criminal activity; and in giving you no choice about, or acknowledgement of, any allegations made about you behind your back, no matter what privacy regulations might say. (That was a breathless sentence. But this is a breathlessly complex issue. Take a deeper breath and read it again :-)

There is a school of thought that argues that there *is* a clear-cut “yes” or “no” here, and that you can investigate the answer from first principles without looking into actual incidents to find whether the technology works or not. The problem, you could say, is that it is the principle and not the process that is improper. For example, you might argue that what Clearview AI is doing, much like the many governmental plans for “encryption backdoors” currently in play around the world, represents a fundamental and unacceptable erosion of the presumption of innocence, and therefore – like “trial by ordeal” in earlier generations, or interrogation without the right to legal representation in the modern era – ought to be prohibited outright as incompatible with the spirit of the times.

I’m not taking sides here. Just setting out some points of view.


Is it any different than when Mark Zuckerberg scanned the servers at Harvard and stole information from their databases to create the first iteration of Facebook? It’s a larger data source, but it’s the same crime.
Also… Clearview says it’s been doing this. There are probably hundreds of other organizations/systems that have been scanning, gathering and storing vast quantities of PUBLIC information, for their own use or future use. I think this privacy situation is like trying to plug a 1-inch hole in a planet-sized dam.

It’s out of the box. Tell me of a technology that has gone back in the box once it was brought out to the world.

People will freak out a bit right now, and ten years from now, when you walk up to an automated Starbucks, it will recognize you immediately, give you your favourite drink, and ask if you want to pay from your usual account.


The fact that Zuckerberg may have done wrong when getting The Facebook off the ground doesn’t magically give everyone else the right to do something similar, on a much bigger scale, throughout all ages, for ever and ever, so be it.

And the legal frameworks surrounding scraping and commercialising what is now specially regulated as biometric data are different.

So I agree with you that even if Clearview AI isn’t doing this, others probably are/have/will start doing so, and some of those will lie as low as they can.

And in 10 years’ time, I’ll still be going to coffee shops, not to Starbucks :-)

