
Clearview AI loses entire database of faceprint-buying clients to hackers

Time to worry about how well the facial recognition startup protects its database of 3 billion-plus faceprints scraped from our social media accounts?

Clearview AI, the controversial facial recognition startup that’s gobbled up more than three billion of our photos by scraping social media sites and any other publicly accessible nook and cranny it can find, has lost its entire client list to hackers – including details about its many law enforcement customers.
In a notification that The Daily Beast reviewed, the company told its customers that an intruder “gained unauthorized access” to its list of customers, to the number of user accounts they’ve set up, and to the number of searches they’ve run.
The disclosure also claimed that Clearview’s servers hadn’t been breached and that there was “no compromise of Clearview’s systems or network.” The company said that it’s patched the unspecified hole that let the intruder in, and that whoever it was didn’t manage to get their hands on customers’ search histories.
Tor Ekeland, an attorney for Clearview, sent a statement to news outlets saying that breaches are just a fact of life nowadays:

Security is Clearview’s top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.

Clearview, which has sold access to its gargantuan faceprint database to hundreds of law enforcement agencies, first came to the public’s attention in January when the New York Times ran a front-page article suggesting that the “secretive company […] might end privacy as we know it.”
In its exposé, the Times revealed that Clearview has quietly sold access to faceprints and facial recognition software to more than 600 law enforcement agencies across the US, claiming that it can identify a person based on a single photo, reveal their real name and far more.
Within a few weeks of the Times article, Clearview found itself facing a potential class action lawsuit claiming that the company amassed the photos out of “pure greed” to sell to law enforcement, thereby violating the strictest biometric privacy law in the US – Illinois’s Biometric Information Privacy Act (BIPA).
Twitter, Facebook, Google and YouTube – among the many online sources Clearview has scraped for the biometric data it’s selling (or giving away) – have ordered the company to stop, since its scraping violates their policies.


In a follow-up report, the Times noted that there’s a strong use case for Clearview’s technology: finding victims of child abuse. Investigators told the newspaper that Clearview’s tools have enabled them to identify the victims featured in child abuse videos and photos, leading them to names or locations of victims whom they may never have been able to identify otherwise. One retired chief of police said that running images of 21 victims of the same offender returned IDs for 14 minors, the youngest of whom was 13.
Following the Times’ exposé, New Jersey barred police from using the Clearview app. Canada’s privacy agencies said on Friday that they are also investigating Clearview to determine whether its technology violates the country’s privacy laws.
David Forscey, the managing director of the non-profit Aspen Cybersecurity Group, told the Daily Beast that Clearview’s breach should be worrying for its customers:

If you’re a law-enforcement agency, it’s a big deal, because you depend on Clearview as a service provider to have good security, and it seems like they don’t.

Put another way by tech policy advocate Jevan Hutson:

Clearview continues to give us a clear view of why biometric surveillance is an unsalvageable trash fire.



4 Comments

Wow! It’s amazing how many companies (not only police; even universities) are using Clearview AI, judging from the leaked customer list on Clearview AI’s Wikipedia page.
There must be some reason they’re using facial recognition, but a balance needs to be struck between privacy and justification. In addition, the security and protection of those databases should be monitored by outside parties / organizations / governments (just as our money and information are safeguarded in banks).


If you live your life as a law-abiding citizen, then what do you have to hide? User account images on Facebook, Twitter and LinkedIn are public to begin with. It’s the criminals who are worried, and so they should be.


aye, cool, let’s see how you feel about that when some dodgy crim with better tech skills than you swaps your mugshot for theirs…


I think “Anthony Maw” is way too naive and totally clueless about how digital footprints are shared between corporations and organizations.

