Naked Security

Faceless recognition can identify you, even when your face is hidden

New research shows AI can learn to identify people by matching patterns around their heads and bodies, even when their faces are obscured.

Forget about trying to protect your privacy by scrambling facial recognition technology, whether it’s by pixellating your mug, wearing a creepy T-shirt sporting distorted celebrity likenesses, donning glass nanosphere-coated duds that bounce back camera flashes, or wrapping the so-called privacy visor around your eyeglasses.

Facial recognition is everywhere.

In fact, Facebook appears to have reached the point where its systems don’t even need to see your face to recognize you.

Microsoft, for its part, has been showing off technology that can decipher emotions from the facial expressions of people who attend political rallies, recognize their genders and guesstimate their ages.

It’s everywhere.

Local law enforcement is using it in secret, a sports stadium used it to try to detect criminals at the Super Bowl, retail stores are tracking us with it, and even churches are using it to track attendance.

But, here’s the latest: researchers have demonstrated that algorithms can be trained to identify people by matching previously observed patterns around their heads and bodies, even when their faces are blurred or obscured.

In a new paper from Germany’s Max Planck Institute, the researchers described a system that they’re calling the Faceless Recognition System.

The researchers trained a neural network on a set of photos containing both obscured and visible faces. The network then used that input to predict the identity of obscured faces by looking for similarities in the area around a person’s head and body.
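At its core, that matching step can be pictured as nearest-neighbor search over feature vectors extracted from the head-and-body region. The sketch below is illustrative only: it assumes hypothetical pre-computed embeddings and toy identity labels, not the actual network or data from the Max Planck paper.

```python
import numpy as np

def identify(query_embedding, gallery_embeddings, gallery_labels):
    """Return the label of the gallery vector most similar to the query.

    query_embedding: feature vector for an obscured face's head/body region
    gallery_embeddings: (n, d) array of vectors for people seen during training
    gallery_labels: list of n identity labels (hypothetical names)
    """
    gallery = np.asarray(gallery_embeddings, dtype=float)
    query = np.asarray(query_embedding, dtype=float)
    # Cosine similarity between the query and every gallery vector
    sims = gallery @ query / (
        np.linalg.norm(gallery, axis=1) * np.linalg.norm(query)
    )
    # Predict the identity whose stored "body cue" vector is closest
    return gallery_labels[int(np.argmax(sims))]

# Toy data: two identities with distinctive body-cue vectors
gallery = [[1.0, 0.1], [0.1, 1.0]]
labels = ["alice", "bob"]
print(identify([0.9, 0.2], gallery, labels))  # closest to alice's vector
```

The point of the sketch is that the face itself never enters the comparison: whatever surrounds the head is enough, provided the system has seen that person before.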

Previous work has concentrated on different types of facial obfuscation, like blur, blacking-out, swirl, and dark spots. But as far as the researchers know, they’re the first to look at person recognition using a trainable system that leverages full-body cues.

For this work, they looked at three facial obfuscations: Gaussian blur, white boxes, and black boxes.

They found that, not surprisingly, it’s easier to recognize somebody if you’re working with a set of photos from the same event, where people’s appearance doesn’t change much.

When you have a photo collection that ranges across different events, however, changes in people’s clothing, the context, and illumination make it much more of a challenge to recognize individuals.

The researchers found that, for faces obscured by black squares, the system’s performance sagged dramatically: from 47.4 percent within a single event to 14.7 percent across different events. Still, that’s three times more accurate than the “naive” method of identifying obscured faces through blind prediction, they noted.

The more visible faces there are in a given photo set, the better the faceless recognition system will work. But even when, on average, only 1.25 instances of an individual’s face are fully visible, the system can still identify an obscured face with 69.6% accuracy.

If there are 10 images that show an individual’s face, accuracy increases to as high as 91.5%.

For comparison’s sake, in June 2015, Facebook’s AI team scored 83% accuracy, even for photos where faces weren’t clearly visible, by relying on cues such as a person’s stance and body type.

These results should raise concerns from a privacy perspective, the researchers said, given that with state-of-the-art techniques, blurring a head “has limited effect.”

Only a handful of tagged heads are enough to enable recognition, even across different events (different day, clothes, poses, point of view). In the most aggressive scenario considered (all user heads blacked-out, tagged images from a different event), the recognition accuracy of the faceless recognition system was 12× higher than chance level.

It’s “very probable” that similar systems are already being used in secret, the researchers suggested.

We believe it is the responsibility of the computer vision community to quantify, and disseminate the privacy implications of the images users share online.
