
Facebook aiming for faceless facial recognition

Worried that Facebook can't get your friends right in your selfies when their hair blows around? There might just be an app for that...


According to an article in New Scientist, Facebook’s facial recognition is moving beyond your face.

Yann Le Cun, who is Head of Artificial Intelligence (the ultimate job to program your way out of!) at Facebook, is quoted as saying something that is both obvious and telling:

People have characteristic aspects, even if you look at them from the back.

In other words, we can sometimes recognise each other, even though we aren’t face-to-face, through all sorts of other factors such as gait, posture, height, body shape, and so on.

Le Cun continues, perhaps a bit less convincingly:

For example, you can recognise Mark Zuckerberg very easily, because he always wears a gray T-shirt.

I’d probably have said it a bit differently:

You can usually recognise Mark Zuckerberg very easily, because he's rather famous, and you tend to see his picture face-on in direct association with banner headlines such as 'Facebook founder kills own food' or 'Zuck can speak Chinese.'

I suspect there is many a legal drama in which cultural assumptions have led to the wrongful prosecution of an innocent person, followed by a rip-snorting cross-examination by the hard-pressed defence counsel to uncover the dreadful mis-identification wrought by “a tweed hat,” or “sensible shoes.”

Or, for that matter, your “gray T-shirt.” (Maybe I should stop wearing mine?)

Anyway, the Facebook visual scientists are especially keen on figuring out which of your friends are in your photos, because that means that Facebook’s network can tag your photos by content, as well as by when and where they were taken, and thus do funkier things with them.

Indeed, Facebook’s new Moments app is squarely aimed at making it quicker and easier to share your, ahem, Moments, as Facebook calls them, with your real buddies, by working out who’s in each photo as soon as you take it.

(To be fair, it does seem that Moments also gives you closer control over your photos, thus boosting your privacy, or at least reducing the chance of over-sharing your snapshots.)

Group photo syndrome

The problem is that precisely when you want to share photos super-quickly with others in your group – at a wedding; in the photo-of-everyone at the end of a conference or ski trip – you are stuck with group photo syndrome.

In photo #1, Charlie and Annabelle are blinking; in photo #2, Annabelle and Roger are sharing a parting look at one another and are both side-on; and in photo #3, Rupert and Charlie have turned around and are wandering off in search of another glass of gluehwein.

Sure, you can work out who’s who in the photos, because you were there; because you already knew who was in the group; and because, truth be told, you’re a lot better at recognising people than today’s computer algorithms.

So, Le Cun and Facebook want their software to get better, too, by programming their people-recognition to work even when your face is obscured or turned away.

17% inaccuracy

Apparently, in recent tests, using 40,000 public photos from Flickr, the Facebook AI team scored 83% accuracy, even for photos where faces weren’t clearly visible.

Of course, 83% accuracy (and that includes cases where the face was fully visible) also means that roughly one in six attempts was completely wrong.
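To put that in concrete terms, here’s a quick back-of-the-envelope calculation (a minimal Python sketch; the 40,000-photo count and the 83% figure come from the test described above, and the assumption of one recognition attempt per photo is purely illustrative):

    # Back-of-the-envelope: what does 83% accuracy look like in practice?
    # Figures come from the reported Flickr test; we assume, purely for
    # illustration, one recognition attempt per photo.
    photos = 40_000      # public Flickr photos used in the test
    accuracy = 0.83      # reported recognition accuracy

    error_rate = 1 - accuracy
    print(f"Error rate: {error_rate:.0%}")                      # 17%
    print(f"Roughly 1 in {1 / error_rate:.0f} attempts wrong")  # roughly 1 in 6
    print(f"Misidentified photos: {photos * error_rate:,.0f}")  # 6,800

In other words, on a batch that size, thousands of photos would come back with the wrong person’s name attached.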

And that sheds doubly-worrying light on the recent news that privacy advocates bailed out of discussions about the commercial use of facial recognition in the USA.

In that story, the discussions seem to have foundered on the rocks of consent, with not one industry representative willing to agree that opt-in (where facial recognition is off by default, and you can’t lawfully be tagged until you agree) should be the norm.

Now factor in “facial” recognition that needs no face, but that will identify the wrong person roughly one time in six.

Your ability even to notice that someone just caught you in their vacation photo, or to turn away at the last minute and evade recognition, could evaporate too.

Is this an aspect of AI that we should be afraid of?

Or do we just welcome our new robot recognisers?