If Artificial Intelligence (AI) is increasingly able to recognise and classify faces, then perhaps the only way to counter this creeping surveillance is to use another AI to defeat it.
We’re in the early years of AI-powered image and face recognition, but researchers at the University of Toronto have already come up with a technique that might make this possible.
The principle at the heart of this technique is adversarial training, in which one neural network’s image recognition is disrupted by a second network trained to understand how the first one works. This makes it possible to apply a filter that alters only a few very specific pixels in an image but makes it much harder for an online AI to classify.
The theory behind this sounds simple enough, explains Parham Aarabi, a professor at the University of Toronto:
If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they’re less noticeable. It creates very subtle disturbances in the photo, but to the detector they’re significant enough to fool the system.
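To see what that looks like in code, here is a minimal sketch of the idea, assuming a differentiable face detector whose confidence score we can take gradients through. The actual paper trains a generator network adversarially against a Faster R-CNN-style detector; this PyTorch example instead uses a simple iterative gradient step, and the StandInDetector, privacy_filter, eps and alpha names are illustrative placeholders rather than the team’s code.

```python
# A minimal sketch of the adversarial-perturbation idea, not the
# Toronto team's actual implementation. The detector below is a tiny
# stand-in CNN; a real attack would target a trained face detector.
import torch
import torch.nn as nn

class StandInDetector(nn.Module):
    """Placeholder for a real face detector: image -> detection score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(8, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.score(h))  # probability a face is present

def privacy_filter(image, detector, steps=10, eps=2/255, alpha=0.5/255):
    """Iteratively nudge pixel values (each change bounded by eps) so the
    detector's face confidence drops while the photo stays visually intact."""
    perturbed = image.clone().detach()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        confidence = detector(perturbed)
        detector.zero_grad()
        confidence.sum().backward()          # gradient of the detection score
        with torch.no_grad():
            # Step *against* the gradient to lower the detection score.
            perturbed = perturbed - alpha * perturbed.grad.sign()
            # Keep the total change per pixel small and the image valid.
            perturbed = image + (perturbed - image).clamp(-eps, eps)
            perturbed = perturbed.clamp(0, 1)
    return perturbed.detach()

detector = StandInDetector().eval()
face = torch.rand(1, 3, 64, 64)              # stand-in for a photo
protected = privacy_filter(face, detector)
print(detector(face).item(), detector(protected).item())
```

The clamp to eps is what keeps the disturbance subtle: no pixel is shifted by more than a couple of intensity levels, so the altered photo looks unchanged to a human even as the detector’s confidence falls.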
The researchers tested their algorithm against the 300-W face dataset, an industry-standard collection of 600 faces captured in a range of lighting conditions.
Against this, the University of Toronto system reduced the proportion of faces that could be detected from nearly 100% to between 0.5% and 5%.
However, read the detailed paper published by the team and it becomes clear that there is still a way to go. For a start, not all image recognition systems work in the same way, and architectures such as Faster R-CNN present a much bigger challenge.
But why, you might ask, would anyone want to go to such lengths to thwart facial recognition? According to Aarabi:
Personal privacy is a real issue as facial recognition becomes better and better. This is one way in which beneficial anti-facial-recognition systems can combat that ability.
And you don’t have to take his word for it either. During May’s F8 conference, Facebook’s CTO Mike Schroepfer took to the stage to explain how the company has successfully used 3.5 billion Instagram images to improve the ability of its AI to recognise and classify visual content.
That project is part of an attempt to automate content checking on its platform, but not everyone is impressed with the implications for privacy down the line.
It might sound unfair to pick on Facebook, because plenty of companies are investing in image and face recognition, both online and, more ominously, in the kind of real-time face recognition that interests governments and police forces. But Facebook is an important case: the sheer scale of its image data could give it the edge when it comes to training and refining neural networks to perform this kind of task.
Before we get too carried away, there is a catch inherent in using AI to disrupt AI in almost any field: the AI being disrupted can be retrained to resist the attack. This turns the problem into a Sisyphean contest of perpetual AI versus AI, each side trying to out-evolve the other. We can expect that battle to begin in earnest once the University of Toronto researchers turn what they have developed into a filter anyone can use as a browser plug-in or app.
It could yet turn into the next privacy war, a conflict of algorithmic attrition fought mostly behind the scenes. A bit more resistance wouldn’t hurt anyone: to date, it has been a pretty one-sided walkover.
s31064
OK, so in order to not have the government’s AI-powered facial recognition software figure out who I am and where I’ve been, all I have to do is get control of their servers (or at least control of their cameras) and run my own AI-powered masking software. Piece of cake.
AlexTolley
Isn’t it easier just to hide or camouflage your face? Maybe realistic face masks that are easy to wear will appear. Hats with veils might return as a women’s fashion item. There are so many ways to fool AI, from articles of clothing that block features to subtle trompe l’oeil makeup.
DAMARIS
Great. How do you know about blocking facial recognition?