Lately, the authentication wizards have been focusing on gesture recognition: the interpretation of gestures – typically from the face or hand – that can be turned into algorithms to identify people by how they do things like make a face (that would be gurning to you Brits!) or swipe.
Amazon, for one, filed a patent in March for technology that enables shoppers to buy stuff by taking a selfie, be it by photo or video.
The interest in gestures is spurred by the fact that facial or fingerprint recognition alone won’t cut it. It’s too easy to spoof static authentication by holding up a 2D picture to a camera, as Google found out after filing a patent to let users unlock their phones by, say, sticking out their tongues or wiggling their eyebrows…
…or, in the case of fingerprints, by making a dummy fingerprint out of wood glue or a 2D inkjet printout.
Google has also filed a patent for “Liveness Checks,” but researchers using the most basic of photo-editing tools managed to fool it with just a few minutes of work, animating photos so that subjects appeared to be fluttering their eyelashes.
Similarly, researchers have now come up with a way to mimic the swipey touch gestures we use to get into our phones. They did it by whipping up a Lego robot and equipping it with a finger sculpted from Play-Doh.
The robot was put together by DARPA-funded researchers who recently published their report, titled “Robotic Robbery on the Touch Screen”, in the journal ACM Transactions on Information and System Security.
The team came up with two Lego robot “attacks” on touch-based authentication: one was based on teaching the robot how to swipe a touch screen based on common patterns gleaned over time from a large population of users, while the other focused on stealing specific gestures from targeted individuals.
Both attacks were pretty successful.
From the paper:
Both attacks are launched by a Lego robot that is trained on how to swipe on the touch screen. Using seven verification algorithms and a large dataset of users, we show that the attacks cause the system’s mean false acceptance rate (FAR) to increase by up to fivefold relative to the mean FAR seen under the standard zero-effort impostor attack.
In order to launch the first attack, the researchers collected swiping statistics from a group of 41 subjects, mostly college students between the ages of 18 and 25.
The students carried out typical Android phone actions. From that, the researchers extracted 28 different swipe features, including touch pressure, swipe start and end locations, and swipe duration. They then used the data to model a single, generic “power user.”

That power user’s profile is what the researchers programmed into their Lego robot, which they fitted with a swiping finger sculpted from Play-Doh.
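The paper doesn’t spell out its exact 28 features, but the basic idea of boiling a population of swipes down to one generic profile can be sketched as simple per-feature averaging. Everything below — the feature names, the numbers, the two made-up users — is a hypothetical illustration, not the researchers’ actual data or method:

```python
# Hypothetical sketch: average each swipe feature across a population of
# users to get one generic "power user" profile. Feature names and values
# are made up for illustration.
from statistics import mean

def generic_profile(users):
    """Average every swipe feature over all swipes from all users."""
    features = users[0][0].keys()
    all_swipes = [swipe for user in users for swipe in user]
    return {f: mean(s[f] for s in all_swipes) for f in features}

# Two made-up users, two swipes each (pressure, duration, start/end x).
users = [
    [{"pressure": 0.40, "duration_ms": 180, "start_x": 0.20, "end_x": 0.80},
     {"pressure": 0.44, "duration_ms": 200, "start_x": 0.22, "end_x": 0.78}],
    [{"pressure": 0.60, "duration_ms": 120, "start_x": 0.30, "end_x": 0.90},
     {"pressure": 0.56, "duration_ms": 140, "start_x": 0.28, "end_x": 0.92}],
]

profile = generic_profile(users)
print(profile["pressure"])     # 0.5
print(profile["duration_ms"])  # 160
```

A profile like this is then what a robot arm would replay: swipe with roughly that pressure, over roughly that distance, for roughly that long.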
At the low end of the seven algorithms tested, the robot still managed to hit a 70% FAR. In other words, even at its worst, the robot fooled a swipe-recognition algorithm on 70% of its attempts.
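To make the FAR numbers concrete: the false acceptance rate is just the fraction of impostor attempts that the verifier wrongly accepts. This toy sketch uses made-up similarity scores and a made-up threshold — it isn’t the paper’s verification algorithm — but it shows how a trained robot’s attempts drive FAR up fivefold compared to random “zero-effort” impostors:

```python
# Toy illustration of false acceptance rate (FAR): the fraction of
# impostor attempts whose similarity score clears the verifier's bar.
# Scores and threshold are invented for this example.

def false_acceptance_rate(impostor_scores, threshold=0.8):
    """Fraction of impostor attempts wrongly accepted by the verifier."""
    accepted = sum(1 for score in impostor_scores if score > threshold)
    return accepted / len(impostor_scores)

# Random "zero-effort" impostors rarely resemble the enrolled user...
zero_effort = [0.05, 0.12, 0.20, 0.33, 0.41, 0.48, 0.56, 0.63, 0.71, 0.85]
# ...while a robot trained on population swipe statistics scores higher.
robot = [0.62, 0.70, 0.74, 0.78, 0.79, 0.84, 0.87, 0.90, 0.93, 0.96]

print(false_acceptance_rate(zero_effort))  # 0.1
print(false_acceptance_rate(robot))        # 0.5 -- a fivefold increase
```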
But the Play-Doh-fingered Lego robot did much better using gesture data stolen from targeted users’ phones, rather than the generic gestures it had first been taught. The robot used the nicked data to recreate a specific target user’s swiping biometrics, hitting FARs that reached as high as 90%.
They did all that using stuff you can buy at the toy store. It’s a clear case for figuring out how to defeat these kinds of attacks, the researchers said.
From the paper:
Because the attacks require only basic programming skills and are launched using cheap off-the-shelf hardware, they represent a realistic threat that should be expected to be faced by a real deployment of a touch-based authentication system.
The paper not only calls for incorporating robotic attacks into the standard impostor-testing routine of touch-based authentication systems, but also for research into mechanisms that could defeat such attacks.