
Fake news doesn’t (always) fool mice

Mice can interpret speech phonemes correctly up to 80% of the time, without falling for the semantic hoodwinks that fool humans.

Mice can’t vote.

They can neither fill in little ovals on ballots nor move voting machine toggles with their itty bitty paws. That’s unfortunate, because the teeny rodents are less inclined than humans to be swayed by the semantics of fake news content in the form of doctored video and audio, according to researchers.

Still, the ability of mice to recognize real vs. fake phonetic construction can come in handy for sniffing out deep fakes. According to researchers at the University of Oregon’s Institute of Neuroscience, who presented their findings at the Black Hat security conference last Wednesday (7 August), recent work has shown that “the auditory system of mice resembles closely that of humans in the ability to recognize many complex sound groups.”

Mice do not understand the words, but respond to the stimulus of sounds and can be trained to recognize real vs. fake phonetic construction. We theorize that this may be advantageous in detecting the subtle signals of improper audio manipulation, without being swayed by the semantic content of the speech.

No roomfuls of adorable mice watching YouTube

Jonathan Saunders, one of the project’s researchers, told the BBC that – unfortunately for those who find the notion irresistibly cute – the end goal of the research is not to have battalions of trained mice vetting our news:

While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable, I don’t think that is practical for obvious reasons.

Rather, the goal is to learn how the mice do it and then use those insights to augment existing automated fakery detection technologies.

Saunders told the BBC that he and his team trained mice to understand a small set of phonemes: the sounds humans make that distinguish one word from another:

We’ve taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ – all these different fancy things that we take for granted.

And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech.

The mice got a treat when they interpreted the speech correctly, which they did up to 80% of the time. Maybe that’s not stellar, but if you combine it with existing methods of detecting deep fakes, it could be valuable input.
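To get a feel for why even an 80%-accurate signal can be a valuable input, consider score-level fusion: averaging the outputs of two independent detectors tends to beat the stronger one alone. Here’s a toy simulation in Python; the noise levels, the independence assumption, and the averaging rule are all illustrative assumptions, not details from the study.

```python
import random

# Toy simulation of score fusion: a noisier detector (standing in for the
# mouse-derived signal, ~80% accurate) and a sharper one (standing in for
# an automated detector, ~92% accurate) each emit a "fakeness" score.
# Averaging the scores cancels some of the independent noise.
random.seed(1)
TRIALS = 100_000

def score(truth, noise):
    # Scores center on 0 for real clips and 1 for fakes, plus Gaussian noise.
    return truth + random.gauss(0, noise)

def accuracy(noise_a, noise_b=None):
    correct = 0
    for _ in range(TRIALS):
        truth = random.randint(0, 1)               # 1 = fake clip, 0 = real clip
        s = score(truth, noise_a)
        if noise_b is not None:
            s = (s + score(truth, noise_b)) / 2    # average the two scores
        correct += (s > 0.5) == bool(truth)
    return correct / TRIALS

print(f"weak detector alone:   {accuracy(0.60):.3f}")       # roughly 0.80
print(f"strong detector alone: {accuracy(0.36):.3f}")       # roughly 0.92
print(f"both, scores averaged: {accuracy(0.60, 0.36):.3f}") # a bit higher
```

Under these assumptions the fused detector edges out the stronger one on its own, which is the whole argument for feeding the mouse-derived signal into existing pipelines.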

State of the art

As it is, both humans and machines do well at detecting fakes. The researchers conducted a small user study in which participants were asked to differentiate between short clips of real and faked speech. The humans did OK: our species’ median accuracy was 88%.

That’s close to the median accuracy of 92% for the state-of-the-art algorithms evaluated for the challenge: algorithms that detect unusual head movements or inconsistent lighting or that, in shoddier deep fakes, spot subjects who don’t blink. (The US Defense Advanced Research Projects Agency [DARPA] found that a lack of blinking was a giveaway, at least as of the technology’s evolution circa August 2018.)

In spite of the current, fairly high detection rate, we need all the help we can get to withstand the ever more sophisticated fakes that are coming. Deep fake technology is evolving at breakneck speed, and just because detection is fairly reliable now doesn’t mean it will stay that way. That’s why difficult-to-detect impersonation was a “significant” topic at this year’s Black Hat and Def Con conferences, the BBC reports.

An error rate hovering around 10% not only lets a deluge of fakery through; it also means the false positive rate will be fairly high, so real news gets flagged as fake, the researchers noted.
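Here’s the back-of-the-envelope arithmetic behind that worry. The 1% prevalence of fakes below is an assumed figure, chosen only to illustrate the base-rate effect:

```python
# Base-rate arithmetic: with a ~10% error rate and mostly-real content,
# most flagged clips turn out to be real. The prevalence figure is an
# assumption for illustration, not a measured statistic.
total = 100_000
fake_share = 0.01            # assumed: 1% of clips are fake
tpr, fpr = 0.90, 0.10        # ~10% error on both fakes and real clips

fakes = total * fake_share
reals = total - fakes

true_alarms = fakes * tpr    # fakes correctly flagged
false_alarms = reals * fpr   # real clips wrongly flagged as fake

print(f"fakes caught:               {true_alarms:,.0f}")
print(f"real clips flagged as fake: {false_alarms:,.0f}")
print(f"share of flags that are wrong: "
      f"{false_alarms / (true_alarms + false_alarms):.0%}")
```

With those numbers, roughly nine out of ten flags would land on real content, which is why a 10% error rate is far less comfortable than it sounds.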

And even with detection rates fairly high, be it via biological or machine means, convincing fakes are already out there. For example, experts believe that a generative adversarial network (GAN), one of a family of dueling computer programs, was used to create what an AP investigation recently suggested was a deep fake LinkedIn profile of a comely young woman who was suspiciously well-connected to people in power.

Forensic experts easily spotted 30-year-old “Katie Jones” as a deep fake. But that didn’t keep plenty of well-connected people in the government and military from accepting “her” LinkedIn invitations.
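For the curious, the “dueling” in a GAN works like this: a generator forges samples, a discriminator tries to tell them from real ones, and each improves by training against the other. The sketch below is a minimal toy version in PyTorch on one-dimensional data, an illustration of the idea rather than the code behind any real fake profile:

```python
import torch
import torch.nn as nn

# Toy GAN: "real" data is just numbers drawn from N(2.0, 0.5); the
# generator learns to forge numbers the discriminator can't distinguish.
def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # forger
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detective

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(5, 8)).detach().squeeze())  # samples drift toward ~2.0
```

Every round that makes the discriminator better at spotting forgeries also teaches the generator to forge better, which is exactly why detection can never afford to stand still.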
