Amazon and Google have blocked spying, phishing apps that keep your smart speaker listening after you think it’s gone deaf, lie to you about there being an update you need to install, and then vish (voice-phish) away the password you purportedly need to speak so you can get that bogus install.
Long story short, don’t believe a smart speaker app that asks for your password. No regular app does that.
Eight of these so-called “Smart Spies” were built by Berlin-based Security Research Labs (SRL) and put into app stores under the guise of being horoscope or random-number generators.
SRL says that it managed to sneak in the spyware because third-party developers can extend the capabilities of Amazon Alexa – the voice assistant running in its Echo smart speakers – and Google Home through small voice apps, called Skills on Alexa and Actions on Google Home.
Those apps currently create privacy issues, SRL says, in that they can be abused to eavesdrop on users or to ask for their passwords.
Grabbing sensitive data
To capture sensitive data like passwords or credit card numbers, SRL used the following sequence:
- Put a seemingly innocent application through the Amazon or Google app review process.
- Change the app after the review so that its welcome message sounds like an error, such as “This skill is currently not available in your country”, making users think the app has quit.
- Reinforce the idea that the app has quit by adding a long pause after the welcome message (achieved by having the speaker “say” an unpronounceable character sequence.)
- Have the app say a message that sounds like its coming from the device itself, such as “An important security update is available for your device. Please say start update followed by your password.”
- Capture the password as a slot value (a user input) and send it to the attackers.
Here’s what that looks like on Google Home:
Eavesdropping
To eavesdrop on users, SRL used a variation on the techniques used to grab passwords. On the Amazon Echo, the sequence looks like this:
- Put a seemingly innocent app through the app review process.
- The app has a function triggered by the word “stop”, and another function triggered by a commonly used word, or a word likely to preceed something of interest to the attacker. Both functions capture what’s said immediately after they’re triggered.
- Change the app after the review so that the function triggered by “stop” responds with “goodbye” followed by a long pause, making users think the app has quit.
- Also after the review, change the second function so that it doesn’t respond when it’s triggered. If the user accidentally says the innocuous trigger word in conversation in the several seconds that elapse before the app quits, whatever follows it is sent to the attackers.
The sequence of events is similar on Google Home, but the result is far worse. On that platform, SRL was able to create an app that said Google Home’s bye sound before putting itself into a loop that captured voice data indefinitely.
Here’s what that looks like:
Mop-up
The BBC reports that after SRL informed the companies of the vulnerabilities, Google said that it had removed SRL’s Actions and that it’s “putting additional mechanisms in place to prevent these issues from occurring in the future.”
Amazon said that it too moved fast to block the researchers’ apps and to block this type of exploit in the future:
Customer trust is important to us, and we conduct security reviews as part of the skill certification process.
We quickly blocked the Skill in question and put mitigations in place to prevent and detect this type of Skill behaviour and reject or take them down when identified.