In news that will surely be a surprise to nobody, apps that run on Amazon’s home assistant, Echo, can be turned into silent eavesdroppers: no fancy hacking required, no new Echo vulnerability pried open.
Or at least they could, until Amazon fixed it.
Researchers at information security firm Checkmarx demonstrated what we probably all suspected was possible but hoped wasn’t by tweaking options in Alexa’s software development kit (SDK) – the kit that’s used to develop software, known as skills, for the Echo.
The voice-activated skills are the equivalent of the apps on your phone: discreet bits of software that add capabilities to the device. There are skills for finding open restaurants near you, getting Starbucks started on your coffee order, checking your bank balance, hearing the latest news and turning on the Christmas lights.
And on somebody’s desk at Checkmarx, there’s one for eavesdropping on you. It silently captures transcripts of what you’re saying and sends them to an external log accessible to the researchers who rigged the trap.
The malicious skill is dressed up as a calculator and it’ll happily multiply seven times two, or answer any other basic math questions you throw at it.
In other words, this isn’t a hack of the device itself but a trick that shows what would have been possible if an attacker had succeeded in getting users to install a malicious application, something they’ve been fooling users into doing unwittingly on computers and phones for years.
Just like everything else on the Echo, the calculator is voice activated and it needs to listen to what you tell it in order to understand what you want. Unlike regular skills though, this one is specially designed to carry on listening long after it’s finished helping your kids with their homework.
Anything you say during the app’s very lengthy exit is transcribed.
There is one clue to the Echo’s continued interest in what you’re saying – it’s blue ring remains illuminated. Obviously that would only help you if your Echo was visible, you were looking right at it and you knew what this particular use of the device’s light was trying to tell you.
The researchers demonstrated their surreptitious recording device in a YouTube video:
Details of how they did it can be found in a two-page report that’s accessible in return for a bit of data entry.
If you’ve read about the risks of Amazon Alexa and Google Home, your reaction to the news that a voice-activated, internet-connected personal assistant was easily turned into an in-home spy may well be “no surprise there then.”
These devices, after all, are made to listen.
They’re only supposed to engage with us, and start performing their cloud-based speech recognition, when we use specific trigger words like “Google” or “Alexa” but obviously their microphones have to be on all the time in order to do that.
That’s something that’s of interest to both hackers and police, who may come bearing a search warrant if they think that Alexa may have been triggered at a crime scene, such as the one in which a man was strangled in a hot tub. (A request that Amazon resisted. At any rate, the issue was made moot after the murder suspect agreed to turn over the Echo recordings.)
Search warrants aside, do we now have to worry about seemingly innocuous little requests – “Alexa, start calculator” – potentially adding up to waaaaaay more than “seven times two”?
Amazon told Wired that it’s tightening up checks on its skills store in response to the research:
“We have put mitigations in place for detecting this type of skill behavior and reject or suppress those skills when we do.”
Mahhn
How long until they can turn off the light like they do on webcams while watching.
New hack in 10, 9, 8, 7, 6
Gregory Bell
So they just enabled the feature that Amazon uses to send them the transcripts. This is how these work. They don’t have the power to do much besides send everything through wifi and their servers do all that part