Following reports that human contractors were vetting text transcriptions of live Skype calls – meaning that sensitive conversations could, in effect, have been bugged – Microsoft says it’s moved its human grading of Cortana and Skype recordings into “secure facilities”, none of which are in China.
On Friday, The Guardian published a report after talking to a former Microsoft contractor who lived in Beijing and transcribed thousands of audio recordings from Skype and the company’s Cortana voice assistant – all with little cybersecurity protection, either from hackers or from potential interception by the government.
The former contractor said that he spent two years reviewing potentially sensitive recordings for Microsoft, with “no security measures”, often working from home on his personal laptop. He told The Guardian that Microsoft workers accessed the clips through a web app running in Google’s Chrome browser, on their personal laptops, over the Chinese internet.
They received no help to protect the recordings from eavesdroppers – be they the Chinese government, disgruntled workers, or non-state hackers – and were even told to work off new Microsoft accounts that all shared the same password, for “ease of management.”
The Guardian quoted the former contractor:
There were no security measures, I don’t even remember them doing proper KYC [know your customer] on me. I think they just took my Chinese bank account details.
Being British, he was put to work listening to people whose Microsoft devices were set to British English. After a while, he was allowed to work from home in Beijing, where he used a simple username and password to access the clips – a set of login credentials that he said were emailed to new contractors in plaintext. The password was the same for every employee who joined in any given year, he said.
Humans reviewing audio has been par for the course at all the tech companies that are refining their voice assistants’ voice recognition technologies: since April, the “our contractors are listening to your audio clips” club has grown to include Facebook (with some Messenger voice chats), Google, Apple, Microsoft (via Xbox as well as Skype and Cortana) and Amazon.
But Microsoft has taken it a step beyond mere voice assistant clips. In August 2019, Motherboard reported that Microsoft was also using human graders to listen in on calls made using its Skype phone service. As Motherboard reported, Microsoft had been vetting an experimental feature enabling live text translation of Skype calls, meaning that users could have engaged in sensitive conversations without realizing that they were bugged.
Unlike its tech brethren, a recalcitrant Microsoft didn’t apologize for any of it, pointing out that its terms of service said that it might “analyze audio” of calls. It did, however, update its privacy policy to be more explicit about playing those calls to human contractors.
As with previous “humans are listening” stories, the story of Microsoft’s Skype/Cortana voice analysis involves workers overhearing deeply personal conversations, including what the former Beijing contractor said sounded like possible domestic violence. The privacy implications aren’t what’s new, however – rather, what’s new is the lack of cybersecurity safeguards, in a country where the government intercepts just about everything that happens online. Here’s the former Microsoft contractor:
Living in China, working in China, you’re already compromised with nearly everything. I never really thought about it.
They just give me a login over email [in order to access] Cortana recordings. I could then hypothetically share this login with anyone. […] It sounds a bit crazy now, after educating myself on computer security, that they gave me the URL, a username and password sent over email.
Microsoft issued a statement saying that, since Motherboard’s reporting over the summer, it’s ended its grading programs for Skype and Cortana for Xbox and moved the rest of its human grading into “secure facilities”. None of them are in China.
We review short snippets of de-identified voice data from a small percentage of customers to help improve voice-enabled features, and we sometimes engage partner companies in this work. Review snippets are typically fewer than ten seconds long and no one reviewing these snippets would have access to longer conversations. We’ve always disclosed this to customers and operate to the highest privacy standards set out in laws like Europe’s GDPR.
This past summer we carefully reviewed both the process we use and the communications with customers. As a result we updated our privacy statement to be even more clear about this work, and since then we’ve moved these reviews to secure facilities in a small number of countries. We will continue to take steps to give customers greater transparency and control over how we manage their data.
Microsoft didn’t detail what those “steps” may entail.
David Heath
Obviously there is an opt-out for this.
Magyver
Lisa, that type of careless handling of both security and customers’ personal data is exactly what I would have expected from Microsoft but couldn’t prove. In fact, people routinely say “conspiracy theorist” for even suggesting such a thing.
The worst part? …In China no less.
Thanks for the heads up m’lady. ;)