
Tech companies resist government hacking back and backdoors


The US government is coming after cybersecurity with a multi-pronged pitchfork. Again.
The tines include new proposals to break encryption by engineering backdoors into devices and services, according to Reform Government Surveillance (RGS), a coalition of tech companies.
Another tine: the state of Georgia is poised to decriminalize hacking-back and would, as legislation is now (vaguely) written, potentially criminalize the well-intentioned poking around that security researchers do: as in, finding bugs, disclosing them responsibly, or working for bug bounties.
First, the news from Georgia:

Hackles up over hacking-back

The governor is poised to either sign or veto a bill overwhelmingly passed by the General Assembly – SB 315 – and some tech giants say it would turn Georgia into a petri dish for offensive hacking.
The bill would decriminalize what it calls “cybersecurity active defense measures that are designed to prevent or detect unauthorized computer access,” or what’s known more succinctly as hacking-back.
In mid-April, Google and Microsoft sent a letter to Georgia Governor Nathan Deal, urging him to veto the bill.
https://twitter.com/JohnnyIK/status/989240280384786433
The companies noted that hacking back is “highly controversial within cybersecurity circles” and that Georgia lawmakers need “a much more thorough understanding” of the potential ramifications of legalizing the hacking of other networks and systems. After all, the companies said, what’s to stop this sudden policy shift from inadvertently rubber-stamping so-called “protective” hacking that’s actually abusive and/or anti-competitive?

We believe that SB 315 will make Georgia a laboratory for offensive cybersecurity practices that may have unintended consequences and that have not been authorized in other jurisdictions.

The bill would also expand the state’s current computer law to create what it calls the “new” crime of unauthorized computer access. It would include penalties for accessing a system without permission even if no information was taken or damaged. In fact, accessing a computer or network would only be lawful when done “for a legitimate business activity.”
That’s pretty vague, say security researchers. Critics of the legislation believe it will ice Georgia’s cybersecurity industry, criminalizing security researchers for the sort of non-malicious poking around and bug-finding/bug-reporting that they do.
The same day that Microsoft and Google sent their letter, threat detection and remediation firm Tripwire sent its own, in which it argued that the bill wouldn’t help – rather, it would further weaken security:

It is our firm belief that an explicit exception is required to exclude prosecution when the party in question is acting in good-faith to protect a business or their customers from attack. Without this exclusion, S.B. 315 will discourage good actors from reporting vulnerabilities and ultimately increase the likelihood that adversaries will find and exploit the underlying weaknesses.

Tripwire suggested that the legislation could enable “frivolous lawsuits” brought by the type of vendor that ignores potential vulnerabilities reported by researchers and would rather hide their products’ security defects than address the flaws.
Tripwire might well up and move its employees out of the state, CTO David Meltzer said, rather than risk having them face prosecution under SB 315 for their efforts to get through to such recalcitrant vendors.

When all reasonable attempts to inform a vendor have been exhausted or the vendor demonstrates an unwillingness to act on the information, it is sometimes appropriate to publicly disclose limited details of the security threat so that affected individuals and organizations can take appropriate steps to protect themselves. The vague definitions of S.B. 315 could enable frivolous lawsuits by vendors looking to hide their security defects.

Many critics of the bill think that it was born out of the attorney general’s office getting caught with its pants down, embarrassed by a data breach at Kennesaw State University, whose Election Center was handling some functions for elections in the state. The breach was big news, and it was messy: it spawned a lawsuit over destruction of election data, for one.
The thing about that breach was that it had been responsibly disclosed by a security researcher who wasn’t even targeting the university’s elections systems. Rather, he simply stumbled upon personal information via a Google search, then tried to get authorities to remove it. In other words, he poked around.
The FBI wound up investigating that researcher, but it found nothing to charge him with, so the case was dropped.
Equifax is another case in point: As the Electronic Frontier Foundation suggested in a letter criticizing the legislation, fear of prosecution under a bill like SB 315 could have dissuaded an independent researcher from disclosing vulnerabilities in the credit bureau’s systems: vulnerabilities that Equifax ignored when the researcher responsibly disclosed them to the company. Those vulnerabilities led to the leak of sensitive data belonging to some 145 million Americans and 15 million Brits.
The Georgia General Assembly passed SB 315 on 29 March and sent it over to Deal on 5 April. After it landed on his desk, a window of 40 days opened for Deal to either sign or veto it.
But why wait for his decision? The Augusta Chronicle reported on Friday that a hacking group has already retaliated, before the fate of SB 315 has even been decided… as if malicious attacks are somehow going to convince Governor Deal that he shouldn’t sign legislation that – at least on the face of it, to politicians who don’t work in the cybersecurity industry – looks like a proactive move to stem malicious hacking.
It makes no sense. But the hacking, overall, doesn’t seem particularly well thought out. The targets: a website for the Calvary Baptist Church of Augusta, and possibly the city of Augusta itself. The Augusta Chronicle reports that the hacker(s) posted a link on the church’s site, labeled:

This vulnerability could not be ethically reported due to S.B. 315.

The statement purports to be from the EFF and decries the bill as an overreach. On Friday, the EFF said it had nothing to do with it.


And now for the government’s actions on the national level:

Backdoors are back again

Actually, the government’s lust for busting encryption’s kneecaps never went away. The US has clearly been positioning itself for an(other) assault on encryption.
In October, Deputy Attorney General Rod Rosenstein gave a couple of speeches focusing on encryption that sounded like they could have been written by former FBI director James Comey. The same arguments against unbreakable encryption – or what the government likes to refer to as “responsible” encryption – resurfaced. He defined it as the kind of encryption that can be defeated for any law enforcement agency bearing a warrant, but is otherwise bulletproof against anyone but the user.
Or, as the #nobackdoors proponents (including Sophos) put it, weakened security which the bad guys will inevitably figure out how to exploit.
Then, in November, Rosenstein urged prosecutors to challenge encryption in court. The FBI would be “receptive” to pro-backdoor litigation, he said:

I want our prosecutors to know that, if there’s a case where they believe they have an appropriate need for information and there is a legal avenue to get it, they should not be reluctant to pursue it. I wouldn’t say we’re searching for a case. I’d say we’re receptive, if a case arises, that we would litigate.

In March, there was more of the same. This time, the FBI slipped a velvet glove over the iron fist that’s been banging on the encryption door ever since Apple made encryption a default on the iPhone in September 2014.
We’re not looking for a backdoor, said director Christopher Wray. We just want you to break encryption, he said, though the actual words were more along the lines of “a secure means to access evidence on devices once they’ve shown probable cause and have a warrant.” How that gets done is up to you smart people in technology, Wray said, the “brightest minds doing and creating fantastic things.”
All of which brings us up to more recent anti-encryption efforts, including the Department of Justice’s (DOJ’s) push for a legal mandate to unlock phones.
In March, the New York Times reported that FBI and DOJ officials had been meeting with security researchers, working out how to get around encryption during criminal investigations.
As far as the security researchers go, think big names: Ray Ozzie, a former chief software architect at Microsoft; Stefan Savage, a computer science professor at the University of California, San Diego; and Ernie Brickell, a former chief security officer at Intel, all three of whom are working on techniques to help police get around encryption during investigations.
After 18 months of research, the National Academy of Sciences had, in February, released a report on encryption and exceptional access that shifted the question from whether the government should mandate exceptional access to the contents of encrypted communications to how it can be done without compromising user security. Presentations from Ozzie, Savage and Brickell were included in that report.
Then, international think tank EastWest Institute published a report (PDF) that proposed “two balanced, risk-informed, middle-ground encryption policy regimes in support of more constructive dialogue.”
Most recently, Wired published a story focusing on Ozzie and his attempt to find an exceptional access model for phones that can supposedly satisfy “both law enforcement and privacy purists.”
He calls his solution Clear. Basically, it’s passcode escrow. The vendor generates a public and private key pair; each user’s device automatically generates a secret PIN on activation and encrypts it with the vendor’s public key. The PIN is like an extra password, stored on the device only in that encrypted form. After that, the vendor is the only one that can unlock the phone, using the private key, which the vendor would store in a highly secured vault that only “highly trusted employees” could get at in response to law enforcement who have court authorization.
Ozzie posted this slide show explaining Clear.
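To make the escrow idea concrete, here’s a minimal sketch in Python using the pyca/cryptography library. It isn’t Ozzie’s actual implementation (the real proposal also leans on tamper-resistant hardware in the phone and other vendor-specific safeguards); it just illustrates the core mechanism described above: a device-generated PIN encrypted to a vendor-held escrow key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
import secrets

# Vendor side (done once): generate an escrow key pair. The private key
# never leaves the vendor's "highly secured vault".
vendor_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public_key = vendor_private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Device side (on activation): generate a secret PIN and keep only the
# ciphertext, encrypted to the vendor's public key.
device_pin = secrets.token_hex(8)          # the "extra password"
escrowed_pin = vendor_public_key.encrypt(device_pin.encode(), oaep)

# Vendor side (only with court authorization, per the proposal): a vault
# holder uses the private key to recover the PIN and unlock the phone.
recovered_pin = vendor_private_key.decrypt(escrowed_pin, oaep).decode()
assert recovered_pin == device_pin
```

The cryptography is the easy part. As the critics quoted below point out, the hard part is that the vendor’s single escrow private key becomes a decrypt-everything secret that has to be defended, perfectly, forever.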
Well, as long as they’re highly trusted, what could possibly go wrong? We don’t need to hypothesise: we can just look at recent examples, such as the theft of exploits and hacking tools from the NSA and their use by criminals in global ransomware outbreaks.
And of course there’s the threat of rogue employees too – people like Edward Snowden, or the Facebook engineers and National Security Agency (NSA) agents who abused their access to sensitive data.
Experts including Matt Green, Steve Bellovin, Matt Blaze and Rob Graham have not been slow to point this out.
For example, here’s just one of Green’s criticisms:

Does this vault sound like it might become a target for organized criminals and well-funded foreign intelligence agencies? If it sounds that way to you, then you’ve hit on one of the most challenging problems with deploying key escrow systems at this scale. Centralized key repositories – that can decrypt every phone in the world – are basically a magnet for the sort of attackers you absolutely don’t want to be forced to defend yourself against.

In a nutshell, backdoor proponents such as Wray keep publicly floating the notion that there’s some sort of middle ground: one that still allows for strong encryption – which, they claim, has created a world in which people can commit crimes without fear of detection – while also accommodating “exceptional access” for law enforcement.
There is no such middle ground, the EFF said on Wednesday. From the statement, written by EFF writer David Ruiz:

The terminology might have changed, but the essential question has not: should technology companies be forced to develop a system that inherently harms their users? The answer hasn’t changed either: no.

But try telling that to the DOJ and the FBI. They clearly don’t believe it, and they show no signs of giving up the quest.


11 Comments

If they ever push a bill through to break encryption, it would be a good day for a worldwide IT walkout, to protest the stupidity.


Someone, somewhere is pushing one of these bills. Right now. This minute. Ignorant authorities won’t stop until they get free, anytime access. See you on the street!


Why do people keep saying that Apple made unbreakable encryption “default” in 2014? That’s not true. They made it default in countries that would allow unbreakable encryption. I’m pretty sure the Apple phones sold in China have encryption that China can break. Likewise in quite a few other countries.


“a world in which people can commit crimes without fear of detection” – yes, but only crimes with no manifestation outside of cyphertext!


The main criticism here is that the big companies, such as Apple, Microsoft and Google, would need to keep the decryption keys safe.
Are we forgetting that they already know how to do that since they sign the updates that they push out to devices with a secret key so that the devices can verify their authenticity?


There’s a big difference between keeping your own private keys safe and keeping billions of your customers’ private keys safe.


@PD
On first reading, this seems to skip over the genius of the Ozzie plan, which is that all Apple (to take the first example) is required to keep safe is the single “Clear” private key. The differences between keeping one private key safe and keeping another private key safe arise purely from how often, and in what ways, the two keys are used. (In the physical world, the key for your ride-on lawnmower is probably more easily kept safe than is the key of your locker at the gym.)
The greater disparities, to which the observation may have alluded, are in detection (of the breach), in exploitation, and in remediation. Mass exploitation of an Apple update key compromise would surely not go unobserved by Sophos and its peers for very long. Targeted deployment against an adversary’s device would be complicated (and for this to leak personal data would require the adversary to unlock his device following an evil update of it). Should Apple’s update key be compromised, it is overwhelmingly likely that Apple could deliver a replacement public key to the vast majority of its customers’ devices before the bad actor could hit them using the compromised private key.


I think the main difference of how easy it is to keep certain keys safe is how much that key is worth to attackers. It’s much easier to keep the key to my garden shed safe than it is to safeguard a master key to every other building in the world.


[… the private key, which the vendor would store in a highly secured vault that only “highly trusted employees” could get at in response to law enforcement who have court authorization.]
“Highly trusted employees” like the one at Google who was using his access and authority to stalk women?
All this does is shift the onus of responsibility from the user to the vendor, and I for one don’t trust the employees at the vendor any more than the government trusts me.


They want to allow offensive hacking to discourage further hacking attempts. Well, what about a preemptive hack to take down potential threats, or perhaps the competition?

