Earlier this week, the headlines flashed with news that Google had publicly disclosed a vulnerability in Windows 10 that allows local privilege escalation. The vulnerability is a zero-day, meaning no fix was available at the time of disclosure, so by making it public, Google theoretically handed attackers information they could use to their own advantage.
A less-discussed note to this story is that Google had disclosed this vulnerability privately to Microsoft a week earlier, at the same time it made a similar private disclosure to Adobe about an actively exploited vulnerability in Flash (CVE-2016-7855). In response to Google’s disclosure, Adobe fixed Flash about five days later, and that security update is already available to the public.
This is good news for Adobe, because Google has a published disclosure policy for what it defines as actively exploited critical vulnerabilities: it will notify the vendor privately and give it seven days to produce a fix, otherwise it goes public with the news. Google’s reasoning?
We believe that more urgent action—within seven days—is appropriate for critical vulnerabilities under active exploitation. The reason for this special designation is that each day an actively exploited vulnerability remains undisclosed to the public and unpatched, more computers will be compromised.
Google certainly had reason for concern: Microsoft later identified that the Russian APT group Fancy Bear had been actively using the zero-day in its own attacks even before Google’s disclosure.
Microsoft hasn’t yet issued a fix for the Windows vulnerability, but in a blog post response it says one will be available on the next Patch Tuesday, November 8.
So what’s a normal vulnerability disclosure policy?
There’s no industry-wide agreement on a standard for vulnerability disclosure. Many abide by what they call “responsible disclosure”: privately reaching out to the vendor with the details of a discovered vulnerability, with the intent of giving it time to fix the issue, and only making the details public once the issue has been patched.
Often whoever discovered the vulnerability works with the affected vendor to fix the problem. As long as the vendor is making a good-faith effort, the timeline for fixing a vulnerability can be weeks or even months, although many set a deadline to get things fixed before they go public. Google recommends 60 days, though others will give more time.
Not everyone in the security industry agrees with the principles of responsible disclosure. Though it may strike some as reckless, others believe the best course of action is to go public as soon as possible, especially if the vulnerability is being actively exploited. While Google gave both Adobe and Microsoft seven days to produce a fix, others might have gone public straight away, without giving the vendors a chance to fix it first.
The idea behind immediately disclosing an actively exploited vulnerability is that public pressure could make all the difference in getting the vendor to take it seriously, especially if it has been unresponsive or uncooperative in past disclosure efforts. This approach can backfire, however, because it publicizes a potential new avenue of attack; it can also backfire on the vendor if it doesn’t respond quickly.
Do you think immediately disclosing vulnerabilities is too reckless, or the right thing to do? Or did Google’s seven-day compromise strike the right balance?
Image by drserg / Shutterstock.com
Anonymous
My opinion: If the vendor is responsive and gives a specific, reasonable date when the fix will be available, going public on the vulnerability before that is not helpful.
Nobody_Holme
I fully support Google’s stance. Vendor gets time to patch, but pressure of public shame if it fails. I suspect in reality, competing businesses would do well to give a little leeway instead of being seen to snipe at each other, but equally, they’re going to snipe anyway, so doing it when justified is not a big deal.
Corey Hitchens
Seven days’ advance notice is more than fair for companies with a full development team supporting thousands of consumers. Smaller businesses with fewer resources should be given 30 days. Many companies won’t even bother fixing something until it’s public. A lot of the time, the product is vulnerable because best-practice coding wasn’t followed in the first place.
Sean Briggs
I think 7 days is reasonable. The public should know so they can ensure that vulnerable systems are monitored more closely.
Jim
Seven days seems like enough for an in-the-wild zero-day. But, there are probably exceptions, especially in the area of operating systems. Operating systems are so complex that it can be a daunting task to fix ANY bug in a week.
So, yeah, it’s not written in stone. Maybe it should be (like via a standards body, with input from larger companies and security firms)?
clifford cuellar
I think Google has the right response. Work with the vendor for a week, then go public if that has failed and let public opinion go to work. My reasoning is that a zero-day is quite possibly already known and used by the bad guys, and failure to fix it just gives them more time. At the very least, a warning tells users to be extra vigilant about unusual activity.
Tim Boddington
I agree with the earlier comments. However, maybe there should be an announcement at the outset that there is a security weakness in a product, but not what it is.
Alan Robertson
I’m not a Microsoft fan (I’m a Linux user), however, in my opinion this was reckless of Google. What did this achieve? All this did was take a bad situation and make it worse by advertising the fact that there was an active exploit, which in turn only encourages more people to abuse it.
Google knows full well that Microsoft has adhered to Patch Tuesday for quite some time, and the timing of this exploit being released to the public seems more political than a genuine care for the community. This is the second time they have done this, and the second time they have done so halfway between Patch Tuesdays – I do not consider this coincidental. When precisely did Google find out about the vulnerability? Did they sit on it and choose to release it midway between Patch Tuesday cycles, or did they release it straight away? If they chose to release it halfway between Patch Tuesdays, then their intention was to damage Microsoft’s reputation, not to report a critical bug. Additionally, how many other “attention-diverting” Windows exploits are they sitting on? More transparency please, Google – start with log files and a detailed account of WHEN the zero-day team became aware of this.
It’s a shame when your zero-day task force becomes more aligned with damaging a competitor than with genuinely helping to improve security, especially as Microsoft has botched more updates in the last two years than in the previous twenty. Wouldn’t it make more sense to get the patch right the first time round? An unplanned, rushed-out patch could potentially harm more PCs than the active exploit – BSODs, endless boot cycles, etc.
As for Google, they have a bit of a cheek! Android is rarely updated:
“More than half of Androids have unpatched security holes” – Sophos, 17 Sep 2012
“Android ‘Master Key’ vulnerability – more malware exploits code verification bypass” – Sophos, 9 Aug 2013
“‘Shocking’ Android browser bug could be a ‘privacy disaster’” – Sophos, 16 Sep 2014
“87% of Android devices are exposed to at least one critical vulnerability” – Sophos, 15 Oct 2015
“900 million Androids vulnerable to Quadrooter bugs” – Sophos, 9 Aug 2016
Seriously, Google? Why can’t your own zero-day team fix your own operating system first? How about starting with actually having the ability to patch, rather than dumping the responsibility on the device manufacturer? You don’t see Microsoft saying it’s the responsibility of Dell, HP, Lenovo and the like to patch Windows, so why does Google get away with this?
John
In the case of an actively exploited zero-day, public disclosure after 7 days also gives end users the ability to implement alternate solutions to either detect or block exploits using technologies such as IPS and end-point behavioural analytics (whichever is applicable in the specific case). So even if there is no patch, customers can still find other ways to protect themselves.
mr x
I wonder how many have opted for the 30-day delay on installing updates, as that can leave some systems open for a long time unless MS has an option to force updates that address flaws currently being exploited (in Windows 10 you can delay updates for up to 30 days after they come out, and feature upgrades for a year).
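For anyone who wants to check how a particular machine is set up, a rough way is to read the Windows Update for Business deferral values from the registry. The key path and value names below are my assumption of where Group Policy writes these settings, so treat this purely as a sketch and verify it on your own build:

# Rough sketch: read Windows Update deferral policy values on Windows 10.
# The registry path and value names (DeferQualityUpdatesPeriodInDays,
# DeferFeatureUpdatesPeriodInDays) are assumed Group Policy locations.
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
VALUES = ("DeferQualityUpdatesPeriodInDays", "DeferFeatureUpdatesPeriodInDays")

def read_deferrals():
    settings = {}
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            for name in VALUES:
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    settings[name] = value
                except FileNotFoundError:
                    settings[name] = None   # that particular policy is not set
    except FileNotFoundError:
        pass                                # no Windows Update policy key at all
    return settings

if __name__ == "__main__":
    for name, days in read_deferrals().items():
        print(f"{name}: {days if days is not None else 'not configured'}")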
David Pottage
There is a second reason for a quick public disclosure, and that is so that users and sysadmins can do their own mitigation while waiting for a proper fix to become available.
For example, if it became known that there was an unfixed zero-day involving MS PowerPoint files, I might update the configuration of my email server or UTM to block such files from external sources, or to add a very stern warning when allowing them through.
Of course such a block would be annoying for my users, so I would hope Microsoft would be working very hard on a fix, but if such a bug were being actively exploited, and dozens of emails with booby-trapped attachments were arriving every day, I would want something to reduce the risk while I waited for a more complete fix.
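To make that concrete, here is a minimal sketch of the kind of check I have in mind, written in Python as a stand-in for a real mail-filter or UTM rule; the domain and the list of extensions are placeholders, not anyone’s actual configuration:

# Hypothetical sketch: flag messages from external senders that carry
# PowerPoint attachments, so a mail pipeline can reject or quarantine them.
import sys
from email import policy
from email.parser import BytesParser

INTERNAL_DOMAIN = "@example.com"                        # assumed internal domain
BLOCKED_EXTENSIONS = (".ppt", ".pptx", ".pps", ".ppsx")

def should_block(raw_message: bytes) -> bool:
    msg = BytesParser(policy=policy.default).parsebytes(raw_message)
    sender = (msg.get("From") or "").lower()
    if INTERNAL_DOMAIN in sender:
        return False                                    # only filter external mail
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        if name.endswith(BLOCKED_EXTENSIONS):
            return True                                 # candidate for blocking
    return False

if __name__ == "__main__":
    raw = sys.stdin.buffer.read()
    # A non-zero exit status could drive a reject or quarantine action.
    sys.exit(1 if should_block(raw) else 0)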
Alan Robertson
Hello David,
Unfortunately this is a kernel bug – your UTM isn’t going to stop that. As always, it appears that Flash was used as the delivery mechanism for this exploit – I can only assume that they were spear-phishing with a hacked website and hoping the victim would fall for it.
Paul Ducklin
What I think David is suggesting is not any sort of “Sophos ad,” but merely a way of illustrating how detailed information about an exploit can help you to add mitigations of your own to reduce your exposure, even if your temporary defence [a] only deals with a specific instance of an attack and [b] causes side-effects that you don’t want to suffer for ever.
Jim
Google should never have revealed this information publicly. They should technically be liable if there is a ramp-up in attacks or exploitation after the release of such information. I’m not pro-Microsoft or anti-Google at all, but this is just careless and actually ends up aiding and abetting exploitation of these things.