Naked Security

Good guys and bad guys race against time over disclosing vulnerabilities

What's at stake when we don't share vulnerability data?

When a software vulnerability is discovered, especially by a nation state or government agency, that agency might choose to sit on the discovery, secretly hanging on to the findings so the vulnerability can be deployed, secret-weapon style, at a convenient time of its choosing. But a recent research paper examined how often vulnerabilities are independently rediscovered by researchers, and found that time is not always on the side of whoever got there first.

Released by the Cyber Security Project at Harvard’s Belfer Center for Science and International Affairs, the paper examines how often vulnerabilities in Google Chrome, Mozilla Firefox, Google Android and OpenSSL were rediscovered over a span of several years up to 2016. Across its dataset of more than 4,300 vulnerabilities, between 15% and 20% were rediscovered within the same year, with rediscovery running as high as 23% for Android vulnerabilities in a single year.
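To make that metric concrete, here’s a minimal sketch of how a same-year rediscovery rate could be computed from a set of discovery reports. The record layout and figures are hypothetical illustrations, not the Belfer Center’s actual dataset or methodology:

    from collections import defaultdict

    # Hypothetical reports: (vulnerability_id, year_reported). A vulnerability
    # counts as "rediscovered" in a year if it was reported more than once.
    reports = [
        ("VULN-A", 2015), ("VULN-A", 2015),   # found independently twice
        ("VULN-B", 2015),
        ("VULN-C", 2016), ("VULN-C", 2016), ("VULN-C", 2016),
        ("VULN-D", 2016),
    ]

    counts_by_year = defaultdict(lambda: defaultdict(int))
    for vuln_id, year in reports:
        counts_by_year[year][vuln_id] += 1

    for year, counts in sorted(counts_by_year.items()):
        rediscovered = sum(1 for n in counts.values() if n > 1)
        rate = rediscovered / len(counts)
        print(f"{year}: {rediscovered}/{len(counts)} vulnerabilities rediscovered ({rate:.0%})")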

In a narrow subset of cases, the same vulnerability is rediscovered many times over a short span: for example, 6% of Android vulnerabilities were rediscovered three times or more in just one year (2015-16), and 4% of Firefox and 2% of Chrome vulnerabilities were rediscovered more than twice between 2012 and 2016.

The paper also found that rediscovery tends to occur within months of the original find. In Android’s case, 20% of rediscoveries happened in the same month as the original discovery, with another 20% within the first three months.
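Measuring that lag is straightforward once each vulnerability’s reports carry dates; here’s a companion sketch on the same kind of invented records as above:

    from datetime import date

    # Hypothetical report dates per vulnerability; the gap between the first
    # report and each later one approximates the paper's "rediscovery lag".
    report_dates = {
        "VULN-A": [date(2015, 3, 1), date(2015, 3, 20), date(2015, 8, 5)],
        "VULN-C": [date(2016, 6, 1), date(2016, 7, 10)],
    }

    for vuln_id, dates in report_dates.items():
        first, *later = sorted(dates)
        for d in later:
            lag = (d - first).days
            print(f"{vuln_id}: rediscovered after {lag} days (~{lag // 30} months)")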

This kind of gap might be a boon to defenders if they act quickly to mitigate the vulnerability, assuming mitigation or patching is even possible. On the flip side, the same lag means that if someone with malicious intent discovers a vulnerability first and sells it on the black market, it could be several months before a “good guy” catches up.

Whichever way events unfold, one trend holds across all the data in the paper: the rate of vulnerability rediscovery is rising across the board. For example, 2% of Chrome vulnerabilities were rediscovered in 2009, compared with 19.1% in 2016.

There are a number of possible interpretations here: perhaps we’re getting better at finding vulns, perhaps more eyes are on the problem, or perhaps we’re getting better at sharing information. Or, if we’re feeling a bit cynical, perhaps there are simply more vulnerabilities to be found as software matures.

It’s worth noting that the paper doesn’t make claims about vulnerabilities in the world at large, only about its own dataset. It is entirely possible that the true rediscovery rate is much higher, simply because we don’t have the full picture of how quickly criminals make the same discoveries, and (right now at least) they’re not going to share that data.

Why does any of this matter?

The research puts numbers to a long-held principle in security: when a vulnerability is discovered by a “good guy”, chances are someone out there with criminal intent already knew about it and is actively exploiting it. The paper argues that when we better understand the likelihood of a vulnerability’s rediscovery, we can apply more pressure on the vendor that “owns” the vulnerability to prioritize a fix. (The same principle can also help motivate more vendors to support bug bounties.)

The reverse also applies: if a type of vulnerability has a higher chance of being rediscovered, and that next discovery is made by a criminal actor who intends to prey upon the unpatched, the vendor has all the more motivation to get a patch deployed.

From the paper:

Understanding the speed of rediscovery helps inform companies, showing how quickly a disclosed but unpatched bug could be rediscovered by a malicious party and used to assault the company’s software. This information should drive patch cycles to be more responsive to vulnerabilities with short rediscovery lag, while allowing more time for those where the lag is longer.

Vulnerability rediscovery rates are also a key variable in the debate over whether government agencies that stockpile vulnerabilities in secret should disclose them more often, especially in light of the NSA-held vulnerability data leaked by the Shadow Brokers earlier this year, which led to WannaCry and Petya. The potential rate of rediscovery is one of the variables those agencies need to weigh when judging whether a vulnerability they’ve found is likely to stay secret for long. Is it for everyone’s greater good to always disclose vulnerabilities as soon as possible?
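To see why rediscovery rates matter to that decision, here’s a back-of-the-envelope sketch. It assumes, purely for illustration, that each year carries an independent rediscovery probability equal to the paper’s 2016 Chrome figure of 19.1%; the real world is messier, but the compounding effect is the point:

    # If a stockpiled vulnerability faces an (assumed independent) annual
    # rediscovery probability p, the chance it is still secret after t years
    # is (1 - p) ** t. Here p borrows the paper's 2016 Chrome rate.
    p = 0.191
    for t in (1, 2, 3, 5):
        print(f"after {t} year(s): {(1 - p) ** t:.0%} chance still undiscovered")

On those assumptions, the odds of a stockpiled bug staying secret fall to roughly 81% after one year, 53% after three, and 35% after five, dropping below a coin flip within about three years.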

The reality is that a number of the questions addressed in this paper have been around for a while, and while it brings valuable data to certain angles of the argument, the issues at hand are still up for debate, especially after the Shadow Brokers leak:

  • How much time does a vendor really have to fix a vulnerability found by a “good guy” before a “bad guy” makes the same discovery?
  • Are government agencies doing more harm than good to themselves and their citizens by seemingly hoarding vulnerabilities?
  • Are there still too many logistical barriers in place for security researchers to responsibly and easily share their vulnerability discoveries?

The paper is an interesting read for those looking for data around the lifecycle of vulnerabilities. Let us know what you think — has this research changed your mind about how organizations should share vulnerability information?

