
Should governments keep vulnerabilities secret?

The 'secret vulnerability stash' debate rages on.

The debate over how many vulnerabilities US intelligence agencies should hoard in software used by their own citizens, and how they should use them, will probably never end.

But two recent research papers, presented together at Black Hat, argue that data analysis should carry more weight than “speculation and anecdote” in setting government policy on the matter.

By now, the cases for both sides are well established:

According to one, intelligence agencies need to maintain a secret stash of vulnerabilities if they are to have any hope of penetrating the communications or cyber weapons of criminals, terrorists and hostile governments. It is a crucial element of protecting the homeland.

According to the other, the problem is that they don’t always remain secret. And if those vulnerabilities are leaked or otherwise discovered by malicious actors, they can be exploited to attack millions of innocent users, or critical systems, before they are patched. Which can be very damaging to the homeland.

Indeed, the release about a year ago by the so-called Shadow Brokers of a cache of top-secret spying capabilities, presumed to belong to the National Security Agency (NSA), intensified the complaints about government surveillance going well beyond targeting terrorists.

Jason Healey, senior research scholar in cyber conflict and risk at Columbia University’s School of International and Public Affairs and also a senior fellow at the Atlantic Council, noted in a paper last November that of the 15 zero-days in the initial release, several “were in security products produced by Cisco, Juniper, and Fortinet, each widely used to protect US companies and critical infrastructure, as well as other systems worldwide.”

More recently, starting in March, the WikiLeaks “Vault 7” document dump exposed the CIA’s efforts to exploit Microsoft and Apple technology to enable surveillance.

Without wading into the specific debate over keeping vulnerabilities secret, several researchers at Harvard Kennedy School’s Belfer Center for Science and International Affairs recently published a paper they hoped would “call for data and more rigorous analysis of what it costs the intelligence community when they disclose a vulnerability and what it costs citizens and users when they don’t.”

They reported that their analysis of a dataset of more than 4,300 vulnerabilities found that “(vulnerability) rediscovery happens more often than previously reported”.

They went on to say:

When combined with an estimate of the total count of vulnerabilities in use by the NSA, these rates suggest that rediscovery of vulnerabilities kept secret by the US government may be the source of as many as one-third of all zero-day vulnerabilities detected in use each year.
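
To get a feel for that arithmetic, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder chosen only so the example runs; the actual rediscovery rates and stockpile estimates come from the Belfer paper’s 4,300-vulnerability dataset, not from here:

    # Back-of-envelope: what share of the zero-days detected in use each year
    # could be rediscoveries from a secret government stockpile?
    # All three inputs are hypothetical placeholders, NOT Belfer figures.

    annual_rediscovery_rate = 0.15    # assumed: chance a stockpiled bug is independently found in a year
    stockpile_size = 50               # assumed: vulnerabilities held secret at any one time
    zero_days_detected_per_year = 25  # assumed: zero-days observed in the wild per year

    expected_rediscoveries = annual_rediscovery_rate * stockpile_size
    share = expected_rediscoveries / zero_days_detected_per_year

    print(f"Expected rediscoveries per year: {expected_rediscoveries:.1f}")  # 7.5
    print(f"Share of detected zero-days: {share:.0%}")                       # 30%

With these made-up inputs the stockpile would account for roughly 30% of detected zero-days, which is the shape of the one-third claim; the real question, as the authors stress, is what the true rate and stockpile numbers are.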

Another paper, from the RAND Corporation, aims to establish “some baseline metrics regarding the average lifespan of zero-day vulnerabilities, the likelihood of another party discovering a vulnerability within a given time period, and the time and costs involved in developing an exploit for a zero-day vulnerability.”

According to its findings, zero-day exploits and their underlying vulnerabilities “have a rather long average life expectancy (6.9 years). Only 25% do not survive to 1.51 years, and only 25% live more than 9.5 years.”
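
Those three figures hang together roughly like an exponential lifetime, as this quick sanity check shows. The exponential model is an illustrative simplification for this article, not RAND’s method (the paper uses proper survival analysis):

    import math

    mean_life = 6.9  # years: RAND's reported average life expectancy

    # For an exponential lifetime, P(dead by t) = 1 - exp(-t / mean_life).
    def prob_dead_by(t):
        return 1 - math.exp(-t / mean_life)

    print(f"P(dead by 1.51 years): {prob_dead_by(1.51):.0%}")       # ~20%, vs 25% reported
    print(f"P(alive past 9.5 years): {1 - prob_dead_by(9.5):.0%}")  # ~25%, matching the report

The upper quartile matches almost exactly, while the lower one comes out nearer 20% than 25%, so the reported numbers describe a distribution close to, but not exactly, exponential: long-lived on average, with a quarter of vulnerabilities dying young.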

The combined Black Hat presentation focused in part on what both groups studied: the Vulnerability Equities Process (VEP), which helps determine whether “a software vulnerability known to the US government will be disclosed or kept secret.”

The Belfer Center group said they did not intend to “relitigate” the VEP debate in their paper. But given that they and the researchers at RAND have done some of the “rigorous analysis” they called for, why not at least weigh in on it?

Trey Herr, a postdoctoral fellow with the Belfer Center’s Cyber Security Project and a co-author of that paper, said that while government use of vulnerabilities is “necessary,” he thinks government has not “walked the line very well” between disclosure and secrecy.

He said that even former NSA director Gen. Michael Hayden has acknowledged that if the agency can’t keep its secret capabilities secure, it shouldn’t be allowed to have them.

But Herr, in a post on the Lawfare blog, said he is hopeful that the recently introduced PATCH (Protecting our Ability to Counter Hacking) Act in Congress will “codify” the VEP “to facilitate accountability and continuity between administrations.”

He said the bill is significant because “it marks the first time that Congress will be actively involved in meaningful discussion about government disclosure of vulnerabilities.”

But Herr also contends that even if the correct balance between disclosure and secrecy is achieved, “that is only a small piece of the puzzle when it comes to software security.”


6 Comments

This is precisely what holds technological progress back. Consumers have a right to security. Buyer beware when your government cannot secure free marketplaces from financial ruin. We need transparent, thoughtful deliberation to stem sources of corruption, digital or non-digital.

Everyone who was hurt by this problem should sue Obama. He gave his people the okay to do whatever they wanted and they created a mess for the world. It’s immoral for any government to do this. In the US the government can’t legally spy on Americans but I’m sure they did with all the secret weapons they had.

It seems to me that the USA’s NSA (and corresponding institutions in other governments) needs to keep these vulnerabilities secret. That’s their job.
However, they should also monitor the world’s hackers, watching for the same vulnerability to be used by the bad guys*. Once they see it in the wild, they need to quickly move to get the bug identified by the software’s owner(s).
*I’m discounting that some people consider the NSA (etc.) to be included in the list of “bad guys”.

Judging purely by the observable damage done by NSA/CIA malicious/hacking software, we see billions in damage from the software and little if any benefit (maybe they would like to tell the world of some prior benefits to firm up their position?).
Going on public data alone, these departments’ practices are to the detriment of the world. But they do make job security for themselves.

In fairness to the security services, there are good reasons why they might choose not to comment, so you’ll never get a fair picture of the profit and loss.

For all we know, EternalBlue was a stovepipe for information from adversaries for years. Or maybe it was never used. And how do you reconcile the cost of, say, WannaCry, where it’s quite easy to put a monetary value on it, versus a prevented terrorist attack or an incremental blunting of an enemy’s ability to spy on you?

Yeah, if they would do some disclosure like “We used exploit #42 to infiltrate the pandawings gang in 2003 and saved 300 puppies from being turned into boots”, it would give them some credibility. I get FBI reports as part of my job (not particularly secret), and these are vague at best about the effectiveness of exploits they use.
It would go a long way in swaying opinion if they would disclose even 10-year-old exploits used for the benefit of peace, is all I’m saying.

