
What’s the best approach to patching vulnerabilities?

Researchers ask: with only 1 in 20 vulnerabilities exploited, what's the best approach to patching?

New research shows that most vulnerabilities aren’t exploited, and those that are tend to have a high CVSS score (a score awarded on the basis of how dangerous and how easy to exploit a vulnerability is). So, not surprisingly, the most easily exploited flaws are the ones exploited most frequently.

What’s more surprising is that there’s apparently no relationship between the proof-of-concept (PoC) exploit code being published publicly online and the start of real-world attacks.

The numbers: the researchers identified 4,183 unique security flaws exploited in the wild between 2009 and 2018. That’s fewer than half of the 9,726 vulnerabilities for which exploit code had been written and posted online.

Those numbers come from a study in which a team of researchers from Cyentia, Virginia Tech, and the RAND Corporation took a look at how to balance the pluses and minuses of two competing strategies for tackling vulnerabilities.

What’s the best way to herd cats?

Fixing them all would get you great coverage, but that’s a lot of time and resources spent on sealing up low-risk vulnerabilities. It would be more efficient to concentrate on patching just some high-risk vulnerabilities, but that approach leaves organizations open to whatever vulnerabilities they didn’t prioritize.

How do you know which vulnerabilities are worth fixing? The researchers sought to figure that out by using data collected from a multitude of sources, along with machine learning, to build and then compare a series of remediation strategies and see how each performs with regard to the tradeoff between coverage and efficiency.
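In the paper’s terms, coverage is the share of actually-exploited vulnerabilities that a strategy remediates, and efficiency is the share of remediated vulnerabilities that were actually exploited (in machine learning terms, recall and precision). As a minimal sketch of how any candidate strategy can be scored on that tradeoff (the function name and data layout here are illustrative, not taken from the paper):

```python
def score_strategy(patched, exploited, total):
    """Score a remediation strategy against observed in-the-wild exploitation.

    patched   -- set of CVE IDs the strategy says to remediate
    exploited -- set of CVE IDs observed exploited in the wild
    total     -- total number of published vulnerabilities
    """
    true_pos = len(patched & exploited)     # patched and actually exploited
    coverage = true_pos / len(exploited)    # recall: exploited flaws we fixed
    efficiency = true_pos / len(patched)    # precision: fixes that mattered
    # Accuracy also credits correctly ignoring never-exploited flaws
    true_neg = total - len(patched | exploited)
    accuracy = (true_pos + true_neg) / total
    return {"coverage": coverage, "efficiency": efficiency, "accuracy": accuracy}
```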

The team’s white paper, titled Improving Vulnerability Remediation Through Better Exploit Prediction, was presented Monday at the 2019 Workshop on the Economics of Information Security in Boston.

The researchers used a list of all security flaws, scores, and vulnerability characteristics extracted from the National Institute of Standards and Technology’s (NIST’s) National Vulnerability Database (NVD). They also used data on exploits found in the wild collected from FortiGuard Labs, with further evidence of exploitation gathered from the SANS Internet Storm Center, Secureworks CTU, AlienVault’s OSSIM metadata, and ReversingLabs metadata.

Information about written exploit code came from Exploit DB, Contagio, ReversingLabs, and Secureworks CTU, as well as from the exploitation frameworks Metasploit, D2 Security’s Elliot Kit, and the Canvas Exploitation Framework.

A crucial point: the researchers made what they considered a significant change, and expansion, to earlier modeling. For a vulnerability to be counted in their models, predictions about the likelihood that it would be exploited weren’t good enough; the vulnerability had to have been exploited for real, in the wild.

From the white paper:

Notably, we observe exploits in the wild for 5.5% of vulnerabilities in our dataset compared to 1.4% in prior works.

They found that the 4,183 security flaws exploited between 2009 and 2018 were a small fraction of the roughly 76,000 vulnerabilities discovered during that time.

While that works out to “only” about 5.5% of vulnerabilities being exploited in the wild, “only” one in 20 vulnerabilities being exploited is still quite a lot more than the roughly one in 70 (1.4%) shown “in prior works”.

The best strategy?

The research looked at three strategies for prioritising vulnerabilities: using the CVSS score, patching bugs with known exploits, and patching bugs tagged with specific attributes such as “remote code execution”. The researchers also created a machine learning model for each strategy to see if it could outperform the simple, rules-based approaches.

For people following a strategy based on CVSS scores, the researchers reckoned the best combination of coverage, accuracy and efficiency was achieved by patching anything with a CVSS score of seven or more:

…a rule-based strategy of remediating all vulnerabilities with CVSS 7 or higher would achieve coverage of slightly over 74% with an efficiency of 9%, and accuracy of only 57%. This appears to be the best balance among CVSS-based strategies, even though it would still result in unnecessarily patching 31k (76k total – 35k patched) unexploited vulnerabilities.
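Plugging the paper’s round numbers into those definitions shows where the quoted figures come from. A quick back-of-the-envelope check (the figures are the paper’s, rounded as in the quote; the code is just arithmetic):

```python
total = 76_000       # vulnerabilities published 2009-2018 (approx.)
patched = 35_000     # those with CVSS 7 or higher (approx., per the quote)
exploited = 4_183    # flaws observed exploited in the wild

covered = round(0.74 * exploited)   # ~74% coverage reported by the paper
efficiency = covered / patched      # ~0.09: only ~9% of patched flaws were exploited
wasted = patched - covered          # ~31,900 patched-but-unexploited, i.e. the "31k"

print(f"efficiency ≈ {efficiency:.0%}, unnecessary patches ≈ {wasted:,}")
```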

Interestingly, the rules-based approach based on CVSS ran the machine learning model based on CVSS close, with the model performing only “marginally better” than a strategy based on patching any given CVSS score and above.

When looking at strategies based on the availability of exploit code in one of three exploit repositories – Elliot, Exploit DB, and Metasploit – the researchers found that your choice of repository matters:

…the Exploit DB strategy has the best coverage, but suffers from considerably poor efficiency, while Metasploit performs exceptionally well in efficiency (highest out of all the rules-based approaches), with considerable reduction in coverage, and drastically smaller level of effort required to satisfy the strategy

Unlike the CVSS-based strategy, the researchers found that their machine learning model following a “published exploit” strategy achieved a significantly better balance of coverage and efficiency than a rules-based approach.

For the final “reference tagging” strategy, the researchers patched bugs if they had been tagged with one of 83 different keywords, then looked at the efficacy of a patching approach based on each one. None stood out to the researchers as an effective approach, and all were outperformed by a “reference tagging” machine learning model:

Overall, like the other rules-based strategies, focusing on individual features (whether CVSS, published exploit, or reference tags) as a decision point yields inefficient remediation strategies. This holds true for all of the individual multi-word expressions.

And better than all those individual strategies, whether rules-based or driven by machine learning, was a machine learning model that used all the available data, they said.
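The paper doesn’t prescribe a particular toolchain for that combined model, but its shape is easy to picture. Here’s a hedged sketch using scikit-learn on synthetic data, where the feature set and classifier choice are illustrative assumptions rather than the researchers’ exact setup:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

# Illustrative feature matrix: one row per CVE, combining the signals
# discussed above: CVSS base score, whether exploit code was published,
# and a few binary reference-tag indicators (e.g. "remote code execution").
rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.uniform(0, 10, n),        # CVSS base score
    rng.integers(0, 2, n),        # exploit code published?
    rng.integers(0, 2, (n, 5)),   # a handful of reference tags
])
y = rng.integers(0, 2, n)         # 1 = exploited in the wild (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
pred = model.predict(X_test)

# In the paper's vocabulary: efficiency ~ precision, coverage ~ recall.
# With random synthetic labels these scores are meaningless; real CVE data
# from NVD plus exploitation feeds would go in X and y instead.
print("efficiency:", precision_score(y_test, pred))
print("coverage:  ", recall_score(y_test, pred))
```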

The researchers think their work might be used to improve the CVSS standard and by bodies that issue threat and risk assessments, like the Department of Homeland Security. It could even be used, they suggested, in the Vulnerability Equities Process that determines whether vulnerabilities should be disclosed to the public or kept secret and used in offensive operations.

1 Comment

A well-written article for a non-trivial discipline. The topic of best practices in this space can, and should, elicit a host of viewpoints. Determining the right strategy and balance to improve patching efficiency and increase protection should be contextualized. Industry vignettes, the operational criticality of the technology itself, and the compensating controls in place all matter. Security and IT teams want to reduce attack vectors throughout the tech ecosystem. And most will prioritize patching vulns detected in supply chain, ICS, and healthcare systems over network printers, online learning systems, or Yammer. Just saying.
