Naked Security

Equifax felled by a months-old Apache Struts vulnerability

Patching vulnerabilities often means juggling risk and practicality, which can mean gambling with customer data

Equifax today posted an announcement on their website with more information about what they believe is the source of the massive breach.

There are two key statements of interest for us, so let’s take a look:

We know that criminals exploited a US website application vulnerability.

This isn’t terribly surprising: Verizon’s DBIR research has repeatedly shown that web applications are the most common attack target by a large margin. The targets are plentiful, their security is generally a bit more lax, and research has shown that the vulnerability/patch gap is even greater for web apps than it is for most other application types. But more on that gap in a moment.

The vulnerability was Apache Struts CVE-2017-5638

Wince. This Struts vulnerability (not to be confused with the more recent Return of Struts) was a nasty server-side remote code execution bug made known to the public in March of this year. Naked Security’s Paul Ducklin did a marvelous deep-dive into how it works in this blog post, but the key point is this:

Without logging in, without fetching the original web form page in the first place, and without even having any form data to upload, a crook may be able to trigger this bug simply by visiting the web page listed in the action field of any of your web forms.

If you use Struts 2 somewhere in your network, and still haven’t applied the latest patch, you really ought to, because this vulnerability is easy to exploit by anyone who wants to try.
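As an aside, the fixed releases for CVE-2017-5638 were Struts 2.3.32 and 2.5.10.1, so checking whether a deployment sits in the affected range comes down to a version comparison. Here’s a minimal sketch in Python; the helper names are ours, and a real audit should lean on a proper software-composition-analysis tool rather than hand-rolled parsing:

```python
# Hypothetical helper: decide whether a Struts 2 version string falls in
# the ranges affected by CVE-2017-5638 (2.3.5-2.3.31 and 2.5-2.5.10,
# fixed in releases 2.3.32 and 2.5.10.1).

def parse_version(version: str) -> tuple:
    """Turn '2.5.10.1' into (2, 5, 10, 1) so tuples compare in order."""
    return tuple(int(part) for part in version.split("."))

def vulnerable_to_cve_2017_5638(version: str) -> bool:
    v = parse_version(version)
    return ((2, 3, 5) <= v < (2, 3, 32)) or ((2, 5) <= v < (2, 5, 10, 1))

print(vulnerable_to_cve_2017_5638("2.3.31"))    # True: inside the affected range
print(vulnerable_to_cve_2017_5638("2.5.10.1"))  # False: this is the fix release
```

Tuple comparison does the right thing here because a shorter prefix like `(2, 5, 10)` sorts below `(2, 5, 10, 1)`, matching how dotted versions order.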

It’s possible that Equifax’s vulnerable servers weren’t specifically targeted but were merely caught in a wide net cast by attackers looking to pwn any unpatched Struts installations they could find. Still, given that this vulnerability was made public in March and Equifax puts the breach somewhere in May, that’s a span of more than two months in which a vulnerable server was left wide open to attackers.

The Equifax breach is, unfortunately, a great example of attackers taking advantage of the dreaded gap between vulnerability discovery and vulnerability remediation. Various researchers have looked into how long the average organization takes to patch a vulnerability, and the figure hovers between 60 and 150 days, depending on the research source.

This means that criminals taking advantage of vulnerabilities tend to have time on their side, and they generally act within 40 to 60 days. So in many cases, the bad guys have about two months of wiggle room before a vulnerability they’re using gets patched. And it looks like in Equifax’s case, that little bit of wiggle room was all the attackers needed to carry out one of the biggest data breaches in history.

The general wisdom when news of a bad bug makes the rounds is to patch as quickly as possible. The asterisk is that this advice is hardly news to any IT professional: in an ideal world, thanks to ample resources, patches would be tested and then deployed flawlessly the moment they became available, nothing would break, and the nasty vuln would be gone. Simple as that.

But the reality is, of course, always more complicated: patches don’t get deployed as quickly as they should, because the to-do list of patches to apply is already quite long.

And sometimes fixing a security vulnerability can cause all sorts of unforeseen issues in production systems, which may necessitate rolling the patch back (another nightmarish scenario), even if the patch was tested before deployment, which it isn’t always. In the case of Struts, a server-side vulnerability, it’s possible that patching meant taking key systems offline to deploy a fix, which can be a political and logistical quagmire.

When a particularly nasty bug makes the headlines — such as Heartbleed — the patch for that bug may get pushed to the top of the priority pile thanks to the spotlight shone on it (especially from a very concerned C-level exec), but often less-glamorous but just-as-dangerous bugs are added to the lengthening queue, and there’s an element of risk acceptance and hope: What are the chances that this bug will come to bite my systems in the time it takes me to patch?

Ultimately it’s a gamble with customer data, and when the gamble fails it’s customers that suffer most.


This is why you need to audit your software’s dependencies regularly and run updates. If you have to refactor your code to make the application work after the patch, do it!
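One way to picture that kind of audit: cross-check your pinned dependency versions against an advisory feed. A toy sketch in Python, with entirely hypothetical data; a real audit would pull advisories from a live source such as the NVD via a dependency-checking tool:

```python
# Toy dependency audit: flag any pinned dependency whose exact version
# appears in a known-vulnerable list. Both data sets below are
# hypothetical stand-ins for a real manifest and a real advisory feed.

dependencies = {
    "struts2-core": "2.3.31",
    "commons-collections": "3.2.2",
}

advisories = [
    ("struts2-core", "2.3.31", "CVE-2017-5638"),
    ("struts2-core", "2.5.10", "CVE-2017-5638"),
]

def audit(deps, advisories):
    """Return (name, version, cve) for every advisory matching a pin."""
    findings = []
    for name, vuln_version, cve in advisories:
        if deps.get(name) == vuln_version:
            findings.append((name, vuln_version, cve))
    return findings

for name, version, cve in audit(dependencies, advisories):
    print(f"{name} {version} is affected by {cve} - update it")
```

A real tool would match version ranges rather than exact strings, but the shape of the check is the same: manifest in, findings out.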


Any time security depends on humans remembering to do things periodically, there will be trouble. Vulnerability assessments should be automated, with the humans notified when something comes up.
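That automation can be as simple as a scheduled scan whose only human touchpoint is an alert. A rough sketch, where scan() and notify() are placeholders for a real scanner and a real alert channel (email, chat, pager):

```python
# Sketch of an automated assessment loop: scan on a schedule, and only
# involve humans when a finding appears. scan() and notify() are
# hypothetical hooks, not a real scanner or alerting API.
import time

def scan():
    """Placeholder scanner: return descriptions of any findings."""
    return ["struts2-core 2.3.31: affected by CVE-2017-5638"]

def notify(findings):
    """Placeholder alert channel: print; a real one would email or page."""
    for finding in findings:
        print("ALERT:", finding)

def run_periodically(interval_seconds, iterations):
    """Run the scan on a schedule; humans hear about it only on findings."""
    for _ in range(iterations):
        findings = scan()
        if findings:
            notify(findings)
        time.sleep(interval_seconds)

run_periodically(interval_seconds=0, iterations=1)
```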


You are correct, but it’s not so simple to fix. There is always going to be a gap between a vulnerability’s fixed date and the date the servers get patched. Too many patches intended to fix one problem open up two more.
Most companies put a few days between those two times, and use the time between to test things as best they can.
Problem is that in businesses, the web servers are sanctified. So, patching them causes an even longer delay. This appears to be what happened to Equifax.
Now, the reason you’re still right is because two months is way too long for the gap. They needed to minimize that time gap, and they didn’t. Whatever their excuse, it’s not enough to validly explain a two-month gap.
(When I was in the security monitoring business, we never went more than a day between a patch’s release and the testing cycle, and never more than a week to implement company-wide. That’s probably a little extreme, but we were a critical infrastructure company, so we needed to always be on top of such things. It cost more, but I think it was worth it.)

