Naked Security

S3 Ep144: When threat hunting goes down a rabbit hole

Latest episode - check it out now!


Why your Mac’s calendar app says it’s JUL 17. One patch, one line, one file. Careful with that {axe,file}, Eugene. Storm season for Microsoft. When typos make you sing for joy.

No audio player below? Listen directly on Soundcloud.

With Doug Aamoth and Paul Ducklin. Intro and outro music by Edith Mudge.

You can listen to us on Soundcloud, Apple Podcasts, Google Podcasts, Spotify and anywhere that good podcasts are found. Or just drop the URL of our RSS feed into your favourite podcatcher.


DOUG.  Patching by hand, two kinda/sorta Microsoft zero-days, and “Careful with that file, Eugene.”

All that, and more, on the Naked Security podcast.


Welcome to the podcast, everybody.

I am Doug Aamoth; he is Paul Ducklin.

Paul, how do you do today?

DUCK.  Were you making an allusion to The Pink Floyd?

DOUG.  *THE* Pink Floyd, yes!

DUCK.  That’s the name by which they were originally known, I believe.

DOUG.  Oh, really?

DUCK.  They dropped the “The” because I think it got in the way.

The Pink Floyd.

DOUG.  That’s a fun fact!

And as luck would have it, I have more Fun Facts for you…

You know we start the show with This Week in Tech History, and we’ve got a two-fer today.

This week, on 17 July 2002, Apple rolled out “iCal”: calendar software that featured internet-based calendar sharing and the ability to manage multiple calendars.

“JUL 17” was prominently featured on the app’s icon, which even led July 17 to become World Emoji Day, established in 2014.

It’s quite a cascading effect, Paul!

DUCK.  Although, on your iPhone, you’ll notice that the icon changes to today’s date, because that’s very handy.

And you’ll notice that other service providers may or may not have chosen different dates, because “why copy your competition”, indeed.

DOUG.  Alright, let’s get into it.

We’ll talk about our first story.

This is about Zimbra and adventures in cross-site scripting.

Good old XSS, Paul:

DUCK.  Yes.

That’s where you are essentially able to hack a website to include rogue JavaScript without breaking into the server itself.

You perform some action, or create some link to that site, that tricks the site into including content in its reply that doesn’t just mention, for example, the search term you typed in, like My Search Term, but includes additional text that shouldn’t be there, like My search <script> rogue JavaScript </script>.

In other words, you trick a site into displaying content, with its own URL in the address bar, that contains untrusted JavaScript in it.

And that means that the JavaScript you have sneakily injected actually has access to all the cookies set by that site.

So it can steal them; it can steal personal data; and, even more importantly, it can probably steal authentication tokens and stuff like that to let the crooks get back in next time.

DOUG.  OK, so what did Zimbra do in this case?

DUCK.  Well, the good news is that they reacted quickly because, of course, it was a zero-day.

Crooks were already using it.

So they actually took the slightly unusual approach of saying, “We’ve got the patch coming. You will get it fairly soon.”

But they said, quite thoughtfully, “We understand that you may want to take action sooner rather than later.”

Now, unfortunately, that does mean writing a script of your own to go and patch one line of code in one file in the product distribution on all your mailbox nodes.

But it’s a very small and simple fix.

And, of course, because it’s one line, you can easily change the file back to what it was if it should cause problems.

If you were dead keen to get ahead of the crooks, you could do that without waiting for the full release to drop…

DOUG.  And what a sense of accomplishment, too!

It’s been a while since we’ve been able to roll up our sleeves and just hand-patch something like this.

It’s like fixing the sink on a Saturday morning… you just feel good afterwards.

So if I was a Zimbra user, I’d be jumping all over this just because I like to get my hands on… [LAUGHTER]

DUCK.  And, unlike patching the sink, there was no crawling around in tight cupboards, and there was no risk of flooding your entire property.

The fix was clear and well-defined.

One line of code changed in one file.

DOUG.  Alright, so if I’m a programmer, what are some steps I can take to avoid cross-site scripting such as this?

DUCK.  Well, the nice thing about this bug, Doug, is it almost acts as documentation for the kind of things you need to look out for in cross-site scripting.

The patch shows that there’s a server side component which was simply taking a string and using that string inside a web form that would appear at the other end, in the user’s browser.

And you can see that what the program *now* does (this particular software is written in Java)… it calls a function escapeXML(), which is, if you like, the One True Way of taking a text string that you want to display and making sure that there are no magic XML or HTML characters in there that could trick the browser.

In particular: less than (<); greater than (>); ampersand (&); double quote ("); or single quote, also known as apostrophe (').

Those get converted into their long-form, safe HTML codes.
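In Java, the language this particular software is written in, that kind of escaping can be sketched roughly as follows. (This is an illustration of the general technique, not Zimbra’s actual patch; the class and method names are invented.)

```java
// Minimal sketch (not Zimbra's actual code) of escaping the five "magic"
// XML/HTML characters so they display as plain text in the browser
// instead of being interpreted as markup.
public class XmlEscapeSketch {
    public static String escapeXml(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<':  out.append("&lt;");   break;  // less than
                case '>':  out.append("&gt;");   break;  // greater than
                case '&':  out.append("&amp;");  break;  // ampersand
                case '"':  out.append("&quot;"); break;  // double quote
                case '\'': out.append("&apos;"); break;  // single quote (apostrophe)
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // A booby-trapped "search term" comes out as harmless text:
        System.out.println(escapeXml("My search <script>rogue JavaScript</script>"));
        // prints: My search &lt;script&gt;rogue JavaScript&lt;/script&gt;
    }
}
```

Run through a function like this, the injected `<script>` tag never reaches the browser as markup, so the rogue JavaScript never runs.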

If I may use our standard Naked Security cliché, Doug: Sanitise thine inputs is the bottom line here.

DOUG.  Oooh, I love that one!

Great, let’s move on to Pink Floyd, obviously… we’ve been waiting for this all show.

If Pink Floyd were cybersecurity researchers, it’s fun to imagine that they may have written a hit song called “Careful with that file, Eugene” instead, Paul. [Pink Floyd famously produced a song called Careful with that axe, Eugene.]

DUCK.  Indeed.

“Careful with that file” is a reminder that sometimes, when you upload a file to an online service, if you pick the wrong one, you might end up redistributing the file rather than, for example, uploading it for secure storage.

Fortunately, not too much harm was done in this case, but this was something that happened at Google’s Virus Total service.

Listeners will probably know that Virus Total is a very popular service where, if you’ve got a file that you either know is malware and want to know what lots of different products call it (so you know what to go hunting for in your threat logs), or if you think, “Maybe I want to get the sample securely to as many vendors as possible, as quickly as possible”…

…then you upload to Virus Total.

The file is meant to be made available to dozens of cybersecurity companies almost immediately.

That’s not quite the same as broadcasting it to the world, or uploading it to a leaky online cloud storage bucket, but the service *is* meant to share that file with other people.

And unfortunately, it seems that an employee inside Virus Total accidentally uploaded an internal file that was a list of customer email addresses to the Virus Total portal, and not to whatever portal they were supposed to use.

Now, the real reason for writing this story up, Doug, is this.

Before you laugh; before you point fingers; before you say, “What were they thinking?”…

..stop and ask yourself this one question.

“Have I ever sent an email to the wrong person by mistake?” [LAUGHTER]

That’s a rhetorical question. [MORE LAUGHTER]

We’ve all done it…

DOUG.  It is rhetorical!

DUCK.  …some of us more than once. [LAUGHTER]

And if you have ever done that, then what is it that guarantees you won’t upload a file to the wrong *server* by mistake, making a similar kind of error?

It is a reminder that there is many a slip, Douglas, between the cup and the lip.

DOUG.  Alright, we do have some tips for the good people here, starting with, I’d say, arguably one of our most unpopular pieces of advice: Log out from online accounts whenever you aren’t actually using them.

DUCK.  Yes.

Now, ironically, that might not have helped in this case because, as you can imagine, Virus Total is specifically engineered so that anybody can *upload* files (because they’re meant to be shared for the greater good of all, quickly, to people who need to see them), but only trusted customers can *download* stuff (because the assumption is that the uploads often do contain malware, so they’re not meant to be available to just anybody).

But when you think about the number of sites that you probably remain logged into all the time, that just makes it more likely that you will take the right file and upload it to the wrong place.

If you’re not logged into a site and you do try and upload a file there by mistake, then you will get a login prompt…

…and that will protect you from yourself!

It’s a fantastically simple solution, but as you say, it’s also outrageously unpopular because it is modestly inconvenient. [LAUGHTER]

DOUG.  Yes!

DUCK.  Sometimes, however, you’ve got to take one for the team.

DOUG.  Not to shift all the onus to the end users: If you’re in the IT team, consider putting controls on which users can send what sorts of files to whom.

DUCK.  Unfortunately, this kind of blocking is unpopular for, if you like, the other-side-of-the-coin reason why people don’t like logging out of accounts when they’re not using them.

When IT comes along and says, “You know what, we’re going to turn on the Data Loss Prevention [DLP] parts of our cybersecurity endpoint product”…

…people go, “Well, that’s inconvenient. What if it gets in the way? What if it interferes with my workflow? What if it causes a hassle for me? I don’t like it!”

So, a lot of IT departments may end up staying a little bit shy of potentially interfering with workflow like that.

But, Doug, as I said in the article, you will always get a second chance to send a file that wouldn’t go out the first time, by negotiating with IT, but you never get the chance to unsend a file that was not supposed to go out at all.

DOUG.  [LAUGHS] Exactly!

Alright, good tips there.

Our last story, but certainly not least.

Paul, I don’t have to remind you, but we should remind others…

…applied cryptography is hard, security segmentation is hard, and threat hunting is hard.

So what does that all have to do with Microsoft?

DUCK.  Well, there’s been a lot of news in the media recently about Microsoft and its customers getting turned over, hit up, probed and hacked by a cybercrime group known as Storm.

And one part of this story revolves around 25 organisations that had these rogues inside their Exchange business.

They’re sort-of zero-days.

Now, Microsoft published a pretty full and fairly frank report about what happened, because obviously there were at least two blunders by Microsoft.

The way they tell the story can teach you an awful lot about threat hunting, and about threat response when things go wrong.

DOUG.  OK, so it looks like Storm got in via Outlook Web Access [OWA] using a bunch of usurped authentication tokens, which are basically like temporary cookies that you present to say, “This person’s already logged in, they’re legit, let them in.”


DUCK.  Exactly, Doug.

When that kind of thing happens, which obviously is worrying because it allows the crooks to bypass the strong authentication phase (the bit where you have to type in your username, type in your password, then do a 2FA code; or where you have to present your Yubikey; or you have to swipe your smart card)…

…the obvious assumption, when something like that happens, is that the person at the other end has malware on one or more of their users’ computers.

Malware does get a chance to take a peek at things like browser content before it gets encrypted, which means that it can leech out authentication tokens and send them off to the crooks where they can be abused later.

Microsoft admit in their report that this was their first assumption.

And if it’s true, it’s problematic because it means that Microsoft and those 25 people have to go running around trying to do the threat hunting.

But if that *isn’t* the explanation, then it’s important to figure that out early on, so you don’t waste your own and everyone else’s time.

Then Microsoft realised, “Actually it looks as though the crooks are basically minting their own authentication tokens, which suggests that they must have stolen one of our supposedly secure Azure Active Directory token-signing keys.”

Well, that’s worrying!

*Then* Microsoft realised, “These tokens are actually apparently digitally signed by a signing key that’s only really supposed to be used for consumer accounts, what are called MSAs, or Microsoft accounts.”

In other words, the kind of signing key that would be used to create an authentication token, say if you or I were logging into our personal service.

Oh, no!

There’s another bug that means that it is possible to take a signed authentication token that is not supposed to work for the attack they have in mind, and then go in and mess around with people’s corporate email.

So, that all sounds very bad, which of course it is.

But there is an upside…

…and that is the irony that because this wasn’t supposed to work, because MSA tokens aren’t supposed to work on the corporate Azure Active Directory side of the house, and vice versa, no one at Microsoft had ever bothered writing code to use one token on the other playing field.

Which meant that all of these rogue tokens stood out.

So there was at least a giant, visible red flag for Microsoft’s threat hunting.

Fortunately, because it’s a cloud-side problem, fixing it means that you and I don’t need to rush out and patch our systems.

Basically, the solution is: disown the signing key that’s been compromised, so it doesn’t work anymore, and while we’re about it, let’s fix that bug that allows a consumer signing key to be valid on the corporate side of the Exchange world.
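In schematic form, the missing check looked something like the sketch below. The key names, the realm tags, and the deliberately simplified HMAC-signed “token” are all invented for illustration; this is nothing like Microsoft’s real token infrastructure. The point is that validation has to confirm not just that a token’s signature is valid, but that the signing key is supposed to be valid for the side of the house the token is presented to.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

// Hypothetical sketch of realm-aware token validation. Key names, realms
// and the token format are invented; this is NOT Microsoft's mechanism.
public class TokenScopeSketch {
    // Which realm each signing key is supposed to be used for.
    static final Map<String, String> KEY_REALM = Map.of(
        "consumer-key-1",  "consumer",
        "corporate-key-1", "corporate"
    );

    // Secret key material (invented for this sketch).
    static final Map<String, byte[]> KEY_MATERIAL = Map.of(
        "consumer-key-1",  "secret-A".getBytes(StandardCharsets.UTF_8),
        "corporate-key-1", "secret-B".getBytes(StandardCharsets.UTF_8)
    );

    static byte[] sign(String keyId, String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(KEY_MATERIAL.get(keyId), "HmacSHA256"));
            return mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // A token is only acceptable if [a] the key is known (i.e. not
    // disowned), [b] the key's realm matches the service being accessed,
    // and [c] the signature verifies. Dropping check [b] recreates the
    // "consumer key accepted on the corporate side" bug.
    static boolean validate(String keyId, String payload, byte[] sig, String realm) {
        if (!KEY_MATERIAL.containsKey(keyId)) return false;       // unknown or disowned key
        if (!realm.equals(KEY_REALM.get(keyId))) return false;    // wrong side of the house
        return MessageDigest.isEqual(sign(keyId, payload), sig);  // signature check
    }
}
```

In this sketch, a token minted with the consumer key passes the pure cryptographic check, but is rejected the moment it’s presented to the corporate realm; and removing a compromised key from the key table (“disowning” it) kills its tokens everywhere at once.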

It sort-of is a bit of an “All’s well that ends well.”

But as I said, it’s a big reminder that threat hunting often involves a lot more work than you might at first think.

And if you read through Microsoft’s report, you can imagine just how much work went into this.

DOUG.  Well, in the spirit of catching everything, let’s hear from one of our readers in the Comment of the Week.

I can tell you first-hand after doing this for the better part of ten years, and I’m sure Paul can tell you first-hand after doing this in thousands and thousands of articles…

…typos are a way of life for a tech blogger, and if you’re lucky, sometimes you end up with a typo so good that you’re loath to fix it.

Such is the case with this Microsoft article.

Reader Dave quotes Paul as writing “which seemed to suggest that someone had indeed pinched a company singing [sic] key.”

Dave then follows up the quote by saying, “Singing keys rock.”

Exactly! [LAUGHTER]

DUCK.  Yes, it took me a while to realise that’s a pun… but yes, “singing key.” [LAUGHS]

What do you get if you drop a crate of saxophones into an army camp?


DUCK.  [AS DRY AS POSSIBLE] A-flat major.

DOUG.  [COMBINED LAUGH-AND-GROAN] Alright, very good.

Dave, thank you for pointing that out.

And we do agree that singing keys rock; signing keys less so.

If you have an interesting story, comment or question you’d like to submit, we’d love to read it on the podcast.

You can email, you can comment on any one of our articles, or you can hit us up on social: @nakedsecurity.

That’s our show for today; thanks very much for listening.

For Paul Ducklin, I’m Doug Aamoth, reminding you, until next time, to…

BOTH.  Stay secure!

