At the tail-end of last week, Microsoft published a report entitled Analysis of Storm-0558 techniques for unauthorized email access.
In this rather dramatic document, the company’s security team revealed the background to a previously unexplained hack in which data including email text, attachments and more were accessed:
from approximately 25 organizations, including government agencies and related consumer accounts in the public cloud.
The bad news, even though only 25 organisations were apparently attacked, is that this cybercrime may nevertheless have affected a large number of individuals, given that some US government bodies employ anywhere from tens to hundreds of thousands of people.
The good news, at least for the vast majority of us who weren’t exposed, is that the tricks and bypasses used in the attack were specific enough that Microsoft threat hunters were able to track them down reliably, so the final total of 25 organisations does indeed seem to be a complete hit-list.
Simply put, if you haven’t yet heard directly from Microsoft about being a part of this hack (the company has obviously not published a list of victims), then you may as well assume you’re in the clear.
Better yet, if better is the right word here, the attack relied on two security failings in Microsoft’s back-end operations, meaning that both vulnerabilities could be fixed “in house”, without pushing out any client-side software or configuration updates.
That means there aren’t any critical patches that you need to rush out and install yourself.
The zero-days that weren’t
Zero-days, as you know, are security holes that the Bad Guys found first and figured out how to exploit, thus leaving no days available during which even the keenest and best-informed security teams could have patched in advance of the attacks.
Technically, therefore, these two Storm-0558 holes can be considered zero-days, because the crooks busily exploited the bugs before Microsoft was able to deal with the vulnerabilities involved.
However, given that Microsoft carefully avoided the word “zero-day” in its own coverage, and given that fixing the holes didn’t require all of us to download patches, you’ll see that we referred to them in the headline above as semi-zero days, and we’ll leave the description at that.
Nevertheless, the nature of the two interconnected security problems in this case is a vital reminder of three things, namely that:
- Applied cryptography is hard.
- Security segmentation is hard.
- Threat hunting is hard.
The first signs of evildoing showed crooks sneaking into victims’ Exchange data via Outlook Web Access (OWA), using illicitly acquired authentication tokens.
Typically, an authentication token is a temporary web cookie, specific to each online service you use, that the service sends to your browser once you’ve proved your identity to a satisfactory standard.
To establish your identity strongly at the start of a session, you might need to enter a password and a one-time 2FA code, to present a cryptographic “passkey” device such as a Yubikey, or to unlock and insert a smart card into a reader.
Thereafter, the authentication cookie issued to your browser acts as a short-term pass so that you don’t need to enter your password, or to present your security device, over and over again for every single interaction you have with the site.
You can think of the initial login process like presenting your passport at an airline check-in desk, and the authentication token as the boarding card that lets you into the airport and onto the plane for one specific flight.
Sometimes you might be required to reaffirm your identity by showing your passport again, such as just before you get on the plane, but often showing the boarding card alone will be enough for you to establish your “right to be there” as you make your way around the airside parts of the airport.
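In code, a session-token scheme of the sort described above might look like the following minimal Python sketch. (The store, lifetime and function names here are illustrative inventions of ours, not any real service’s implementation.)

```python
import secrets
import time

# Illustrative in-memory session store: a token is issued only after a
# strong login, then accepted in place of full re-authentication until
# it expires -- like a boarding card valid for one specific flight.
SESSIONS = {}
TOKEN_LIFETIME = 3600  # seconds; an hour, purely for the sake of example

def issue_token(user_id):
    """Called only once the user has proved their identity fully."""
    token = secrets.token_urlsafe(32)  # unguessable random value
    SESSIONS[token] = (user_id, time.time() + TOKEN_LIFETIME)
    return token

def check_token(token):
    """Each subsequent request presents the token instead of a password."""
    entry = SESSIONS.get(token)
    if entry is None:
        return None  # unknown token: back to the check-in desk
    user_id, expiry = entry
    if time.time() > expiry:
        del SESSIONS[token]  # the boarding card has expired
        return None
    return user_id
```

Note that in this sketch the server keeps a list of live tokens, which is exactly the sort of shared state that the digitally signed tokens discussed later in the article are designed to avoid.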
Likely explanations aren’t always right
When crooks start showing up with someone else’s authentication token in the HTTP headers of their web requests, one of the most likely explanations is that the criminals have already implanted malware on the victim’s computer.
If that malware is designed to spy on the victim’s network traffic, it typically gets to see the underlying data after it’s been prepared for use, but before it’s been encrypted and sent out.
That means the crooks can snoop on and steal vital private browsing data, including authentication tokens.
Generally speaking, attackers can’t sniff out authentication tokens as they travel across the internet any more, as they commonly could until about 2010. That’s because every reputable online service these days requires that traffic to and from logged-on users must travel via HTTPS, and only via HTTPS, short for secure HTTP.
HTTPS uses TLS, short for transport layer security, which does what its name suggests. All data is strongly encrypted as it leaves your browser, before it gets onto the network, and isn’t decrypted until it reaches the intended server at the other end. The same end-to-end data scrambling process happens in reverse for the data that the server sends back in its replies, even if you try to retrieve data that doesn’t exist and all the server needs to tell you is a perfunctory 404 Page not found.
Fortunately, Microsoft threat hunters soon realised that the fraudulent email interactions weren’t down to a problem triggered at the client side of the network connection, an assumption that would have sent the victim organisations off on 25 separate wild goose chases looking for malware that wasn’t there.
The next-most-likely explanation is one that in theory is easier to fix (because it can be fixed for everyone in one go), but in practice is more alarming for customers, namely that the crooks have somehow compromised the process of creating authentication tokens in the first place.
One way to do this would be to hack into the servers that generate them and to implant a backdoor to produce a valid token without checking the user’s identity first.
Another way, which is apparently what Microsoft originally investigated, is that the attackers were able to steal enough data from the authentication servers to generate fraudulent but valid-looking authentication tokens for themselves.
This implied that the attackers had managed to steal one of the cryptographic signing keys that the authentication server uses to stamp a “seal of validity” into the tokens it issues, to make it as good-as-impossible for anyone to create a fake token that would pass muster.
By using a secure private key to add a digital signature to every access token issued, an authentication server makes it easy for any other server in the ecosystem to check the validity of the tokens that they receive. That way, the authentication server can even work reliably across different networks and services without ever needing to share (and regularly to update) a leakable list of actual, known-good tokens.
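Real-world token systems typically use asymmetric signatures, so that verifying servers only ever need the public half of the key; the following standard-library Python sketch uses a symmetric HMAC instead, purely to illustrate the sign-then-verify flow in a self-contained way. (The key, names and token format are all illustrative assumptions, not any real service’s design.)

```python
import base64
import hashlib
import hmac
import json

# Illustrative only: a real signing key would live in an HSM, and a real
# system would use an asymmetric signature so verifiers can't also sign.
SIGNING_KEY = b"demo-key-material"

def sign_token(claims: dict) -> str:
    """Stamp a 'seal of validity' onto the claims so other servers can check it."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    seal = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + seal

def verify_token(token: str):
    """Recompute the seal; reject the token outright if it doesn't match."""
    body, seal = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(seal, expected):
        return None  # forged or tampered-with token
    return json.loads(base64.urlsafe_b64decode(body))
```

Because the signature can be checked independently, no server in the ecosystem ever needs a live list of known-good tokens, which is precisely the property the paragraph above describes.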
A hack that wasn’t supposed to work
Microsoft ultimately determined that the rogue access tokens in the Storm-0558 attack were legitimately signed, which seemed to suggest that someone had indeed pinched a company signing key…
…but they weren’t actually the right sort of tokens at all.
Corporate accounts are supposed to be authenticated in the cloud using Azure Active Directory (AD) tokens, but these fake attack tokens were signed with what’s known as an MSA key, short for Microsoft account, which is apparently the initialism used to refer to standalone consumer accounts rather than AD-based corporate ones.
Loosely speaking, the crooks were minting fake authentication tokens that passed Microsoft’s security checks, yet those tokens were signed as if for a user logging into a personal Outlook.com account instead of for a corporate user logging into a corporate account.
In one word, “What?!!?!”
Apparently, the crooks weren’t able to steal a corporate-level signing key, only a consumer-level one (that’s not a disparagement of consumer-level users, merely a wise cryptographic precaution to divide-and-separate the two parts of the ecosystem).
But having pulled off this first semi-zero day, namely acquiring a Microsoft cryptographic secret without being noticed, the crooks apparently found a second semi-zero day by means of which they could pass off an access token signed with a consumer-account key that should have signalled “this key does not belong here” as if it were an Azure AD-signed token instead.
In other words, even though the crooks were stuck with the wrong sort of signing key for the attack they had planned, they nevertheless found a way to bypass the divide-and-separate security measures that were supposed to stop their stolen key from working.
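Microsoft hasn’t published the flawed validation logic, but the sort of bypass described above can be imagined along the following lines. (The key names, scopes and structure here are entirely hypothetical, sketched by us for illustration only.)

```python
# Hypothetical reconstruction of a scope-checking flaw of the kind
# described above: all names and values are our own invention.
CONSUMER_KEYS = {"msa-key-1"}    # consumer-account (MSA) signing keys
ENTERPRISE_KEYS = {"aad-key-1"}  # corporate (Azure AD) signing keys
ALL_KEYS = CONSUMER_KEYS | ENTERPRISE_KEYS

def validate_key_buggy(key_id, audience):
    # Flawed: any known signing key validates any audience, so a stolen
    # consumer key also unlocks enterprise resources.
    return key_id in ALL_KEYS

def validate_key_fixed(key_id, audience):
    # Fixed: the key must belong to the keyset for that specific audience,
    # restoring the divide-and-separate barrier between the two worlds.
    allowed = ENTERPRISE_KEYS if audience == "enterprise" else CONSUMER_KEYS
    return key_id in allowed
```

The essence of the fix is that knowing a key is genuine isn’t enough; the verifier must also check that the key is the right sort of key for the account being accessed.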
More bad-and-good news
The bad news for Microsoft is that this isn’t the only time the company has been found wanting in respect of signing key security in the past year.
The latest Patch Tuesday, indeed, saw Microsoft belatedly offering up blocklist protection against a bunch of rogue, malware-infected Windows kernel drivers that Redmond itself has signed under the aegis of its Windows Hardware Developer Program.
The good news is that, because the crooks were using corporate-style access tokens signed with a consumer-style cryptographic key, their rogue authentication credentials could reliably be threat-hunted once Microsoft’s security team knew what to look for.
In jargon-rich language, Microsoft notes that:
The use of an incorrect key to sign the requests allowed our investigation teams to see all actor access requests which followed this pattern across both our enterprise and consumer systems.
Use of the incorrect key to sign this scope of assertions was an obvious indicator of the actor activity as no Microsoft system signs tokens in this way.
In plainer English, the fact that no one at Microsoft knew about this flaw in advance (which is why it couldn’t be patched proactively) had an ironic upside: no one at Microsoft had ever written code that worked that way, either.
And that, in turn, meant that the rogue behaviour in this attack could be used as a reliable, unique IoC, or indicator of compromise.
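To see how such an IoC might be hunted for in practice, here is a hypothetical log-scanning sketch. (The log fields, key names and tenant labels are our own invention, not Microsoft’s actual telemetry.)

```python
# Hypothetical threat hunt: flag every enterprise access request whose
# token was signed with a consumer-scope key -- behaviour that no
# legitimate system exhibits, making it a reliable, unique IoC.
CONSUMER_KEYS = {"msa-key-1"}  # illustrative stolen-key identifier

def affected_organisations(access_log):
    """Return the set of tenants touched by requests matching the IoC."""
    return {
        entry["tenant"]
        for entry in access_log
        if entry["system"] == "enterprise"
        and entry["key_id"] in CONSUMER_KEYS
    }
```

Because no legitimate request ever matches this pattern, every hit is attacker activity, which is what makes an exhaustive victim list plausible in the first place.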
That, we assume, is why Microsoft now feels confident to state that it has tracked down every instance where these double-semi-zero day holes were exploited, and thus that its 25-strong list of affected customers is an exhaustive one.
What to do?
If you haven’t been contacted by Microsoft about this, then we think you can be confident you weren’t affected.
And because the security remedies have been applied inside Microsoft’s own cloud service (namely, disowning any stolen MSA signing keys and closing the loophole allowing “the wrong sort of key” to be used for corporate authentication), you don’t need to scramble to install any patches yourself.
However, if you are a programmer, a quality assurance practitioner, a red teamer/blue teamer, or otherwise involved in IT, please remind yourself of the three points we made at the top of this article:
- Applied cryptography is hard. You don’t just need to choose the right algorithms, and to implement them securely. You also need to use them correctly, and to manage any cryptographic keys that the system relies upon with suitable long-term care.
- Security segmentation is hard. Even when you think you’ve split a complex part of your ecosystem into two or more parts, as Microsoft did here, you need to make sure that the separation really does work as you expect. Probe and test the security of the separation yourself, because if you don’t test it, the crooks certainly will.
- Threat hunting is hard. The first and most obvious explanation isn’t always the right one, or might not be the only one. Don’t stop hunting when you have your first plausible explanation. Keep going until you have not only identified the actual exploits used in the current attack, but also discovered as many other potentially related causes as you can, so you can patch them proactively.
To quote a well-known phrase (and the fact that it’s true means we aren’t worried about it being a cliché): Cybersecurity is a journey, not a destination.
Short of time or expertise to take care of cybersecurity threat hunting? Worried that cybersecurity will end up distracting you from all the other things you need to do?
Learn more about Sophos Managed Detection and Response:
24/7 threat hunting, detection, and response ▶
Dave
“which seemed to suggest that someone had indeed pinched a company singing key…”
Singing keys rock :)
“but these fake attack tokens were signed with what’s known as an MSA key, short for Microsoft consumer account.”
Is that meant to be MCA key?
Good write up BTW. It’s hard to make crypto digestible for normal people ;) But I understood this I think.
Paul Ducklin
Singing keys. I liked that typo enough that I almost decided to approve your comment but not to fix it. (I did, thanks.)
As for MSA, that’s the official initialism. I assume that the S stands for “soft”, as in “Microsoft account”. I inserted the word consumer for clarity but it is, as you suggest, misleading in there, so I have edited that bit too.
Gyp Joe
Why does OWA need to be available anyway? If it isn’t blocked by country conditional access can the devices be restricted by policy whitelist?
Paul Ducklin
The advantage of OWA is that you don’t need a full-blown Outlook app installed as well as your browser… and that you can access Outlook from platforms where there is no native client, such as Linux or one of the BSDs.
Thus it is both popular and useful…
Gyp Joe
No I get that but couldn’t it be closed optionally if you didn’t need it or would phones fail to connect unless they fell under a device conditional access policy? Seems like a welcome mat nobody really needs.
Stefan B
“…were using corporate-style access tokens signed with a consumer-style cryptographic key, their rogue authentication credentials could reliably be threat-hunted once Microsoft’s security team…”
–> Understood for enterprise accounts, but if this key was also used to create fake tokens for consumer accounts, they would look identical to correct ones, right? In this case, Microsoft would not be able to identify any misuse of consumer accounts. And as many web applications out there use password reset flows based on verification mails, that could be a much broader problem!
–> Not having stored such powerful keys on HSMs (so that they could be stolen) is indeed grossly negligent!
Paul Ducklin
I wondered about that too.
Microsoft’s article says that the crooks “used forged authentication tokens to access user email from approximately 25 organizations, including government agencies and related consumer accounts in the public cloud”.
So it sounds as though some consumer accounts were accessed, though how the word “related” fits in there isn’t clear.
It seems likely, or at least feasible, that the forged MSA tokens using the stolen MSA key would be detectable, but perhaps not as obviously as tokens signed with “the wrong” key.
Or perhaps the compromised consumer accounts were tracked down via other means?