How the “Great SIM Heist” could have been avoided

You may very well have read about the latest leak supposedly sourced from the secret data stolen by whistleblower Edward Snowden.

The three-bullet version tells approximately this story:

  * Intelligence agencies are said to have got hold of secret data from a major SIM card manufacturer.
  * That data included the cryptographic keys for an enormous stash of SIMs.
  * With those keys, the agencies can decrypt calls made by phones using those SIMs.

Actually, there’s a subtle rider to the last item.

Having copies of the keys in the story doesn’t just let you listen in to present and future calls; it theoretically lets you decrypt old calls, too.

Understandably, a lot of coverage of what The Intercept has boldly entitled “The Great SIM Heist” is focusing on issues such as the audacity of the intelligence services.

There’s also speculation about the possible financial cost to the SIM manufacturer connected with (though not implicated in) the breach.

But we think there’s a more interesting angle to zoom in on, namely, “What is it about SIM cards that made this possible?”

Indeed, according to the story, there wasn’t really a “SIM heist” at all.

No SIM card was ever touched, physically or programmatically.

No SIMs were stolen or modified; no sneaky extra steps were inserted into the manufacturing process; there were no interdictions to intercept and substitute SIMs on the way to specific targets; there was no malware or hacking needed on any handsets or servers in the mobile network.

What was grabbed, if we have interpreted the claims correctly, was a giant list of cryptographic keys for an enormous stash of SIMs.

Many, if not most, of those SIMs have presumably (given the age of Snowden’s revelations) already been sold, deployed, used, and in some cases, cancelled and thrown away.

And yet these keys still have surveillance and intelligence-gathering value, both for already-intercepted but still uncracked call data, and for calls yet to be made by SIMs on the list.

How can that be?

The basic purpose of a SIM card is exactly what its name suggests: to act as a Subscriber Identity Module. (That’s why your mobile phone number isn’t tied to your handset: the number goes with your SIM from phone to phone, not the other way around.)

A SIM is a smartcard: it doesn’t just store data, like the magstripe on a non-smartcard does, but is also a miniature computer with secure storage and tamper protection.

That ought to make it ideal for cryptographic purposes, such as:

  1. Secure authentication to the mobile network. (This protects the company’s revenue by ensuring it can bill you accurately for calls.)
  2. Secure authentication of the network to your phone. (This makes it harder for imposters to man-in-the-middle your calls.)
  3. Secure encryption of calls. (This protects you from eavesdropping, which was a real problem with earlier mobile phones.)
  4. Resistance to SIM duplication. (This protects you and the network from “phone cloning,” where someone else racks up calls on your dime.)

You’re probably expecting the techniques used for (1) and (2) to involve public-key cryptography.

That’s where you have an encryption algorithm with two keys: one of them locks messages, so you can give that public key to anybody and everybody; the other is the private key that unlocks messages, which you keep to yourself.

This feature – one key to lock and another to unlock – can be used in two splendidly useful ways.

If I lock a message with your public key, I know that only you can unlock it, if you’ve been careful with your private key.

In other words, I can communicate secrets to you without the tricky prospect of securely and secretly sharing a secret key with you first. (Read that twice, just in case.)

On the other hand, if you scramble a message with your private key, anyone can unscramble it with the public key, but when they do, they know that you must have sent it.

So I can satisfy myself that it really is you at the other end, again without needing a secure and secret channel first.
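If you’d like to see the “one key locks, the other unlocks” idea in action, here’s a deliberately tiny Python sketch using textbook RSA numbers. (The primes 61 and 53 are hopelessly insecure and chosen purely for illustration; real public keys are hundreds of digits long.)

    # Toy RSA, for illustration only - the numbers are far too small to be secure.
    p, q = 61, 53             # two secret primes
    n = p * q                 # the public modulus, 3233
    e = 17                    # public exponent:  the public key is (n, e)
    d = 2753                  # private exponent: the private key is (n, d),
                              # chosen so that (e * d) % ((p - 1) * (q - 1)) == 1

    m = 42                    # a "message", represented as a number

    # Lock with the PUBLIC key, unlock with the PRIVATE key: secrecy.
    c = pow(m, e, n)          # anyone can compute this ciphertext...
    assert pow(c, d, n) == m  # ...but only the private-key holder can recover m

    # Lock with the PRIVATE key, unlock with the PUBLIC key: authenticity.
    s = pow(m, d, n)          # only the private-key holder can produce s...
    assert pow(s, e, n) == m  # ...but anyone can check that it matches m

The security rests on the fact that working out d from the public key means factoring n, which is trivial for 3233 but computationally infeasible at real-world key sizes.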

For item (3), you’re probably expecting another use of public-key cryptography, namely something like Diffie-Hellman-Merkle (DHM) key exchange, where each end agrees on a one-time encryption key that can never be recovered from sniffed traffic.

That means that even if someone records your entire call, including the “cryptographic dance” each end does with the other at the start, there isn’t enough data in the intercept alone to decrypt the call later, providing that both ends throw away the one-time key when the call ends.

→ The property of preventing decryption later on is known as forward secrecy, though it’s probably easier to think of it as “backwards security.”
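Here’s roughly what that cryptographic dance looks like, as a minimal Python sketch of Diffie-Hellman-Merkle with toy-sized numbers chosen purely for illustration. (Real implementations use moduli of 2048 bits or more, or elliptic curves.)

    import secrets

    # Toy DHM key agreement - tiny modulus, for illustration only.
    p = 4294967291            # a public prime modulus (2**32 - 5)
    g = 5                     # a public generator

    a = secrets.randbelow(p - 3) + 2     # my one-time secret exponent
    b = secrets.randbelow(p - 3) + 2     # your one-time secret exponent

    A = pow(g, a, p)          # I send you A over the sniffable network
    B = pow(g, b, p)          # you send me B over the sniffable network

    # Each end combines its own secret with the other end's public value...
    key_mine  = pow(B, a, p)
    key_yours = pow(A, b, p)
    assert key_mine == key_yours         # ...and we agree on the same one-time key

    # An eavesdropper sees p, g, A and B, but not a or b, so can't compute the key.
    # Throw away a, b and the key when the call ends, and an intercepted recording
    # of the whole exchange is no help in decrypting the call later.
    del a, b, key_mine, key_yours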

Guess what?

That’s not how SIM cards work.

For both the GSM and UMTS networks (the protocols behind 2G and 3G/4G mobile voice and data), SIM authentication and call encryption rely on a good, old-fashioned shared secret key.

You’ll often see that shared secret referred to as Ki, pronounced, simply, “kay-eye.”

It’s the key by which your SIM proves its identity and prepares to place a call.

When a SIM is manufactured, a randomly-generated Ki is burned into its secure storage.

That key can’t be read back out; it can only ever be accessed by software programmed into the SIM that uses it as a cryptographic input; it never emerges, in whole or in part, in the cryptographic output.
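In sketch form, that looks something like the Python below. Note that the real A3/A8 algorithms inside a SIM (operator-chosen variants such as COMP128 or MILENAGE) are quite different from the HMAC-SHA256 stand-in used here; the point is simply that Ki goes in as an input, and only values derived from it ever come out.

    import hmac, hashlib, os

    Ki = os.urandom(16)       # 128-bit secret burned into the SIM at manufacture

    def a3a8_standin(ki, rand):
        """Stand-in for the SIM's real A3/A8 algorithms (illustrative only).
        Returns (SRES, Kc): a 32-bit authentication response and a 64-bit call key."""
        digest = hmac.new(ki, rand, hashlib.sha256).digest()
        return digest[:4], digest[4:12]

    # 1. The network sends a random challenge, RAND, in the clear.
    rand = os.urandom(16)

    # 2. The SIM replies with SRES, computed from Ki and RAND; Ki itself never leaves.
    sres, kc = a3a8_standin(Ki, rand)

    # 3. The network, holding its own copy of Ki, does the same sum and compares.
    #    If the answers match, you're on the network, and Kc encrypts the call.
    expected_sres, expected_kc = a3a8_standin(Ki, rand)
    assert sres == expected_sres and kc == expected_kc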

If we assume that the SIM’s tamper protection is perfect, and that there are no cryptographic flaws that leak data about Ki (it seems there were some such flaws in the early days, but they have been fixed now), that ought to be that.

Even if I target you by borrowing your phone and getting the SIM into my own grubby hands, I can’t access that key, not even if I have an electron microscope and millions of dollars up my sleeve.

One tiny problem

But there’s one tiny problem: namely that a copy of every Ki for every SIM has to be kept for later, when the SIM is sold to a mobile phone operator and subsequently provided to a subscriber.

And as anyone who has uploaded a dodgy selfie onto a social network and seen it turn up later in unexpected places can tell you, the only way to be sure that no copies of confidential content get into circulation is…

…not to make a copy in the first place.

Sadly, secret-key encryption (also known as symmetric encryption) that involves two different parties, such as you and a mobile phone network, relies on having at least two copies of that secret key: one for you, and one for them.
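And that second copy is exactly what makes the leaked list so valuable. Continuing the same hedged sketch as before (the HMAC stand-in and the hex values below are made up for illustration), anyone who holds a copy of Ki and who recorded the RAND challenge, which is broadcast in the clear, can re-derive the call key at leisure:

    import hmac, hashlib

    def a3a8_standin(ki, rand):
        """Same HMAC stand-in for the real A3/A8 algorithms as in the sketch above."""
        digest = hmac.new(ki, rand, hashlib.sha256).digest()
        return digest[:4], digest[4:12]          # (SRES, Kc)

    # The attacker's view, possibly years after the call was recorded:
    stolen_ki    = bytes.fromhex("00112233445566778899aabbccddeeff")  # from the key list
    sniffed_rand = bytes.fromhex("0f1e2d3c4b5a69788796a5b4c3d2e1f0")  # from the intercept

    _, kc = a3a8_standin(stolen_ki, sniffed_rand)
    # kc is the very key that encrypted the recorded call: no stolen SIM, no hacked
    # handset, no tampering with the mobile network - just the copied Ki and patience.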

Why?

As far as we’re aware, the primary reason that GSM and UMTS rely on shared secret keys, and don’t support forward secrecy, is performance.

The processing power of SIM cards, and of many of the mobile devices they are plugged into, isn’t quite enough to do things properly.

Public key cryptography is well-known, and can be reasonably efficiently implemented, but it nevertheless isn’t anywhere near as efficient, in terms of CPU power and memory usage, as symmetric encryption.

So SIM authentication and call encryption are done nearly-properly instead.

With an unsurprising, if disappointing, outcome, assuming that The Intercept has this story correct.

The bottom line

We’ll keep it short.

If you’re going to encrypt your own stuff, do it properly.

