Naked Security

Why government plans to spy on WhatsApp will fail

After last week's attack in London, the home secretary called on television for cryptographic regression, but that won't deliver what she wants

Last week, a man deliberately ran over more than 50 pedestrians on Westminster Bridge in London.

Four of the victims have already died of their injuries.

The attacker then jumped from his car and charged into the area surrounding the Palace of Westminster – the UK’s Houses of Parliament, seat of the national legislature – and stabbed an unarmed policeman to death, before being shot dead himself.

The UK has had to come to grips with what was, in effect, a terrorist attack, albeit by a man who might well turn out to be a lone sympathiser with the so-called Islamic State.

Understandably, people want to know not only how this attack came about, but also how it might have been foreseen and prevented.

When news emerged that the attacker might have used WhatsApp shortly before the attack, anyone with an interest in computer security, privacy and encryption knew just what was coming next…

…an official call to regulate secure messaging services, and to force companies like WhatsApp to deliver their services in a way that makes surveillance and investigation easier.

And that call has come, loud and clear, from none other than Amber Rudd, the UK home secretary, the UK equivalent of the American secretary of homeland security.

According to UK newspaper The Guardian:

The home secretary said it was “completely unacceptable” that the government could not read messages protected by end-to-end encryption and said she had summoned leaders of technology companies to a meeting on Thursday 30 March to discuss what to do.

What to do?

Strictly speaking, as a matter of terminology, true end-to-end encryption can’t be intercepted in transit, at least not without the sender or recipient noticing.

If you can decrypt an enciphered data stream along the way – whether for archiving, surveillance or even simply for scanning for risky content such as spam or malware – then you didn’t really have end-to-end encryption in the first place.

Indeed, many services we think of as “encrypted” are subject to what’s called lawful interception, which is supposed to mean that, with the right sort of authorisation from the judiciary, supposedly confidential data that was sent or stored using the service can be recovered.

Lawful interception may lead to traffic being monitored in real time, or (given the sheer volume of data involved these days) recovered and decrypted later to help an investigation or prosecution.

Decrypting at the end

For example, your online banking transactions are typically encrypted end-to-end as you conduct them, but the bank needs to keep a permanent record of what you did – for its own rather obvious commercial reasons, as well as for regulatory purposes.

Likewise, you typically keep a record at your end, in case of disagreements.

If you’re wise you’ll store your bank statements on an encrypted disk or in an encrypted file, but you can recover them later if you choose.

Courts can, and often do, order the release of banking data for legal purposes: even if you don’t comply, the bank almost certainly will, using its own decryption keys to unlock the data unilaterally if needed.

Decrypting in the middle

Likewise, mobile phone networks are required to make technological provisions for lawful interception, so that they can comply with court-imposed orders to unlock both phone calls and SMS messages.

That way, even if both you and the recipient of a message covered by a court order refuse to co-operate, or claim you no longer have the relevant data, the mobile network operator can, in theory at least, step in and come up with the data you can’t or won’t reveal yourself.

Technically, the lawful interception process in the GSM and UMTS cellular networks isn’t a backdoor, because it’s not covert, or undocumented, or a regulatory secret.

Whether you approve or not, it’s a documented feature rather than a sneaky hole or a bug.

Mobile phone networks therefore don’t really use end-to-end encryption: the traffic is encrypted between each subscriber and the network, but is generally decrypted and re-encrypted in the middle, where it may be subject to lawful interception.
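
The distinction above can be sketched in a few lines of code. This is a toy illustration only – a SHA-256-based XOR keystream stands in for real ciphers, and nothing here resembles actual GSM/UMTS or WhatsApp cryptography – but it shows why a hop-by-hop network can read traffic in the middle while an end-to-end design cannot:

```python
# Toy contrast between hop-by-hop and end-to-end encryption.
# NOT real cryptography: a SHA-256 counter keystream stands in
# for a proper cipher, purely to illustrate who can read what.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data against a SHA-256-derived keystream (toy stream cipher)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

msg = b"meet at the bridge"

# Hop-by-hop (mobile-network style): each link has its own key,
# so the network decrypts in the middle before re-encrypting...
link_key_a, link_key_b = b"subscriber-A-link-key", b"subscriber-B-link-key"
on_the_wire = keystream_xor(link_key_a, msg)
seen_by_network = keystream_xor(link_key_a, on_the_wire)
assert seen_by_network == msg           # ...and can read the plaintext
relayed = keystream_xor(link_key_b, seen_by_network)

# End-to-end: only the two endpoints share the key; the service
# in the middle just forwards bytes it cannot decrypt.
e2e_key = b"shared-only-by-endpoints"
ciphertext = keystream_xor(e2e_key, msg)
assert ciphertext != msg                # opaque to the service
received = keystream_xor(e2e_key, ciphertext)
assert received == msg                  # but readable at the far end
```

In the first half, the relay holds both link keys, so lawful interception is a matter of policy; in the second, no key ever exists in the middle, so there is nothing for a warrant to unlock.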

Cutting out the middleman

The “decrypt and disclose on due demand” regulations that apply to industries like online banking and mobile telephony don’t apply to services like WhatsApp, which is neither a financial institution nor a cellular network.

WhatsApp, and similar services, can and do provide a true end-to-end encryption system, implemented so that it’s not possible to decrypt the data in the middle.

All the service sees is that data is flowing – it can’t see what’s inside the traffic, even if it wants to – so there’s no point in a subpoena or warrant demanding that services of this sort reveal and decrypt messages, either in real time or after the fact.

That’s not a mistake – it’s a feature. (Indeed, it’s trickier to program proper end-to-end encryption via a middleman than it is to encrypt just from each end to the middle.)

It’s a feature because if you don’t collect the data in the first place, then you can never leak it by mistake, for example in the event of a data breach.

And you can never be forced to reveal it against your own moral compass, for example in the face of a hostile government, or as a result of an unexpected change in the law that you were unable to warn your users about.

Can Rudd’s will be done?

All of this raises the question, “Given the way that true end-to-end encryption works, is the home secretary wasting her time making her demand in the first place?”

Technically, no, because she isn’t asking for the impossible.

WhatsApp and other products of the “true encryption” sort could indeed be compelled by UK law to behave like mobile phone services, and forced to reimplement their software, regressing it to make lawful interception possible on demand.

Would it work?

Would this be a workable idea in practice, and would it be worthwhile?

We don’t think so, for several important reasons:

  • UK law wouldn’t apply to services run by US companies and operated outside the UK, for example on servers hosted elsewhere in the EU. So the companies could simply do nothing, other than to close down any part of the business run inside the UK. That would leave everyone in the UK, including the vast number of law-abiding WhatsApp users, stuck with less secure communications than people in the rest of the world. It’s hard to see how this could end positively.
  • The UK on its own wouldn’t have much of a stick to beat companies like WhatsApp with, nor much of a carrot to entice them to change. If the entire EU were to stand behind the UK, that might help, as it helped in squeezing Google to accept the so-called “right to be forgotten” rule. But the UK will soon be leaving the EU, and anyway, not all EU countries agree about the principle of weakening encryption in the hope of somehow making it stronger.
  • Governments that have attempted to force secure messaging services to fall back to weaker encryption methods “for the good of the country” have typically lost face internationally as a result. Forced decryption may make a country look oppressive, backwards-looking, or a risky choice for future economic investment. This is especially true in an era of concerns about state-sponsored industrial espionage and other anti-competitive behaviour.
  • Despite having apparently been radicalised in prison, and having come to the notice of the UK security services in recent years, the Westminster attacker wasn’t under scrutiny at the time of his murderous rampage. So no one in the intelligence community or in law enforcement would have been watching his WhatsApp messages anyway.

In other words, if the recent Westminster tragedy is the sort of case that Amber Rudd’s proposed cryptographic regression hopes to deal with…

…we’re talking about after-the-fact investigation of personal communications that were collected en masse “just in case”.

That means the nation-state scale accumulation of personal, private messages – data that will need to be collected from everyone in the UK, if the process is to be effective after the fact – and the concomitant need to store it securely for later, “just in case”.

Can you imagine what an appealing target all that data would make, especially to the very criminals and terrorists against whom it was supposedly collected in the first place?


15 Comments

In regards to your “Understandably, people want to know not only how this attack came about, but also how it might have been foreseen and prevented”: it is very simple. Vet the refugees you bring in extensively, because if you don’t, garbage like that will sneak in.

It’s not *quite* as simple as you suggest. For example, the Westminster attacker was born in England and seems to have spent his whole life in the UK.

To implement your solution would therefore require the UK to restrict freedom of movement *inside* the country, implementing internal passports, a raft of land border checkpoints, and an officially-managed inland visa system.

In a country that considers it a needless waste of money to switch signposted road distances and speeds from miles and mph to match the system of weights and measures used for everything else…

…all I can say is, “Good luck with your master plan.” That’s on economic grounds alone, long before you get near your first constitutional lawyer.

As Duck points out, this attacker was born and bred in the UK. Not a refugee. In fact, can you point me to any example of a refugee committing a terrorist act in the UK?

refugee, n. (REF-yoo-jee):
see James Holmes, Eric Harris, Dylan Klebold, Aaron Alexis, Michael McLendon, Andrew Kehoe, Adam Lanza, Patrick Sherrill, Mark Barton, William Smith, George Banks, Ronald Simmons.

yep

Maybe they will start doing what the US Government is doing with FIPS and advertising how great it is to get certified under it. Knowing full well that the NSA and CIA can most likely crack everything under its banner.

Serious question,

If these companies weakened their encryption, what happens if the terrorist encrypts his message manually with something like a one-time pad?

Would the government be able to do anything about that seeing as you only need paper, a pen, and 10-sided dice?

– Karen
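
Karen’s pen-and-paper scenario is easy to make concrete. The following sketch uses Python’s `secrets` module to stand in for dice rolls; on paper you would roll for each letter and add the shifts by hand. With a truly random key as long as the message, used once, the ciphertext is information-theoretically secure, so no amount of weakened app encryption helps:

```python
# Minimal sketch of a classical one-time pad over the letters A-Z.
# secrets.randbelow stands in for dice rolls; in the pen-and-paper
# version each shift would come from physical dice.
import secrets
import string

ALPHABET = string.ascii_uppercase

def make_pad(length: int) -> list[int]:
    """One truly random shift (0-25) per letter of the message."""
    return [secrets.randbelow(26) for _ in range(length)]

def otp(text: str, pad: list[int], decrypt: bool = False) -> str:
    """Shift each letter forward by its pad value (or back, to decrypt)."""
    sign = -1 if decrypt else 1
    return "".join(
        ALPHABET[(ALPHABET.index(c) + sign * k) % 26]
        for c, k in zip(text, pad)
    )

message = "ATTACKATDAWN"
pad = make_pad(len(message))
ciphertext = otp(message, pad)
assert otp(ciphertext, pad, decrypt=True) == message
```

The ciphertext could then travel over any channel at all – including a deliberately weakened messaging app – and interception would reveal nothing without the pad.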

Or even that you take your draft email and run it through Sophos Encryption and send the file to your “accomplice”. If each pair of correspondents has a shared password, your content (if not your metadata) is secure. (Far easier than using PGP.)

That of course is assuming that Sophos Encryption has not been backdoored? Any canary declaration available?

Karen, that’s conceivable; it’d likely escalate in a manner similar to recent years’ spam:
1) Atypical messages trigger filters as possibly being manually encrypted, and while spam gets discarded, these would be further scrutinized with varying–and probably minimal–success.
2) To further evade detection the bad guys escalate the amount of tangled messages to confuse the filters and make them less effective, like appending 20 random sentences (not even full/correct ones) to an email. Some spam is designed to sell you something; some is designed to render your filter worthless and allow the first batch through.
3) The volume of non-useful messages vastly outweighs the real ones and overwhelms the software (and staff) trying to detect and interpret it.

Organized groups would eventually make the additional effort, but the Westminster Bridge tragedy seems to be a solitary and near-spontaneous act–very difficult to predict, let alone act in time to prevent.
:-(

Another way I’ve seen is someone sends an innocent looking email, except the receiver subtracts say, 5 from the number of words in each sentence, thus giving the numbers to decrypt a message using the one-time pad.

There are just too many ways the government’s idea will fail.
– Karen
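
The word-count scheme described above is trivial to implement, which is rather the point: the covert numbers are simply (words per sentence) minus the agreed offset, so an innocent-looking email can carry, say, one-time-pad digits with no encryption in transit at all. A minimal sketch, assuming an agreed offset of 5:

```python
# Sketch of the word-count steganography scheme: both parties agree
# on an offset in advance; each sentence's word count, minus that
# offset, yields one hidden number.
import re

OFFSET = 5  # agreed in advance by both correspondents

def extract_hidden(text: str) -> list[int]:
    """Recover the hidden numbers from sentence word counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) - OFFSET for s in sentences]

email = ("I hope you and the family are doing well today. "
         "Did you see the match. "
         "The weather here has been lovely.")
# word counts: 10, 5, 6  ->  hidden numbers 5, 0, 1
assert extract_hidden(email) == [5, 0, 1]
```

Nothing in the message looks encrypted, so there is no ciphertext for a filter to flag, and no key for a regulated service to hand over.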

It is a conundrum to say the least. On one hand I can understand the desire to be able to intercept messages passed between malicious parties in order to prevent these sort of atrocities. On the other hand you have people’s right to communicate and conduct business privately and securely. And in the middle you have the privacy technologies that both good honest people and the “malicious parties” enjoy. Government surveillance of everyone is not a good thing no-matter how you dress it up. This may boil down to a case of the ever present human factor I see all too often in today’s world where a misbehaving few ruin something good for everyone. I personally don’t want to live and work in a world with “weakened” encryption for the sake of government surveillance and I hope it doesn’t come to that, though in the end I have my doubts that it will.

Such a regulation would be ineffective. What tends to happen is one of the following scenarios:

If the security of the application is weakened, the users will migrate off the application. Should ALL applications be mandated to do this, it will be nearly impossible to police. Sure, you could force Apple not to sell those applications on its store, but that does not stop people from getting the application other ways. And frankly, the people who most don’t want to be listened to by the authorities will quickly use the bypasses to get the applications that are secure.

“Terrorists” have already learnt the lesson of being intercepted by the authorities and have learnt to speak in code phrases even online. These phrases have no keywords or any indication that they have any special meaning. The result is that global surveillance will not pick up on this as they are just not stupid enough to type “Hey, I’m going to bomb that building next Thursday”

Should such messaging applications be insecure then the people who don’t want prying eyes will simply switch away from them to a secure channel. You just can’t win this race of evolution simply because the legislation is always behind the practice. There will always be other ways of communicating securely.

What is left? A weakened security system for all that leaves the country’s communications vulnerable to outside actors. In none of this do you see guarantees that the government will protect the citizens. They want to watch them. If other people can read the data then that is a price the government is willing to pay. But are we?

“The legislation is always behind the practice.”
Well said; that sums up this entire subject.

Whatever WhatsApp does for one country will apply to all countries. And this will enable totalitarian regimes like the one currently in India to selectively persecute their citizens. Freedom of speech will be lost.

The whole rationale of the attack on WhatsApp seems to be that WhatsApp provides e2e encryption for the masses, and therefore impedes the security services’ favourite form of surveillance, namely, bulk collection and retro-analysis – which can be undertaken for any number of purposes, not just terrorist cases.

+1, because once you know you can “take a look” then you can suddenly realise that although the WhatsApp data wasn’t actually much good for the case you used as your raison d’être, it really *is* good for just about everything else, and so it would be immoral not to take advantage of all that juicy crime-fighting data…

…and so on.

It doesn’t take a lot of imagination to envisage, in a world where such large-scale (and, frankly, intrusive) data processing is suddenly “the right thing to do”, that you’ll need third-party help by giving carefully chosen security experts access to the data to mine it more effectively. With appropriate checks and balances to stop breaches, leaks and over-use of the data, of course. History says that ends badly. (Chelsea Manning, QED.)
