Naked Security

Backdoors ‘punish the wrong people’, EU security body warns

EU security body comes out strongly against governments compromising encryption, warning of decreased trust and innovation

Building backdoors into encryption to counter criminals and terrorists is doomed from the start, EU cybersecurity policy body ENISA has warned in a new discussion document.

The agency offers a number of arguments in support of its conclusion, which can be summed up with reference to the unwritten first law of backdoors: they only work well when the people targeted by them don’t know they’re there.

As soon as they do, they no longer trust that technology or service and stop using it, undermining the point of putting the backdoor there in the first place.

Using them in the way mooted by successive UK and US governments over the last quarter of a century – including through independent key escrow – would risk ushering in a dystopian era of unintended consequences, ENISA said.

The first is that cybercriminals would quickly migrate to alternative encryption platforms or, worse still, start building their own. At that point, police and intelligence agencies would find themselves at an even greater disadvantage than they are today.

A second consequence is that the same cybercriminals (including nation states) would hunt down the secret backdoors, using any they found to turn the system against ordinary internet users.

By that point, the network of trust that makes the internet possible might start to collapse as billions of users and businesses wondered which bits of software (digital certificates, HTTPS, messaging privacy and encrypted personal data held by organisations) were safe to use.

Backdoors could end up being more like an open door, says the agency: “The use of backdoors in cryptography is not a solution. Existing legitimate users are put at risk by the very existence of backdoors. The wrong people are punished.”

ENISA steers clear of mentioning another seeming paradox brought into sharper focus by legislation such as the recent UK Investigatory Powers Act (IPA), also called the “Snooper’s Charter”.

Citizens value encryption because it secures them against criminals. But when governments start watching citizens, encryption also protects them from those governments, trust in which has been falling post-Snowden.

Ergo, overbearing surveillance risks spreading the very thing that annoys governments most: ever more sophisticated encryption technologies such as the end-to-end design used by the hugely popular WhatsApp.

These work without centrally held keys, which makes life much harder for snoopers. It’s an evolution that makes backdoors futile and blunts surveillance:

“There is every reason to believe that more technology advances will emerge that will continue to erode the possibility of identifying or decrypting electronic communications.”
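The “no centrally held keys” property rests on key agreement in the Diffie–Hellman style, which end-to-end designs such as WhatsApp’s build on. Here is a textbook-sized sketch with toy parameters (nothing like the real Signal-protocol machinery, purely an illustration of why there is no central key to escrow):

```python
# Textbook Diffie-Hellman with toy parameters (real systems use 2048-bit+
# groups). Two parties derive the same key; no server ever holds it, so
# there is nothing for a backdoor or escrow scheme to seize centrally.
p, g = 23, 5                         # public modulus and generator (demo-sized)

alice_secret, bob_secret = 6, 15     # each side's private value, never sent
A = pow(g, alice_secret, p)          # Alice publishes A = g^a mod p
B = pow(g, bob_secret, p)            # Bob publishes B = g^b mod p

alice_key = pow(B, alice_secret, p)  # Alice computes B^a = g^(ab) mod p
bob_key = pow(A, bob_secret, p)      # Bob computes A^b = g^(ab) mod p
assert alice_key == bob_key          # shared secret, never transmitted
```

Only the public values `A` and `B` ever cross the wire; an eavesdropper who sees them still faces the discrete-logarithm problem to recover the shared key.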

European governments are under no obligation to take ENISA’s advice on any of this, but at least they can’t say they weren’t warned.

Encryption is the great technological leveller. No matter how much police and intelligence services wish it otherwise, encryption was invented to secure communication, not undermine it.


9 Comments

You said, “At that point, police and intelligence agencies would find themselves at an even greater disadvantage than they are today.”

I doubt that would be possible. First, there are lots of ways law enforcement can crack encryption.

Second, and more importantly, it has taken decades of work by some pretty sharp people to build encryption that’s as good as it is. Criminal organizations have some sharp people, but they don’t have the experience. It seems likely that they would make mistakes similar to the ones universities, businesses, and governments made over the last few hundred years.

Encryption is not easy to do correctly.

Even assuming law enforcement can ‘crack encryption’, they can’t do it at any scale within a given period of time. They need a warrant and they need time. Replicating that process across thousands or even millions of people is not practical, which is why they have engaged in a battle with companies such as Apple to secure a shortcut.

I think you missed my point (or, more likely, I didn’t say it well enough). I’m talking about the effect on law enforcement of criminal organizations using their own encryption “products” in their schemes.
Criminals who create encryption methods on their own are much more likely to end up with crackable encryption.
Compared with their current method of using industry-standard encryption, that would leave criminals at a disadvantage. They would have to learn the nuances of hashing, salting, algorithms and so on, and if they make a mistake in that process, they’ve opened up their encryption.
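The commenter’s “mistakes” point can be made concrete with a toy example (hypothetical code, not from the thread): a home-rolled repeating-key XOR cipher looks like encryption, but it leaks plaintext structure outright and falls to a simple known-plaintext attack.

```python
# A toy illustration of why home-rolled crypto fails in subtle ways:
# a repeating-key XOR "cipher" leaks plaintext structure, because plaintext
# repeated at a multiple of the key length encrypts to identical ciphertext.
from itertools import cycle

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Naive repeating-key XOR -- encryption and decryption are the same op."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"SECRET!!"                # 8-byte key: the keystream repeats every 8 bytes
msg = b"ATTACK AT DAWN!!" * 2    # a repeated 16-byte phrase (multiple of key length)
ct = xor_encrypt(msg, key)

# The repetition survives "encryption" -- an analyst spots the structure at once.
assert ct[:16] == ct[16:32]

# Round trip works, but so does a known-plaintext attack: XORing ciphertext
# with guessed plaintext recovers the key directly.
assert xor_encrypt(ct, key) == msg
assert xor_encrypt(ct[:8], msg[:8]) == key
```

Mistakes of exactly this shape (keystream reuse, no authentication, guessable structure) are what took the mainstream cryptography community decades to learn to avoid.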

Thank you for pointing to this paper. Timing is great for the current debate in the Dutch parliament about exactly this. Badgering my representatives with it in 5…4…

Here’s a plain-talking argument along the same lines from a company with many years of experience in the field of cryptography – a company that lived through the period when the US mandated a form of backdoor (deliberately weakened keys for export) and the unintended side effects it had for well over a decade *after the rules were changed*.

So it’s not just politicians saying so :-)

https://sophos.com/nobackdoors

Encryption and security are not easy to do correctly – particularly when they need to be interoperable, i.e. when they have to work between two parties using common standards.
However, why would any criminal or terrorist trust standard encryption? If I only need to communicate with a few people, I can build my own encryption and use the power of my GPU to handle longer keys – standard encryption tends to sit at the lower end so that it works on lower-performance devices. I only need to share the key-derivation method with the people I communicate with – such as using the date to calculate which verse of the Bible (available in all good hotel rooms) to use as part of the decryption key, then adding in a few other factors. An eavesdropper then not only has to decrypt the message, but first has to figure out what code methodology is being used.
If a backdoor gets into the wrong hands, you have an even bigger problem – especially as it might be years before it comes out (like the Yahoo password breach).
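The date-plus-verse idea above can be sketched as a key-derivation routine. Everything here – the verse list, the extra “shared secret” factor, the use of SHA-256 – is a hypothetical illustration of the commenter’s scheme, not a recommendation: low-entropy inputs like dates and published texts make for a weak key against a determined attacker.

```python
# Hypothetical sketch of the commenter's scheme: derive a shared key from
# today's date plus an agreed text, so only the *method* -- never the key
# itself -- has to be communicated in advance. Illustration only: the
# inputs are low-entropy and guessable, which is exactly the kind of
# mistake home-rolled crypto makes.
import datetime
import hashlib

VERSES = [  # stand-in for "a verse of the Bible"; any shared text works
    "In the beginning God created the heaven and the earth.",
    "And the earth was without form, and void.",
    "And God said, Let there be light: and there was light.",
]

def derive_key(date: datetime.date, shared_factor: str = "room 101") -> bytes:
    """Pick a verse from the date, mix in an extra factor, hash to 32 bytes."""
    verse = VERSES[date.toordinal() % len(VERSES)]
    material = f"{date.isoformat()}|{verse}|{shared_factor}".encode()
    return hashlib.sha256(material).digest()

key = derive_key(datetime.date(2017, 1, 9))
assert len(key) == 32  # sized for use as e.g. an AES-256 key
```

Both parties who know the method and the date derive the same 32-byte key with nothing key-like ever transmitted – which is the commenter’s point – but an attacker who reconstructs the methodology can enumerate dates and candidate texts cheaply.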

Great article, John – thank you.
How are backdoors expected to be implemented for government purposes? Do they approach the hardware appliance vendors? Or organisations that use them?
What’s to stop organisations using their own end-to-end encryption over the existing ‘standard’ encryption techniques if backdoors are present?

Rumours of US government backdoors go back decades. As for hardware, anything used by US governments in a security context must be approved by NIST and (in most cases), the NSA. Read into that what you will.
