
TalkTalk breach: CEO dismisses encryption, 15-year-old arrested

There have been a lot of strange developments in the days since last week’s cyberattack on UK telecom TalkTalk, in which an unknown number of customers may have had their personal data accessed.

First up, the criminal investigation is progressing: the Metropolitan Police announced on Monday, 26 October, that a 15-year-old boy had been arrested “on suspicion of Computer Misuse Act offences” in connection with the breach.

The boy was arrested on Monday at an address in County Antrim, Northern Ireland, and taken into custody while the address was searched, according to a police statement.

The news of the arrest comes after days of conflicting reports about who was behind the attack.

The BBC initially reported that a “Russian Islamist group” had claimed responsibility for the attack; and on Monday, Motherboard reported that a member of the (defunct) hacktivist group LulzSec claimed responsibility for a distributed denial of service (DDoS) attack on TalkTalk’s website.

Security blogger Brian Krebs, citing sources “close to the investigation,” reported that a hacker group had demanded a ransom of £80,000 in bitcoins (about $122,000) in exchange for a stolen cache of customer data.

Krebs also reported that a user named “Courvoisier” had posted on a Dark Web forum called AlphaBay that he would be selling “hacked TalkTalk customer data.” (Update: Krebs reported that the data had been promised but not yet sold, as we previously wrote).

TalkTalk CEO Dido Harding is doing a lot of talking to the media, and in a video on the TalkTalk website she says she is “sorry for the frustration and concern that this is causing.”

Harding said on Saturday that the number of people affected in the breach was “materially lower” than first thought – certainly fewer than all of the company’s 4 million customers.

Meanwhile, TalkTalk’s FAQ about the incident says it’s still “too early” in the investigation to know how many people were affected.

TalkTalk is taking a lot of flak at the moment, some of it justifiable, which Harding acknowledged in an interview with The Guardian:

We are understandably the punchball for everybody wanting to make a point at the moment. Nobody is perfect. God knows, we’ve just demonstrated that our website security wasn’t perfect – I’m not going to pretend it is – but we take it incredibly seriously.

But the embattled CEO has also made some puzzling comments.

After it was pointed out that an IT security specialist revealed numerous security weaknesses in TalkTalk’s website last year, she responded by saying that TalkTalk’s security is “head and shoulders better than some of our competitors.”

The security specialist, Paul Moore, wrote in a blog post last September that representatives from the TalkTalk CEO’s office were “aggressive, defensive and dismissive” when he pointed out that the company’s My Account website and webmail service did not use TLS/SSL encryption.

Harding also said in an interview that TalkTalk did not encrypt customer financial information but was “not legally required” to do so – because the UK’s 1998 Data Protection Act does not explicitly require encryption.

Of course, if Krebs’s claims are true, and the data was extracted using what’s known as SQL injection – where an attacker smuggles database commands into a website through its input fields, tricking the database into serving up data it shouldn’t – encryption might not have been enough to prevent the breach in this case.
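To see why input fields are so dangerous, here is a minimal, entirely hypothetical sketch of the SQL injection technique described above, using Python’s built-in `sqlite3` module. The table name, column names, and attack string are illustrative assumptions, not anything from TalkTalk’s actual systems:

```python
import sqlite3

# Hypothetical customer database -- not TalkTalk's real schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (email TEXT, card_number TEXT)")
conn.execute("INSERT INTO customers VALUES ('alice@example.com', '4111-1111')")

# A classic attacker-supplied value typed into a web form's email field.
user_input = "' OR '1'='1"

# Vulnerable: building the query by string concatenation lets the input
# rewrite the SQL itself, so the WHERE clause matches every row.
leaked = conn.execute(
    "SELECT email, card_number FROM customers "
    "WHERE email = '" + user_input + "'"
).fetchall()
print(leaked)  # every customer row comes back

# Safer: a parameterised query treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT email, card_number FROM customers WHERE email = ?",
    (user_input,),
).fetchall()
print(safe)  # no rows -- the literal string matches no email address
```

The fix is not exotic: the parameterised form is a one-line change, which is part of why commenters below regard this class of hole as a neglected basic rather than a sophisticated attack.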

So far, however, all that we know is that we don’t yet know what happened…



Re: Encryption – the data was exfiltrated through SQL injection, so what good would at-rest encryption have done, given that the data was requested from the database by the valid authority – namely, the app itself?


That doesn’t change the fact that the CEO’s response was wrong. If somebody breaks into your house through a window you left unlocked and the insurance denies payment because you (also) left your door unlocked, you can’t counter with “yeah but they came in through the unlocked window”. An SQL injection requires an unfixed security hole, which is a knowingly neglected security issue, same as not using encryption. It shows a pattern of disinterest in securing entry points.


It’s time companies were held responsible. In fact, it’s long overdue. If the necessary precautions are taken to protect the information and a breach still happens, then it may be excusable. But this attitude of “everyone is being breached” doesn’t make it any more excusable or acceptable.

1- put up a proper firewall
2- encrypt the data
3- keep systems (offline and internal) patched

Is that really so hard? No, it’s not. It’s lazy and cheap (so they think). This neglect is costing us all, and it’s costing us in ways people don’t even realize. It’s time to make it obvious that not taking these minimal precautions will cost the breached party in a very direct way. The biggest problem at the moment is that the cost is carried by individuals and society as a whole, not by the party being neglectful. And that *must* change in a very immediate way.
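On point 2 of the list above, the clearest case is passwords, where “encrypt the data” really means storing a salted, slow hash so that a dumped table yields nothing directly usable. A minimal sketch using only Python’s standard library (the iteration count and password strings are illustrative assumptions):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A fresh random salt per password defeats precomputed lookup tables.
    salt = salt or os.urandom(16)
    # PBKDF2 is deliberately slow, making bulk offline cracking expensive.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Reversible data such as card numbers needs actual encryption rather than hashing, but the principle is the same: the secret material should never sit in the database in a form an attacker can use as-is.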


I agree that people being held responsible would change things significantly, and there are a number of ways that might happen. One is regulation, but you already see the limits of that in TalkTalk’s response (we did everything we were required to…). Regulation is, I think, a backstop – I’m glad it’s there, but it will probably never be enough.

Talk Talk’s response also shows that there *are* consequences already though. They wouldn’t be doing so much PR if they didn’t think sales would suffer.

Reputation will be lost, customers will be lost and, just possibly, jobs will be lost.

It will take a long time for the message to sink in though simply because although doing the things you suggest would help, there are a lot of things that you need to do that cannot be controlled centrally. An individual user’s careless actions can cost an entire organisation.

Phishing keeps a lot of IT people up at night because it represents each employee’s daily opportunity to compromise the company. Good defence means meaningful education, which means culture change, which is hard.

Likewise, if this was a SQL injection attack then a Web Application Firewall might have helped but it’s fundamentally a simple mistake by an individual developer or, more broadly, a failure in the review or deployment process for code. Again, something that requires a culture change to fix.

Nothing will change without consequences but my sense is that it takes a *long* time for the consequences to filter down because there are so many individual decisions that people take that have a huge leveraging effect on the security of a whole company and those consequences have to play out on all of them.


In my opinion little will change until the consumer becomes educated enough to ask the proper questions of their online data custodians as P. LeCavalier suggests (sort of).
If companies cannot get customers, or lose business, because consumers are aware that the company has not met the basic criteria to protect customer data, they will just “fail”, and companies providing better security will prosper.


A major problem is that even when there are consequences, it is not felt by those at the top of the chain.

When a CEO (on very rare occasions) is held accountable, they may be fired and then go on to be hired by another company to drive that one into the ground as well.

The only way corporate executives will get the picture is to hold them financially responsible for their failures, which is not what today’s society is all about. There is always another excuse why they did not do any more than the absolute minimum required by law instead of doing what they know is right. Doing the latter has an immediate though slightly negative impact on the corporate profits, thus their salaries and bonuses which is intolerable. Their overall goal only seems to be maximizing their personal bank account regardless of the long-range implications for the company.


Why don’t these big money-making corporations spend money on security? Why can’t they make the security good enough?

