Naked Security

Bank leaks 60,000 account details in three character email slip-up

Exactly what happened isn't certain, but it looks as though a country code left off the end of an email address was all it took.

One of Australia’s “big four” banks has found itself caught up in an ongoing problem caused by an errant email.

According to a report late last year, the issue started with a CC that wasn’t supposed to be there, causing a recipient outside the bank to get a copy of an email they weren’t supposed to see.

CC, of course, is short for the anachronistic term carbon copy, from the days when a carbon-impregnated film was slipped between two sheets of paper that were then typed at the same time.

(You could do three copies, or even more, with a thicker carbon paper sandwich, but each typewriter had a mechanical limit beyond which the bottom sheets of paper would come out too faint to read, or would skid out of position and end up garbled.)

The problem with CC in modern email is that everyone on the list gets a copy of everyone else’s email address, which is often not a good idea, especially if it’s a routine message to lots of different customers who aren’t supposed to learn everyone else’s identity.

That sort of blunder happened to the New Zealand public service back in 2013, when the Ministry for the Environment CCed a list of people who should have been BCCed.

(BCCs are blind carbon copies: everyone gets a copy of the message, but not of the recipient list.)

The Ministry caused much mirth at the time by issuing an apology for using CC instead of BCC, once again using CC instead of BCC.

The first apology was then followed up with a Monty Pythonic apology for the previous apology.
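Mechanically, the CC/BCC difference comes down to where the addresses go: BCC recipients appear only in the SMTP envelope, never in the message headers that every recipient can read. A minimal sketch using Python's standard `email` library (all addresses hypothetical):

```python
from email.message import EmailMessage

# Build a message with a visible To address and hidden BCC recipients.
msg = EmailMessage()
msg["From"] = "newsletter@example.com"
msg["To"] = "announce@example.com"   # the only address recipients will see
msg.set_content("Routine notice to all customers.")

# BCC addresses belong in the SMTP envelope only -- never in a header.
bcc = ["alice@example.org", "bob@example.net"]
envelope_rcpts = ["announce@example.com"] + bcc

# smtplib.SMTP.send_message(msg, to_addrs=envelope_rcpts) would deliver
# to everyone, yet the header block contains no trace of the BCC list.
print("Bcc" in msg)   # False
```

The New Zealand blunder above amounts to pasting the `bcc` list into the `To:` or `Cc:` header instead of keeping it in the envelope.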

In this latest case, National Australia Bank (NAB) only CCed one person, apparently, so this was a blunder of a different sort: the sender picked the wrong recipient.

The irony here, as far as we can tell from the information that’s publicly available, is that the sender made the tiniest of slips, and one you can probably imagine making yourself under the circumstances.

Australia, like numerous other countries, won't let you have a commercial domain such as nab.com under its country code; you get nab.com.au instead.

Unfortunately for NAB, the problem seems to have arisen from the fact that while the company owns nab.com.au, it doesn't own nab.com, and as far as we can tell, an attachment with basic data about 60,000 bank accounts was sent to an address at nab.com instead of nab.com.au.

The mail server for nab.com is listed as Google, presumably because the domain is signed up to Gmail, but Google won’t help track down recipients in cases like this without a court order.

Apparently, NAB spent the holiday season trying to get Google to figure out what might have happened to the offending data, but without success.

An updated report on the story suggests that NAB is now dealing directly with the owner of nab.com, but, judging from the state of the website, it doesn’t look as though he’s doing much with the domain at the moment, and therefore can’t be much help in tracking down the missing data, if indeed it ever reached anyone.

The result therefore seems to be that:

  • The email was accepted by Google’s mail service, so in a formal sense it was delivered.
  • The email didn’t reach any known user, so in an informal sense, it wasn’t received.

In short, it’s highly likely that no harm was done, because the email and its personal data will never be seen again, but it’s impossible to be sure.

And that’s the dilemma that exists after many breaches: your head tells you that everything is OK, but you can’t put your hand on your heart and affirm to your customers (or the privacy commissioner) that it will stay that way.

What to do?

Sending emails to the wrong person is surprisingly easy to do by mistake: if a close-but-not-correct username or domain name doesn’t trip you up, the automatic name completion feature that many email systems provide may do so instead.

Here are some tips to reduce the risk in your organisation:

  • Use an automatic file encryption system to keep internal files safe from outside eyes, even if they are copied or emailed out.

For example, Sophos Safeguard can be configured so that it automatically encrypts any business files you save, but won’t automatically decrypt them when you send them by email. If you do accidentally send an internal email to the outside world, any external recipients will receive attachments that are no more use than shredded cabbage.
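SafeGuard’s exact behaviour is product-specific, but the underlying idea — encrypt at rest, so a stray copy is useless without the key — can be sketched with the third-party `cryptography` package (key handling here is deliberately simplified for illustration):

```python
from cryptography.fernet import Fernet

# The key stays inside the organisation (e.g. on a key server);
# it never travels with the file.
key = Fernet.generate_key()
f = Fernet(key)

report = b"acct,balance\n12345678,1000.00\n"
ciphertext = f.encrypt(report)

# An external recipient who gets the attachment but not the key sees
# only opaque bytes -- "no more use than shredded cabbage".
print(ciphertext[:20])

# Internal holders of the key can still read it.
print(f.decrypt(ciphertext) == report)   # True
```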

  • Use an outbound email filter to block emails to commonly mistyped domains.

It’s hard to predict your users’ most likely typing mistakes in advance, but with experience you will probably be able to make a useful list of common typos. Typosquatting, where unscrupulous businesses deliberately try to profit from obvious blunders, is surprisingly common.
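One simple way to build such a filter: compare each outbound recipient domain against the domains you know you legitimately correspond with, and hold anything that is a near miss. A sketch using Python’s standard `difflib` (the known-good list is hypothetical):

```python
import difflib

# Domains we legitimately send to. Anything *close* to one of these,
# but not equal, is a likely typo and worth holding for review.
KNOWN_GOOD = {"nab.com.au", "example.com.au", "partner.example.org"}

def likely_typo(recipient_domain: str) -> bool:
    """Flag domains that nearly (but not exactly) match a known-good domain."""
    if recipient_domain in KNOWN_GOOD:
        return False  # exact match: fine
    close = difflib.get_close_matches(recipient_domain, KNOWN_GOOD,
                                      n=1, cutoff=0.8)
    return bool(close)  # near miss: probably a slip

print(likely_typo("nab.com"))      # True -- one country code away
print(likely_typo("nab.com.au"))   # False -- exact match
```

A real filter would also consult a curated list of known typosquatted domains, since similarity scoring alone can’t tell an innocent near-miss from a malicious one.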

  • Consider using a data loss prevention (DLP) solution to identify content that shouldn’t be leaving the organisation.

For example, the Sophos DLP system can look inside emails and attachments to warn (or to prevent) users sending the right data to the wrong place, or from sending the wrong data to the right place.
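At its simplest, a DLP rule is pattern matching over outbound content. A toy sketch in Python that flags text containing what looks like an Australian BSB-plus-account-number pair (the pattern is an illustrative assumption, not how any particular product works):

```python
import re

# Hypothetical rule: a 6-digit BSB (xxx-xxx) followed by a 6- to 10-digit
# account number looks like bank account data leaving the building.
ACCOUNT_PATTERN = re.compile(r"\b\d{3}-\d{3}\s+\d{6,10}\b")

def contains_account_data(text: str) -> bool:
    """Return True if outbound text appears to contain account details."""
    return ACCOUNT_PATTERN.search(text) is not None

print(contains_account_data("BSB 083-004 12345678, balance $1000"))  # True
print(contains_account_data("See you at the 3pm meeting"))           # False
```

A production DLP system would unpack attachments and archives before scanning, since a CSV inside a ZIP leaks just as effectively as one pasted into the message body.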

  • Create a culture that discourages sharing database dumps by email.

Don’t use email attachments for sharing data internally, but instead use email messages to describe how to get hold of the relevant data using the appropriate internal server, such as the content management system. Your email administrators will thank you for reducing the size of the email archives they need to maintain, and your auditors will thank you because they don’t have to worry about uncontrolled database copies lying around and getting stolen.

Remember: if in doubt, don’t let it out.


With regard to the following: if catch-all email is enabled, it can be assumed that the email did indeed land in the preconfigured mailbox. It could be the admins/owners…. Along similar lines, one should also discourage testers/devs from using live domains, and instead use example.org or example.com, as those are earmarked for testing purposes per RFC:

“The result therefore seems to be that:

The email was accepted by Google’s mail service, so in a formal sense it was delivered.
The email didn’t reach any known user, so in an informal sense, it wasn’t received. “



The relevant RFCs are as follows.

Special-use domain names, e.g. for documentation: RFC 2606 and RFC 6761.

Special-use IP numbers, e.g. for private networks, testing and documentation: RFC 1918 and RFC 5737.
(As an aside, private IP numbers are intended for actual use, not for documentation.)
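The names those RFCs reserve can be checked mechanically; a small sketch (the lists follow RFC 2606 and RFC 6761):

```python
# Names reserved by RFC 2606 / RFC 6761 for testing and documentation --
# safe to use in examples because they never resolve to a real mailbox.
RESERVED_TLDS = {"test", "example", "invalid", "localhost"}
RESERVED_DOMAINS = {"example.com", "example.net", "example.org"}

def is_safe_example_domain(domain: str) -> bool:
    """True if the domain is reserved for documentation or testing."""
    domain = domain.lower().rstrip(".")
    if domain in RESERVED_DOMAINS or domain in RESERVED_TLDS:
        return True
    return domain.rsplit(".", 1)[-1] in RESERVED_TLDS

print(is_safe_example_domain("example.com"))  # True
print(is_safe_example_domain("nab.com"))      # False
print(is_safe_example_domain("foo.test"))     # True
```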


For those of us with less technical know-how, can someone explain what happens to emails sent to a bona fide non-existent email address?

Does Google have a copy of it just because it was sent through their email server? If so, how long do they keep it for?


If the email address does not exist, Google will email you back to tell you so. If you don’t get such a bounce, the message is probably sitting in an inbox somewhere (which may or may not get read…). For example:

Delivery to the following recipient failed permanently:


Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the server for the recipient domain by [2607:f8b0:400d:c0c::1a].

The error that the other server returned was:
550-5.1.1 The email account that you tried to reach does not exist. Please try
550-5.1.1 double-checking the recipient's email address for typos or
550-5.1.1 unnecessary spaces. Learn more at
550 5.1.1 https://[redacted] - gsmtp
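The `550 5.1.1` in that bounce is an enhanced status code (RFC 3463): the first digit tells you whether the failure is permanent. A small sketch that classifies such codes:

```python
def classify_dsn(status: str) -> str:
    """Classify an enhanced status code such as '5.1.1' (RFC 3463)."""
    klass = status.split(".", 1)[0]
    return {
        "2": "delivered",           # 2.x.x: success
        "4": "temporary failure",   # 4.x.x: transient, will be retried
        "5": "permanent failure",   # 5.x.x: bounced, e.g. no such user
    }.get(klass, "unknown")

print(classify_dsn("5.1.1"))  # permanent failure -- as in the bounce above
print(classify_dsn("2.0.0"))  # delivered
```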


OK, I get the not-deliverable notice sent back to the sender. Are you saying that my email sat in RAM and was cleared from the mail server’s live memory buffer after processing? Or is it possible that some mail servers retain undeliverable emails on more permanent storage media for some amount of time?


Some years ago now a number of Russell Group-like domain registrations were established as


Likewise, a software vendor of medical information systems put a “sample” IP address in their documentation which at first glance resembled a private range… except it didn’t: it belonged to a university, IIRC. I still find it exceeds coincidence that I despised this software all five years I worked there.

That address was used in reputedly thousands of hospital implementations and required subsequent network admins to face an irksome choice: sustain an unconventional routing table or eviscerate life-and-death systems mid-flight. My senior network admin hated it but knew he’d never convince anyone to acquiesce to the downtime required to fix it.

Unsurprisingly these Frankensystems still stand in many cases.

