
ALPACA – the wacky TLS security vulnerability with a funky name

Don't panic - this isn't another Heartbleed. But it's a fascinating reminder of why doing things the easy way isn't always the best way.

TLS, short for Transport Layer Security, is an important part of online cybersecurity these days.

TLS is the data protection protocol that puts the padlock in your browser’s address bar, keeps your email encrypted while it’s being sent (probably), and prevents cybercrooks from casually substituting the software you download with malware and other nasties.

The TLS protocol works by:

  • Agreeing a one-time encryption key with the other end of the connection, to protect your data from snooping and surveillance.
  • Verifying the person or company operating the server at the other end, making it harder for crooks to set up fake sites to trick you.
  • Checking the integrity of data as it arrives, to stop other people on the network from tampering with the content along the way.
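The three jobs in the list above map directly onto a modern TLS client setup. Here is a minimal sketch using Python's `ssl` module (the hostname in the comment is a placeholder, and the commented-out connection code is illustrative only):

```python
import ssl

# Build a client-side TLS context with modern defaults.
context = ssl.create_default_context()

# Verifying the server's identity (the second job above) is on by default...
assert context.check_hostname is True
# ...and untrusted certificates are rejected outright.
assert context.verify_mode == ssl.CERT_REQUIRED

# Key agreement and integrity checking (the first and third jobs) happen
# automatically during the handshake when you wrap a socket, e.g.:
#
#   with socket.create_connection(("example.com", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="example.com") as tls:
#           print(tls.version(), tls.cipher())
```

The point of the sketch is that all three protections are bundled into the handshake itself, which is also why, as the article goes on to explain, TLS has no built-in notion of *which application* is supposed to be at the other end.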

So, whenever a vulnerability is announced in TLS, given how much we rely on it, the announcement typically makes big headlines.

Amusingly, perhaps, that’s had a sort of circular effect, with researchers going out of their way to come up with names and logos for TLS vulnerabilities that encourage big headlines in the first place.

We jocularly call them BWAINs – short for Bug With An Impressive Name – and examples include vulnerabilities dubbed BEAST, Heartbleed, Logjam, Lucky Thirteen, and now…

…the delightfully named ALPACA.

A real attack, but not too much of a danger

The good news is that ALPACA isn’t a terribly usable attack, and there are some fairly simple ways to ensure it doesn’t happen on your servers (and therefore, indirectly, to your visitors), so there isn’t a clear and present danger to online commerce because of it.

The bad news, of course, is that ALPACA is a vulnerability nevertheless, or more precisely a family of vulnerabilities, and it exists because we, as an internet community, haven’t been quite as careful or as precise as perhaps we should have been when setting up our servers to use TLS in the first place.

TLS certificate overlap

ALPACA is short for Application Layer Protocols Allowing Cross-Protocol Attacks (many BWAINs involve a bit of a linguistic stretch), and it gets that name because TLS connections aren’t tied to any specific application, but instead simply protect the data in a transaction, without any formal way to restrict that transaction to a specific application or purpose.

The researchers discovered that millions of network domains out there not only use TLS on multiple servers for multiple different purposes, such as securing both HTTP (web browsing) and SMTP (email transfer), but also often fail to keep the verification part of the TLS process separate for the different services they offer.

In other words, the same TLS certificate that they use to verify, say, their email server to other email servers would also work to verify their web server to visitors using a browser.

What that means – and bear with us, because this ends up sounding both complicated and unlikely at first glance – is that if a crook could redirect your browser from a company’s website to, say, one of its email (or secure FTP, or IMAP, or POP3) servers instead, then your browser might end up trusting that nearly-but-not-quite-right other server instead.

Sometimes, crooks can pull off traffic redirection inside your network even if they can’t hack into the servers themselves.

ALPACA attacks provide a method whereby that sort of traffic redirection could be used to subvert security, both inside and outside your network, rather than simply causing a disruption or denial of service, as you might assume at first.

The problem is that TLS secures the raw data that gets transported across the connection it’s protecting, and verifies the name of the server it’s been asked to connect to, but it doesn’t formally verify the actual application that’s running at each end of the link, or determine the validity of the data that’s being exchanged.

In other words, in an ALPACA attack, the padlock would show up in your browser, you’d be unaware that you weren’t actually connected to the server you expected, and your browser would innocently, and trustingly, start talking to another server on the network instead.

So what?

At this point, you are probably thinking, “So what? Browsers talk HTTP, but email servers talk SMTP. The two are incompatible, so the browser will just get blasted with error messages and that will be the end of it.”

But one problem that the ALPACA researchers identified is that different types of server are programmed to recognise and defend against different types of error in different ways.

For example, web servers are (or ought to be!) super-cautious about how data that was included in your web request gets represented in the reply that’s sent back.

If you click a search link for a website, for instance, that includes a search parameter such as <script>alert('Ooops!')</script>, then it’s vitally important that the web server doesn’t send back a web page that includes exactly that text.

If the server sends back an error message that literally contains the message Sorry, the text <script>alert('Ooops!')</script> was not found, then it has just served up a web page, with the origin and authority of the server itself, that contains JavaScript decided by an untrusted outsider!

That’s known as XSS, or cross-site scripting (more precisely, it’s a reflected XSS, because the server simply reflects the chosen JavaScript right back into your browser, where your browser magically trusts it and runs it).
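The standard server-side defence is to encode untrusted input before echoing it back, so that injected markup arrives as inert text. A minimal sketch in Python, reusing the search-term example from the text above (the error-message wording is just illustrative):

```python
import html

def safe_error_page(search_term: str) -> str:
    # Escape &, <, > and quote characters so any injected <script>
    # tag is rendered as visible text, not executed as markup.
    escaped = html.escape(search_term)
    return f"Sorry, the text {escaped} was not found"

page = safe_error_page("<script>alert('Ooops!')</script>")
# The dangerous tag has been neutralised into &lt;script&gt;...
assert "<script>" not in page
```

The ALPACA twist, as the article explains next, is that non-web servers such as mail servers were never written with this kind of output encoding in mind, because their replies were never supposed to be interpreted as HTML.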

In case you’re wondering, the parts of this web page above that appear to contain JavaScript tags don’t literally include the text you see on your screen. The web page contains HTML code that tells the browser to display JavaScript tags at the relevant places, without actually containing the raw tags themselves.

A huge security hole

XSS is a huge web security hole, because the reflected script can access data such as login cookies specific to the site you’re currently visiting, and thereby steal your login, raid your shopping cart, or otherwise poke its nose into your online business.

Email servers, on the other hand, don’t generally deal with JavaScript, and their replies are supposed to make sense to email sending applications, so there’s a chance that aiming a browser at a mail server and sending a carefully crafted but fake web request…

…might cause the email server to produce, in amongst its output, an error message that hasn’t gone through the same scrupulous anti-XSS checking that would happen in a web server.

You’re probably once again thinking, “So what? If the email server sends back some rogue, reflected JavaScript, what harm would that do? There aren’t any session cookies, shopping carts or other private web data associated with the email server, so an attacker would get nowhere.”

Except for one thing: the browser thinks it’s connected to the real web server, and it made that decision because it was presented with a TLS certificate that would have been valid for the web server, if indeed that’s where it had ended up.

Therefore the rogue script reflected by the well-meaning email server would be able to read out the browser cookies and web data associated with the web server, even though the browser didn’t connect to the web server at all.

Server mixup

All of this raises the question: “But how could a browser mix up a web server’s TLS certificate with an email server’s certificate in the first place?”

Well, until certificate issuing companies like Let’s Encrypt came along and made the process of acquiring TLS certificates both free and straightforward, there was usually a fair bit of hassle (and cost) involved in buying and updating certificates for all the servers on your network.

As a result, companies understandably often rely on certificates that are valid for several, many, or even all the possible servers in their network domain.

Instead of getting a separate certificate for, say, www.example.com and mail.example.com, for example, you might choose to use what’s known as a wildcard certificate that’s valid for *.example.com, where the asterisk (star) character denotes “any name at all”, in the same way that most file-finding programs interpret *.DOCX as “all files that end with a DOCX extension”.
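The matching rule for certificate wildcards can be sketched in a few lines of Python. This is a simplified illustration of the logic, not a complete implementation of the rules that real TLS libraries follow, but it shows why one `*.example.com` certificate vouches for web and mail servers alike:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    # In a TLS certificate, a wildcard may appear only as the
    # leftmost label, and it never spans a dot: *.example.com
    # matches www.example.com but not a.b.example.com.
    p = pattern.lower().split(".")
    h = hostname.lower().split(".")
    if len(p) != len(h):
        return False
    if p[0] != "*" and p[0] != h[0]:
        return False
    return p[1:] == h[1:]

assert wildcard_matches("*.example.com", "www.example.com")
assert wildcard_matches("*.example.com", "mail.example.com")   # the overlap!
assert not wildcard_matches("*.example.com", "a.b.example.com")
```

The middle assertion is the whole ALPACA problem in miniature: the mail server’s identity checks out perfectly when a browser expecting the web server is redirected to it.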

And that, very heavily simplified, is the essence of the ALPACA problem.

TLS certificates that are valid for more than one different type of server on your network could be used to perform the CA part of ALPACA, namely the Cross-protocol Attacks.

Your browser ends up trusting the wrong server, and talking to it in the wrong sort of language, but is nevertheless able to pull off some sort of harmful security bypass without directly hacking any of the servers themselves.

What to do?

The researchers have identified several ways to reduce the risk of this sort of TLS abuse, if you’re worried about visitors to your network being tricked by an admittedly-unlikely ALPACA attack.

  • 1. Use application-level hardening.

Network programmers often invoke what’s known as the Robustness Principle, proposed by the late Jon Postel in the early, uncommercialised internet era: “TCP implementations should follow a general principle of robustness: be conservative in what you do, be liberal in what you accept from others.”

But that “rule” is dangerously out of date in the 2020s, because it encourages programmers to get security details right themselves, but to allow others to break the rules, quite possibly on purpose and with nefarious intent.

A better contemporary rule is: “Get it right yourself, and don’t let others get it wrong, accidentally or otherwise.”

The Postfix SMTP server, for example, actively (if not compliantly) watches out for SMTP input lines that look like the start of an HTTP request, rather than merely being mis-spelled commands, and closes the connection immediately if it thinks there’s a web browser at the other end:

 $ mailcat mail.example 25
 [connected, type commands after -->]
 <-- 220 mail.example ESMTP Postfix
 --> RSET                                 -- legal SMTP command
 <-- 250 2.0.0 Ok                         -- expected reply
 --> RESET                                -- harmlessly mis-spelled command
 <-- 502 5.5.2 Error: command not recognized
 --> GET / HTTP/1.1                       -- potentially dangerous HTTP command
 <-- 221 2.7.0 Error: I can break rules, too. Goodbye.
 [connection closed]                      -- Postfix treats this as GAME OVER

 $ mailcat mail.example 25
 [connected, type commands after -->]
 <-- 220 mail.example ESMTP Postfix
 --> QUITE                                 -- mis-typing of QUIT, error is tolerated
 <-- 502 5.5.2 Error: command not recognized
 --> Connection: close                     -- illegal in SMTP, looks like an HTTP header
 <-- 221 2.7.0 Error: I can break rules, too. Goodbye.
 [connection closed]                       -- Postfix treats this as GAME OVER
 $
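The behaviour in the transcripts above can be approximated with a simple first-line check. This is a hypothetical sketch in Python, not Postfix’s real code; the command lists and reply strings are assumptions based on the transcripts:

```python
# HTTP request lines start with a method token, and HTTP headers
# contain a colon in the first word; no legal SMTP command looks
# like either, so both are strong cross-protocol warning signs.
HTTP_METHODS = {"GET", "POST", "HEAD", "PUT", "DELETE", "OPTIONS", "CONNECT"}
SMTP_COMMANDS = {"HELO", "EHLO", "RSET", "NOOP"}

def smtp_reply(line: str) -> tuple[str, bool]:
    """Return (reply, close_connection) for one command line."""
    word = line.strip().split(" ", 1)[0].upper()
    if word in HTTP_METHODS or ":" in word:
        # Looks like a web client at the other end: refuse and hang up.
        return "221 2.7.0 Error: I can break rules, too. Goodbye.", True
    if word in SMTP_COMMANDS:
        return "250 2.0.0 Ok", False
    # Mere typos are tolerated with an error, as in the transcripts.
    return "502 5.5.2 Error: command not recognized", False
```

As in the transcripts, a harmless mis-spelling like `RESET` draws a `502` and the session continues, while `GET / HTTP/1.1` or `Connection: close` ends the game immediately.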
  • 2. Avoid TLS certificate overlap.

Wildcard certificates are very commonly used, and are handy for administrators who look after dozens or hundreds of different subdomains on a business network.

Nevertheless, try to avoid wildcards if you can, and do your best to limit each certificate so that it only vouches for a list of server names that relate to a specific service or set of services.

For example, instead of acquiring a certificate for *.example.com that your web servers and SMTP servers can all use, consider getting one certificate for each type of server, and identifying the relevant servers specifically in each one:

 # This cross-validates all your servers and is easier to manage...

 $ namedump -subject -san wildcert.pem
 X509 Serial Number              : b876c80b5ae39ee6aa5d9fc4
 X509 Certificate Subject        : CN = *.example.com
 X509v3 Subject Alternative Name : DNS = *.example.com, DNS = example.com

 # These two are more hassle to manage, but identify your resources more precisely...

 $ namedump -subject -san webcert.pem
 X509 Serial Number              : a4a5525983c90e6c667d6ae0
 X509 Certificate Subject        : CN = www.example.com
 X509v3 Subject Alternative Name : DNS = www.example.com, DNS = support.example.com, DNS = downloads.example.com

 $ namedump -subject -san mailcert.pem
 X509 Serial Number              : e511a5732f4e0cd81ae10cb0
 X509 Certificate Subject        : CN = mail.example.com
 X509v3 Subject Alternative Name : DNS = mx1.example.com, DNS = mx2.example.com
  • 3. Use Application Layer Protocol Negotiation (ALPN) if you can.

Modern TLS versions support a feature called ALPN, where the client, such as your web browser, and the server you’re connecting to can specify which application protocols they would like to use over the connection, e.g. HTTP/1.1, HTTP/2 or FTP.

(Unfortunately, and perhaps surprisingly, the application type SMTP is not yet officially recognised [2021-06-11T14:00Z], but custom protocol strings are allowed, so smtp can be used for email connections.)

Strictly enforcing ALPN is not currently practicable, because many legitimate programs that connect to your servers – older browsers, for example, or most email sending programs – either won’t be configured to use it, or won’t support it at all.

However, setting up your own servers to respect the requests of clients that do specify what sort of data they plan to exchange will help to immunise well-informed visitors against ALPACA-style cross-protocol attacks.
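In Python’s `ssl` module, a client advertises its intended protocols with `set_alpn_protocols()`, and a hardened server can refuse connections intended for something else. A sketch of both halves (the HTTP identifiers are the standard ALPN strings; treating `smtp` as the email protocol string is the custom-string approach mentioned above):

```python
import ssl

# Client side: declare what this connection is for before connecting.
client_ctx = ssl.create_default_context()
client_ctx.set_alpn_protocols(["http/1.1"])

# Server side: only proceed if the client asked for our protocol.
# (Illustrative decision logic only; a real server would wire this
# into its TLS handshake and refuse on a None result.)
def choose_protocol(offered: list[str], ours: str = "smtp"):
    return ours if ours in offered else None

assert choose_protocol(["smtp"]) == "smtp"
assert choose_protocol(["http/1.1"]) is None  # a browser, not a mail client
```

The second assertion shows the ALPACA case: a redirected browser announces `http/1.1`, which gives an ALPN-aware mail server grounds to drop the connection before any error messages can be reflected.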

  • 4. Use Server Name Indication (SNI) if you can.

Often, especially in the cloud, a single web server will be used to handle requests for many different domains, but will not be able to share a TLS certificate amongst all of them (or will want to avoid doing so).

TLS therefore now allows the client to specify up front which service it plans to use on the server it’s connecting to, using a feature known as SNI.

The server typically uses the SNI information to decide which TLS certificate to send out to verify the connection that’s being made.

But you can also use SNI to ensure that you don’t accept connections that have arrived at your server by mistake, or through some sort of criminally-minded redirection.

Strictly enforcing SNI, so that visitors must make their intention clear in advance via SNI or else get kicked out, is unlikely to work well right now, because few companies that send you email are likely to be adding SNI data to their connection requests, and some browsers still don’t bother with SNI, either.

However, when visitors do declare their intentions up front via SNI but nevertheless end up at the wrong server anyway, blocking their request will protect both you and them from ALPACA-like tricks.
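In Python, this kind of strict SNI checking can be hung off `ssl.SSLContext.sni_callback`, which lets a server inspect the requested name during the handshake and abort with a TLS alert if it doesn’t belong here. A sketch of the decision logic only (the hostnames are placeholders, and a real deployment would attach the callback to the server’s listening context):

```python
import ssl

# The names this web server is actually meant to serve.
WEB_HOSTS = {"www.example.com", "support.example.com"}

def sni_check(servername: str) -> bool:
    # Anything outside the list suggests misdirection,
    # ALPACA-style or otherwise.
    return servername in WEB_HOSTS

def sni_callback(ssl_socket, servername, context):
    # Returning an alert description code aborts the handshake;
    # returning None lets it continue with the default certificate.
    if servername is None or not sni_check(servername):
        return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
    return None

# To enforce it: server_context.sni_callback = sni_callback
```

So a browser that was redirected to, say, the mail server while asking for `www.example.com` would be turned away at the handshake, before any application data is exchanged.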

Baaa!


22 Comments

Excellent, another few pages of useless padding to vulnerability scan and pentest reports. These researchers really are making a difference!

I hear you, and I do sympathise… but perhaps that’s more of an indictment of the sort of tester who submits a list of “things that might be worth investigating based on a scan” as the report itself rather than using it as a starting point to kick off the research needed to write the report?

“issuiing” Ha! After years of patient reading Naked Security, I finally found and was the first (I think) to report a typo! Good work as usual, Mr. Ducklin.

Somewhere in the text, ‘broswer’ is also mentioned.
So, there are still typos waiting to be discovered/reported ;)

It’s a shame there’s no way to demand, in smtp, that the sender supply an authority to send to the given address. If that were the case, the same principles from capability theory that allow servers to defend against CSRF could apply to these other protocols, too.

No regular commercial company would buy into that, IMO, because it would effectively prevent, or at least seriously discourage, prospective new customers from contacting them, and would inconvenience existing customers in a way they are unlikely to accept. Few websites demand client certificates of any sort, let alone certificates that show “authority to visit”. Few companies even require email senders to add DKIM signatures, or to use TLS at all. So enforcing any kind of client-side certification or “pre-authentication” seems unlikely ever to catch on.

Given that ALPACA is not the fault of the client but of the server side, you can sort of sympathise with the idea of not making it the client’s problem to solve…

…but having said that, ALPN is a sort of “co-operative” step to what you suggest, where a client can say, “This connection should be handled by a server of type X only.” So would SNI enforcement, where you could state which server should handle the TLS connection and then expect the recipient to notice if any unexpected redirection has taken place.

Thanks for the article. Considering that the attacker would need network access to sniff the sensitive data from the traffic, I’m wondering if there’s a way to identify packet capture attempts on the network. Thinking that this may help to identify any unauthorized attempts to sniff the traffic. Would this be feasible in this case?

The attacker doesn’t need to sniff the data because the idea is that the exfiltrated information is extracted via the browser, which basically gets fooled because the “same origin policy” is satisfied.

Typically the attacker would need to redirect traffic on the server’s network, and that sort of thing can be detected – for example via ALPN (“Why is a web connection arriving at an SMTP server?”), and by avoiding certificate overlap. (“This certificate is not valid for any of the company’s web servers.”)

I’m sorry, I still don’t get it. Just because the certificate is too broad, does not mean an attacker would have it, does it? And the attacker will have to know cert’s private key(s) to represent himself as another domain from your organization.
In my view, this *is* useless padding, as another commenter aptly said. It is useless, albeit with a cool name. It looks as if the researchers were doing/publishing it for the name rather than for the content.

You’ve missed the point.

The attacker *doesn’t need to “represent himself as another domain from your organisation”* (and therefore doesn’t need a copy of any of your private keys) if your own web server (say) will represent itself as being any subdomain in your organisation.

That’s the whole idea: the attacker tricks one of your not-a-web-server servers to present cryptographic credentials that pass muster as your web server.


I think my personal server might be vulnerable to this one, and given that many other geeks probably have a similar setup that would explain why the researchers found so many vulnerable domains when they searched.

I run a personal email server that handles me@vanity-domain.com processing email via IMAP and SMTP, and it also has a Roundcube webmail service on www.vanity-domain.com. All three protocols share the same Let’s Encrypt cert using subject alternative names. I am fairly sure that when I set the whole thing up I could not use separate certs for each service, and had to use one certificate with many subject alternative names, though now I can’t remember why. No doubt I will re-discover what the reason was when I get three new certificates issued and try to use them separately.

Are there any other mitigations? Are server-side patches available? @duck you said that Postfix will abort a session if it sees HTTP protocol messages. Are there similar mitigations in Apache or Dovecot?

The ALPACA paper has a table (Table 3, page 10) entitled “Attack Method”, where the authors say they tested a range of SMTP, IMAP, POP3 and FTP servers. Dovecot seems to have come out OK, with the reason “too many errors”, presumably meaning that the server gave up while the browser was still sending HTTP headers, given that each one caused an error until some limit was hit.

However, I was slightly confused by that table, because Postfix was listed as being safe for the same reason, whereas in my test Postfix was safe because of “HTTP detection”… which is listed as a reason in its own right.

So I couldn’t see how they could get Postfix to fail with “too many errors” when in my tests it had already failed right at the first hurdle for a different reason.

Thanks @duck for the reference to the table of exploitable servers. Somehow I am not surprised to read that Sendmail is the most exploitable! Can we kill that antique bit of software with fire?

Later in the paper, there is a section about port blocking. Except for IE and Edge, all the common browsers will block any request on the well-known ports used by IMAP, SMTP and so on, so you are only vulnerable if you run those servers on non-standard ports, or if you are using the browser that is well known to be insecure! (Again, no real surprise that IE is the security laggard.)

So, in conclusion, I have decided that I probably don’t need to worry. I am running fully patched postfix & dovecot on standard ports, and mostly using Firefox or Chrome, so there are several layers of protection that would prevent this attack from gaining traction. As is frequently the case the BWAIN is unlikely to be exploited in practice.

There’s also the issue, as far as I can make out, that for a JavaScript-based “cookie stealing” attack, your browser not only has to be redirected to a port it doesn’t like, to reach a server that has a wildcard certificate (or one with lots of Subject Alternative Names), and to receive a reply before the server closes the connection for being full of what looks like garbage…

…but also to go out of its way to interpret a reply that’s full of garbage *as though it were HTML*, and to go out of its way to dig out the script tags in the otherwise harmless reply and run them.

(If you feed Firefox a plaintext response full of garbage, it simply displays a page full of plaintext garbage, as you would expect, rendering and interpreting nothing. Send it the plaintext string <script> and it will display exactly that text, unparsed and uninvoked.)

I too run a vanity-domain situation like this where I use a LE wildcard for all the email functions – including the publicly accessible webmail, and the SMTP. I don’t know how changing to two separate LE certs mitigates this. I feel it probably stems from me having incorrectly configured public DNS for email…

The “problem”, such as it is, with a TLS certificate that is valid for *.example.com is simply the meaning of “star” (asterisk). It just feels more circumspect to have a certificate that is valid only for a specific set of services, e.g. a list of Subject Alternative Names like mx1.example.com, mx2.example.com and so on, so that you don’t have a situation where one private key rules them all (or, ipso facto, might let them all down at once).

So the concern is primarily just the wildcard (*) aspect (and associated private key) and not the certificate attributes? Just want to make sure I understand so I can implement the most correct solutions in the wild. Thanks!

The article lists the main things you can do in the “What to do?” section: avoid certificate overlap (e.g. via wildcards or over-zealous Subject Alternative Name lists); harden applications against obviously misdirected data; consider using ALPN; consider honouring SNI strictly.

Giving each server a TLS certificate that is also valid for every other server is a bit like installing a building access control system that allows you to decide whether individual staff should be allowed in the server room, into the lift shaft, into plant areas, inside the warehouse, onto the roof space, and so on…

…and then giving every employee an “access all areas” ID badge anyway.

“Avoid TLS certificate overlap” or the suggestion to use SNI makes no sense if the web server and mail server *is actually the same server*, which is common for small domains (e.g. small businesses) that have only a single server. A server available under, for example, the domain name “somecompany.com” is both a web server and a mail server for the company. Only the ports differ :). This is a *very common case*. (My personal server is exactly the same case – it serves both my mail and my website. However, I have a self-signed cert anyway, because nobody except me (for admin purposes) needs to use HTTPS on my website, and I don’t process any sensitive data there.)

You don’t have to use the same name for two services. You could resolve the server as mail.somecompany.example and as www.somecompany.example, and generate separate certificates for the web server program and the mail server program.

FWIW, if your personal mail and web servers are accessible from the internet (I assume that your mail server will be), then you can use one of the ACME clients with Let’s Encrypt to generate valid certificates, so that you can speak TLS all the time to anyone and everyone.
