
Knock, knock: Digital key flaw unlocks door control systems

Attackers may be able to unlock doors in office buildings and factories at will, thanks to a flaw in a popular door controller.

Attackers may be able to unlock doors in office buildings, factories and other corporate buildings at will, thanks to a flaw in a popular door controller discovered by a Google security researcher.
David Tomaschik, a senior security engineer and tech lead at Google, uncovered the flaw in devices made by Software House, a Johnson Controls company. Forbes reports that he conducted his research on Google’s own door control system.
Tomaschik, who described his project at a talk in August at DEF CON’s IoT Village, explored two devices. The first was iStar Ultra, a Linux-based door controller that supports hardwired and wireless locks. The second was the IP-ACM Ethernet Door Module, a door controller that communicates with iStar.
When a user presents an RFID badge, the door controller sends the information to the iStar device, which checks to see if the user is authorised. It then returns an instruction to the door controller, telling it to either unlock the door or to deny access.
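That request-and-response flow is simple enough to sketch in a few lines of Python. To be clear, the real Software House wire protocol isn’t public, so the classes and messages below are invented purely to illustrate the division of labour between the two devices:

```python
# Illustrative sketch of the badge-check flow described above. The real
# Software House protocol is proprietary; all names here are invented.

class IStarController:
    """Plays the role of the iStar device, which holds the access list."""
    def __init__(self, authorised_badges):
        self.authorised = set(authorised_badges)

    def check(self, badge_id):
        return "unlock" if badge_id in self.authorised else "deny"

class DoorModule:
    """Plays the role of the IP-ACM: it relays the badge ID to the iStar
    and obeys whatever instruction comes back over the network."""
    def __init__(self, istar):
        self.istar = istar

    def handle_badge(self, badge_id):
        decision = self.istar.check(badge_id)
        print(f"badge {badge_id}: {decision}")
        return decision

door = DoorModule(IStarController(authorised_badges=["1234"]))
door.handle_badge("1234")   # -> unlock
door.handle_badge("9999")   # -> deny
```

The important point is that the door module makes no decision of its own: it simply trusts whatever instruction arrives over the network, which is why the integrity of that network traffic matters so much.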
Software House’s website still promotes the original version of its IP-ACM as a “highly secure option to manage their security”. But judging from Tomaschik’s research, that’s a bit wide of the mark.
The devices were using encryption to protect their network communication – however, digging through their network traffic, Tomaschik found that Software House had apparently been rolling its own crypto rather than relying on tried and tested solutions.
The devices were sending hardcoded encryption keys over the network, and were using a fixed initialization vector, an input to the cipher that is supposed to vary so that the same message never encrypts to the same ciphertext twice. Moreover, the messages weren’t signed, meaning that an imposter could easily send messages pretending to be from a legitimate device, and the recipient had no way to check their authenticity.
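To see why that combination is so dangerous, consider the following toy sketch using Python’s cryptography package. It is not Software House’s actual scheme (the key, IV and message format are made up), but it exhibits the same failure mode:

```python
# Toy demonstration of the failure mode described above; this is NOT
# Software House's actual scheme. Requires the 'cryptography' package.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = bytes(16)   # stand-in for a key hardcoded into every device
IV = bytes(16)    # stand-in for a fixed, reused initialization vector

def encrypt(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.CBC(IV)).encryptor()
    return enc.update(msg) + enc.finalize()

# With a fixed key and IV, encryption is deterministic: the same command
# always produces the same bytes on the wire, so a sniffed "unlock"
# message can simply be replayed...
assert encrypt(b"UNLOCK DOOR 0001") == encrypt(b"UNLOCK DOOR 0001")

# ...and with no signature or MAC on the messages, the recipient has no
# way to tell replayed or attacker-crafted traffic from the real thing.
```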
The hardcoded key unlocked the kingdom, so to speak. It enabled Tomaschik to impersonate Software House devices on the network and do anything they could, including unlocking doors, or stopping others from unlocking them.
To mount such an attack, an intruder needs nothing more than access to the same IP network as the Software House devices. If a company hasn’t carefully segmented and locked down its network, and these devices share a general office network that an attacker can reach, the door system becomes a potential intrusion point.


We asked Software House for a statement about this, and a spokesperson said:

This issue was publicly reported at the end of December 2017. In early January 2018, we notified our customers of the issue and our plans to address it with a new version of the product.  We released that new version addressing the issue in early February 2018.

Tomaschik discovered the issue in July 2017 and told Software House about it the same month; the company acknowledged the flaw and proposed a fix. He blogged about it that December, and a CVE relating to the issue was published on 30 December 2017.
The fix involved replacing the home-grown encryption with a scheme based on TLS, which negotiates fresh session keys for each connection rather than sending the same fixed keys across the network. On its product page, the company says of the v2 Ethernet Door Module:

IP-ACM v2 now supports 802.1X and TLS 1.2 secure network protocols for added protection against the threat of cyber attacks.
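As a rough illustration of the difference, here is how a TLS 1.2 client connection might look in Python. The hostname, port and CA file below are placeholders, not details of Software House’s implementation; the point is that the session keys come out of the handshake rather than being hardcoded:

```python
# Rough illustration of what TLS buys you: server authentication and
# fresh, negotiated session keys per connection, instead of fixed keys
# on the wire. Hostname, port and CA file are placeholders.
import socket
import ssl

context = ssl.create_default_context(cafile="door-system-ca.pem")
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("istar.example.internal", 4443)) as sock:
    with context.wrap_socket(sock, server_hostname="istar.example.internal") as tls:
        # Session keys are derived during the TLS handshake and differ on
        # every connection, so sniffed traffic can't simply be replayed.
        tls.sendall(b"BADGE 1234")
        print(tls.recv(1024))
```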

However, Tomaschik has argued that this alone might not be enough, because the original IP-ACM units don’t have enough memory to accept new firmware.
Software House admitted in an emailed statement to Naked Security that existing devices can’t be upgraded:

We did notify customers that v1 of the product did not have enough physical memory to upgrade to TLS.

Google reportedly took steps to protect its offices by segmenting its network, but there are likely to be many pre-v2 units installed in the wild that cannot be updated to fix the encryption key problem, and many companies that will never address the issue. Ethernet-based door controllers don’t have a speedy refresh cycle, after all.
The situation also highlights the difference between conventional and IoT products, Tomaschik said when blogging about his DEF CON talk (the blog post also contains his slides):

It’s not meant to be a vendor-shaming session. It’s meant to be a discussion of the difficulty of getting physical access control systems that have IP communications features right. It’s meant to show that the designs we use to build a secure system when you have a classic user interface don’t work the same way in the IoT world.

Until a company replaces its door controllers with the new hardware, anyone with the code to execute an attack could theoretically gain access to its facilities. Tomaschik hasn’t released his proof of concept code, but that doesn’t mean someone else couldn’t engineer it.

10 Comments

If only there was a process to check these devices BEFORE they go to market eh?
And um… did they not read about TLS 1.3?


Surely a modern door controller should be using NFC not RFID?


NFC is RFID :-)
(NFC and RFID are like fingers and thumbs. All NFC is RFID but not all RFID is NFC.)


Umm, you seem to have that backward; simple substitution yields the following statement:
“All fingers is thumbs but not all thumbs is fingers.”
(Simple substitution also leaves a bit to be desired grammatically!)
Kidding aside, that was a pretty good analogy.


Software House had apparently been rolling its own crypto
Question for coders (Danny, Duck, Mark, Maria, and anyone else with the answer):
What would be the perceived advantages to roll one’s own? Most of us are too busy to reinvent the wheel, and even paying for a commercial product is generally accepted as a cost-saving measure when labor to duplicate the turnkey solution is counted.
But I thought there are FOSS options, so without a time machine there’s no way to break even when creating something one can just download in minutes.
Is it just the challenge that attracts people? Do they fear undocumented backdoors?


I can’t speak to this case in particular but in my experience, which relates mostly to websites, it comes down to a mix of ignorance and laziness.
I’ve never seen anyone “roll their own” encryption or hashing algorithms, but I’ve seen (and no doubt made) plenty of bad decisions. Mostly not using crypto when it should be used, or using the wrong solution entirely as a matter of expedience, or simply because you don’t know what you don’t know.


Thanks, had a feeling, wondered if there was much more to it than that.
simply because you don’t know what you don’t know
Holy cow, that covers a lot of ground. And retroactive memories. And my question.
:,)


There are plenty of roll-your-own crypto stories here on Naked Security. They generally get written up because they’ve ended badly.
Mainstream protocols have tried to knit-their-own and failed, such as WPS:
https://nakedsecurity.sophos.com/2015/04/13/we-told-you-not-to-use-wps-on-your-wi-fi-router-we-told-you-not-to-knit-your-own-crypto/
Mainstream cloud services have tried and failed, such as WhatsApp’s infamous “two-time pad”:
https://nakedsecurity.sophos.com/2013/10/10/whatsapp-mobile-messaging-app-in-the-firing-line-again-over-cryptographic-blunder/
Boutique startups selling digital padlocks have had a go at crypto and failed:
https://nakedsecurity.sophos.com/2018/06/18/the-worlds-worst-smart-padlock-its-even-worse-than-we-thought/
Even self-styled anti-surveillance secure messaging services have fallen into cryptochasms:
https://nakedsecurity.sophos.com/2013/07/09/anatomy-of-a-pseudorandom-number-generator-visualising-cryptocats-buggy-prng/
Pick a recognised implementation of a recognised cryptosystem and learn how to use it properly.
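In Python, for instance, the widely used cryptography package gives you authenticated encryption in a handful of lines (the key handling and messages below are purely illustrative):

```python
# Authenticated encryption (AES-GCM) via a recognised implementation,
# instead of a home-grown construction. Requires 'cryptography'.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)

nonce = os.urandom(12)   # fresh random nonce; never reuse one with the same key
ciphertext = aead.encrypt(nonce, b"unlock door 01", b"door-01")

# Decryption also verifies integrity: any tampering raises InvalidTag
# instead of silently returning garbage.
print(aead.decrypt(nonce, ciphertext, b"door-01"))
```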


Agreed; I was thinking of examples like those when I asked. Merely wondered the classic question which applies so (unsettlingly) often, in every field–not as an insult but as a genuine question:
What were you thinking?
I figured ignorance would fit into this specific answer, but I expected laziness wouldn’t, since pre-available solutions are easier to implement than starting from scratch. Possibly hubris, possibly obstinance, possibly (ironically) paranoia.


I think it’s often because developers want as much visibility and control over their cryptography implementation as possible. The more critical a function becomes, the less trusting many devs are of other people’s code, and the more anxious they are about getting it done ‘the right way’. But it is always more difficult than people think.
This is why we need secure development lifecycles, to help ensure that misguided ideas don’t make it into production!

