Here’s another “security surveillance system SNAFU” story, just two weeks since our last one.
(As we noted back then, plain old webcam bugs are one thing, but vulnerabilities in camera systems that are supposed to increase security are quite another.)
Last time, the problem was a combination of three different bugs, each one modestly dangerous on its own, that could be chained together to construct a critical exploit.
At worst, the trifecta of bugs in that case could have allowed anyone on the internet to wander into your network at will via one of your security devices.
In today’s story, however, the crooks didn’t break into the webcam and steal data out of it.
Instead, the camera uploaded a bunch of data on purpose, but chose the wrong person to send it to.
In fact, the person to whom the video data was incorrectly leaked…
…just happened to be a BBC staffer enjoying some off-duty weekend time at home.
Talk about having a fascinating data leakage story dropping into your app!
If you think of CCTV systems from even just a few years ago, you’ll probably wonder what a security camera was doing dropping data into an app in the first place.
Well, surveillance systems have changed a lot recently.
CCTV cameras aren’t just wireless these days, but often also softwareless and serverless too.
OK, strictly speaking, the camera needs a server to connect to, and that server needs special software running to take care of the uploads, but both the server and the software can be hosted in the cloud.
As the owner of the camera, you no longer have to set up any additional hardware or software of your own – you need no more than the camera itself, an internet connection, and a web browser (or a browser-like app on your mobile phone) to log in to the camera vendor’s website.
The vendor’s servers take care of collecting the data, processing it to look for anomalies, and sending alerts to your browser or your phone if something suspicious happens…
…and all you have to do is hope that they don’t send your alerts to someone else by mistake, as happened in this case.
What went wrong?
According to the BBC, Swann explained away the mistake as follows:
[H]uman error had caused two cameras to be manufactured that shared the same bank-grade security key – which secures all communications with its owner. This occurred after the [family] connected the duplicate camera to their network and ignored the warning prompt that notified: ‘Camera is already paired to an account’, and left the camera running.
This explanation is feasible, but it doesn’t bring any closure to the incident, because it implies that the problem could easily occur again – after all, how realistic is it to expect a human to check a cryptographic key, say
3c8c0279dd24f6d7c07a00db30767ec4, against a list of all keys used on all previous devices?
Let’s assume that the key we’re talking about here is a public/private key pair, where the vendor’s servers get a copy of the public key so they can validate the camera sending in each data block, and the camera keeps the private key to itself so it’s the only device in the world that can sign content with that key.
Why not get the camera to generate a new keypair when it is first set up (or subjected to a factory reset), thus ensuring both that the private key only ever exists on the camera itself, and that the keypair is always unique?
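That approach can be sketched in a few lines. The snippet below is a minimal illustration, not Swann’s actual firmware: the file path and function names are hypothetical, and a random 256-bit secret from the operating system’s CSPRNG stands in for a real asymmetric keypair (a production device would generate something like an Ed25519 keypair here), since the principle – fresh, unpredictable key material minted on the device at first boot – is the same.

```python
# Sketch: mint a unique device credential on first boot (or after a
# factory reset) instead of baking one in on the production line.
# A random per-device secret stands in for a real asymmetric keypair.
import hashlib
import os
import secrets

KEY_FILE = "device_key.bin"  # hypothetical on-device storage location

def get_device_key(path=KEY_FILE):
    """Return this device's key, generating a fresh one on first boot."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()          # subsequent boots reuse the same key
    key = secrets.token_bytes(32)    # 256 bits from the OS CSPRNG
    with open(path, "wb") as f:
        f.write(key)                 # the key never leaves the device
    return key

def fingerprint(key):
    """A public identifier the cloud service can use to tell devices apart."""
    return hashlib.sha256(key).hexdigest()[:16]
```

Because each key comes from the operating system’s cryptographic random source rather than from the factory, two devices ending up with identical keys is astronomically unlikely – no human has to check anything against a list.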
Granted, it’s easy to make a cryptographic blunder when you program an IoT device to generate a new, random keypair, because random numbers can be tricky to generate in software on embedded devices.

Many pseudo-random number generators rely on mixing in ever-changing data such as the time of day, the number of milliseconds since the computer was turned on, or the distance that the mouse moved in the past 30 seconds, as a way of reducing the predictability of the algorithm – a process known in the jargon as increasing entropy.

On embedded devices fresh out of the box, however, there’s no mouse to monitor, the clock always starts off set to zero (on Linux-based systems, zero typically denotes midnight on 01 January 1970), and you can guess within a few seconds either way how long the initial setup software is likely to take to get to the part where the cryptographic keys are generated.

This means you need to be really careful not to generate “predictable randomness” when doing cryptographic programming on stripped-down hardware.
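Here’s a toy demonstration of the trap, assuming (hypothetically) that a device seeds a general-purpose PRNG from its seconds-since-boot counter. Two cameras that reach the key-generation step at the same point after power-on derive the very same “random” key, whereas a CSPRNG fed from hardware entropy does not suffer this fate:

```python
# Why "predictable randomness" is dangerous on embedded kit.
# random.Random is a Mersenne Twister: deterministic from its seed,
# and not cryptographically secure even with a good seed.
import random
import secrets

def bad_keygen(seconds_since_boot):
    """Hypothetical flawed keygen: seed a PRNG from an uptime counter."""
    rng = random.Random(seconds_since_boot)   # predictable seed!
    return rng.getrandbits(128).to_bytes(16, "big")

# Two cameras that both reach keygen ~42 seconds after power-on
# end up with identical "secret" keys:
key_a = bad_keygen(42)
key_b = bad_keygen(42)
assert key_a == key_b

# A CSPRNG drawing on operating-system entropy avoids the problem:
key_c = secrets.token_bytes(16)
key_d = secrets.token_bytes(16)
assert key_c != key_d
```

An attacker who can guess the seed to within a few seconds only has to try a handful of candidate keys, which is why real firmware should draw key material from a hardware entropy source or the OS’s `getrandom()`-style interface instead.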
Nevertheless, with suitable care and attention, it is possible to ensure that each device you sell will automatically end up registered uniquely with your cloud services – Apple, for instance, can reliably tell its iPhones apart, even though it has sold more than a billion of them.
Could it happen again?
The BBC documented a second case in the UK of a Swann security system sending one customer’s data to another – a couple in Leicestershire, England, who started receiving camera footage of an unknown pub.
In an amusing conclusion (albeit one that proves that even banal and harmless-looking images can harm your privacy), the couple actually managed to identify the pub concerned.
Turns out it was near their house, so they paid a visit – and in a fit of wit, took a selfie using the pub’s camera!
Great to meet the manager @newtownlinford and share our concerns that @swannsecurity remote access CCTV system is giving us images from his cameras in place of our own. Bizarre to be able to take a selfie using someone else's CCTV camera pic.twitter.com/fTgmAVoPle

— The Obscure Brewer (@Battwave) June 3, 2018
What to do?
Let’s hope that Swann identifies the problems in its manufacturing workflow that make this sort of “doppelgänger camera” situation possible…
…and eliminates them.
At the moment, the company doesn’t sound very convincing in its response to what is an unusual, though unsettling, data breach dilemma.