Poor robot security could lead to ‘Skynet’ nightmare, warn researchers

Generic robot tools mean robots pose some of the same risks as poorly secured IoT devices - but those fears could be overstated

The robot industry has become better at building eye-catching demonstration machines than securing them, consultancy IOActive has concluded after pen-testing some famous examples.

After a process described as “not even a deep, extensive security audit”, the report, Hacking Robots Before Skynet, uncovered 50 vulnerabilities affecting communications, encryption, authentication and the open-source software libraries robots are built on.

Screw-ups included cleartext communication and weak default passwords (including ones that couldn’t be changed), adding up to the risk that robots could be remotely hacked, co-opted for surveillance, and even held hostage by ransomware. Security auditing seems to be non-existent.
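To make that concrete, here’s a minimal Python sketch of the risky cleartext pattern next to a TLS-wrapped alternative. The host name, port and message format are hypothetical, not taken from any of the robots in the report:

import json
import socket
import ssl

# Hypothetical robot command endpoint - host, port and payload are illustrative only
ROBOT_HOST = "robot.example.local"
ROBOT_PORT = 12345
COMMAND = json.dumps({"cmd": "move_arm", "angle": 45}).encode()

# Risky pattern flagged by IOActive: the command travels in cleartext, so anyone
# on the same network can read, replay or tamper with it
with socket.create_connection((ROBOT_HOST, ROBOT_PORT)) as raw:
    raw.sendall(COMMAND)

# Safer pattern: wrap the same connection in TLS, so traffic is encrypted and the
# robot's certificate can be checked against a trusted authority
context = ssl.create_default_context()
with socket.create_connection((ROBOT_HOST, ROBOT_PORT)) as raw:
    with context.wrap_socket(raw, server_hostname=ROBOT_HOST) as tls:
        tls.sendall(COMMAND)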

Robots tested were the “human-like” NAO and Pepper robots from SoftBank Robotics (10,000 sales), the OP2 and THORMANG3 from ROBOTIS, and designs using Asratec’s V-Sido technology.

Also examined were the Alpha 1S and other Alpha-series educational robots from UBTECH, plus robotic arm systems from Rethink Robotics and Universal Robots – a good cross-section of robots people can actually buy.

An underlying issue is that many robots emerge from academic research projects that are later spun out commercially. As with the Internet of Things during its growing pains, security is an afterthought:

A broad problem in the robotics community: researchers and enthusiasts use the same or very similar tools, software, and design practices worldwide.

Everyone replicates the same model, leaving robots susceptible to generic vulnerabilities.
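What “replicating the same model” can look like in practice: many research and hobbyist robots are built on open-source frameworks such as ROS (Robot Operating System), and in ROS 1 the central master answers XML-RPC queries on its default port with no authentication at all. Here’s a minimal sketch of how easily that surface can be enumerated, assuming a ROS 1 master is reachable on the standard port 11311; the host name is hypothetical:

import xmlrpc.client

# Hypothetical host; 11311 is the default ROS 1 master port
master = xmlrpc.client.ServerProxy("http://robot.example.local:11311")

# The ROS 1 master API does not authenticate callers, so any caller ID is accepted
code, status, state = master.getSystemState("/curious_onlooker")
publishers, subscribers, services = state

# Anyone on the same network can list every topic the robot publishes or
# subscribes to - a useful starting point for eavesdropping or command injection
for topic, nodes in publishers:
    print(topic, nodes)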

Arnold Schwarzenegger’s Terminator movies are never far away from any discussion of robots running amok, which explains the over-the-top references to “Skynet” in the title of the report.

There are two rebuttals to robot pessimism. First, these robots are not representative of the best systems coming out of big-budget labs, and second, robotics (especially autonomous robotics) is a very green industry and talking up a few security flaws as something frightening is technophobia.

By coincidence, new footage emerged last week of Handle, a remarkable two-wheeled robot from Boston Dynamics. Presumably Handle’s autonomy is limited, but surely it’s not a stretch to imagine a Robocop along these lines patrolling a shopping mall, airport or private prison a decade or two from now.

It’s a world that alarms some people, including apparently Boston Dynamics’ owner Alphabet (Google’s parent company), which put the company up for sale a year ago. IOActive’s CTO, Cesar Cerrudo, chipped in, telling Newsweek:

A robot is just a computer with arms and legs or wheels. Therefore, the cyberthreat is much bigger. Compromised robots can be used to physically damage something or even hurt or kill someone.

Except that the weak security of current robots is really an extension of weak security in everything else, be that IoT devices, network equipment, or even (stand up, CloudPets) children’s toys – just with more dire warnings from popular fiction.

Robotics spending was estimated by IDC at $92bn (£74bn) in 2016, the same year that the number of US factory robots increased by 10%. People will continue to build more and more robots whether they’re secure or not.

So far, we should point out, examples of robots being hacked maliciously are non-existent, and nobody is likely to be harmed by the weaknesses IOActive found. But if the industry doesn’t mandate development standards soon, that might change – and little by little Arnie could turn from prediction into prophecy.


2 Comments

This still-nascent phase of robot development is a great time to get the underlying software secured. However, who’s going to do it? A classic case of scientists enjoying their toys without doing all the proper scientific things, like DevOps, to make it work properly. Or rather, making it work “just well enough”.
Bad actors, given the opportunity, will gladly invade a robot system and program it with harmful tendencies, then sit back and enjoy watching the damage from a safe distance. Exactly the same as less ambulatory computers: it will start out with general malevolent software and graduate to more sophisticated methods and purposes.
An ounce of prevention is worth a pound of cure…


robotics is a very green industry and talking up a few security flaws as something frightening is technophobia
Yep, no point in building good habits from the start–that’s silly. There’s no reason to worry about security until the robot is capable of autonomously mowing down everyone in the neighborhood.

(shouting above the explosions and screams)
“what, what? I thought YOU were going to make a better password!”

