Researchers have discovered the latest way to drive a Tesla off track (and into oncoming traffic), and it needs only the simplest of hacking tools: a can of paint and a brush will do it, or a few small, inconspicuous stickers that can trick the Enhanced Autopilot of a Model S 75 into detecting, and then following, a spurious change in the current lane.
Mind you, the feature they “attacked” for their research project is for driver assistance, not really an autopilot, regardless of what Tesla calls it.
Tesla’s Enhanced Autopilot mode has a range of features, including lane centering, self-parking, automatic lane changes with the driver’s confirmation, and the ability to summon the car out of a garage or parking spot.
To do all that, it relies on cameras, ultrasonic sensors and radar, as well as hardware that allows the car to process data with deep learning and react to conditions in real time. APE, the Autopilot electronic control unit (ECU) module, is the key component of Tesla’s auto-driving technology, and it’s where researchers at Keen Security Lab – a division of the Chinese internet giant Tencent – focused their lane-change attack.
They explained their latest Tesla attack in a recent paper, reverse-engineering several of Tesla’s automated processes to see how they’d do when environmental variables changed.
One of the most unnerving things they accomplished was to figure out how to induce Tesla’s Autopilot to steer into oncoming traffic. In the best of all possible worlds, that wouldn’t happen in real life, given that a responsible, law-abiding driver would have their hands on the wheel and would notice that the car’s steering was acting as if it were drunk.
How did they do it?
By slapping three stickers onto the road. Those stickers were unobtrusive – nearly invisible – to drivers, but machine-learning algorithms used by the Autopilot detected them as a line that indicated the lane was shifting to the left. Hence, Autopilot steered the car in that direction.
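To see why a few stickers are enough, it helps to know that lane-keeping systems of this kind typically fit a curve to whatever marking points the vision model reports, then steer toward the center of the fitted lane. The toy sketch below (plain NumPy, with invented coordinates standing in for the stickers) illustrates the general principle rather than Tesla’s actual code: a handful of spurious detections angled to the left drags the fitted lane edge, and with it the steering target, to the left.

```python
# Toy illustration: a few fake "lane marking" detections drag the fitted lane edge,
# and hence the steering target, to the left. All coordinates are invented.
import numpy as np

# Genuine lane-edge markings ahead of the car: (forward distance m, lateral offset m).
real_markings = [(5.0, 1.8), (10.0, 1.8), (15.0, 1.8)]

# Three small stickers placed so they read as the lane edge bending left.
sticker_markings = [(5.0, 1.6), (10.0, 1.1), (15.0, 0.6)]

def steering_target(markings, lookahead=15.0, lane_width=3.6):
    """Fit a straight lane edge to the detections and aim at the lane center."""
    x = np.array([m[0] for m in markings])
    y = np.array([m[1] for m in markings])
    slope, intercept = np.polyfit(x, y, 1)        # lane edge as y = slope * x + intercept
    edge_at_lookahead = slope * lookahead + intercept
    return edge_at_lookahead - lane_width / 2     # aim half a lane inside the edge

print("target with real markings:    %+.2f m" % steering_target(real_markings))
print("target with sticker markings: %+.2f m" % steering_target(sticker_markings))
# The second target comes out well to the left of the first one.
```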
From the report:
Tesla autopilot module’s lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn’t handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain…
Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results. Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident.
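The mitigation the researchers point to, ignoring a detected lane that plainly leads into opposing traffic, boils down to a plausibility check between the vision output and what the car knows about the road’s layout. A heavily simplified, hypothetical version of such a gate might look like the sketch below; the types and field names are invented for illustration and are not taken from Tesla’s software.

```python
# Hypothetical sanity gate along the lines the report suggests: ignore a detected
# lane change if following it would carry the car across the center line into the
# reverse lane. All names and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class DetectedLane:
    lateral_shift_m: float       # where the detected lane leads, relative to the car (+right, -left)

@dataclass
class RoadContext:
    center_line_offset_m: float  # distance from the car to the center line (negative = to the left)
    is_two_way_road: bool

def accept_lane(lane: DetectedLane, road: RoadContext) -> bool:
    """Reject a vision-detected lane that points into oncoming traffic."""
    crosses_center_line = lane.lateral_shift_m < road.center_line_offset_m
    if road.is_two_way_road and crosses_center_line:
        return False             # treat as a fake or misread lane and fall back to the driver
    return True

# The sticker scenario: the "lane" leads 1.2 m left, past a center line 0.9 m to the left.
print(accept_lane(DetectedLane(-1.2), RoadContext(-0.9, True)))   # -> False
```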
The Tesla teasers
Keen researchers have remotely flummoxed Teslas before. These are the guys who, a few years ago, remotely slammed on the brakes of a Tesla Model S from 12 miles away, popped the trunk and folded in the side mirror, all while the car was moving.
In their recent work on the forced lane change, they noted that Autopilot uses a variety of measures to prevent incorrect detections, including the position of road shoulders, lane histories, and the size and distance of various objects.
Another section of the paper explained how the researchers exploited a vulnerability that gave them root-privileged access to the APE, using a gamepad to remotely control the car; Tesla patched that flaw in its 2018.24 firmware release.
The report also showed how researchers could tamper with a Tesla’s auto-wiper system to activate wipers when rain isn’t falling. Tesla’s auto-wiper system, unlike traditional systems that use optical sensors to detect raindrops, uses a suite of cameras that feed data into an artificial intelligence network to determine when wipers should be turned on.
The researchers found that they could make small changes to alter images in a way that would throw off Tesla’s AI-based image recognition but would be undetectable to the human eye. Hence, they tweaked an image of a panda to the extent that the AI system interpreted it as a gibbon, though to humans it still looked like a picture of a panda. Using those pixel-level changes, they tricked Tesla’s auto-wiper feature into thinking rain was falling.
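That panda-to-gibbon trick is the classic adversarial-example demonstration, typically produced with the fast gradient sign method (FGSM): nudge every pixel a tiny step in the direction that increases the classifier’s loss. The sketch below runs FGSM against a stock ImageNet classifier in PyTorch; it assumes nothing about Tesla’s own network, and the random tensor merely stands in for a real photo.

```python
# A minimal FGSM-style sketch (the method behind the well-known panda/gibbon example).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Stand-in for a normalized camera frame or the panda photo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Take the model's own top prediction as the label (untargeted attack).
logits = model(image)
label = logits.argmax(dim=1)

# One gradient step of the loss with respect to the *pixels*.
loss = F.cross_entropy(logits, label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("prediction before:", label.item())
print("prediction after: ", model(adversarial).argmax(dim=1).item())
```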
However, that trickery requires feeding images directly into the system. The researchers say it may eventually be possible for attackers to place an “adversarial image” on road signs or other cars that does the same thing.
This isn’t the first time that researchers have fooled self-driving cars by slapping stickers somewhere in their view. In 2017, other researchers showed that putting stickers onto road signs could confuse autonomous cars’ systems.
Currently, efforts to secure self-driving systems against attack aren’t focused on tampering with the external, physical environment. That should perhaps change, the Keen researchers believe: such attacks are feasible, and carmakers should factor them into their work on securing the cars.
Having said that, it’s debatable whether attackers will crawl out onto the highway to paint redirecting lane markers or stick fake markings in the path of an oncoming Tesla. Yes, the Keen researchers used a controlled environment to demonstrate that a Tesla Model S 75 can be forced to follow a fake path without asking the driver for permission, as the Autopilot component is supposed to do when changing lanes…
…which should serve as another reminder that getting behind the wheel of a car comes with responsibilities: keeping your hands on said wheel in accordance with the relevant laws, and keeping your eyes on the road to make sure you’re not being led astray by stickers stuck on by researchers trying to fool the car’s computer into seeing a lane where there shouldn’t be one.