
Researchers trick Tesla’s Autopilot into driving into oncoming traffic

They placed unobtrusive stickers that drivers wouldn't see but would fool Autopilot into thinking the lane was veering off to the left.

Researchers have discovered the latest way to drive a Tesla off track (and into oncoming traffic), and it needs only the simplest of hacking tools: a can of paint and a brush will do it, or small, inconspicuous stickers that can trick the Enhanced Autopilot of a Model S 75 into detecting and then following a change in the current lane.

Mind you, the feature they “attacked” for their research project is for driver assistance, not really an autopilot, regardless of what Tesla calls it.

Tesla’s Enhanced Autopilot mode has a range of features, including lane centering, self-parking, automatic lane changes with the driver’s confirmation, and the ability to summon the car out of a garage or parking spot.

To do all that, it relies on cameras, ultrasonic sensors and radar, as well as hardware that allows the car to process data with deep learning and react to conditions in real time. APE, the Autopilot electronic control unit (ECU) module, is the key component of Tesla’s auto-driving technology, and it’s where researchers at Keen Security Lab – a division of the Chinese internet giant Tencent – focused their lane-change attack.

They explained their latest Tesla attack in a recent paper, reverse-engineering several of Tesla’s automated processes to see how they’d do when environmental variables changed.

One of the most unnerving things they accomplished was to figure out how to induce Tesla’s Autopilot to steer into oncoming traffic. In the best of all possible worlds, in real life, that wouldn’t happen, given that a responsible, law-abiding driver would have their hands on the wheel and would notice that the car’s steering was acting as if it were drunk.

How did they do it?

By slapping three stickers onto the road. Those stickers were unobtrusive – nearly invisible – to drivers, but machine-learning algorithms used by the Autopilot detected them as a line that indicated the lane was shifting to the left. Hence, Autopilot steered the car in that direction.
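To see why a few stickers can have such an outsized effect, consider a deliberately simplified, hypothetical lane-keeping model: detect lane-marking points in the camera image, fit a curve through them, and steer toward the fitted lane. The sketch below is not Tesla's pipeline; the point values, look-ahead distance and straight-line fit are illustrative assumptions. It simply shows how a handful of spurious "marking" detections can drag the fitted lane, and with it the steering target, to the left.

```python
# Minimal, hypothetical sketch: fit a lane model to detected marking points
# and read off the lateral target at a look-ahead distance. Not Tesla's code.
import numpy as np

def fit_lane(points):
    """Fit a straight lane model x = m*y + c to marking points,
    where y is distance ahead (m) and x is lateral offset (m)."""
    y = np.array([p[0] for p in points], dtype=float)
    x = np.array([p[1] for p in points], dtype=float)
    return np.polyfit(y, x, 1)

def lateral_target(coeffs, lookahead_m=20.0):
    """Lateral position of the fitted lane at the look-ahead distance."""
    return float(np.polyval(coeffs, lookahead_m))

# Genuine markings of a straight lane edge: lateral offset ~0 at all distances.
real_markings = [(d, 0.0) for d in range(5, 40, 5)]

# Three small "stickers", detected as extra marking points drifting left
# (negative x) further down the road.
stickers = [(25, -0.4), (30, -0.9), (35, -1.5)]

print("clean lane target at 20 m:   %+.2f m" % lateral_target(fit_lane(real_markings)))
print("spoofed lane target at 20 m: %+.2f m  (controller steers left)"
      % lateral_target(fit_lane(real_markings + stickers)))
```

A real perception stack uses many more cues, but the basic failure mode, a few adversarial inputs shifting a fitted estimate, is the kind of behaviour the Keen team exploited with paint and stickers.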

From the report:

Tesla autopilot module’s lane recognition function has a good robustness in an ordinary external environment (no strong light, rain, snow, sand and dust interference), but it still doesn’t handle the situation correctly in our test scenario. This kind of attack is simple to deploy, and the materials are easy to obtain…

Tesla uses a pure computer vision solution for lane recognition, and we found in this attack experiment that the vehicle driving decision is only based on computer vision lane recognition results. Our experiments proved that this architecture has security risks and reverse lane recognition is one of the necessary functions for autonomous driving in non-closed roads. In the scene we build, if the vehicle knows that the fake lane is pointing to the reverse lane, it should ignore this fake lane and then it could avoid a traffic accident.
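The mitigation the researchers describe amounts to a plausibility check: before following a visually detected lane, compare it against an independent source of truth and refuse to follow a lane that leads into opposing traffic. Here is a minimal sketch of that idea; the DetectedLane and MapInfo types, the one-metre drift threshold and the map lookup are all hypothetical, and nothing here reflects how Tesla's software is actually structured.

```python
# Hypothetical sketch of the "reverse lane recognition" cross-check the Keen
# researchers call for. All types, fields and thresholds are made up for
# illustration; this is not Tesla's architecture.
from dataclasses import dataclass

@dataclass
class DetectedLane:
    end_lateral_m: float         # where the detected lane ends up laterally (negative = left)

@dataclass
class MapInfo:
    oncoming_lane_on_left: bool  # does the map say the lane to the left carries oncoming traffic?

def should_follow(lane: DetectedLane, world: MapInfo, max_drift_m: float = 1.0) -> bool:
    """Follow the camera-detected lane only if it doesn't drift into a lane
    that an independent source says is reserved for oncoming traffic."""
    drifting_left = lane.end_lateral_m < -max_drift_m
    if drifting_left and world.oncoming_lane_on_left:
        return False             # vision contradicts the map: ignore the fake lane
    return True

# The sticker attack: vision reports the lane veering ~1.5 m to the left,
# while the map says the left lane belongs to oncoming traffic.
print(should_follow(DetectedLane(end_lateral_m=-1.5),
                    MapInfo(oncoming_lane_on_left=True)))   # -> False
```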

The Tesla teasers

Keen researchers have remotely flummoxed Teslas before. These are the guys who, a few years ago, remotely slammed on the brakes of a Tesla Model S from 12 miles away, popped the trunk and folded in the side mirror, all while the car was moving.

In their recent work on forcing lane changes, they noted that Autopilot uses a variety of measures to prevent incorrect detections, including the position of road shoulders, lane histories, and the size and distance of various objects.

Another section of the paper explained how the researchers exploited a now-patched, root-privileged access vulnerability in the APE by using a game pad to remotely control a car. Tesla fixed that vulnerability in its 2018.24 firmware release.

The report also showed how researchers could tamper with a Tesla’s auto-wiper system to activate wipers when rain isn’t falling. Tesla’s auto-wiper system, unlike traditional systems that use optical sensors to detect raindrops, uses a suite of cameras that feed data into an artificial intelligence network to determine when wipers should be turned on.

The researchers found that they could make small changes to images that would throw off Tesla’s AI-based image recognition but would be undetectable to the human eye. Hence, they tweaked an image of a panda to the extent that the AI system interpreted it as a gibbon, though it still looked to humans like a picture of a panda. Using those pixel-level changes, they tricked Tesla’s auto-wiper feature into thinking rain was falling.
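The pixel-level trick described above is an adversarial example, and one standard way to build one is the fast gradient sign method (FGSM): compute the gradient of the classifier's loss with respect to the input image, then nudge every pixel a tiny amount in the direction that increases the loss. The sketch below uses a toy PyTorch classifier and illustrative values; it shows the mechanism only and says nothing about how Keen generated their perturbations or about Tesla's actual models.

```python
# A minimal FGSM-style sketch of the kind of "imperceptible" pixel change the
# researchers describe. The model below is a toy classifier, not Tesla's
# network, and the epsilon/shape values are illustrative assumptions.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=2 / 255):
    """Nudge every pixel by +/- epsilon in the direction that most increases
    the classifier's loss -- a change far too small for a human to notice."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy stand-in for an image classifier (10 classes, 32x32 RGB input).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # a "panda", as far as this demo is concerned
label = torch.tensor([0])          # its correct class index

adv = fgsm_perturb(model, image, label)
print("max pixel change:", (adv - image).abs().max().item())   # ~0.0078
print("prediction before:", model(image).argmax().item(),
      "| after:", model(adv).argmax().item())
# Against a real, trained classifier, perturbations like this (or a few
# iterations of them) are what turn a "panda" into a "gibbon".
```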

However, that trickery requires feeding images directly into the system. Eventually, the researchers say, it may be possible for attackers to display an “adversarial image” on road signs or other cars that does the same thing.

This isn’t the first time that researchers have fooled self-driving cars by slapping stickers somewhere in their view. In 2017, they showed that putting stickers onto road signs could confuse autonomous cars’ systems.

Currently, efforts to secure self-driving systems against attack aren’t focused on manipulation of the external, physical environment. That should perhaps change, the Keen researchers believe: such attacks are feasible, and they should be factored into carmakers’ efforts to secure their vehicles.

Having said that, it’s debatable whether attackers will crawl out onto the highway and paint redirecting lane markers or affix stickers in the path of an oncoming Tesla. Yes, the Keen researchers used a controlled environment to demonstrate that a Tesla Model S 75 can be forced to follow a fake path without asking the driver for permission, as the Autopilot component is supposed to do when changing lanes…

…which should serve as another reminder that getting behind the wheel of a car comes with responsibilities, like keeping your hands on said wheel in accordance with the relevant laws, and keeping your eyes on the road to make sure you’re not being led astray by stickers stuck on by researchers trying to fool the car’s computer into seeing a lane where it shouldn’t be.

10 Comments

My Ford Focus lane tracker (which is only a warning system, so no danger) regularly follows the blacked-out lines from roadworks. I dread to think how often it would change lanes into other cars if it were steering!

Just a thought: with BAE and others looking at military versions of self-driving trucks and vehicles, could this be a free playground to practice their craft for future operations or threats?
The Chinese PLA sponsors a lot of technology companies in China that provide solutions to perceived Western threats on and off the next battlefield. The fact that Tesla was the first publicly accessible deployment of this technology that is easy to get your hands on and drive makes me wonder. I give a hat-tip to the team that found this.
The Chinese, to their great credit, are masters of the long game, multiple steps ahead of Western politicians.
Anonymous Deep Thinker

> getting behind the wheel of a car comes with responsibilities, like keeping your hands on said wheel in accordance with the relevant laws, and keeping one’s eyes on the road to make sure you’re not being led astray by stickers stuck on by researchers trying to fool the car’s computer into seeing a lane where it shouldn’t be

Absolutely correct. However, it inexorably raises the question, “then why have a self-driving vehicle?” I’d rather know I’m responsible for safe navigation (and just DO IT) than be an agitated passenger, watching the most dull movie of all time (my daily drive to work), constantly on edge waiting for the moment it suddenly becomes my job to correct the error of another (virtual) driver.

Exactly! Furthermore, it’s very difficult to maintain the situational awareness needed to take immediate corrective action when the human driver/passenger isn’t engaged in the moment-by-moment decision-making of driving.

I’m still in the apparent minority, having vehement opposition to a car’s computer surpassing basic notification.
A tire is low. Engine overheating. I’m even okay with “check air filter.”

I’ve still yet to see a report even remotely close to changing my mind on letting Elon drive me anywhere from his couch. And I like the guy.

Yes…I sound like a grandpa in Scooby Doo telling kids to stay off my lawn, but everything I read is a variant on the same headline:
COMPUTER TRIES TO OUTSMART HUMAN, FAILS DISAPPOINTINGLY.

My daily interactions with JavaScript predicting (poorly) how I’ll use a web page.
The Boeing 737 Max grandiosely assuming a trained pilot can’t recognize a stall situation–despite not consulting the artificial horizon, airspeed, or altimeter for a few seconds.
Every interface needs a button that says
“Sit back, shut up, and quit trying to help–I know what the hell I’m doing.”

Minority perhaps, but certainly not alone.
I’m ok with ‘defensive’ smarts for avoiding accidents (braking, *maybe* swerving though it makes me very nervous), but totally against having self-drive. We *already* have perfectly good self-driving vehicles – they’re called taxi’s and buses.

Anything that allows the human to become the ‘back-up driver’ is a nightmare waiting to happen – we cannot concentrate for long periods of inactivity. The whole over-the-air updating thing also sounds like a remote hack waiting to happen, and when there’s control over throttle and steering…well, the mind boggles. At bare minimum, there needs to be a driver-accessible kill switch/fuse that powers off the ‘driving computer’ but leaves the basic car running functions, err, running. The idea being that the human can revert the car to a ‘dumb’ car at the yank of a fuse/flick of a switch when weird stuff happens – but I suspect that would require a re-architecture.

Yeah, think about how the computer failed to outsmart you the next time your smartphone reveals your exact location to Google in real time (oh, you switched off location? Tough luck, that doesn’t help).

I honestly think that the current style of self-driving car is the worst and most dangerous option possible. You have a car that is completely in control unless the human operator actively takes over. It’s been shown in several studies that humans are pretty bad (or very bad) at maintaining concentration on a process they are not involved in. So the car is driving and can be fooled, the driver is half asleep with boredom (or fully asleep in some Tesla cases), yet the driver is responsible, not the autopilot company.

Either we need to get to the point where we can trust the cars to drive themselves 100% of the time and I can be safely and legally drunk and asleep in the back seat or the human needs to keep driving actively in order to maintain concentration (I’ll give you cruise control and some aids).

