
Uber car software detected woman before fatal crash but failed to stop

Uber has reportedly discovered that the fatal crash was likely caused by a software bug in its self-driving car technology.

In March, 49-year-old Elaine Herzberg became what’s believed to be the first pedestrian killed by a self-driving car.
It was one of Uber’s prototypes that struck Herzberg as she walked her bicycle across a street in Tempe, Arizona on a Saturday night. There was a human test driver behind the wheel, but video from the car’s dash cam published by SF Chronicle shows that they were looking down, not at the road, in the seconds leading up to the crash.
Police say that the car didn’t try to avoid hitting the woman.
The SF Chronicle reports that Uber’s self-driving car was equipped with sensors, including video cameras, radar and lidar, a laser form of radar. Given that Herzberg was dressed in dark clothes, at night, the video cameras might have had a tough time: they work better with more light. But the other sensors should have functioned well during the nighttime test.
But Uber has now reportedly discovered that the fatal crash was likely caused by a software bug in its self-driving car technology, according to two anonymous sources who spoke to The Information.
Uber’s autonomous programming detects objects in the road. Its sensitivity can be fine-tuned so that the car only responds to true threats and ignores the rest – for example, a plastic bag blowing across the road would be dismissed as a false positive, something detected but not worth slowing down or braking for.
The sources who talked to The Information said that Uber’s sensors did, in fact, detect Herzberg, but the software incorrectly identified her as a “false positive” and concluded that the car did not need to stop for her.
The Information’s Amir Efrati on Monday reported that self-driving car technologies have to make a trade-off: either the car rides slow and jerky, slowing down or slamming on the brakes for objects that aren’t a real threat, or it gives a smoother ride and runs the risk of the software dismissing real objects – potentially leading to the catastrophic decision that a pedestrian isn’t an actual obstacle.
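To make that trade-off concrete, here’s a minimal, hypothetical sketch – not Uber’s actual code, and every name, number and threshold in it is invented for illustration – of how a single tunable confidence threshold might gate braking decisions. The lower the threshold, the more often the car brakes for harmless objects; the higher it is, the greater the risk of dismissing a real pedestrian.

# Hypothetical illustration only - not Uber's software, thresholds or data.
# A perception stack assigns each detected object a "threat confidence";
# one tunable threshold decides whether the planner reacts to it.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "plastic_bag", "pedestrian_with_bicycle"
    confidence: float   # 0.0 - 1.0: how sure the system is this is a real obstacle

def should_brake(detections, threshold):
    """Return True if any detection is confident enough to act on.

    A low threshold brakes for almost everything (jerky but cautious);
    a high threshold gives a smooth ride but may dismiss real hazards
    as "false positives".
    """
    return any(d.confidence >= threshold for d in detections)

# The same scene under two different tunings.
scene = [Detection("plastic_bag", 0.30), Detection("pedestrian_with_bicycle", 0.55)]

print(should_brake(scene, threshold=0.40))  # True  - cautious tuning reacts to the pedestrian
print(should_brake(scene, threshold=0.70))  # False - smooth-ride tuning dismisses both objects

Under that framing, what the anonymous sources describe would amount to Herzberg’s detection landing on the wrong side of wherever Uber had set its threshold.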


Efrati pointed to GM’s Cruise self-driving cars as being prone to falling on the overly cautious end of the spectrum, as they “frequently swerve and hesitate.”

[Cruise cars] sometimes slow down or stop if they see a bush on the side of a street or a lane-dividing pole, mistaking it for an object in their path.

In March, Uber settled with Herzberg’s family, avoiding a civil suit and thereby sidestepping questions about liability in the case of self-driving cars, particularly after they’re out of the test phase and operated by private citizens.
Arizona halted all of Uber’s self-driving tests following the crash. Other companies, including Toyota and Nvidia, voluntarily suspended autonomous vehicle tests in the wake of Herzberg’s death, while Boston asked local self-driving car companies to halt ongoing testing in the Seaport District.


15 Comments

When people rode/drove horses both the person and the horse could decide to stop if necessary (two heads are better than one, even if one is a horse). Maybe relying on software isn’t such an advancement.


Seems there was a human “assistant” in the car who wasn’t paying attention… sort of double-failure situation?


This right here. When digital computer systems and physical worlds merge, we have to acknowledge this not only as an issue with the software but also as a failure of the legacy fail-safe (the human) to function properly.


Doesn’t excuse the inaccuracy of the car’s software… but if I were that co-driver I would have my eyes glued to the road for self-preservation, let alone the safety of innocent bystanders! (Of course, we don’t know whether the person in the car actually had controls to let them take over – not clear from the video – or if they would have been able to react in time, or if they would have been able to swerve clear anyway. And we don’t actually know – short of anonymous hearsay from a journalist – whether this really was a “software bug”, or a misconfiguration of its image recognition, or even some sort of shortcoming in the sensor systems being used… or all of the above.)


If you watch the OB video, you’ll see that the woman effectively steps out from the shadows in front of the car. It would take a superhuman driver to have seen or predicted this event, yet you all criticize the lidar and its software?


I don’t see why it is unreasonable to expect self-driving vehicles to do better than this one did. The big idea of self-driving cars is to harness computing power to make cars *safer*, in the same sort of way they make numerical calculations faster and arithmetic less error-prone in accounting.
The car isn’t relying on a low-frame rate dashboard videocam, after all. My hope for self-driving cars is that they will ultimately be much more reliable in the dark than humans because they don’t have to work only with visible light to detect objects, and that they will allow higher average speeds but with lower top speeds by co-operating more effectively to make traffic more consistent.
I agree that the pedestrian seems to have made a dreadful mistake here – it seems she just crossed the road without looking. I am guessing this was an electric vehicle and thus whisper quiet.
Sadly, I doubt any of us here would have avoided crashing into her under the circumstances, but I sincerely hope that most of us would have got our foot onto the brake pedal before impact, which might have saved her life.
The man in the car spotted her before impact (the look on his face is terrifying), and he wasn’t looking forwards until the last moment. So the car’s software does indeed seem to have judged incorrectly.
We just don’t know yet whether it was a bug (e.g. object spotted but software took a wrong path and didn’t brake), a configuration problem (e.g. object spotted but misrecognised as far away and tiny), a sensor limitation (e.g. a sideways bicycle in front of a human messes up the backscatter used for detection), or what.


(linked from today’s article)
> co-operating more effectively to make traffic more consistent
This is my one expectation, once A.I. drivers are ubiquitous, that has potential to see an improvement within my lifetime (or even the next 100y). No need for signals when cars have their own NFC protocol (but ironically signals will likely be used more–hah), and traffic jams only a distant memory. Even navigating around other wrecks will become more efficient, and traffic will merely slow, as opposed to standing still.
Back to this incident, the pedestrian absolutely should have been more careful in the dark, dressed in black, and kept a watchful eye until the crossing was complete.
I’ve not seen the video – nor do I wish to – but as I understand it six months later, the vehicle had plenty of time to stop yet didn’t. Maybe more video has since been released. Sadly, any of us could easily have prevented Elaine’s death.


Agree w/Paul: far-higher expectations are reasonable. Either give me a car I can sleep in–or I’ll work the controls ThankYouVeryMuch.
And like Duck, as an Uber “safety” driver I’d be paranoid about letting my vigilance lapse – but I get it. After miles and miles of someone else driving, lapses happen. Humans are simply terrible at focusing through long spans of inactivity. I’m still reluctant to let go of passwords. However, a chief argument (Mark stated it rather eloquently once) is that expecting humans to change their behavior [in this case, to eschew weak passwords] is an inherently flawed approach. People will rapidly become complacent with a car on “autopilot,” even if it’s only level 2 or 3.
Self-driving cars have superior “vision” and can react in milliseconds. Before they’re ubiquitous they need to get massively better at assessment–and we can’t let them enjoy the fallback haven of “but the human should’ve seen it.”
To elaborate upon my prior comment’s loose ends:
I’ve seen the dashcam vid–unsurprisingly like mine: significantly lower quality than human vision, particularly in the dark.
A few seconds before impact, Elaine briefly occluded something bright in the distance–a light or a reflective sign. Though it’s a barely-noticeable part of this video, an attentive driver would’ve caught it. “Something up ahead” needn’t be massive–just enough to prompt slowing a bit, peering into the gloom.
Also, she was nearly clear, and even with one second’s notice, a swerve to the left might well have been the difference between clipping her bike’s rear tire and a direct collision at full speed. Coupled with slowing from “I may have seen something,” we’d have had a close call, but no incident.
An alert human watching the road could easily have avoided her.
:,(
That’s not to say of course that humans always pay attention–and Elaine indisputably should’ve been watching for headlights, particularly with no crosswalk at night. She is (was) culpable as well.


I always heard that even though the car could “drive itself” that the human behind the wheel was still responsible for any accidents caused by a failure of the technology. It’s clear that the technology, like any other software, will have bugs and the person needs to be ready to take control when necessary.


The problem is that people are lazy and easily distracted. We wouldn’t have nearly as many accidents as we do if people were always attentive. Self-driving just gives the human even more excuse to be blind, unfortunately… you only have to google stuff like using an orange to fool a Tesla into ‘thinking’ you have your hands on the wheel to see that.
I wish the ‘self-driving’ tech were focused more towards being the ‘2nd head’ and acting as a collision avoidance system rather than actually driving. Let the human drive and the computer play backup, with the ability to brake and swerve (but not accelerate).


Part of the reason behind self driving cars is to allow people who cannot operate a car to get about safely. In those situations it would not make sense for the passenger to be liable.


“thereby sidestepping questions about liability in the case of self-driving cars”
That’s disappointing. Someone needs to be held accountable for the involuntary manslaughter charges that would apply if a person had been driving. This isn’t a toaster, it’s a program (or programs) operating a machine involved in over 30,000 fatalities a year in the US, and around 1.3 million worldwide. We cannot allow machines, their operators or their engineers to become immune to charges for killing people.


It doesn’t exonerate Uber or mean that the company has avoided the issue of liability – I guess it just means that there won’t be a civil suit with the victim’s family that gets heard in court. (Reaching a civil settlement doesn’t absolve you from all criminal charges, does it?)


It depends on whether the state attorney general wants to pursue charges. Some of that decision is based on public pressure and the wishes of the victim’s family.

