You’re crossing a road on a dark night. There’s a self-driving car on course to run you down.
What are the chances that after it (hopefully) detects you, it will make a split-second decision that, if it has to risk killing somebody, it’s you rather than somebody else?
Sure, it sounds like a purely hypothetical twist on the famous ethical thought experiment about the trolley, where you have to choose between letting it run over five people on the tracks or pulling a switch to divert it to a side track where only one person is killed.
However, it’s far from hypothetical. These cars are now being trained to make decisions that are being played out on roadways. Tragically, a decision made by an autonomous Uber car in March ended in the death of 49-year-old Elaine Herzberg. She’s believed to be the first pedestrian killed by a self-driving car.
The fatal choice made by artificial intelligence (AI) in that case was reportedly down to a software glitch, but we’re already seeing choices in autonomous vehicle AI training that have more to do with what might seem like trivialities: namely, do we want a smoother ride that’s more prone to ignore potential false positives (bags blowing around, for example, or bushes on the side of the road), or a jerky ride that errs on the side of “that object might be a human”?
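That smoothness-vs.-caution trade-off boils down to where you set a detector’s decision threshold. Here’s a minimal, purely hypothetical Python sketch of the idea; the Detection class, the confidence scores and the should_brake() function are illustrative assumptions, not anything taken from a real autonomous-driving stack:

```python
# Hypothetical sketch only: a single confidence threshold trading ride comfort
# against caution. Nothing here reflects any real vehicle's code.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # the classifier's best guess, e.g. "pedestrian" or "plastic_bag"
    confidence: float  # assumed score: how likely the object is something worth braking for

def should_brake(detection: Detection, threshold: float) -> bool:
    """Brake whenever the 'worth braking for' score clears the threshold.

    A low threshold gives the jerky, cautious ride (braking for windblown bags);
    a high threshold gives the smooth ride that risks ignoring a real person.
    """
    return detection.confidence >= threshold

detections = [
    Detection("plastic_bag", 0.15),
    Detection("roadside_bush", 0.05),
    Detection("pedestrian_with_bicycle", 0.55),
]

for threshold in (0.1, 0.5, 0.9):
    braking_for = [d.label for d in detections if should_brake(d, threshold)]
    print(f"threshold={threshold}: brake for {braking_for or 'nothing'}")
```

At a threshold of 0.1 the car brakes for the bag and the bush as well as the person; at 0.9 it brakes for none of them. Tuning that one number is, in effect, an ethical decision.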
Unsurprisingly, answers to the question of who gets to be roadkill differ by culture, as is made evident by a platform called Moral Machine that’s been created by MIT Media Lab and Harvard University, the University of British Columbia in Canada, and the Université Toulouse Capitole in France.
Moral Machine enabled the public to have a say about how autonomous vehicles make decisions in scenarios that use the trolley problem paradigm. The researchers wanted to determine what humans expect of their AI-powered computers, with an eye toward being sensitive to a given culture’s values when training AI. The results were published in Nature on Wednesday.
Preferences for whom to spare differ according to a number of cultural characteristics.
The researchers outlined clusters of countries with homogeneous moral preferences, using geolocation to identify Moral Machine participants’ countries of residence (a toy sketch of this kind of clustering follows the list):
- The Western cluster, containing North American and many European countries of Protestant, Catholic, and Orthodox Christian cultural groups.
- The Eastern cluster, including countries such as Japan and Taiwan that belong to the Confucianist cultural group as well as Islamic countries such as Indonesia, Pakistan and Saudi Arabia.
- The Southern cluster, including Latin American countries of Central and South America, in addition to some countries with French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership).
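For a rough sense of how clusters like these can be derived, here’s a toy Python sketch that groups made-up country-level preference vectors with hierarchical clustering. The countries, numbers and three-cluster cut are invented for illustration; they are not the study’s data, and the paper’s actual pipeline differs from this simplification.

```python
# Toy sketch: hierarchical clustering of per-country moral-preference vectors.
# All numbers are invented for illustration; they are not Moral Machine data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["USA", "Germany", "Japan", "Taiwan", "Brazil", "France"]

# Each row: [prefer sparing the young, prefer sparing more lives,
#            prefer sparing the lawful], on an arbitrary 0-1 scale.
preferences = np.array([
    [0.70, 0.80, 0.55],  # USA
    [0.65, 0.75, 0.60],  # Germany
    [0.35, 0.50, 0.70],  # Japan
    [0.40, 0.55, 0.65],  # Taiwan
    [0.80, 0.60, 0.40],  # Brazil
    [0.75, 0.65, 0.45],  # France
])

# Ward linkage on the preference vectors, then cut the tree into three clusters.
tree = linkage(preferences, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")

for country, label in zip(countries, labels):
    print(f"{country}: cluster {label}")
```

Countries with similar preference profiles land in the same cluster; run over real response data, this kind of analysis is what produces the Western/Eastern/Southern groupings described above.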
One result: respondents who come from individualistic cultures, which emphasize the value of each individual, showed a stronger preference for sparing the greater number of characters in a scenario.
Those from collectivistic cultures didn’t show that much preference for sparing larger numbers of individuals. Nor were they that impressed by youth: not surprising, given that such cultures tend to hold more reverence for their communities’ elders.
The difference between valuing the young and valuing the elderly is the most important one for policymakers to consider, the researchers said, when it comes to trying to come up with a universal set of ethics for machines.
Other questions: Should more value be put on the lives of those who are abiding by the law? In more affluent regions, where there are more rigorous systems of laws and adherence to those laws, there’s more preference given to offing scofflaws. That gets flipped in areas with weaker institutions and poorer residents. From the write-up:
Participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation.
Preference for sacrificing characters in the scenario based on gender is another interesting aspect of differing cultural values. In nearly all countries, Moral Machine participants showed a preference for sparing the lives of women. However, rather predictably, that preference wasn’t as strong in regions with a higher rate of female infanticide and anti-female selective abortion.
[T]his preference [to spare the lives of females] was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable in Moral Machine decision-making.
When it comes to running over the indigent, Moral Machine preferences were to favor the well-off in countries with less economic equality.
Schone die Kinder? (Spare the children?)
The Moral Machine has a precedent: Germany has been looking at the moral implications of autonomous driving since 2016 and has defined 20 rules for autonomous driving that will be obligatory for upcoming laws regarding the production of autonomous cars.
But the researchers behind the Moral Machine note that their findings suggest that the German ethics guidelines aren’t that easy to generalize to other cultures.
Take, for example, German Ethical Rule No. 9, which states that any distinction based on personal features, such as age, should be prohibited. That’s not going to fly well with the strong preference expressed through the Moral Machine for sparing children and babies… nor the preference for sparing women over men, athletes over the obese, or executives over homeless people, for that matter.
That doesn’t mean that those who train autonomous vehicles shouldn’t follow that particular German guideline and refuse to give preferential treatment to, for example, babies over the geriatric… but policymakers should expect backlash if and when the day comes when such an AI sacrifices children in a “dilemma situation,” the researchers note.
TL;DR
The four most spared characters – as in, those characters that people across the world tend to agree they should avoid auto-slaying, be they passengers in cars that may be swerved into in order to avoid killing pedestrians or pedestrians ripe for auto-well-they-aren’t-all-that-important selection – are babies, little girls, little boys, and pregnant women.
But hey, if you’re not in that group, chin up: at least you’re human. Universally, you and I are considered more worthwhile than pets. Perhaps the day shall come when, as Steve Wozniak predicted (only half joking), we’ll be the pets of our AI robot overlords, but for now, we’re still top of the pile.
Howard Gordon
Ms. Vaas, I believe that you meant to sat “crash” instead of “cash”.
Mark Stockley
That one’s on me. Fixed, thanks.
IT Guy
Mr. Gordon, I believe you meant to say ‘say’ instead of ‘sat’ ;)
G-Man
The three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Nick
*If* self-driving cars ever get to be commonplace enough for you to worry about who they mow down, then I can see older people wearing masks and coats with images of children on them.
Bryan
> *If* self-driving cars ever get to be commonplace enough for you to worry about
This is paradoxical when paired with my other comment, but if one car in 20,000 is autonomous, that’s even scarier. Knowing one in five could be a computer would at least keep buggy A.I. drivers at the forefront of one’s thoughts when crossing a street.
Elaine Herzberg may or may not have known her town was being used as a testing ground, but if every other car is computer-piloted, would she have looked up rather than assuming an approaching human on the highway would slow for her?
It’s absolutely on her for skipping the crosswalk and then not being extra-careful. Maybe an iPod was involved, maybe she misjudged the time she had, maybe she was zoned out. But a human would’ve slowed or swerved, missing her.
Maybe the close call would’ve pissed her off. But better ruining her night than ending it.
Tony Gore
Where I live and go out running, I would be a lot safer with self-driving cars. In one two-hour run, I was hit once and had to take evasive action (jumping out of the way, for example) 20 times. I have had drivers deliberately drive at me. The police are of little use – when given registrations, they simply tracked down the driver, who said “I didn’t see anyone”, and so that was OK. Having ethical rules for self-driving cars is one thing, but who is going to police them? And with Specsavers research showing that 25% of people driving don’t meet the vision requirements (round here I can well believe that), I think even the imperfect decision-making of self-driving cars will be vastly superior to a significant percentage of current drivers. At least self-driving cars will anticipate the need to slow down, unlike so many drivers who hope you will magically beam yourself out of the way and then only hit the brakes hard at the last moment so they stop inches from you.
ratteau
One could also look at 20 instances within 2 hours, see that the common denominator in all of them was you, and wonder whether your running habits contributed to the high frequency of close calls, at least in some small part?
Bryan
Tony,
1) first, as a fellow runner–stay safe buddy. Holy cow, I feel for you.
2) Until your comment I didn’t really appreciate how plentiful the trails are in my area; thanks for that. My standard 8-mile route is probably only ~8% busy road (maybe 35% rural road). Not sure where you are; I’m in the US Midwest, lots of fresh air and room to see, not many blind corners.
However,
3) I’ve frequently complained, calling my area’s drivers terrible, but it stems from indifference and selfishness toward other vehicles, not blindness; they’d still stop for you or me. I’d rather run in front of someone who will honk and shout than a computerized car that will assume I’m going to yield because I didn’t traverse the same crosswalk 600 times to get my eight miles.
4) No doubt there are those who should not drive yet do, but I wonder how much of the Specsavers research is driven by the fact that they’d like to sell more glasses.
roleary
Be interesting to see how this breaks down based on which countries do and do not have jaywalking laws.
brianc6234
So self driving cars don’t have brakes?
Anonymous
Maybe, maybe not
Bryan
No offense to coders in general, but they are human and therefore prone to mistakes.
After nearly 20 years dealing with buggy software of many stripes (and at times finding some shockingly-obvious-in-hindsight mistakes), I do not ever want to see self-driving cars. Not if the code was written by humans, at least. The well-oiled machine depicted in Minority Report is thousands of years into our future.
We humans, for all our faults, are mostly trying to do the right thing. And while human drivers make mistakes, all human drivers know what people look like without requiring thousands of lines of code*** for a description. They know not to watch television while driving, and we can all tell a plastic bag from a person walking a bike.
*** (written by those same buggy humans)
Bryan
I feel like this approaches admitting I may be wrong, but maybe I’m simply regarding auto-automobiles as a replacement for my driving. I can objectively say I’m a good driver.
However… two days ago an Indiana woman killed three children (all siblings, aged six, six, and eleven) and severely wounded a fourth, ultimately saying she hadn’t realized it was a school bus she was flying past, despite the flashing lights. Twin brothers and their sister. Appallingly tragic, and it flies completely in the face of everything I said above.
…maybe A.I. driving won’t be superseding a vast sea of superior drivers, merely an isolated few.
:,(
RKB
This (very short) story, STET, published in Fireside Fiction, looks at this concept:
[URL removed]