You’re crossing a road on a dark night. There’s a self-driving car on course to run you down.
What are the chances that after it (hopefully) detects you, it will make a split-second decision that, if it has to risk killing somebody, it’s you rather than somebody else?
Sure, it sounds like a purely hypothetical twist on the famous ethical thought experiment about the trolley, where you have to choose between letting it run over five people on the tracks or pulling a switch to divert it onto a side track where only one person is killed.
However, it’s far from hypothetical. These cars are now being trained to make decisions that are being played out on roadways. Tragically, a decision made by an autonomous Uber car in March ended in the death of 49-year-old Elaine Herzberg. She’s believed to be the first pedestrian killed by a self-driving car.
The fatal choice made by artificial intelligence (AI) in that case was reportedly the result of a software glitch. But we're already seeing choices made in autonomous vehicle AI training that have more to do with what might seem like trivialities: namely, do we want a smoother ride that's more prone to ignore potential false positives (bags blowing around, for example, or bushes on the side of the road), or a jerky ride that errs on the side of "that object might be a human"?
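To make that tradeoff concrete, here's a minimal, purely illustrative sketch of how a single confidence threshold in a perception system can tilt a vehicle toward smoothness or caution. The function name, threshold values, and detection scores are all invented for illustration; real self-driving stacks weigh far more than one number.

```python
# Purely illustrative: one confidence threshold trading smoothness against caution.

def should_brake(human_confidence: float, caution_threshold: float) -> bool:
    """Brake if the perception system's confidence that an object is a human
    meets or exceeds the chosen threshold (hypothetical decision rule)."""
    return human_confidence >= caution_threshold

# A low threshold errs on the side of "that object might be a human"
# (jerkier ride, fewer missed pedestrians); a high threshold ignores more
# potential false positives such as blowing bags (smoother ride, more risk).
for score in (0.2, 0.5, 0.9):  # e.g. blowing bag, roadside bush, pedestrian
    cautious = should_brake(score, caution_threshold=0.3)
    smooth = should_brake(score, caution_threshold=0.8)
    print(f"confidence {score:.1f}: cautious tuning brakes={cautious}, smooth tuning brakes={smooth}")
```

The point isn't the numbers: it's that a tuning decision that sounds like an engineering triviality is also an ethical one.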
Unsurprisingly, answers to the question of who gets to be roadkill differ by culture, as is made evident by a platform called Moral Machine that was created by the MIT Media Lab and Harvard University, the University of British Columbia in Canada, and the Université Toulouse Capitole in France.
Moral Machine enabled the public to have a say about how autonomous vehicles make decisions in scenarios that use the trolley problem paradigm. The researchers wanted to determine what humans expect of their AI-powered computers, with an eye toward being sensitive to a given culture’s values when training AI. The results were published in Nature on Wednesday.
Preferences for whom to spare differ according to a number of cultural characteristics.
The researchers outlined clusters of countries with homogeneous moral preferences, using geolocation to identify Moral Machine participants’ countries of residence:
- The Western cluster, containing North American and many European countries of Protestant, Catholic, and Orthodox Christian cultural groups.
- The Eastern cluster, including countries such as Japan and Taiwan that belong to the Confucianist cultural group as well as Islamic countries such as Indonesia, Pakistan and Saudi Arabia.
- The Southern cluster, including Latin American countries of Central and South America, in addition to some countries with French influence (for example, metropolitan France, French overseas territories, and territories that were at some point under French leadership).
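For a rough sense of how clusters like these can be derived, here's a small, hypothetical sketch that groups countries by aggregated preference scores using hierarchical clustering. The country list, preference dimensions, and numbers are invented for illustration, and the method is only loosely analogous to the paper's actual analysis.

```python
# Hypothetical example: cluster countries by made-up aggregate preference scores.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

countries = ["USA", "Germany", "Japan", "Indonesia", "Brazil", "France"]
# Invented per-country preference vectors:
# [prefer sparing the many, prefer sparing the young, prefer sparing the lawful]
prefs = np.array([
    [0.8, 0.7, 0.6],
    [0.7, 0.6, 0.7],
    [0.4, 0.3, 0.8],
    [0.4, 0.3, 0.5],
    [0.6, 0.8, 0.4],
    [0.6, 0.8, 0.5],
])

# Ward-linkage hierarchical clustering into three groups, loosely analogous to
# the Western / Eastern / Southern split described above.
labels = fcluster(linkage(prefs, method="ward"), t=3, criterion="maxclust")
for country, label in zip(countries, labels):
    print(f"{country} -> cluster {label}")
```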
One result: respondents who come from individualistic cultures, which emphasize the value of each individual, showed a stronger preference for sparing the greater number of characters in a scenario.
Those from collectivistic cultures didn’t show that much preference for sparing larger numbers of individuals. Nor were they that impressed by youth: not surprising, given that such cultures tend to hold more reverence for their communities’ elders.
The difference between valuing the young vs. the elderly is the most important one for policymakers to consider, the researchers said, when it comes to trying to come up with a universal set of ethics for machines.
Other questions: Should more value be put on the lives of those who are abiding by the law? In more affluent regions, where there are more rigorous systems of laws and adherence to those laws, there’s more preference given to offing scofflaws. That gets flipped in areas with weaker institutions and poorer residents. From the write-up:
Participants from countries that are poorer and suffer from weaker institutions are more tolerant of pedestrians who cross illegally, presumably because of their experience of lower rule compliance and weaker punishment of rule deviation.
Preference for sacrificing characters in the scenario based on gender is another interesting aspect of differing cultural values. In nearly all countries, Moral Machine participants showed a preference for sparing the lives of women. However, rather predictably, that preference wasn’t as strong in regions with a higher rate of female infanticide and anti-female selective abortion.
[T]his preference [to spare the lives of females] was stronger in nations with better health and survival prospects for women. In other words, in places where there is less devaluation of women’s lives in health and at birth, males are seen as more expendable in Moral Machine decision-making.
When it comes to running over the indigent, Moral Machine preferences leaned toward sparing the well-off in countries with greater economic inequality.
Schone die Kinder? (Spare the children?)
The Moral Machine isn't without precedent: Germany has been looking at the moral implications of autonomous driving since 2016 and has defined 20 rules for autonomous driving that will be obligatory for upcoming laws regarding the production of autonomous cars.
But the researchers behind the Moral Machine note that their findings suggest that the German ethics guidelines aren’t that easy to generalize to other cultures.
Take, for example, German Ethical Rule No. 9, which states that any distinction based on personal features, such as age, should be prohibited. That’s not going to fly well with the strong preference expressed through the Moral Machine for sparing children and babies… nor the preference for sparing women over men, athletes over the obese, or executives over homeless people, for that matter.
That doesn’t mean that those who train autonomous vehicles shouldn’t follow that particular German guideline and refuse to give preferential treatment to, for example, babies over the geriatric… but policymakers should expect backlash if and when the day comes when such an AI sacrifices children in a “dilemma situation,” the researchers note.
TL;DR
The four most spared characters – as in, the characters that people across the world tend to agree a car should avoid mowing down, whether they're passengers in a vehicle that might be swerved into to avoid killing pedestrians, or pedestrians judged less worth saving – are babies, little girls, little boys, and pregnant women.
But hey, if you’re not in that group, chin up: at least you’re human. Universally, you and I are considered more worthwhile than pets. Perhaps the day will come when, as Steve Wozniak predicted (only half joking), we’ll be the pets of our AI robot overlords, but for now, we’re still top of the pile.