
MIT study explores the ‘trolley problem’ and self-driving cars

As many as 10 million autonomous cars are predicted to hit public roads by 2020, and when they do, they’ll have difficult decisions to make. Understandably, there’s some urgency behind building decision-making systems capable of tackling the classic ‘trolley problem’, in which a person — or computer, as the case may be — is forced to decide whether to sacrifice the lives of several people or the life of one.
Encouragingly, scientists have begun laying the groundwork for this moral conundrum.
A new paper published today by MIT analyzes the results of an online quiz — the Moral Machine — that tasked respondents with making ethical choices regarding fictional driving scenarios. Over 2 million people from more than 200 countries addressed nine grisly dilemmas, which included choosing whether to kill law-abiding pedestrians or jaywalkers, young people or the elderly, and women or men.
Some of the findings aren’t terribly surprising. Collectively, those who responded to the poll said they’d save more lives over fewer, children over adults, and humans instead of animals.
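To make concrete how forced-choice answers like these roll up into the preferences reported above, here is a minimal sketch in Python. The response schema and field names are hypothetical (the Moral Machine's actual data format is not shown here), so treat it as an illustration of the tallying, not the study's pipeline.

```python
from collections import Counter

# Hypothetical response records; the real Moral Machine data format may differ.
responses = [
    {"dilemma": "young_vs_old", "choice": "spare_young"},
    {"dilemma": "young_vs_old", "choice": "spare_old"},
    {"dilemma": "young_vs_old", "choice": "spare_young"},
    {"dilemma": "humans_vs_animals", "choice": "spare_humans"},
]

def preference_rates(responses):
    """For each dilemma, return the share of respondents picking each option."""
    totals = Counter(r["dilemma"] for r in responses)
    picks = Counter((r["dilemma"], r["choice"]) for r in responses)
    return {
        (dilemma, choice): count / totals[dilemma]
        for (dilemma, choice), count in picks.items()
    }

print(preference_rates(responses))
# e.g. {('young_vs_old', 'spare_young'): 0.666..., ('young_vs_old', 'spare_old'): 0.333..., ...}
```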
But not every trend crossed geographic, ethnic, and socioeconomic lines.
People from less prosperous nations — particularly those with a lower gross domestic product (GDP) per capita — were less likely to choose to crash into jaywalkers than were people from industrialized countries with strong civic institutions.
Residents of Asian and Middle Eastern countries like China, Japan, and Saudi Arabia, meanwhile, were less inclined to save young people over older pedestrians and were more likely to spare wealthy people than were survey takers from North America and Europe. (The researchers chalk this up to a collectivist mentality.)
The authors admit the study can’t be taken as gospel truth. The Moral Machine quiz’s sample was self-selected, and its questions were posed in a binary, somewhat contrived fashion — every outcome resulted in the deaths of people or animals.
It is, however, intended to prompt further discussion.
“[The quizzes] remove messy variables to focus in on the particular ones we’re interested in,” Lin, one of the lead authors of the study, told The Verge. “[It’s] fundamentally an ethics problem … so this is a conversation we need to have right now.”

Moving forward

Even the most sophisticated artificial intelligence (AI) systems are far from being able to reason like a human, but some are coming closer to that point.
Tel Aviv, Israel-based Mobileye, which Intel acquired in a $15.3 billion deal last April, proposed one solution — Responsibility-Sensitive Safety (RSS) — last October at the World Knowledge Forum in Seoul, South Korea. In an accompanying whitepaper, Intel described a “common sense” approach to on-the-road decision-making that codifies good driving habits, like maintaining a safe following distance and giving other cars the right of way.
“The ability to assign fault is the key. Just like the best human drivers in the world, self-driving cars cannot avoid accidents due to actions beyond their control,” Amnon Shashua, Mobileye CEO and Intel senior vice president, said in a statement last year. “But the most responsible, aware, and cautious driver is very unlikely to cause an accident of his or her own fault, particularly if they had 360-degree vision and lightning-fast reaction times like autonomous vehicles will.”
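The heart of the RSS whitepaper is a closed-form rule for minimum safe following distance: the rear car must stay far enough back that, even if it accelerates for its full response time and then brakes only gently while the car ahead brakes as hard as possible, no collision occurs. Here is a short Python sketch of that rule; the parameter values at the bottom are illustrative assumptions, not figures from the whitepaper.

```python
def rss_safe_following_distance(
    v_rear: float,         # rear (ego) car speed, m/s
    v_front: float,        # front car speed, m/s
    response_time: float,  # ego response time rho, s
    a_max_accel: float,    # worst-case ego acceleration during response, m/s^2
    b_min_brake: float,    # gentlest braking the ego commits to, m/s^2
    b_max_brake: float,    # hardest braking the front car might apply, m/s^2
) -> float:
    """Minimum safe gap under RSS's longitudinal rule, clamped at zero."""
    # Worst case: the ego accelerates throughout its response time...
    v_after_response = v_rear + response_time * a_max_accel
    distance_ego = (
        v_rear * response_time
        + 0.5 * a_max_accel * response_time ** 2
        + v_after_response ** 2 / (2 * b_min_brake)  # ...then brakes gently
    )
    # ...while the front car brakes as hard as possible.
    distance_front = v_front ** 2 / (2 * b_max_brake)
    return max(distance_ego - distance_front, 0.0)

# Illustrative values: both cars at 20 m/s (~45 mph), 0.5 s response time.
print(rss_safe_following_distance(20, 20, 0.5, 2.0, 4.0, 8.0))  # ~40.4 m
```

In the RSS framing, fault attaches to whichever car violated a rule like this one, which is what Shashua means by “the ability to assign fault.”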
Google has conducted its own experiments. In 2014, Sebastian Thrun, founder of the search giant’s experimental X division, said its driverless cars would choose to collide with the smaller of two objects in the event of a crash. Two years later, in 2016, then-Google engineer Chris Urmson said they would “try hardest to avoid hitting unprotected road users: cyclists and pedestrians.”
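Neither statement amounts to published decision logic, but together they suggest a simple two-level ranking: avoid unprotected road users first, and among everything else prefer the smaller object. The sketch below is purely hypothetical; the categories and ordering are inferred from the quotes above, not taken from Google’s code.

```python
from dataclasses import dataclass

# Hypothetical obstacle model; Google's actual logic is not public.
UNPROTECTED = {"pedestrian", "cyclist"}

@dataclass
class Obstacle:
    kind: str    # e.g. "pedestrian", "car", "traffic_cone"
    size: float  # rough frontal area, m^2

def collision_cost(obstacle: Obstacle) -> tuple[bool, float]:
    """Rank obstacles: unprotected road users cost the most,
    then larger objects cost more than smaller ones."""
    return (obstacle.kind in UNPROTECTED, obstacle.size)

def least_bad_target(obstacles: list[Obstacle]) -> Obstacle:
    """If a collision is truly unavoidable, pick the lowest-cost obstacle."""
    return min(obstacles, key=collision_cost)

choice = least_bad_target([
    Obstacle("traffic_cone", 0.2),
    Obstacle("car", 4.0),
    Obstacle("cyclist", 0.8),
])
print(choice.kind)  # -> "traffic_cone"
```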
And the Defense Advanced Research Projects Agency (DARPA), a division of the U.S. Department of Defense, is investigating computational models that mimic core domains of cognition — objects (intuitive physics), places (spatial navigation), and agents (intentional actors) — as part of its Machine Common Sense Program.
Legislation might soon compel the development of such systems. As The Verge notes, Germany last year became the first country to propose guidelines for the decisions made by autonomous cars, suggesting that all human life be valued equally. Europe is working on policies of its own, which it will likely enforce through a certification program or legislation. And in the U.S., Congress has published principles for potential regulation.
In any case, carmakers have their work cut out for them. A number of high-profile accidents involving autonomous cars have depressed public confidence in the technology. Three separate studies this summer — by the Brookings Institution, think tank HNTB, and the Advocates for Highway and Auto Safety (AHAS) — found that a majority of people aren’t convinced of driverless cars’ safety. More than 60 percent said they were “not inclined” to ride in self-driving cars, while almost 70 percent expressed “concerns” about sharing the road with them.
Source: VentureBeat
