Character & Context

The Science of Who We Are and How We Relate
Editors: Mark Leary, Shira Gabriel, Brett Pelham
Apr 08, 2019

Your (Future) Car’s Moral Compass

by Edmond Awad
[Image: A smart car drives on a virtual road. Several hazards appear in different colored boxes, including a pedestrian in a crosswalk, people on the side of the road, and wayfinding signs that might contain useful information.]

Picture a driverless car cruising down the street. Suddenly, three pedestrians run in front of it. The brakes fail and the car is about to hit and kill all of them. The only way out is if the car crosses to the other lane and swerves into a barrier. But that would kill the passenger it’s carrying. What should the self-driving car do?

Would you change your answer if you knew that the three pedestrians are a male doctor, a female doctor, and their dog, while the passenger is a female athlete? Does it matter if the three pedestrians are jaywalking?

Millions of similar scenarios were generated by an experimental website my fellow researchers and I created and named “Moral Machine.”

After the website received substantial media attention, more than four million people from 233 countries and territories visited it between June 2016 and December 2017. They rated scenarios like the one described above, which were inspired by the famous philosophical conundrum known as the trolley problem. Though all of the scenarios are unlikely in real life, what we learned from visitors' appraisals of them could help inform the regulation and programming of autonomous vehicles (AVs) and may also have implications for machine ethics generally. The main question we wanted to answer: How does the public think autonomous vehicles should resolve moral trade-offs? And could we use their responses to build a new kind of moral compass?
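At its simplest, turning millions of such responses into a "moral compass" means tallying which outcome people chose to spare in each dilemma. The sketch below is purely illustrative (the field names and data are invented, not the Moral Machine's actual schema) of how such preferences could be counted:

```python
# Hypothetical sketch: each response records which side of a dilemma a
# visitor chose to spare. The "spared" field and the sample data are
# illustrative assumptions, not the real Moral Machine data format.
from collections import Counter

responses = [
    {"spared": "pedestrians"},
    {"spared": "pedestrians"},
    {"spared": "passenger"},
    {"spared": "pedestrians"},
]

# Count how often each side was spared, then compute the share of
# respondents who chose to spare the pedestrians.
tally = Counter(r["spared"] for r in responses)
preference = tally["pedestrians"] / len(responses)  # 0.75
```

In practice, researchers would also break such tallies down by the attributes varied across scenarios (age, number of people, lawfulness), but the core idea is the same: aggregate many individual judgments into estimated preferences.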

But before we dive into that, it’s important to understand how driverless cars make moral decisions in the real world. This might seem like a problem for the future, but cars already are making such decisions. For example, let’s say a car is programmed to drive in the middle of a lane. Sometimes, the car may “decide” to drive closer to the right side or left side of the lane, a response to programming meant to optimize for various objectives. These could include maximizing passenger convenience or minimizing liability.
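One way to picture this kind of everyday "decision" is as minimizing a weighted cost over competing objectives. The following is a minimal sketch under invented assumptions (the weights, cost functions, and candidate offsets are illustrative, not how any real AV software works):

```python
# Illustrative sketch of lane positioning as weighted cost minimization.
# All weights and cost terms here are made-up assumptions.

def lane_position_cost(offset, w_comfort=1.0, w_liability=2.0):
    """Total cost of driving `offset` meters from lane center (+ = right)."""
    comfort_cost = offset ** 2                 # passengers prefer the center
    liability_cost = max(0.0, 0.5 - offset)    # shifting right reduces exposure
    return w_comfort * comfort_cost + w_liability * liability_cost

# Evaluate candidate offsets from -0.5 m to +0.5 m and pick the cheapest.
candidates = [x / 10 for x in range(-5, 6)]
best = min(candidates, key=lane_position_cost)
```

With liability weighted more heavily than comfort, the minimum-cost position shifts to the right of center; change the weights and the "decision" changes too, which is exactly why the choice of objectives is a moral question.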

Read more at Behavioral Scientist.

Edmond Awad is a postdoctoral associate in Iyad Rahwan's Scalable Cooperation group at MIT Media Lab.

This post is an excerpt from the Behavioral Scientist, shared with permission.

About our Blog

Why is this blog called Character & Context?

Everything that people think, feel, and do is affected by some combination of their personal characteristics and features of the social context they are in at the time. Character & Context explores the latest insights about human behavior from research in personality and social psychology, the scientific field that studies the causes of everyday behaviors.  
