Moral Machine: MIT's autonomous car crash 'game'
MIT's Moral Machine questions what ethical factors should influence an autonomous car's decision making in the event of a deadly scenario

The advent of autonomous road vehicles from the likes of BMW, Google, Ford and other car manufacturers has raised countless questions around passenger and pedestrian safety. How, for example, should an autonomous car handle a crash in which every possible outcome causes at least one fatality?

A website from the boffins at Massachusetts Institute of Technology (MIT) examines this deadly conundrum in a particularly morbid fashion – by asking you to pick who dies in a number of preset and user-made "no-win" scenarios.

Called Moral Machine, the site poses a single life-or-death scene at a time, each with a guarantee that one or more of the people (or animals) shown are in inescapable mortal danger. The ethical dilemma lies in the hands of the "player", as you are forced to decide who is expendable and pick one of two possible outcomes.

The majority of the "judge" scenarios created for the test end in the death of either the autonomous car's passengers or the pedestrians after the car suffers a "sudden brake failure". Each scenario provides minor details about the potential victims.

For example, in one scene a young girl and an elderly woman are shown as potential victims, crossing on either side of a road. The former has disobeyed the law by crossing against a red pedestrian light, while the latter is abiding by the light signals. Whether age or legality becomes the defining factor in the autonomous car's decision making is left to you.
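To make the structure of these dilemmas concrete, the sketch below models one scenario as data: two outcomes, each with a list of potential victims described by attributes such as age group and whether they are crossing legally. This is a minimal illustration only; the class names, fields and decision rule here are hypothetical, and do not reflect MIT's actual data model or any real decision logic used by car makers.

```python
from dataclasses import dataclass

@dataclass
class Character:
    """One potential victim in a Moral Machine-style scenario (hypothetical model)."""
    description: str
    age_group: str          # e.g. "child", "adult", "elderly"
    crossing_legally: bool  # True if obeying the pedestrian signal

@dataclass
class Outcome:
    """One of the two choices the 'player' can make; its victims die."""
    label: str
    victims: list[Character]

# The girl-versus-woman dilemma described above, expressed as data.
swerve = Outcome("swerve left", [
    Character("young girl", "child", crossing_legally=False),
])
stay = Outcome("stay in lane", [
    Character("elderly woman", "elderly", crossing_legally=True),
])

def judge(a: Outcome, b: Outcome) -> Outcome:
    """Toy rule: pick the outcome that harms fewer lawful pedestrians.
    It exists only to show that any automated choice encodes a value judgement."""
    a_lawful = sum(c.crossing_legally for c in a.victims)
    b_lawful = sum(c.crossing_legally for c in b.victims)
    # Ties default to outcome a.
    return a if a_lawful <= b_lawful else b

print(judge(swerve, stay).label)  # -> "swerve left" under this rule
```

Swapping the rule for one based on age_group would flip the answer, which is precisely the kind of value judgement the site sets out to crowdsource.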

According to the website, Moral Machine has been designed as a platform for "building a crowd-sourced picture of human opinion" and for "discussion of potential scenarios of moral consequence" when autonomous machines are faced with moral dilemmas.

"From self-driving cars on public roads to self-piloting reusable rockets landing on self-sailing ships, machine intelligence is supporting or entirely taking over ever more complex human activities at an ever increasing pace," it states.

"The greater autonomy given machine intelligence in these roles can result in situations where they have to make autonomous choices involving human life and limb. This calls for not just a clearer understanding of how humans make such choices, but also a clearer understanding of how humans perceive machine intelligence making such choices."

The subject of moral choice and autonomous cars is a contentious one, with multiple manufacturers suggesting various long-term solutions. In May, Trent Victor, senior technical leader of crash avoidance at Volvo, told IBTimes UK that driverless cars would avoid ever having to make a life-or-death ethical decision by "proactively [staying] within a zone where conflicts are resolvable."

You can try out MIT's Moral Machine for yourself on the project's website.