Autonomous Cars: Whom Should We Choose to Kill in the Event of an Accident?

  • Writer: Franck Negro
  • Sep 21, 2025
  • 10 min read

At a time when artificial intelligence is entering every domain of social life, the deployment of autonomous cars offers a privileged terrain for examining some of the most fundamental questions in the ethics of artificial intelligence. These are the questions addressed by Jean-François Bonnefon’s book La voiture qui en savait trop. L’intelligence artificielle a-t-elle une morale ? (Éditions HumenSciences, 2019). The author, who holds a PhD in cognitive psychology, is Research Director at the Toulouse School of Economics and is internationally recognized for his work on the moral dilemmas raised by the use of artificial intelligence. He became particularly well known with the publication, in 2018, of a scientific article co-authored with a team of researchers from several academic institutions: The Moral Machine Experiment.


This project explores—through a research protocol based on collecting and analyzing more than 40 million decisions from individuals across 233 countries and territories—the unprecedented moral dilemmas raised by putting autonomous vehicles on the road. It is precisely the genesis and history of this project that Bonnefon retraces in his book La voiture qui en savait trop. L’intelligence artificielle a-t-elle une morale ? Here is how he explains, on the CNRS website, the project’s initial intentions:


"The Moral Machine project does not claim to determine what is ethical or moral. But it seems to us that before legislating and putting these cars on the roads, it is important for public authorities and manufacturers to know which solutions are the most socially acceptable in the eyes of the population."

In the constellation of questions related to ethics, it is essential to distinguish those that belong to normative ethics—whose aim is to determine the rules and principles we should follow in a given situation (normative ethics answers the question: What should I do? or What should we do?)—from those that belong to a purely descriptive approach, seeking to account, as objectively as possible, for: 1) the way individuals behave in a given situation, and 2) the reasons why they act as they do. The first type of question falls within philosophy, and more precisely moral philosophy, whereas the second belongs to psychology, but also to sociology, anthropology, and history. Bonnefon is careful to recall this distinction. In other words, if we are to locate The Moral Machine Experiment within a specific disciplinary framework, it belongs to what is called moral psychology.


Two unprecedented ethical questions. – It must indeed be acknowledged that the deployment of autonomous cars raises ethical questions that are unprecedented not only in kind but also in scale, given the place transport vehicles occupy in contemporary societies. And behind the promise of polluting less and of significantly reducing the number of accidents—and therefore of deaths—loom questions that are nothing short of vertiginous, because they are entirely new for humanity, namely:


  • How many fatal accidents will we allow these cars to have?

  • How, and according to what criteria, will we distribute the victims—who may in turn be the passengers of the vehicle, pedestrians, passengers in other vehicles, children, married couples, elderly people, athletes, people with disabilities, celebrities, homeless people, etc.?


Why are these questions unprecedented? – For a simple reason that seems to fall under common sense: surprised by the speed and unpredictability of unfolding events, the human driver does not have—unlike an autonomous vehicle—the capacity to analyze the moral dilemma in which he is engaged at the moment an accident becomes imminent; his first reflex is to save himself. Now, since any moral decision and action implies, on the part of the one who performs it, the possibility—even if relative—of analyzing and choosing among the different options at stake (that is, a form of freedom, which the very notion of a “moral dilemma” presupposes), it clearly appears that we cannot analyze accidents involving humans at the wheel in terms of moral dilemmas. In other words, when confronted with an accident with at least two possible outcomes (a necessary condition for a moral dilemma), the human driver, driven more by reflexes, loses his quality as a moral agent, that is, as free, rational, and responsible. Unfortunately, it is not (yet) possible to “program” our minds to act in a morally adequate manner in situations (car accidents) where our capacities as rational and free agents are, in a sense, neutralized by automatic, involuntary, and immediate reactions.


But what is not possible for a human driver, we can—and above all must—do for autonomous vehicles. Their deployment enjoins us to decide according to which principles and moral rules they should operate in accident cases that involve moral dilemmas. This amounts to asking the following question:


  • Whose life will we risk first in the event of an accident? Or, to put it differently: are we, as drivers, ready to sacrifice the lives of the passengers in the vehicle we are in, in the name of moral principles we judge superior?


The notion of a moral dilemma. – At the starting point of Bonnefon’s research project lies the notion of a “moral dilemma.” By “moral dilemma” one must understand a situation in which a person is confronted with two or more mutually exclusive options, each of which is justifiable from the standpoint of morality. A moral dilemma is thus characterized by a conflict between values and ethical principles that compete within a given context, and between which we find ourselves obliged to decide in order to untie the knot of the conflict at stake. In other words, it forces us to make choices and to renounce one or several options that nonetheless carry moral dimensions to which we attach importance—if not equal, at least real. Should one, for example, tell the truth to someone one loves, at the risk of causing them profound distress, or continue lying to preserve the tranquility and mental well-being in which they find themselves, and which knowledge of the truth would inevitably disrupt?


The trolley problem. – One of these dilemmas—arguably the most famous in ethical philosophy—is the trolley problem. It was first formulated in 1967 by the British philosopher Philippa Foot, in an article concerning the right to abortion: The Problem of Abortion and the Doctrine of the Double Effect, translated into French by Fabien Cayla and reproduced, in a collection of texts devoted to the philosophical question of responsibility, under the title Le problème de l’avortement et la doctrine de l’acte à double effet. This dilemma has since undergone many variations, notably those proposed by Judith Jarvis Thomson in a 1985 article titled, precisely, The Trolley Problem. Imagine the following situation:


"A runaway trolley is heading down a track, while five people are tied to that same track. If the trolley continues on its path, the five people will inevitably be killed. Now, it happens that you have the possibility of diverting the trolley onto a side track on which there is only one person. If you pull the lever, that person will be killed, but you will have saved the other five. What should you do?"

Yet, by the magic of technological progress and artificial intelligence, the trolley problem suddenly seems to move—through the case of autonomous cars—from a mere thought experiment for philosophy students to a tangible problem, about which all the stakeholders involved more or less directly in deploying these vehicles—states, international and regional organizations, academics, research centers, manufacturers, companies, civil society, etc.—are compelled to think. In this context, Bonnefon proposes a new, more modern version of the trolley problem, which he formulates as follows:


"And if a driverless car could not avoid the accident and had to choose between two groups of victims, how should it choose?"

First experiment: moral dilemma versus social dilemma. – Inspired by the trolley problem, Bonnefon and his fellow researchers imagined two scenarios: What action would you judge the most moral, and how would you want your car to be programmed if, in an accident it cannot avoid, it had the choice between killing ten pedestrians or swerving and killing only one? And what if, instead of a single pedestrian, it was the passenger inside the car whom the swerve condemned, by sending the vehicle to crash into an obstacle? Beyond the “moral” dilemma, this first experiment revealed a second type of dilemma, which economists call “social.” Indeed, analysis of the results showed that the vast majority of participants considered it more moral for the autonomous car to be programmed to save the greatest number of pedestrians, even if that meant sacrificing the passenger. But these same participants were not ready to buy a car programmed to kill its passenger. In other words, the results showed “that if driverless cars were obliged to sacrifice their passenger for the benefit of the greatest number, their sales would likely plummet drastically.”


This renewed attempt to resolve the famous trolley problem thus ran into a commercial impasse, with very real implications for the implementation of public policies aimed at reducing the number of deaths on the roads. Assuming autonomous cars would cause fewer fatal accidents, a consistent policy-maker would find himself obliged to impose on the market cars programmed to kill as few people as possible, even if that meant sacrificing their passengers. According to the studies conducted, this would inevitably deter a large proportion of drivers from buying autonomous cars in favor of traditional ones, thereby perpetuating accidents that could have been avoided. “In other words, our results suggest that to save more lives, we may have to program autonomous cars… to save fewer!”


Moral Machine: 40 million responses. – It is in this context that, in 2016, Moral Machine was launched by a team of researchers at the MIT Media Lab. Inspired by the famous trolley problem, Moral Machine is a web platform that aims to capture individuals’ moral preferences and then study their decisions in the face of the dilemmas posed by deploying autonomous vehicles, taking into account variables linked to culture and geographic location worldwide. Its main goal is therefore:


"(…) to explore how citizens want driverless cars to be programmed."

The means deployed—massive web-based data collection—allow researchers to analyze citizens’ ethical preferences across accident scenarios presented in pairs, combining victims’ characteristics (age, sex, social status, etc.) with environmental variables such as being inside the car or on the road, being in front of the car or on another trajectory, or the color of the traffic light (red or green) for pedestrians. In other words, the quality and depth of the collected data make it possible “to calculate the importance of each characteristic in predicting which accident the user will click on.” The user is confronted with a series of hypothetical scenarios in which an autonomous car must choose among several options involving human losses, such as sacrificing passengers or pedestrians, privileging children or adults, and so on.
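
To make the quoted idea concrete, here is a minimal sketch in Python of how such importance weights could be estimated from pairwise choices. The data, dimension names, and model are purely illustrative assumptions; the published Moral Machine analysis relies on conjoint analysis (average marginal component effects), not this exact regression.

```python
# Illustrative sketch only: simulated pairwise dilemmas, not Moral Machine data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000  # number of simulated dilemmas

# Each column is the difference (side A minus side B) on one dimension,
# e.g. number of characters, count of children, count of lawful pedestrians.
X = rng.integers(-2, 3, size=(n, 3)).astype(float)
true_w = np.array([1.2, 0.8, 0.5])   # assumed latent preference weights

p = 1 / (1 + np.exp(-X @ true_w))    # probability of sparing side A
y = rng.binomial(1, p)               # simulated user clicks

model = LogisticRegression().fit(X, y)
for name, w in zip(["number", "age (children)", "legality"], model.coef_[0]):
    print(f"{name:>15}: estimated weight {w:+.2f}")
```

The recovered coefficients approximate the weights that generated the choices, which is exactly the sense in which each characteristic’s “importance” can be read off the click data.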


It is thus more than 40 million decisions, made by users from 233 different countries and territories, that led in 2018 to the publication of a study in the journal Nature revealing participants’ moral choices and the associated cultural variations. These are the results that Bonnefon presents in chapters 21 and 22 of his book La voiture qui en savait trop. L’intelligence artificielle a-t-elle une morale ?


Nine preferences, with three clearly leading. – Chapter 21 thus highlights nine preferences, gathered and ranked within three groups according to the respective weight of each variable. These correspond either to pairs of accident scenarios (for example, killing an elderly person or five dogs), or to victims’ characteristics (age, gender, social status), or to environmental variables (being inside the car, being on the road, obeying a green light, etc.). In total, nine dimensions were analyzed: number of people, gender, age, health status, social status, species, position on the road, legality, and finally the status quo (going straight or changing direction). The data drawn from the 40 million decisions collected worldwide make it possible to define the moral preferences as follows:


  • Three dimensions clearly in the lead: 1) species (do you prefer to kill an animal or a human?), 2) number (do you prefer to save the larger group?), 3) age (do you prefer to save babies, children, adults, or elderly people?).

  • Two preferences emerging next: 1) saving pedestrians who cross in compliance with traffic laws (legality), 2) sparing people of high social status.

  • Four weaker preferences: 1) saving athletes rather than overweight people, 2) sparing women, 3) saving pedestrians rather than the car’s passengers, 4) preferring to let the car go straight rather than change direction.


Can we observe cultural preferences that bring to light differences in the weight each of the nine variables carries in the moral choices of a given country or territory? In other words, can countries or territories be grouped into relatively homogeneous blocs according to the intensity of each of the nine preferences? This is what Moral Machine sets out to analyze, isolating the 130 countries and territories that provided the greatest number of responses. On this basis, the authors construct, for each of these 130 countries, a nine-dimensional vector whose components are the scores obtained, in that country, on each of the nine variables of Moral Machine. The aim is to build the “moral profile” of each country or territory, in order to see whether blocs with “relatively similar” profiles can be formed, without the algorithm tasked with establishing them knowing the geographic location of any given country (a minimal sketch of such a clustering follows the list below). Three major blocs emerge, each in turn containing sub-blocs:


  • A “West” bloc. Corresponding to the so-called “Western” world, consisting of almost all European countries, with one sub-bloc grouping Protestant countries (Germany, Denmark, Finland, Iceland, Norway, etc.) and another including the United Kingdom and its former colonies. This result reassures the researchers that the answers given to the moral dilemmas proposed by Moral Machine do capture effects of geographic, historical, and religious proximity.

  • An “East” bloc. Consisting essentially of countries in Asia and the Middle East, ranging from Egypt to China, Japan, and Indonesia.

  • A “South” bloc. This third bloc is divided into two sub-blocs: one grouping all South American countries, while the other includes metropolitan France and its overseas territories (Martinique, Réunion, New Caledonia, French Polynesia), but also Morocco and Algeria—countries historically linked to France.
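
As an illustration of the procedure described above, here is a minimal sketch of clustering countries by the similarity of their nine-dimensional moral profiles, with no geographic information supplied to the algorithm. The country codes and profile values are placeholders; the study itself applied hierarchical clustering to the preference scores estimated from the real responses.

```python
# Illustrative sketch only: placeholder profiles, not the study's scores.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
countries = ["DE", "DK", "GB", "US", "JP", "CN", "EG", "FR", "MA", "BR"]
profiles = rng.normal(size=(len(countries), 9))  # one 9-D moral profile per country

# Ward linkage on the profiles alone; geography never enters the computation.
tree = linkage(profiles, method="ward")
blocs = fcluster(tree, t=3, criterion="maxclust")  # cut the tree into three blocs

for country, bloc in zip(countries, blocs):
    print(country, "-> bloc", bloc)
```

If real profiles resemble those of neighboring countries, blocs like “West,” “East,” and “South” emerge from the distances alone, with no geography supplied.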


Thus, whereas the “East” bloc accords relatively less importance to age (the elderly are sacrificed there more than the young, but less than in other regions of the world), the “South” bloc, for its part, manifests a marked preference for saving women. These cultural variations underscore several major difficulties:


  • That it will be extremely difficult to agree on a global moral code applicable to autonomous cars;

  • That attempts to elaborate universal moral principles for AI—such as the 23 Asilomar principles or the Montreal Declaration—largely underestimate cultural differences;

  • That the central problem of AI ethics, consisting in aligning machines’ behavior with fundamental moral values, is profoundly challenged by the social sciences;

  • That before seeking to align machines’ behavior with moral values, we must first develop tools capable of quantifying those values and their cultural variations.
