Six arguments in favor of banning lethal autonomous weapons.
- Franck Negro

- Aug 8, 2025
- 8 min read
In an open letter published on July 27, 2015, more than a thousand AI and robotics researchers, along with prominent industry figures, called for a ban on lethal autonomous weapons. Notable signatories included the head of Tesla and SpaceX, Elon Musk; the astrophysicist Stephen Hawking; Apple cofounder Steve Wozniak; the founder of DeepMind, Demis Hassabis; and the linguist Noam Chomsky. The authors propose a minimal definition of lethal autonomous weapons, distinguishing them from remotely operated or “remote-controlled” weapons, for which humans make all targeting decisions. As examples of the latter, they cite cruise missiles (flying munitions that carry an explosive payload over long distances and are guided to strike a specific target) and certain drones (pilotless aircraft used for surveillance, reconnaissance, or even targeted strikes, and controlled remotely by a human operator). What characterizes a lethal autonomous weapon, first and foremost, is thus its ability to select and engage targets without human intervention, as the definition given at the beginning of the letter makes clear:
"Autonomous weapons select and engage targets without human intervention. They could include, for example, armed quadcopters (a type of drone with four rotors enabling it to fly) that can search for and eliminate humans who meet certain predefined criteria."
Published on the occasion of IJCAI—an international AI conference held in Buenos Aires from July 25 to 31 of that same year—the letter warns of the existential risks that a military arms race in AI could pose to humanity, and calls for an outright ban on lethal autonomous weapons, more commonly known as “killer robots.” The authors and signatories remind us that research in artificial intelligence should work, in its entirety, toward AI that benefits society, rather than toward creating new means of destruction—means that could prove even more dangerous than chemical, biological, or nuclear weapons.
It is thus a polemical document, whose chief interest lies in foregrounding six key arguments in favor of banning lethal autonomous weapons: 1) the risk of an arms race; 2) the relative ease of producing such weapons; 3) the risk of uncontrolled proliferation, including toward non-state actors; 4) the threat to security and fundamental rights; 5) the lowering of the threshold for entering war; and 6) the misdirection of AI’s potential. What, precisely, does each of these arguments consist in?
The risk of an arms race. – The first argument, which one might call the “domino-effect” or “vicious-circle” argument, designates a situation in which a set of countries compete with one another to increase their power and security. In such a context of tension and rivalry between powers—as is the case today, for example, between the United States, China, and Russia—each country attempts to maintain an advantage over the others by strengthening its military deterrent, which forces, by mimetic reaction, every competing country to intensify its own armament efforts. We would thus witness an escalation of armaments that AI technologies would render all the more perilous in that they entail a genuine existential risk, on a par with nuclear weapons. In other words, a technological advance such as AI can be perceived by states as a threat to their security and sovereignty, and thereby produces, in turn, an increase in geopolitical instability by installing a climate of mutual distrust that heightens the risk of conflict. Not to mention that such conflicts could be triggered accidentally and uncontrollably: increasingly autonomous weapons, operating on the basis of AI algorithms, could decide on their own to take human lives, without regard for the political and ethical dimensions of a given situation, and without any human intervention or decision being possible.
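The escalation logic at work here can be made concrete with a toy model. The sketch below casts the arms race as a two-player prisoner’s dilemma, a standard framing in international-relations theory rather than one drawn from the letter itself, with payoff numbers invented purely for illustration.

```python
# A toy model of the arms-race dynamic described above, cast as a
# two-player prisoner's dilemma. This framing is standard in
# international-relations theory, not drawn from the letter itself,
# and the payoff numbers are invented purely for illustration.

# Payoffs (state_a, state_b) for each pair of choices.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # mutual restraint: stable and cheap
    ("restrain", "arm"):      (0, 5),  # the restrained state is left exposed
    ("arm",      "restrain"): (5, 0),  # the arming state gains an edge
    ("arm",      "arm"):      (1, 1),  # mutual escalation: costly for both
}

def best_response(opponent_choice: str) -> str:
    """The choice that maximizes state A's payoff against a fixed opponent."""
    return max(("restrain", "arm"),
               key=lambda mine: PAYOFFS[(mine, opponent_choice)][0])

# Whatever the rival does, arming is the dominant strategy...
assert best_response("restrain") == "arm"
assert best_response("arm") == "arm"

# ...so both states converge on ("arm", "arm") with payoffs (1, 1),
# even though mutual restraint at (3, 3) would leave both better off.
# This is the "mimetic reaction" the first argument describes.
```

Under such payoffs, arming is individually rational yet collectively ruinous: precisely the vicious circle the first argument names.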
Relatively easy production. – The second strong argument in favor of banning lethal autonomous weapons lies in their relative ease of production. Unlike nuclear weapons, to which lethal autonomous weapons (LAWs) are most often compared because of their level of dangerousness, manufacturing them requires neither costly raw materials nor complex infrastructure. They rely primarily on software and hardware components, such as sensors, advanced processors, and machine-learning algorithms, which not only evolve rapidly but are also becoming widely available at ever-lower cost, often as open-source technology.
The risk of uncontrolled proliferation. – The third argument follows logically from the second. It is precisely the combination of easy mass production and relatively low cost that could make the large-scale diffusion of these weapons difficult to control—and make them, in this respect, even more dangerous than conventional or nuclear weapons. One of the most worrying aspects highlighted by the authors of the letter is the potential use of these weapons in any type of conflict (guerrilla warfare, civil wars, or asymmetric conflicts pitting irregular armed forces against governments), and by actors not controlled by states, such as terrorist groups, criminal organizations, or dictators. Such actors could easily acquire them on the black market and commit the worst atrocities: carrying out ethnic cleansing, conducting targeted assassinations, destabilizing nations, subjugating populations, or organizing the selective killing of a given ethnic group.
The threat to security and fundamental rights. – Hence the fourth argument, which runs in the background of the letter and bears on the grave dangers that the uncontrolled proliferation of lethal autonomous weapons would pose to the security of persons and the respect of their fundamental rights: the right to life, to human dignity, to personal security, to respect for private life, and to freedom of conscience, opinion, religion, and expression. This underscores the risk of malicious use by ill-intentioned groups such as those mentioned above, up to and including the risk of genocide as defined in Article II of the Convention on the Prevention and Punishment of the Crime of Genocide, adopted on December 9, 1948.
Lowering the threshold for entering war. – This fifth argument—certainly the most controversial, because the hardest to adjudicate—highlights the fundamental change that the use of LAWs would induce in the dynamics of armed conflict, and in how such conflict is perceived by political decision-makers, the military, and public opinion, since human soldiers could henceforth be replaced by autonomous machines.
The notion of a “threshold for entering war,” drawn from military history and strategy, expresses the idea of a level of tension or threat beyond which a state decides to engage in armed conflict. While the notion is widely accepted and used by military strategists, its phrasing varies: one sometimes speaks of a “tolerance threshold” or a “trigger threshold” to indicate critical limits that must not be crossed before the outbreak of a war or a military intervention. This threshold depends on context and can take into account a multitude of factors, such as a country’s history, its military and technological capabilities, its economy, the alliances it has forged with other countries or within international organizations, the strategic vision of its leaders, the political ideology they defend, or the geopolitical situation at a given moment in the development of international relations. Finally, it most often fits within the military doctrine defined by the country’s leaders, generally formalized in official documents such as defense white papers, government publications, strategic directives, or parliamentary reports.
Now, by its very nature, war has always been associated with significant human costs. Deciding to go to war is not merely a logical decision consisting in assessing—on the basis of premises clearly defined by doctrine—whether a threshold has been crossed that justifies war or military intervention; it is also, and above all, accepting from the outset that men and women, both military and civilian, most of them young, will die, often after atrocious suffering. It is this drama—a human drama above all, borne by an entire generation—that makes the decision to go to war, or not, so difficult to take.
Governments must answer to their citizens not only for human losses, but also—and above all—for the meaning those losses can have in light of the cause supposedly justifying them, since it is precisely in that cause’s name that the decision to go to war was taken. In other words, at the heart of war lies first and foremost the political and moral decision to assume responsibility for large-scale human losses, without any certainty as to the final outcome. This means that the factors that trigger an entry into conflict are not only economic, political, strategic, military, geographic, geopolitical, or diplomatic, but also psychological.
Consequently, the drastic reduction in the number of human victims that LAWs promise seems, at first glance, to argue in favor of their use, since they would remove from war, in the traditional sense, what makes it a phenomenon of unequaled dramatic scope: the expectation of significant human losses. They would radically transform the habitual, centuries-old way we perceive war by neutralizing its most fundamental attribute: the legitimization of the death of thousands, or even millions, of human beings. Yet even as they might drastically reduce the human cost of war by replacing biological soldiers with autonomous machines, LAWs would thereby lower the psychological and moral obstacles that weigh on political decision-makers and enjoin them to favor, as far as possible, a diplomatic solution over a military one in resolving a conflict between states. In other words, the use of lethal autonomous weapons would lower the threshold for entering war—or “tolerance threshold”—beyond which a state might decide to intervene militarily, and would thus increase the temptation to use force to resolve crisis situations. What we would gain on the one hand (a reduction in victims), we might lose on the other (an increase in the use of force and in the number of armed conflicts), with medium- and long-term consequences characterized by prolonged conflicts, a risk of growing instability, constant geopolitical tensions between states, and permanent threats weighing on civilian populations.
The authors of the letter highlight a well-known principle among economists—particularly in energy economics—called the “rebound effect.” First theorized by the British economist William Stanley Jevons (1835–1882), it was observed when improvements in the efficiency of coal use in the United Kingdom ultimately led to an increase in total coal consumption. More generally, the rebound effect describes a situation in which the short-term efficiency gains brought about by a technological innovation are offset, in the longer term, by effects that were not initially foreseen.
Although generally applied to the domain of energy, the notion of the rebound effect can be extended to any situation likely to generate counter-intuitive secondary effects triggered by the introduction of a new technology. It is therefore a powerful tool for attempting to analyze and anticipate, in a holistic way, the risks that could arise from the use of AI technologies in specific contexts. It mobilizes consequentialist reasoning (centered on effects) that amounts to a calculation of gains and losses. In the military and geopolitical context invoked by the letter, the initial gain measured in human lives could be offset by an overall increase in armed conflicts, which could in the end—on top of the undesirable effects already mentioned—result in even more human losses.
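The gains-and-losses calculation can be made explicit with a back-of-the-envelope sketch. All figures below are invented for the sake of the example; nothing is drawn from the letter or from empirical data.

```python
# A back-of-the-envelope illustration of the rebound effect applied to LAWs.
# All figures are invented for the sake of the example; nothing here is
# drawn from the letter or from empirical data.

def expected_casualties(conflicts_per_decade: float,
                        casualties_per_conflict: float) -> float:
    """Total expected human cost over a decade under a simple linear model."""
    return conflicts_per_decade * casualties_per_conflict

# Baseline: conventional warfare.
baseline = expected_casualties(4, 10_000)          # 40,000

# With LAWs, suppose each conflict costs 80% fewer human lives, but the
# lowered threshold for entering war triples the number of conflicts.
moderate_rebound = expected_casualties(12, 2_000)  # 24,000: gain preserved

# If the threshold drops far enough, the "rebound" swallows the gain.
strong_rebound = expected_casualties(25, 2_000)    # 50,000 > baseline

print(baseline, moderate_rebound, strong_rebound)
```

Whether the per-conflict gain survives thus depends entirely on how strongly the lowered threshold multiplies the number of conflicts, which is exactly the uncertainty the fifth argument turns on.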
The misdirection of AI’s potential. – Another major argument advanced in the letter concerns the misdirection of financial and human resources toward increasing our destructive capacities, when we ought instead to concentrate them on AI’s exceptional potential to improve the general well-being of our societies: in medicine, in combating climate change, in education, in optimizing the use of natural resources, and so on. The authors even go so far as to invoke an ethics of research, asserting the right of researchers (chemists, biologists, physicists, engineers, etc.) to contest the diversion of their work toward technical systems—AI systems among them—that would reduce the societal benefits those systems could otherwise bring. The primary vocation of technology in general, and of artificial intelligence in particular, is not to pursue military objectives of power, but to improve human well-being and to address the major challenges we face today (the ecological crisis, improving healthy life expectancy, access to education and knowledge for all, etc.).
Not to mention the distrust that massive military use of artificial intelligence could provoke among the general public—toward its applications, toward AI research, and ultimately toward the scientific community as a whole. AI would then be negatively associated with technologies put at the service of great powers to consolidate their domination, conduct mass surveillance, or threaten people’s privacy and freedom.