
Ethics Put to the Test by AI: The Dangers of a “Minimalist” Ethics (2)

  • Writer: Franck Negro
  • Jan 23
  • 9 min read

What kind of ethics does AI require? Given the radical, structural, and systemic reconfiguration—and indeed metamorphosis—that artificial intelligence brings about across the whole range of individual, social, and institutional activities, it is no longer a matter merely of calling upon the philosophical tradition to respond to the new ethical stakes raised by these transformations. Rather, we must in a sense compel philosophy to work upon itself, and to subject its historical foundations to critical scrutiny. In other words, it is by examining its concepts, its methods of analysis, and the theoretical frameworks it has inherited from its own history—and by exploring new trajectories in a logic of self-transformation of philosophy itself—that we will not only remain faithful to its original spirit, but also become capable of addressing, in an adequate manner, the unprecedented ethical problems raised by artificial intelligence.


Within this framework, a first danger appears: the a priori privilege one might grant to a particular form of ethics without even taking the trouble to assess whether it is fully adequate to the new situations generated by the deployment of AI systems in our societies. No one, Ménissier notes, can today say with certainty which form of ethics would be most adequate to AI, given what is at stake for us: the preservation of a framework without which no ethics is possible, namely the clear designation of an author to whom the responsibility for an act can be imputed. To understand Ménissier's entirely justified remarks, we must situate them, at least briefly, within the history of ethical philosophy from which they draw their theoretical framework.


Official historiography, strongly influenced by analytic ethics, has made a habit of distinguishing three major traditions of normative ethics—or, again, three major currents of moral philosophy—within which it is possible to classify the principal authors of the Western ethical tradition: (1) consequentialism, (2) deontologism, and (3) virtue ethics. Chronologically, virtue ethics is the oldest. The reference author remains Aristotle (384–322 BCE), and his foundational work, the Nicomachean Ethics. Although other ethical works are attributed to Aristotle—the Eudemian Ethics and the Magna Moralia, whose authenticity is disputed—it is always the Nicomachean Ethics that is invoked when speaking of the first truly systematic attempt to develop a theory of virtue understood as an acquired disposition (as opposed to a natural one) to act morally in a continuous and voluntary manner.


Although still present in certain scholastic authors (Thomas Aquinas) and in the classical age (Baruch Spinoza), this particular conception of moral life would be largely eclipsed in the modern and contemporary periods by the two other major traditions of philosophical ethics: deontologism in its Kantian version, with a key work, Groundwork of the Metaphysics of Morals (1785); and consequentialism—particularly Jeremy Bentham and John Stuart Mill—with two major books, Bentham’s An Introduction to the Principles of Morals and Legislation (1789) and Mill’s Utilitarianism (1861).


Virtue ethics would make its great return at the very beginning of the 1980s with the publication of the Scottish philosopher Alasdair MacIntyre's After Virtue (1981). Many agree that this book had, in the field of ethical philosophy, an impact roughly comparable to that of John Rawls's A Theory of Justice in political philosophy upon its publication in 1971. Among other contemporary thinkers who contributed to renewing this type of ethics, one may cite Alasdair MacIntyre (1929–2025) of course, but also Elizabeth Anscombe (1919–2001), Philippa Foot (1920–2010), and Bernard Williams (1929–2003), as well as Amartya Sen (born 1933) and Martha Nussbaum (born 1947).


Let us hasten to add that these three major moral theories—virtue ethics, deontologism, and consequentialism—are meant only to provide the mind with a simple and convenient classificatory scheme, grounded in broad characteristic traits shared by ethical theories, independently of their specific differences. This convenient but reductive way of compressing Western ethical thought into three ideal types—abstract constructions intended to bring out the most significant traits of a given phenomenon—unfortunately most often prevents us from taking into consideration the richness of the theories bequeathed by more than 2,500 years of philosophical history, without even mentioning other traditions, such as Buddhism or Confucianism, for example.


All three theories aim to provide a universal framework for addressing and resolving moral dilemmas, and they converge upon three fundamental questions that each attempts, in its own way, to answer: (1) What is a moral problem? (2) What is it to act morally? (3) How can the value of our moral choices be justified? It is therefore common today among academic philosophers to examine our moral intuitions in the light of these three major currents of the Western tradition, which can, in broad strokes, be summarized as follows.


  • Consequentialism: For consequentialists, the moral value of an action depends neither on the agent’s intentions nor on his character or dispositions to act in accordance with virtue, but on the consequences or states of affairs the action is likely to produce. In other words, a consistent consequentialist must specify with precision the central notion of “desirable consequences” or “good consequences” aimed at by the action in question. Is it justice? Pleasure? Well-being? Knowledge? The amount of wealth produced? Equity among individuals? Social justice? The avoidance of suffering? An aesthetic gain, such as the beautification of a neighborhood or a landscape? And if it is justice, pleasure, or well-being, what precisely is meant by these terms? In what sense are they not only definable but, above all, reliably identifiable? One may also note the extreme breadth of the domains to which the consequentialist approach applies, which are not only ethical but also political (political consequentialism), economic (economic consequentialism), even aesthetic (aesthetic consequentialism). Consequentialism in general may thus be described as “teleological” by nature (from the Greek telos, meaning end, aim, or objective). For it, whatever the forms it may take, it is the objectives pursued by an action that ultimately justify whether it is praiseworthy or not. Do the choices or decisions you have made produce the desirable consequences, according to clear criteria defined beforehand? If so, then the action will be judged good, whatever the intentions—commendable or not—that were yours at the outset.

  • Deontologism: In contrast to consequentialism, deontologism rejects the idea that consequences constitute the primary criterion for evaluating the moral value of an action, and that the end might, in some sense, justify the means. What matters above all are the intentions—Kant, the principal representative of deontological morality, speaks of “good will”—that motivated the moral agent to perform such and such an action. The deontologist holds that there exist absolute moral constraints—rights, obligations, and duties (from the Greek deon, “that which is binding”)—that categorically limit what we may do, independently of the consequences of our decisions. In other words, the moral value of our actions rests upon reciprocal obligations we owe one another and that we have the duty to respect absolutely, on pain of infringing upon the rights of others. One may find an early version of deontologism in the Decalogue of the Old Testament. Its modern form, however, comes to us from the German philosopher Immanuel Kant and his notion of the categorical imperative. Among contemporary philosophers who follow in his wake, one may cite Robert Nozick, Ronald Dworkin, and Philippe Van Parijs.

  • Virtue ethics: Virtue ethics, finally, is oriented first toward the notion of the “good”—understood as an ideal of life to be realized—rather than toward that of the “just,” understood as compliance with a rule of action defined beforehand. According to proponents of virtue ethics, for a person to act morally it is not enough that he possess an appropriate concept of what it is right or wrong to do, nor even that he have argumentative resources to resolve moral dilemmas and justify his choices. He must above all develop and possess the traits of character—the “virtues”—necessary for the realization of the fundamental aim he pursues, namely his conception of the “good” or of the “good life,” which the Greeks identified with happiness (eudaimonia). To act morally, the English philosopher Bernard Williams would say, what we need is not so much a moral theory, or even a theory of the virtues, as the effective possession of virtues. In this sense, virtue ethics does not seek to answer the question “What ought I to do?” as deontologism and consequentialism do, but rather the question “What kind of person do I wish to become?” It is therefore an ethics oriented toward self-realization, in which individuals' traits of character and affective dispositions constitute the true object of moral evaluation.


Yet these three broad ethical visions are valid only insofar as they are taken up and interrogated within the context and specificity of the profound changes and upheavals that artificial intelligence is currently producing in every domain of human action. Such a movement has, in fact, already taken place within ethical philosophy itself: from a primarily speculative orientation in the period 1900–1950—concerned above all, following the publication in 1903 of the Principia Ethica of the English philosopher George Edward Moore (1873–1958), with questions about the meaning and specificity of ethical statements and moral judgments, that is, metaethics—to a turn, from the 1960s onward, toward questions more anchored in practical realities and contemporary societal stakes.


It was within this context that, in the 1960s and 1970s, a new branch of ethical philosophy, “applied ethics,” emerged. It first took the form of bioethics, in reaction to the progress and advances made in the life sciences—organ transplants, DNA research, cloning, medically assisted reproduction, and so on—but also to the shock produced by the discovery of the Nazi medical experiments carried out on prisoners in concentration camps (see my article From Bioethics to the Ethics of Artificial Intelligence).

It thus became common to organize the field of ethical philosophy into three major and distinct domains of inquiry:


  • Normative ethics, which seeks to establish, as we have seen above, the moral principles and evaluative criteria that allow us to judge whether an action is good or bad; it answers the question: “What ought we to do?”

  • Metaethics, or second-order ethics, since it does not seek to determine what we ought or ought not to do, but rather to clarify the meaning and the ontological and epistemological grounding of our moral utterances; it addresses questions such as: “What do we mean when we say that an action is good?”, “Can our moral judgments be assigned a truth value?”, “What exactly are we referring to when we invoke moral values?”

  • Applied ethics, finally, which examines concrete cases of moral dilemmas as they emerge within social and institutional practices, such as medicine (bioethics), economic life (business ethics), war (the ethics of international relations), our relations with nature (environmental ethics) and the animal world (animal ethics), and the use of digital systems and artificial intelligence (digital ethics or the ethics of artificial intelligence).


This division of the major ethical questions—which holds only in theory, for the sake of intellectual rigor—must not lead us to forget that its most practical branch, the one that concerns us here, namely applied ethics, cannot be severed from the more theoretical base constituted by metaethics and normative ethics. How, indeed, could one give concrete answers to the ethical questions raised by the emergence of new domains of activity or new practices such as artificial intelligence without first justifying what grounds our choices?


If ethics in general poses the central question “What ought we to do?”, and if answering it requires us, in every circumstance, to say why we have decided to act in such and such a way (ethical reasoning and argumentation)—in short, to clarify the concepts we use and to make explicit the principles from which we act (metaethics and normative ethics)—then applied ethics necessarily draws on more theoretical levels of discourse whose connection to practice may seem less obvious. In other words, resolving ethical dilemmas in specialized fields such as AI ethics requires us to encompass, within a single approach and with due regard for the concrete situations in which they take on meaning, questions belonging to the other two domains of ethics: normative ethics and metaethics.


These brief remarks on the richness of the conceptual, argumentative, and doctrinal frameworks of ethical inquiry allow us to point to a second danger that threatens AI ethics: the reductive checklist, rightly denounced in the report L’éthique au cœur de l’IA by a multidisciplinary group of researchers brought together within the International Observatory on the Societal Impacts of AI (Obvia). The authors call into question the form AI ethics has taken since the publication, in particular, of the 23 Asilomar AI Principles (2017) and the Montreal Declaration (2018): a catalogue of principles, rules, or more or less numerous technical norms that one would merely have to tick off and follow to the letter in order to secure, at minimal cost and without engaging in reflection, a certificate of good conduct. On this view, AI ethics is reduced to a mere instrument of strategic legitimation, aimed at reassuring civil society and legislators while warding off excessive regulation.


Hence the largely dominant tendency—particularly among computer scientists and the Big Tech giants—to conceive of a “minimalist” ethics, centered primarily on risk prevention. Yet, the authors of the report remind us, ethics as a philosophical discipline, as it has been defined since its Greek origins, is reducible neither to a list of principles from which one might pick and choose as needed, nor to the strict and mechanical application of rules of law; it is, above all, a reflective, argued, and rational undertaking, one that first and foremost interrogates the foundations and legitimacy of moral rules and values as they come into conflict within determinate contexts and social practices.


From this perspective, the constitution of lists of principles appears as an ancillary and secondary activity, one that can intervene only at the end of a conceptual, argumentative, and deliberative process—a process largely obscured in prevailing discourses on AI ethics. Everything then proceeds as if the principles and values we invoke were self-evident and no longer needed to be interrogated, and as if moral reasoning demanded no effort to understand situations or to take into account the complexity and diversity of the dilemmas that moral agents actually confront. Yet it is precisely this form of “algorithmization” of ethics—the attempt to calculate and automate, by various methods, behaviors deemed morally acceptable—that is largely at work today.
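To make the critique concrete, the “tick-the-boxes” model can be caricatured in a few lines of code. Everything below is invented for illustration—the principle names and the certify function correspond to no real framework—but the sketch shows why such a procedure delivers a verdict while leaving all the deliberative work out: it never asks why these principles apply, how they are interpreted, or what to do when they conflict.

```python
# A deliberately naive caricature of "checklist" AI ethics:
# each principle is a yes/no box, and a system is "certified"
# when every box is ticked. All names here are hypothetical.

CHECKLIST = [
    "has_privacy_policy",
    "bias_audit_performed",
    "human_oversight_documented",
]

def certify(system_properties: dict) -> bool:
    """Return True when every box is ticked.

    Note what is absent: no justification of the principles,
    no interpretation of what "oversight" means in context,
    no procedure for when principles conflict with one another.
    """
    return all(system_properties.get(box, False) for box in CHECKLIST)

# A system "passes" while every hard ethical question remains unexamined.
print(certify({
    "has_privacy_policy": True,
    "bias_audit_performed": True,
    "human_oversight_documented": True,
}))  # prints True
```

The verdict is binary and mechanical by design, which is exactly the report's point: whatever such a function computes, it is not ethical deliberation.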
