Ethics Put to the Test by AI: Agency and Responsibility (1)
- Franck Negro

- Jan 23
The text that follows is the first of three installments which, in reality, form a single whole. The aim is to introduce, as didactically as possible, a number of fundamental problems in AI ethics, while also offering an overarching view. My point of departure is a text by the contemporary philosopher Thierry Ménissier, What Ethics for AI? (2019), drawn from a lecture delivered at the conference “Birth and Developments of Artificial Intelligence in Grenoble,” held on 19 October 2019 in the auditorium of the Musée de Grenoble. On the basis of that text’s central thesis, which I present in this first article (1), I seek to bring to light the aporias to which the attempt—widely shared among engineers and computer scientists—to establish a “procedural” ethics leads (2), before concluding with the limits of machine ethics (3) and emphasizing the plural character of ethical discourses concerning AI.
As is often the case in such circumstances, the development and deployment of new socio-technical systems such as AI, together with the societal anxieties they inevitably carry, have in parallel generated—on the part of institutions, companies, opinion leaders, researchers, and experts of every kind—an inflation of discourses calling for the supervision and regulation of these systems, grounded in principles of ethics, transparency, protection of human rights, and social responsibility. It is in this context that a number of declarations of principles and other ethical charters have proliferated in recent years—such as the 23 Asilomar Principles (2017), the Montreal Declaration (2017), the OECD Principles on Artificial Intelligence (2019), the Ethics Guidelines for Trustworthy AI of the High-Level Expert Group on Artificial Intelligence (2019), or the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), to mention only the most widely known—alongside the now almost systematic pairing of the terms “ethics” and “artificial intelligence,” to the point of giving rise to the idea of a new branch of applied ethics devoted to AI.
But have we truly taken the trouble, asks Thierry Ménissier, to clarify what is meant by the expression “AI ethics”? Why, moreover, would there be a specific need for ethics to confront the deployment of AI systems, as if philosophy—since speaking of ethics inevitably refers to one of its fundamental branches—were suddenly being called upon to illuminate, or even to accompany, these technological developments? And finally, if an ethics now seems required, what kind of ethics should AI lay claim to? These are the three fundamental questions that must be answered if we are not to leave intact the very ambiguity that surrounds the now generalized use of the expression “AI ethics,” so often mobilized without its meaning having first been clarified.
Clarifying the meaning of the expression “AI ethics”. - While philosophy is not, a priori, meant to intervene in the purely technical aspects of artificial intelligence, it can nevertheless play an essential role in clarifying the concepts we mobilize when we speak of AI—especially when one of its historical disciplines, since its Socratic origin, is invoked in the great majority of debates concerning the societal impacts of a technology often considered decisive for the future of our societies: ethics. The ambiguity of the expression “AI ethics” seems to lie entirely in the interpretation we give to the preposition “of,” which points to two distinct ways of conceiving the relation that ethics—understood as the set of principles and values on the basis of which we judge what it is good (right) or bad (wrong) to do—maintains with AI, understood as a set of socio-technical systems capable of producing societal and environmental effects that are undesired from the standpoint of the very values a society seeks to defend.
AI ethics: In a first sense, the expression “AI ethics” can be understood as ethics applied to AI. As a sectoral ethics belonging to the increasingly broad field of applied ethics, AI ethics would be nothing other than the philosophical discipline whose vocation is to reflect concretely on the moral implications generated by the emergence, development, and deployment of artificial intelligence systems (AIS), whose potential risks we can already discern in light of the concerns they arouse, and to question the social uses that may be made of them. Alongside digital ethics—with which it moreover tends to merge—it would constitute a sub-branch of the ethics of technology, enriching the field of so-called “applied” ethics devoted to specific domains of human activity, such as medical ethics, animal ethics, environmental ethics, business ethics, the ethics of war, or the ethics of international relations.
Ethics integrated into AI: In a second sense, the preposition “of” would refer less to an external domain of ethics—AI—which ethics would examine reflexively in order to interrogate its uses in the light of pre-existing values, principles, and ethical doctrines, than to a set of moral rules integrated into the practices of designing and developing AIS—for example in the form of guidelines or ethical protocols—or even directly incorporated into the algorithms themselves. In this framework, AI ethics would take the form of a set of rules examined, debated, and shared by the professional community of computer scientists, much as bioethics is for physicians, business ethics for managers, or professional ethics for lawyers. It would then no longer be primarily the work of philosophers, but that of computer scientists. The preposition “of” would no longer mean “applied to,” in the sense that a philosophical reflection is applied to a new domain in the history of technologies, but “integrated into,” in the sense that moral rules decided and shared by a given professional community would be literally incorporated into the chain of design, production, and deployment of AI systems, in order to guarantee that their functioning and the decisions they produce are aligned, in all circumstances, with our moral values. This second meaning of the preposition “of” thus tends to shift the semantic interpretation of the expression “AI ethics” toward the second term of the phrase, emphasizing above all the eminently scientific and technical character of questions relating to AI ethics rather than their philosophical character. It also indicates, by analogy with the need for medical knowledge in bioethics, that it is indispensable to possess technical skills in computer science and artificial intelligence in order truly to do AI ethics. Yet this pull of ethics toward the computer scientist—at the same time expressing a technicist orientation of ethics—places upon him an increased responsibility: that of having to account, at every moment, for the moral rules and the overall meaning that presided over the technical development of a system.
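To make this second sense more concrete, here is a minimal sketch, in Python, of what “ethics integrated into the algorithm” can look like in practice: a purely hypothetical loan-scoring pipeline in which rules shared by a design team (here, a ban on certain proxy features and an obligation to attach a justification to every decision) are written directly into the code. All names, features, and thresholds are invented for illustration; nothing here is drawn from Ménissier’s text or from any real system.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float
    debt: float
    age: int
    postal_code: str  # a classic proxy feature through which indirect bias enters

# A rule decided and shared by the (hypothetical) professional community:
# certain features may never influence the decision.
PROHIBITED_FEATURES = {"postal_code", "age"}

def score(applicant: Applicant) -> float:
    # The scoring logic can only see features the ethical protocol allows.
    allowed = {k: v for k, v in vars(applicant).items()
               if k not in PROHIBITED_FEATURES}
    return allowed["income"] / (allowed["debt"] + 1.0)

def decide(applicant: Applicant, threshold: float = 2.0) -> dict:
    s = score(applicant)
    # Every decision ships with its justification, so that a human can
    # account, at every moment, for the rules that produced it.
    return {
        "approved": s >= threshold,
        "score": round(s, 2),
        "excluded_features": sorted(PROHIBITED_FEATURES),
        "rule": f"approve if income / (debt + 1) >= {threshold}",
    }

print(decide(Applicant(income=42_000, debt=12_000, age=31, postal_code="38000")))
```

The point of the sketch is not the toy scoring formula but the location of the ethics: the rules live inside the chain of design and production itself, and the computer scientist who writes them thereby assumes the increased responsibility just described.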
Far from being trivial, the distinction brought to light by the text is in fact crucial, insofar as it commits us to two fundamentally different conceptions of moral responsibility. Above all, it seeks to prevent AI ethics, understood as a discipline and a philosophical reflection in its own right—essential to the formation of critical thinking in a democratic society—from ultimately being reduced to a mere technical functionality added to machines, with the aim of conferring upon them a capacity that would profoundly blur the conditions of human agency. This is the second major question raised by Ménissier’s text: in what way do the development and deployment of artificial intelligence systems give rise to novel ethical problems, as problems specifically linked to AI? And in what way do these problems touch the very foundation of what we generally mean by the term “ethics”?
The Problem of Agency and Responsibility. - Speaking of ethics indeed means, first and foremost, referring to a type of action that is specifically human, guided by principles and values. An ethical action thus possesses three essential characteristics. It is first intentional, in the sense of being “oriented toward an end” that an agent represents to himself as desirable. It is next conscious, in the sense of that particular form of consciousness called “moral consciousness,” which includes a normative dimension allowing the agent to evaluate and judge his actions according to criteria such as justice, dignity, or respect—in other words, according to moral values. Finally, it is motivated by specifically ethical reasons, which the philosophical tradition, notably following Kant (1724–1804), characterizes as disinterested, or at least as primarily taking into account the interest, dignity, and well-being of others.
This is what the late Ruwen Ogien (1947–2017) reminds us of in an introductory text to moral philosophy drawn from the analytic tradition (Précis de philosophie analytique, PUF, 2000), when he implicitly distinguishes moral reasons for acting from two other types of reasons: legal reasons for acting (respecting the law in order to avoid the sanctions provided in the event of transgression), and prudential reasons for acting, which stem from calculations of personal interest, such as refraining from doing something that might compromise the prospect of an imminent promotion. Ogien thus writes:
“For various reasons that will gradually appear, it is not very easy to say what it means to act ‘morally’ or to define the limits of what may be called ‘moral’ or ‘ethical.’ One of the criteria that may serve us, as a first approximation, is the specificity of certain reasons for acting. Among the reasons we may invoke to justify an action, some seem to be ‘moral’ and others not. If, in order to justify my refusal to accept a bribe, I invoke the fact that I am being watched by my colleagues and that I risk missing out on an important promotion should I be reported, this justification will probably not be judged ‘moral’ (absent further specification). If, on the contrary, I invoke the common good, fairness, honesty, or integrity, one will also probably consider that I am proposing a moral justification (even if no one takes it seriously). At first glance, moral reasons for acting are altruistic in type, insofar as they appeal to a certain conception of respect for the person.”
These clarifications above all allow us to distinguish moral action from any other type of purely automatic, mechanical, or conditioned behavior. Such behaviors run precisely counter to the central notion of the agent, which implies the capacity, for an individual or a given entity, to make choices, to orient one’s conduct according to clearly assumed reasons or motives, and thereby to recognize oneself as the author of one’s acts. In other words, what Anglo-Saxon philosophers generally call agency presupposes self-identification as an acting subject, self-affirmation as the presumed free author of one’s acts, and, consequently, the possibility of being held responsible for them.
The notion of responsibility, a central concept both in ethical philosophy and in law, is therefore consubstantial with that of agency. One cannot be thought without the other: responsibility is conceivable only when borne by a subject who is at least theoretically free and conscious of his acts, while agency contains, within its very definition, the requirement of responsibility. Although it is necessary to distinguish responsibility in the philosophical sense—belonging to metaphysics, since it touches upon the fundamental question of whether free will exists or is merely an illusion—from responsibility in the legal sense, codified in legal texts according to the fundamental distinction between civil and criminal liability, the two nevertheless remain closely linked in practice.
It is in this sense that one may say that philosophy grounds law. It is indeed no accident that our legal systems rest upon the fundamental distinction (summa divisio) between, on the one hand, persons, subjects of rights and therefore bound by obligations—especially the obligation to answer for their acts, which defines precisely the notion of subject and person through that of responsibility—and, on the other hand, things, objects of rights, such as material goods, upon which rights may be exercised (for example property rights) but to which no responsibility can be imputed, precisely because things, by definition, lack the capacity to act in the strict philosophical sense of agency.
For analogous reasons, despite Article 515-14 of the French Civil Code introduced by the 2015 law, which recognizes animals as “living beings endowed with sensitivity,” they remain subject to the legal regime of property. In other words, although animals may be holders of certain rights, such as the right not to suffer, they cannot be classified as legal persons, insofar as, acting largely through instinct or conditioning, they cannot be fully held responsible for their acts. This distinction corresponds to that established in animal ethics between, on the one hand, moral patients, who must be morally considered by virtue of their capacity to suffer without thereby being subject to obligations, and, on the other hand, moral agents, capable of evaluating their actions, anticipating their consequences, and motivating them through values.
Finally, whether in moral philosophy or in law, the notion of responsibility—connected respectively to the notions of agency and of the person as a subject of rights—cannot be thought without being articulated with three other fundamental notions of ethical philosophy: freedom, duty, and obligation. How, indeed, could we attribute responsibility for an act to an individual if we did not simultaneously assume, at least as a postulate, that he could have acted otherwise? Law, like morality, thus rests, as a condition of possibility and in order not ultimately to deny itself, upon the recognition of the human being as a moral agent, that is, as a being who is free and responsible for his acts. It is ultimately upon these two simple but fundamental ideas—first, that human beings possess the capacity to choose freely what they do and the meaning they give to their actions (agency); second, that by virtue of this quality as free agents, human beings must answer for their choices and their acts before others (responsibility)—that modern societies have been constituted over the last centuries.
But what happens when we no longer merely ask technical devices to facilitate human labor and improve the production of goods or services, but to act in place of the human being across an ever-growing number of sectors? What happens when the increasingly widespread deployment of AI systems—in health, transportation, security, energy, and beyond—though still devoid of consciousness and therefore incomparable to human cognitive capacities, begins to call into question the two essential attributes of moral agency, precisely because we delegate to them more and more of our capacity to act? In other words—and this is precisely the central point forcefully highlighted by Thierry Ménissier’s article—each time we decide to delegate our power to act to AI systems, first, we break the centuries-old link between agency and responsibility; second, we render problematic the very question of responsibility and the attribution of fault in cases of harm, error, or unforeseen consequences.
In short, under the pretext of efficiency—whose persuasive power, it must be acknowledged, is difficult to dismiss without yielding to a form of imagined moral purity—we are in the process, and this may well be the most remarkable aspect of the current technological revolution, given its simultaneously latent and rapid character, of weakening our status as agents and responsible authors of our acts. This process, already largely observable in practices and in the ways AI was contributing to the profound transformation of entire domains of human activity at the time Thierry Ménissier was writing—the text dates from 19 October 2019—has been greatly amplified since the emergence and massive diffusion of generative AI following the “ChatGPT moment” of 30 November 2022.
Until now, we have been able to hold the driver of a vehicle responsible in the event of a road accident—but what of the autonomous car? Should blame be attributed to the vehicle manufacturer? Or should we accuse the manufacturer of the sensors or embedded systems—radars, cameras—whose malfunction at the time of the accident might be implicated? What about the owner of the vehicle? Might he have failed to respect the terms of service by neglecting to update the embedded AI software? Or is the fault that of the software developers themselves? Or perhaps a defect in road infrastructure? The case of the autonomous vehicle illustrates, in an almost paradigmatic manner, how the central notion of agency-responsibility is called into question. How, in such cases, can one still be recognized as the free and responsible author of one’s acts when driving is automated? When, in short, it is no longer we who drive?
Similar dilemmas arise in the military domain with armed drones and lethal autonomous weapon systems. In the event of civilian casualties caused by an armed drone, should the company that developed the targeting algorithm which executed the firing decision be held liable? What about the operator who programmed the mission, even though he did not intervene during its execution? Or should we instead question the responsibility of military and political decision-makers who authorized the mission and the use of armed drones, with the ostensibly legitimate aim of protecting their soldiers? When technology profoundly reshapes the traditional idea of war, and when the act—or even the tragic decision to take life, at the risk of sacrificing civilians—is delegated to remotely piloted or fully autonomous technical systems, it is the very perception of the gravity of what that act entails, from the standpoint of the agent, that becomes blurred. In other words, the act of delegation not only makes it problematic to attribute full responsibility for consequences to an agent; it also constitutes an act of de-responsibilization—in the strong sense of rendering one irresponsible—by removing from the agent the emotional and psychological burden inevitably involved in resolving a moral dilemma.
In the domain of healthcare and assistance to vulnerable persons—the elderly, the sick, or people with disabilities—the rise and use of connected devices such as the IoT (Internet of Things) increasingly enable the management of significant portions of patients’ daily lives, even allowing systems to make autonomous “decisions” based on the data they receive from their environment. Connected sensors can continuously monitor the health status of an elderly or disabled person, measure vital signs such as heart rate, blood pressure, or glucose levels, alert caregivers to the deterioration of a chronic condition, detect falls or abnormal movements, adjust heating systems, or distribute medication according to precise schedules. By taking charge of an ever-growing number of tasks previously entrusted to medical professionals or relatives, this “substitution agency”—technological systems acting in place of, or as relays for, human agency—constitutes a fundamental rupture in the way care has traditionally been provided. Although the benefits of such devices appear evident, insofar as they aim to improve quality of care, strengthen safety and well-being, and alleviate the burden on caregivers in contexts of emergency and staff shortages, they nevertheless raise new questions at the intersection of medical ethics and AI ethics. How far should we go in delegating tasks and decisions? Does the growing process of automation risk depriving vulnerable individuals of the limited space of autonomy they still possess? In short, do we risk placing all actors involved—caregivers, patients, and relatives—in a situation of total dependence upon technical systems they themselves helped to implement, and ultimately upon the companies that designed them?
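As an illustration of this “substitution agency,” consider the following minimal sketch, again in Python and again entirely hypothetical: a monitoring function that “decides,” in place of a caregiver, what to do with a set of vital signs. The thresholds, actions, and units are invented for the example and have no clinical value.

```python
from dataclasses import dataclass

@dataclass
class VitalSigns:
    heart_rate: int   # beats per minute
    systolic_bp: int  # mmHg
    glucose: float    # mmol/L

def autonomous_response(v: VitalSigns) -> list[str]:
    """Decide, in place of a caregiver, which actions to trigger."""
    actions = []
    if v.heart_rate < 40 or v.heart_rate > 130:
        actions.append("alert_caregiver")          # escalation to a human
    if v.glucose < 3.9:
        actions.append("dispense_glucose_tablet")  # the system acts directly
    if v.systolic_bp > 180:
        actions.append("alert_caregiver")
    return actions or ["log_reading"]

print(autonomous_response(VitalSigns(heart_rate=38, systolic_bp=150, glucose=3.2)))
```

Each branch of this trivial function is a decision that once belonged to a human being; the question raised above is who answers for its consequences when the device acts and no one intervenes.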
These three examples, drawn from transportation, defense, and healthcare, could easily be extended to many other sectors, such as smart cities, finance, or security. The central idea, masterfully highlighted by Ménissier, is the following: wherever tasks, actions, processes, or decisions are automated and delegated to artificial intelligence systems—whose capabilities and fields of intervention continue to expand and are destined to grow further in the years to come, potentially performing more efficiently everything that human beings, collectively, are capable of doing—we profoundly transform the way we act upon ourselves, upon others, and upon society, and we contribute to blurring the historically established and ultimately foundational link, at the heart of philosophical ethics, between agency and responsibility.
The deployment of artificial intelligence systems thus has the consequence of deeply unsettling all spheres of human activity, sparing none. It creates new, largely unanticipated situations that are experienced as profoundly disorienting by most individuals. The irruption of generative AI into everyday life, and the natural language processing capacities it enables, constitutes a particularly striking example. These systems bring forth unprecedented ethical problems in a context where inherited moral reference points are strongly challenged. We are therefore confronted with the demanding task of constructing a new ethics within a world characterized by a form of generalized automation of human physical and cognitive activities. This observation compels us to answer a third question, whose central stake is the preservation of human agency-responsibility in an increasingly automated world—in short, to define an ethics that guarantees the conditions of possibility for a human being who remains both active and the author of his acts (agency), and fully responsible (responsibility).