
From bioethics to the ethics of artificial intelligence.

  • Writer: Franck Negro
  • Nov 15, 2024
  • 9 min read

The term "applied ethics" emerged in the United States during the 1960s. It was only from the 1970s onward, however, that philosophical ethics shifted its research orientation, moving gradually from a theoretical and speculative ethics centered on the semantic and epistemological analysis of ethical discourse (what is called "metaethics") to a more concrete and sectoral ethics, focused on the moral implications of certain developments in the contemporary world: the evolution of social mores, the emergence of the environmental question and its compatibility with capitalist and neoliberal economics, and the striking progress achieved in technology and science, notably biology and medicine. Hence the increasingly structured organization of ethics around several distinct branches, such as bioethics, medical ethics, animal ethics, environmental ethics, business ethics, the ethics of international relations, the ethics of war, and, more recently, the ethics of algorithms or AI ethics.


From the standpoint of history, medical ethics and bioethics were the first forms of applied ethics. They were born from the trauma of the Nazi experiments carried out on prisoners in concentration camps. It was in this context that, between December 1946 and August 1947, in the immediate aftermath of the Second World War, an American military tribunal at Nuremberg conducted the trial of Nazi doctors and officials. That trial played a major role in the emergence of applied ethics in general, and of bioethics in particular, and led to the publication, in 1947, of the famous Nuremberg Code, which for the first time set out ten deontological principles intended to govern experimental research on human beings.


It was this Code that served as a reference, seventeen years later, for drafting the first version of the Declaration of Helsinki, adopted by the World Medical Association in 1964. Since then, the Declaration has undergone numerous revisions, each aimed at ensuring that the ethical principles it sets out to govern "medical research involving human participants" respond to the ethical challenges of the moment.


Finally, one must mention the report of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, better known as the Belmont Report, published in April 1979 by the United States Department of Health, Education, and Welfare. Like its predecessors, the Nuremberg Code and the Declaration of Helsinki, the Belmont Report aims to establish ethical principles to govern research involving human subjects. It sets out three fundamental ethical principles, namely: 1) respect for persons and their autonomy, 2) beneficence, and 3) justice or fairness.


While these three important declarations deal primarily with medical research and experimentation on human subjects, they do not govern medical practice properly speaking. In this they differ from other important texts in medical ethics, such as the famous Hippocratic Oath, written in the fourth century BC, or, more recently, the Declaration of Geneva, adopted by the World Medical Association in 1948, just after the drafting of the Nuremberg Code (1947).


The four fundamental principles of biomedical ethics. — The four fundamental principles of biomedical ethics were finally set out by the American philosophers Thomas Beauchamp and James Childress in 1979, in what has become a classic work: Principles of Biomedical Ethics. These are the four principles most often invoked in medical ethics. They provide a framework widely shared today by health professionals, intended to serve as the basis for evaluating the ethical dilemmas encountered in the practice of medicine and in biomedical research. These four principles are:


  • Principle of beneficence: the obligation (in the sense of a duty) of a health professional to act in the interest and for the benefit of the patient, always seeking to maximize the patient's well-being and health and to minimize suffering. Centered mainly on the patient's interest, the principle of beneficence implies acts such as preventing disease, relieving pain, and recommending the treatments best suited to improving health and well-being. This principle is therefore at the foundation of the medical act properly speaking, since it describes its purpose and reason for being.

  • Principle of non-maleficence: the obligation of a health professional never to intentionally cause harm to the patient. It is generally summarized by the Latin maxim Primum non nocere ("first, do no harm"), whose spirit is already present in the Hippocratic Oath. This implies, on the part of a physician, weighing risks against benefits when recommending a treatment or a medical intervention: suffering is justified only on the condition that it is necessary for achieving a greater future well-being and for improving the patient's health.

  • Principle of autonomy: according to the Belmont Report, "an autonomous person is a person capable of reflecting on personal goals and deciding for himself or herself to act in accordance with that reflection." The principle of autonomy thus implies obtaining the patient's free and informed consent before any treatment, intervention, or even experimentation. It is already present in this form in the first article of the Nuremberg Code (1947), and implies, on the part of the health professional, conveying to the patient clear and honest information aimed at informing the patient's judgment and enabling choices to be made knowingly and freely. It also assumes that the patient must, ultimately, have the final word regarding the choices he or she makes about his or her own existence: "The voluntary consent of the human subject is absolutely essential. This means that the person involved should have legal capacity to give consent; should be so situated as to be able to exercise free power of choice, without the intervention of any element of force, fraud, deceit, duress, overreaching, or other ulterior form of constraint or coercion; and should have sufficient knowledge and comprehension of the elements of the subject matter involved as to enable him to make an understanding and enlightened decision. This latter element requires that before the acceptance of an affirmative decision by the experimental subject there should be made known to him the nature, duration, and purpose of the experiment; the method and means by which it is to be conducted; all inconveniences and hazards reasonably to be expected; and the effects upon his health or person which may possibly come from his participation in the experiment. The duty and responsibility for ascertaining the quality of the consent rests upon each individual who initiates, directs, or engages in the experiment. It is a personal duty and responsibility which may not be delegated to another with impunity."

  • Principle of justice (fairness): the principle of justice, also called the principle of fairness, must guarantee to all, equally, access to care, health, and well-being, without distinction of social status, gender, age, race, or income. It is a deontological principle based on the value of equality, one that has underpinned the French health system since the end of the Second World War and the creation of Social Security (Sécurité sociale).


An AI ethics inspired by bioethics. — While applied ethics in the domain of digital technology and computing is largely contemporaneous with the web's emergence as a mass public phenomenon in the 1990s, it is clear that artificial intelligence, as a subfield of computing and the digital world, has crystallized a large part of these questions since the beginning of the 2010s. The symbolic moment of this shift from digital ethics to AI ethics is marked, according to Raja Chatila, by the publication, in December 2012, of the founding article by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks". It is this article that is said to have allowed deep learning to become, in a sense, the norm in image classification and computer vision. Among the key ethical questions opened by AI, and cited by Chatila, are "biases due to the quality or representativeness of data, data security, the absence of causal reasoning, the absence of semantics, a lack of understanding of the context of the physical world, the lack of robustness to certain attacks or data modifications. The very large number of parameters and the opacity of the learning process raise questions of transparency and explanation of results."


In the same article, devoted precisely to the influence that bioethics has had on reflections in the ethics of technologies in general, and in the ethics of digital technologies and AI ethics in particular, the author cites, by way of illustration, the work carried out by the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, author of the 2019 guidelines Ethics Guidelines for Trustworthy AI. He recalls that three of the four ethical principles for trustworthy AI set out by the expert group are directly derived from biomedical ethics, namely: respect for human autonomy, prevention of harm, and fairness. Only the fourth, explicability, is specific to AI ethics, particularly in view of AI's limitations in terms of transparency and explanation of results, notably those generated by what is called connectionist AI, as opposed to symbolic AI.


How, then, can an ethics of the digital in general, or an ethics of AI in particular, distinguish itself from bioethics, if it borrows from it the majority of its fundamental principles?


  • A technology is not designed with an objective of beneficence: at the heart of bioethics and medical ethics lie the principles of beneficence and non-maleficence. In other words, bioethics states explicitly that the primary objective of medical practice (medicine) and research (biology, genetics) is to do good while ensuring respect for the interests and autonomy of persons. In this framework, ethical deliberation most often takes the form of a risk–benefit calculation; bioethics is therefore mainly consequentialist. By contrast, the creation of a technology primarily responds to economic and commercial interests. First and foremost, it is a matter of responding to concrete needs arising from practical cases, in order to make a task or an activity easier, faster, or more productive. Unlike medicine or biology, a technology is not a priori designed with an objective of beneficence or non-maleficence.

  • The long-term impacts of a technology are difficult to evaluate: not only is it difficult to evaluate the impacts a technology will have in the long term on individuals and societies, but the ethical issues it might raise are far from obvious at first glance. Harmful uses and consequences unforeseen at the time of a technology's design and deployment can emerge over time. Can we, for example, foresee the long-term impacts of digital technologies and AI and answer precisely questions such as: will digital technology and AI ultimately destroy or create jobs? Is it good or bad to trade one's personal data and receive targeted advertising constantly in order to use a search engine for free? Why and how does the manipulation of opinion by traditional media differ from that which takes place on social networks? Should we accept being monitored, at the risk of losing some of our freedom, or refuse, and put our own security and that of our loved ones at risk? Is it better to use a transparent and explainable AI system rather than an opaque but more efficient one? How far should we delegate our power of decision to AI and take the risk of losing part of our autonomy? What exactly should be understood by algorithmic bias, and how does it differ from human bias? Should we deploy lethal autonomous weapons with the power to select their targets on their own? And in what way, exactly, do all these questions contain an ethical dimension?

  • Digital and AI ethics are deontological: in contrast to biomedical ethics, the ethics of the digital and the ethics of artificial intelligence are mainly deontological. Biomedical ethics is most often rooted in processes of collective and individual reflection aimed at resolving a moral dilemma or a societal question through a risk–benefit type of analysis: it is a matter of evaluating the possible consequences of a decision and seeing to what extent the potential risks are counterbalanced by the expected benefits. AI ethics, by contrast, seems rather to adopt a deontological approach based notably on the protection of fundamental rights. This is illustrated by the seven "requirements" — the term referring to notions of duty or obligation proper to deontological ethics — set out by the expert group in the Ethics Guidelines for Trustworthy AI, namely: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; accountability.

  • From ethics to law (the example of the AI Act): this deontological ethics, which seems to ground AI ethics, is accompanied by a virtue ethics, since one seeks, through the requirements of transparency and explicability, to instill in the developers, engineers, and designers of AI systems virtues that should guide their behavior and practices in the absence of any deontological code that would impose itself on them, for example within the framework of a compliance program. The shift from the ethical perspective to the legal one (the AI Act) has nonetheless led the European Commission to transform the initial deontological and aretaic approach (from the Greek aretē, meaning "virtue") into a consequentialist, risk-based approach.
