The five major principles of AI ethics according to Floridi.
- Franck Negro

- Nov 5, 2024
- 3 min read
Italian philosopher Luciano Floridi, widely recognized for his work in information ethics and the philosophy of technology, is also one of the leading figures in contemporary reflections on artificial intelligence ethics. The second part of his book The Ethics of Artificial Intelligence (French edition published in 2023), entitled “Assessing AI,” contains ten chapters that offer, in his own words, “an analysis of some of the most urgent questions raised by the ethics of artificial intelligence.” Chapter 4, which opens this section, proposes a unified framework of ethical principles for AI.

To do so, Floridi draws on a comparative analysis of what he considers the most significant ethical texts published since 2017 — a turning point marked by the emergence of the first initiatives aimed at formulating ethical principles to guide the development and deployment of AI systems in ways beneficial to society while minimizing risks and negative impacts. Two foundational documents stand out in this context: the Asilomar Principles (2017) and the Montreal Declaration for a Responsible Development of Artificial Intelligence (2018).
Floridi advances two main claims. First, the proliferation of ethical principles — more than 160 by 2020 according to AlgorithmWatch’s AI Ethics Guidelines Global Inventory — risks generating confusion. Second, these principles display “a high degree of overlap,” which he seeks to clarify. He warns against the emergence of a “market of principles,” where stakeholders might selectively adopt those that best suit their interests.

Based on his analysis of six major sets of guidelines, including the two mentioned above, Floridi argues that a general framework of five core ethical principles can be identified. Four of them, particularly well suited to the ethical challenges posed by AI, are borrowed from bioethics: beneficence, non-maleficence, autonomy, and justice. The fifth principle — explicability — is specific to AI. Understood both in its epistemological sense of intelligibility (“How does it work?”) and in its ethical sense of accountability (“Who is responsible for how it works?”), explicability provides the missing link that completes the ethical architecture. According to Floridi, this framework can serve both as a structural foundation for laws, technical standards, and best practices across sectors and jurisdictions, and as a reference point for ethics-based AI audits.
Floridi also emphasizes that most of the ethical frameworks underlying these principles have emerged from Western democratic societies and therefore cannot be applied uncritically to cultures and regions not represented in the selected texts. “Ethics is not the monopoly of a single continent or culture,” he insists. This observation underscores the need for companies, governments, and academic institutions developing AI to adopt broader ethical frameworks that incorporate more diverse social, cultural, and geographical perspectives.
Beneficence refers to “promoting well-being, preserving dignity, and ensuring the sustainability of the planet.” Although expressed in different ways across ethical documents, it remains the most easily recognizable of the traditional bioethical principles. It emphasizes that AI should contribute positively to human flourishing and environmental sustainability.
Non-maleficence, encompassing privacy, security, and prudence regarding technological capabilities, complements rather than duplicates beneficence. While beneficence promotes positive outcomes, non-maleficence stresses the need to anticipate and prevent harm — including threats to privacy, AI arms races, and risks associated with unsafe or uncontrolled systems.
Autonomy is understood as the “power to decide to decide.” Floridi highlights the temptation to delegate increasing amounts of decision-making authority to technological artefacts and warns against the risk that artificial autonomy may undermine human autonomy. The principle requires that users retain the ability to make free and informed choices without manipulation or coercion by AI systems. It introduces the idea of meta-decision — the human capacity to decide whether or not to delegate decision-making to machines, and to revoke that delegation at any time.
Justice, defined as “promoting prosperity, preserving solidarity, and preventing injustice,” addresses the unequal distribution of decision-making power within society. Floridi notes that the concept remains difficult to define precisely, as it encompasses multiple concerns: combating discrimination, ensuring fair access to AI’s benefits, mitigating data bias, fostering shared prosperity, and safeguarding social solidarity, particularly in domains such as healthcare.
Explicability — “enabling the other principles through intelligibility and accountability” — constitutes the cornerstone that makes the others operational. Floridi describes it as the “crucial missing piece of the puzzle” of AI ethics. It includes two complementary dimensions: intelligibility, which seeks to explain how an AI system reaches its decisions, addressing the problem of opacity; and accountability, which determines who bears responsibility for those decisions. Explicability underpins the other principles in several ways: understanding AI’s mechanisms is necessary to assess its benefits and harms (beneficence and non-maleficence); it allows users to make informed choices about delegation (autonomy); and it enables the attribution of responsibility in cases of harm or injustice (justice).