The Techno-Utopian Project of Dario Amodei in the Light of Jacques Ellul’s Philosophy of Technology.
- Franck Negro

- Mar 10
Counter-essay based on Dario Amodei’s Machines of Loving Grace and The Adolescence of Technology.
General Introduction. — In October 2024, Dario Amodei, co-founder and CEO of the company Anthropic, published a short essay with the enigmatic title Machines of Loving Grace. The footnote attached to this title, however, provides the reader with a series of important clues in whose light it becomes difficult not to read the text. The title of Amodei’s essay indeed refers directly to an author and poet little known to European readers, generally associated with the Beat Generation, the counterculture, and the hippie movement of the 1960s: Richard Brautigan (1935–1984). Famous at the time for his novels Trout Fishing in America (1967) and In Watermelon Sugar (1968), both translated into French under the titles La pêche à la truite en Amérique and Sucre de pastèque, Brautigan belongs to the broader context of the critique articulated by certain Beat Generation writers—Jack Kerouac, Allen Ginsberg, and William Burroughs—against the social conformism, the materialistic society, and the consumerist ideology that permeated postwar America. These currents would prepare the ground for the hippie counterculture of the 1960s which, in a context marked by the Vietnam War and the struggles for civil rights, advocated ideals of peace, free love, and rejection of consumer society, but also the search for alternative forms of communal life, nourished in particular by an interest in certain Eastern philosophies and by experimentation with psychedelic substances.
It was precisely in this context, and as an emblem of this now partly forgotten period, that the poem by Brautigan cited by Amodei, All Watched Over by Machines of Loving Grace, published in 1967, was written. Composed of three stanzas of eight short lines each, the text imagines a future in which nature, humans, animals, and computers would coexist in harmony within a hybrid reality, combining elements belonging both to the natural world and to the artificial world of technology. Brautigan poetically mobilizes the idea of cybernetics theorized a few years earlier by Norbert Wiener in his work Cybernetics: Or Control and Communication in the Animal and the Machine (1948), translated into French under the title La cybernétique. Information et régulation dans le vivant et la machine. The concepts of feedback, self-regulation, and the circulation of information in complex systems—whether biological, technical, or social—played a central role in the development of postwar computing. It is precisely this cybernetic imaginary that Brautigan invokes in order to propose a techno-utopian vision of the world, in which computers are no longer merely instruments of efficiency, but also vectors of emancipation enabling human beings to live a more authentic existence, freed from material constraints and from work. It is therefore under the auspices of Brautigan—who in some ways foreshadows the techno-optimistic spirit of Silicon Valley, itself the paradoxical heir of the counterculture of the 1960s and of the early days of microcomputing in the 1970s—that Dario Amodei’s text should be approached.
Its subtitle moreover clearly indicates the intentions of its author: to reflect on the future impacts of artificial intelligence in a direction that could prove profoundly beneficial for the human species. The text is thus less an exercise in technological forecasting in the strict sense than a reflection on what a desirable trajectory for the development of AI might look like when it is guided by norms, values, and guiding principles capable of orienting its future. Hence the qualification of a form of “normative foresight” that may be attached to the essay, since it does not merely anticipate possible technological transformations but proposes a perspective that is at once ethical and political. Although it does not adopt the conventional and academic form of a scholarly essay, the text proposed by Dario Amodei nonetheless presents, at least in its intentions, certain affinities with the tradition of utopian political philosophy, insofar as it assigns a central place not only to classical questions of justice and governance, but also to the question of constructing a collectively desirable society in a world in which we will have to coexist with advanced AI systems.
It is worth recalling in this regard that the term “utopia,” coined by Thomas More (1478–1535), is formed from the Greek ou (a privative prefix) and topos, meaning “place.” While in its ordinary sense the word refers to an unrealizable project, it may take on a more positive meaning in political philosophy, designating no longer a chimera but the attempt to describe a social organization that, if not ideal, is at least desirable. From this perspective, utopia would no longer be merely a rupture with reality but another way of thinking about change—a regulative idea in the Kantian sense of the term—and of proposing an alternative to the present world. This is precisely what Dario Amodei’s essay suggests, a relatively uncommon undertaking on the part of a leader of a Silicon Valley technology company who is preparing to take public the firm he co-founded.
Amodei begins from a double observation: the frequent underestimation of the scale of the risks associated with the development of increasingly advanced AI systems, but also of the potentially radical advantages they might produce in terms of major civilizational advances. It is on this second aspect that he chooses to focus his reflection, posing the following question: what might a world look like in which the development of powerful AI succeeds in generating considerable benefits for humanity while avoiding some of the negative effects that currently fuel our concerns?
I propose here to follow the chronology of Amodei’s text, first clarifying: (1) the polysemous concept of powerful AI, which he prefers to the term “superintelligence,” doubtless because it is less polemical, as well as the hypotheses concerning the speed of the transformations of the world brought about by powerful AI, as envisaged by Amodei; (2) the profound transformations—of a civilizational and anthropological nature—induced by the emergence of powerful AI in the five domains emphasized by the CEO of Anthropic, namely: biology and physical health; neuroscience and mental health; economic development and the fight against poverty; peace and governance; work and meaning; (3) the introduction of an underlying philosophy of history conceived as the transition from a phase of adolescence to a phase of adulthood of technology, as it appears in a recent text echoing the first, The Adolescence of Technology; and finally (4) to propose a concluding, more critical section opposing the philosophy of technology presupposed by Amodei’s texts—which posits the seemingly paradoxical thesis of a determinism of technological development combined with a claim to the governability of risks—to the philosophy of technology of Jacques Ellul (1912–1994), as presented in my article The Six Characteristics of the World of Technology According to Jacques Ellul.
Powerful AI and the Transformation of the World: Basic Assumptions. — What, then, is the argumentative logic of Amodei’s essay? In other words, on which initial assumptions does he construct the scenario of transformation of the five domains mentioned above that ultimately constitutes the core of his essay? Two preliminary pillars that frame his prospective analysis are invoked here: (1) an operational definition of so-called “powerful” AI, which he delineates through six functional characteristics that we will briefly outline; and, on this basis, (2) a model of accelerated scientific progress made possible by powerful AI—an exploratory hypothesis rather than an established scientific truth—allowing him to anchor his scenario within a temporal horizon of roughly ten years following the emergence of such an AI. In other words, point (2) can only be envisaged if point (1) is validated.
It is therefore necessary to begin by considering the vision of the evolution of AI as it is currently conceived toward powerful AI, a vision that Amodei undoubtedly draws—at the moment he publishes his text, in October 2024—from ongoing research within Anthropic’s laboratories. What does the company’s CEO mean when he speaks of powerful AI? And how does this notion differ from the more theoretical concepts of Artificial General Intelligence (AGI) or “superintelligence,” as envisaged by figures such as Sam Altman or Mark Zuckerberg?
Indeed, although it is now common to hear in the press or among specialists expressions such as “weak AI,” “strong AI,” “artificial general intelligence (AGI),” or “superintelligence,” there are, to date, no clear and universally accepted definitions of these terms within the expert community. At most, there exist generic definitions reflecting usages that structure a large part of the debates and research surrounding artificial intelligence and its current and future capabilities.
Far more than mere semantic quarrels, the debate over definitions is important because it allows, on the one hand, confusion to be avoided between the real progress of current AI—often referred to as “weak AI”—and the speculations surrounding what is called artificial general intelligence, superintelligence, and strong AI; and, on the other hand, it makes it possible to examine precisely the societal and philosophical implications contained in the meaning of each of these terms. In other words, the distinctions between weak AI, AGI, superintelligence, and strong AI are fundamental for understanding both the advances and the limits of AI. Yet such an analysis would still require agreement on the meanings of terms such as “intelligence,” “human intelligence,” “cognitive capacities,” and so forth—questions that belong precisely to a set of academic disciplines grouped under the heading of the “cognitive sciences”: psychology (notably cognitive psychology), philosophy (philosophy of mind), linguistics, the human and social sciences, neuroscience, and artificial intelligence.
In the absence of precise definitions, the list of terms mentioned above primarily tells—when placed in an appropriate chronological order—the techno-philosophical narrative of the evolution of AI as it is imagined by leading AI companies such as OpenAI, Meta, Alphabet, or Anthropic. In this context, four expressions most frequently appear at the forefront of the debate. They sometimes overlap depending on the authors, but it is useful to keep them in mind in order to understand current discussions:
· Weak AI (Narrow AI): This is the form of AI that exists today. Weak AI refers to systems designed to perform specific, narrowly defined tasks, without consciousness or genuine understanding of what they are doing. In other words, weak AI does not possess a general capacity for intelligence comparable to that of a human being and remains specialized within a particular field of application, even though it may learn from data within the framework for which it has been designed. Examples include voice assistants such as Siri or Alexa, whose function is to respond as effectively as possible to users’ questions; recommendation systems on e-commerce websites that attempt to determine purchasing preferences and suggest targeted products accordingly; image-recognition systems used to sort photographs; spam filters designed to detect unwanted emails automatically; and search engines that propose content related to a given query.
· AGI (Artificial General Intelligence): Artificial general intelligence—or AGI—would represent the second major stage in the history of artificial intelligence. It would correspond to the stage following weak AI and preparing the moment toward which most major American technology companies currently aspire: that of superintelligence. Whereas weak AI is specialized in a particular task or problem, AGI refers to a category of systems capable of performing all the intellectual tasks that a human being can accomplish, with an ability to adapt and generalize to new contexts. Although it does not currently exist, AGI is most often considered an intermediate objective between weak AI and superintelligence.
· Superintelligence: The term “superintelligence” refers to a form of intelligence capable of performing all the cognitive tasks that a human being can accomplish, but in such a way that it would vastly surpass the performance of the best human minds in every domain, including those generally attributed exclusively to humans, such as creativity, practical judgment, or social and behavioral intelligence. In this sense, there would be only a difference of degree—and not of nature—between artificial general intelligence and superintelligence, which is why some authors do not consider it necessary to distinguish between them. Others prefer to maintain the distinction and use the term AGI to designate an intermediate stage necessary for the emergence of superintelligence, the latter marking such a qualitative leap that it might display a high degree of autonomy, being capable of improving itself continuously and exponentially. Hence the risks of loss of control about which many artificial intelligence researchers warn.
· Strong AI: Finally, strong AI would refer to an artificial intelligence that would not merely mimic human intelligence or surpass human beings in every domain but would also possess, like humans themselves, if not phenomenal consciousness—that is, the lived and strictly subjective dimension of conscious experience—at least a form of “self-awareness,” what philosophers of the continental tradition call “psychological consciousness,” which may be more or less reflective, by contrast with moral consciousness, which judges our thoughts and actions according to the values accepted within a given society.
These four notions, quasi-narrative in nature and lacking any genuine scientific consistency—except insofar as they play a heuristic role—serve less to validate an operational research program than to structure the historical imaginary of artificial intelligence. What Amodei seeks to propose through the concept of “powerful AI,” by contrast, is above all an operational definition of AI rather than a literary or philosophical one. In other words, the aim is to provide a list of functional characteristics that act as empirically controllable criteria making it possible to circumscribe—if not scientifically validate—the notion of powerful AI. He proposes six of them:
Intelligence: First and foremost, powerful AI would outperform any Nobel Prize laureate in most of the domains Amodei considers relevant, such as biology, mathematics, engineering, programming, or even the writing of novels. Setting aside this last domain—and I eagerly await the day when an AI proves capable of writing a masterpiece surpassing Proust’s In Search of Lost Time in every respect, as the Anthropic CEO’s text seems to suggest it might—the examples cited by Amodei are naturally correlated with those he later proposes to explore, since the primary aim is to accelerate scientific discovery, first and foremost in the fields of biology and health—both physical and mental—to which he more broadly associates neuroscience and the sciences of the mind. In this perspective, the notion of powerful AI functions as an operational reformulation of the idea of superintelligence, not conceived as a speculative hypothesis but as a functional threshold characterized by a level of superhuman cognitive performance across a plurality of domains that the author evokes through a few examples without truly delineating the disciplinary perimeter.
Interfaces: Powerful AI would not be characterized solely by superhuman intelligence and an ability to tackle complex problems with brilliance—problems that even Nobel Prize winners might fail to solve—but also by its capacity to manipulate and use, better than any human being, all types of digital interfaces such as keyboards, mice, or other graphical interfaces, in order to navigate the web, write briefing notes, consult knowledge bases autonomously, or participate in videoconferences. It would therefore no longer be merely a conversational assistant or another generator of multimodal content—the case of LLMs—but an autonomous agent capable of acting directly within the digital environment through such interfaces. In other words, powerful AI would far surpass any human in its interactions with the digital world, even though Amodei once again limits himself to non-exhaustive examples, close to what are today referred to as “AI agents” or “computational agency.”
Autonomy: This leads directly to the third characteristic of powerful AI, namely its ability to execute autonomously tasks that might take hours, days, or even weeks for a human being—in short, to carry out the kinds of activities usually performed by highly qualified employees, in Amodei’s own words. This point naturally echoes the central notion of the “time horizon,” or task time horizon, developed by researchers from METR (Model Evaluation and Threat Research) in an article published on the arXiv platform on March 30, 2025, barely five months after the essay by Anthropic’s co-founder (see my article: Measuring the Operational Autonomy of AI Agents: How Far Can Human Work Be Automated?). Here again, the notion of autonomy as defined by Amodei remains relatively vague, since not only does the envisaged “time horizon” oscillate between several hours and several months, but at no point are the tasks themselves clearly specified.
Physical world: The fourth characteristic of powerful AI is its capacity to mobilize, direct, and coordinate physical resources remotely (robots, laboratory equipment, computers, etc.) in order to achieve a given objective, without itself possessing any physical embodiment. It might even take the initiative to design and build the machines required to accomplish this objective. Amodei is not referring here to humanoid robots, but rather to software AI agents capable of acting, interacting, and issuing commands remotely, through digital interfaces, to all types of external tools. The execution environment he has in mind appears to evoke that of a research laboratory. One can therefore imagine agents capable of conducting research and experiments without any direct human intervention.
Instances: The fifth characteristic mentioned by Amodei refers to the central notion of “instances,” that is, the possibility of running in parallel thousands—or even millions—of copies of the same artificial intelligence model, its instances, in order to perform an ever-growing number of tasks and to do so ever more rapidly. This hypothesis would be limited only by constraints stemming from the physical world, such as available computing power, energy consumption, the quantity of heat generated, or the quality of the network required for rapid data transfers. Amodei does not specify whether the multiplication of instances would be directed and controlled by humans or managed autonomously by the models themselves, which might at any moment assess the computing resources available to them and decide to replicate themselves in order to achieve a given objective more quickly. While Anthropic’s co-founder hints at this possibility, he does not explore it explicitly here, nor does he develop a detailed analysis of the potential risks such advanced AI might raise. These questions will form precisely the subject of his second essay, The Adolescence of Technology, to which we will return later.
Collaboration: Finally, the sixth and last major characteristic of powerful AI—which echoes the systemic and industrial dimension already mentioned in the fifth characteristic—is its ability to collaborate with other AI models, whether of the same kind or complementary ones, depending on the tasks required to achieve a given objective. What Dario Amodei has in mind here is quite simply the example of a human organization, although he does not really specify its scale, limiting himself to the illustration of, I quote, “specialized human teams.” Yet the invocation of millions of instances capable of acting independently or collaboratively suggests the management of such a large number of tasks that it becomes possible to imagine the automation of the work of a large organization, a possibility that Amodei’s text does not allow us to specify further.
If one wished to attempt a somewhat laconic—and certainly reductive—synthesis of the concept of powerful AI, one might say that it brings together the attributes of a superhuman intelligence—precisely what the term superintelligence suggests—but embodied operationally in a distributed technical architecture (servers, CPUs, GPUs, storage units, databases, microservices, APIs, business applications, and so forth), composed in particular of AI agents capable of acting, multiplying themselves, and collaborating with one another on a very large scale, as illustrated by the metaphor frequently employed by Dario Amodei: that of “a country of geniuses (the AI models) in a datacenter.”
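To make the notion of parallel instances more tangible, here is a minimal Python sketch of the fan-out pattern it implies: the same model queried many times concurrently, with a worker ceiling standing in for the physical constraints Amodei mentions (compute, energy, network bandwidth). The `call_model` stub and every name here are hypothetical illustrations, not Anthropic’s actual architecture.

```python
from concurrent.futures import ThreadPoolExecutor

def call_model(task: str) -> str:
    """Stub standing in for one model instance handling one task
    (in reality this would be a remote API call)."""
    return f"result for {task!r}"

def fan_out(tasks: list[str], max_instances: int = 8) -> list[str]:
    """Run up to `max_instances` copies of the model concurrently.
    The ceiling mirrors the physical constraints on instance count
    (compute, energy, heat, network) evoked in the essay."""
    with ThreadPoolExecutor(max_workers=max_instances) as pool:
        # pool.map preserves the order of the input tasks
        return list(pool.map(call_model, tasks))

results = fan_out([f"task-{i}" for i in range(20)])
```

The point of the sketch is structural: once a model is behind a function call, nothing in principle prevents the caller from multiplying invocations up to the limits of the underlying hardware, which is exactly the scaling move the "instances" characteristic names.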
In a recent interview given alongside Demis Hassabis (co-founder and CEO of Google DeepMind) at the World Economic Forum in Davos on January 20, 2026, entitled “The Day After AGI,” the CEO of Anthropic asserted that the arrival of a powerful AI, as defined in his 2024 essay, was imminent. He also outlined the development strategy that Anthropic intends to pursue in order to achieve it. This strategy rests on a logic of accelerated automation in technological and scientific development involving two key stages: (1) the first consists in designing high-performing models in the fields of programming and artificial intelligence research; (2) the second consists in using these same models to contribute to the design and training of new-generation models, thereby establishing a loop of continuous self-improvement whose aim is to increase the speed at which so-called “powerful” AI can be developed. In a sense, the goal is to institute a mechanism of self-generation in which AI models contribute to the creation of future AI models, the latter ensuring not only the processes of research and development but also the programming of subsequent systems.
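The two-stage strategy just described amounts to a compounding loop: models of generation n contribute to building generation n + 1. A deliberately caricatural toy model conveys the shape of such a feedback dynamic; the update rule and every constant below are invented for illustration and carry no empirical claim.

```python
# Toy feedback-loop model (purely illustrative, not Anthropic's method):
# each generation's capability grows with the share of R&D work the
# previous generation can already automate.

def next_capability(c: float, automation_weight: float = 0.5) -> float:
    """One generation of the loop. The factor c / (1 + c) is a saturating
    proxy for 'fraction of R&D the current model can do itself'."""
    return c * (1.0 + automation_weight * c / (1.0 + c))

c = 0.1                 # arbitrary starting capability level
trajectory = [c]
for _ in range(30):     # thirty hypothetical generations
    c = next_capability(c)
    trajectory.append(c)
```

The qualitative behavior, not the numbers, is the point: growth is slow while the model automates little, then approaches a fixed multiplicative gain per generation once automation saturates, which is the "acceleration loop" intuition in its crudest form.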
What Dario Amodei describes—and what Alexandr Wang, the newly appointed Chief AI Officer of Meta’s Superintelligence Labs, interprets as the advent of a new era of recursive self-improvement that would have emerged at the end of 2025 (source: interview given at the India AI Summit on February 18)—amounts to nothing less than a process of progressively delegating the research and development of AI models to systems whose execution would increasingly unfold without human intervention. The ambition, however, remains to create an autonomous technical system endowed with a “superhuman” intelligence — “a country of geniuses in a datacenter”—capable of accelerating scientific discovery in fields as vital as biology, medicine, and neuroscience.
This absolutely central point above all allows Amodei to advance a second hypothesis, which concerns no longer artificial intelligence merely as a technical object, but as a dynamic principle capable of transforming the very structures of scientific research and, by extension, the technological, economic, and societal spheres. In the view of Anthropic’s CEO, AI is not simply an exceptional technical invention but a catalyst for civilizational change without historical precedent. It is precisely at this juncture that the reference to Richard Brautigan—and to the techno-optimistic imaginary of Silicon Valley that he partly foreshadows—acquires its full significance. The difference, however, is that Dario Amodei does not merely offer a hypothetical reading of the changes that the advent of powerful AI might bring about, whose conceptual architecture he has just outlined. Rather, he seeks to situate these changes within a temporal horizon grounded—this being the second hypothesis of his analytical framework—in an assessment of the speed with which the five domains he intends to examine might be transformed, beginning from what he calls “year one” of powerful AI.
This time it is from economics—a field in which he readily describes himself as “an informed amateur”—rather than from computer engineering that he draws inspiration for what may be understood as a form of accelerated history of scientific discovery, which would simultaneously become an acceleration of human history itself. This hypothesis rests on an implicit assumption—one that is in fact broadly defensible—that Amodei does not see fit to interrogate explicitly but which runs throughout his entire text: a techno-centric vision of historical evolution according to which innovation and technical progress constitute the primary drivers of the economic, social, political, and cultural transformations of human societies. Such a conception—which translates in another form the techno-optimistic ideology of Silicon Valley—is moreover deeply embedded in the dominant historical narrative of the Western world. It structures the way in which most of our school textbooks are written, typically organizing history around major turning points associated with decisive technological ruptures: the Paleolithic, the Neolithic, the invention of writing, the printing press, the scientific revolution of the sixteenth and seventeenth centuries, the first, second, and third industrial revolutions, and so forth.
What is more original here, however—and what reveals more of a voluntarist strategic projection than of a philosophy of history that does not name itself—is the quasi-dated hypothetical trajectory that Amodei sketches for historical evolution. He draws analogically from economics two concepts: (1) that of factors of production (labor, capital, resources), and (2) that of marginal returns, understood as the gain obtained when an additional resource is added to a given production process.
The first allows him to think about the inertia factors that may slow technical progress when applied to artificial intelligence—the time required to establish infrastructures necessary for conducting scientific experiments, the need for reliable data validated by researchers to train models, political, institutional, and regulatory constraints, and so forth. The second serves primarily to clarify the idea of a surplus of intelligence produced by the deployment of powerful AI as described above. Hence the use of the expression “marginal return on intelligence,” which Amodei introduces in order to suggest the potential productivity gains in research and development that a significant increase in artificial cognitive capacities might generate.
At the risk of somewhat diverting Amodei’s argument toward a strictly economic interpretation that is not entirely his own—since the concepts he borrows from economics are here reinterpreted in order to clarify his point—it is nevertheless possible to relate his reasoning to what economists have called, since David Ricardo, the law of diminishing returns. According to this principle, the increasing addition of a factor of production tends eventually to reduce marginal gains and therefore the average productivity of the factors involved. To take a simple example from a software development department, adding developers to a given project may initially increase average productivity up to the point where the addition of one more developer increases coordination costs and ultimately reduces the team’s overall productivity. There thus exists an optimum corresponding to the threshold of maximum average productivity, beyond which the addition of a factor of production becomes counterproductive. Applied to Amodei’s hypothesis, the factor of production in question would correspond no longer to human labor but to an increase in the volume of intelligence available within a distributed computational architecture, while the optimum would correspond to the threshold beyond which the addition of AI instances would no longer produce the expected acceleration in scientific discovery.
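The software-team example can be made concrete with a toy calculation. The sketch below uses invented constants (ten output units per developer, a fixed cost per communication channel) purely to illustrate Ricardo’s principle and the notion of an optimum beyond which adding a factor of production becomes counterproductive.

```python
# Hypothetical illustration of the law of diminishing returns applied
# to the software-team example. All constants are invented.

def total_output(n_devs: int) -> float:
    """Raw output grows linearly with headcount, but every pair of
    developers adds a small coordination cost (a Brooks's-law effect)."""
    raw = 10.0 * n_devs                               # output per developer
    coordination = 0.5 * n_devs * (n_devs - 1) / 2    # cost per channel
    return raw - coordination

def marginal_return(n_devs: int) -> float:
    """Extra output gained by adding one developer to a team of n_devs."""
    return total_output(n_devs + 1) - total_output(n_devs)

# The optimum is the last team size whose marginal return is still positive.
optimal_size = max(n for n in range(1, 100) if marginal_return(n) > 0)
```

With these constants the marginal return is 10 − 0.5n, so it stays positive up to a team of nineteen and turns negative past twenty: the threshold of maximum productivity the text describes. Transposed to Amodei’s hypothesis, `n_devs` would become the number of AI instances and the coordination term whatever bottleneck eventually caps the acceleration of discovery.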
These two basic hypotheses—namely (1) the emergence of a powerful AI endowed with cognitive capacities superior to those of the best experts in most domains, particularly those commonly referred to as the “hard sciences” (mathematics, physics, chemistry, biology, neuroscience, theoretical computer science, engineering sciences, and so forth); and (2) the corresponding transformation of scientific and technological progress resulting from the constrained but massive increase in artificial cognitive capacities (intelligence as a factor of production)—lead Dario Amodei to formulate a third hypothesis: that of an acceleration in the tempo of scientific discovery by a factor of ten. In other words, what until now required a century to accomplish might henceforth, thanks to powerful AI, be achieved within only a few years (five to ten).
This third hypothesis, conjectural and speculative in nature, derives less from empirically demonstrated facts than from Amodei’s personal reading of the technological progress currently underway. This tends to confirm the utopian dimension of his essay, albeit in the form of a programmatic utopia grounded in the concrete possibilities that the advent of powerful AI would, in his view, open up. Amodei uses the evocative image of the “compressed century” to emphasize the magnitude of the imminent transformations that such an advance would bring about—transformations that would be less strictly technical in nature than anthropological and civilizational.
It is precisely in order to give concrete substance to this hypothesis of the “compressed century” that Anthropic’s co-founder undertakes to examine the future evolution—over a horizon of roughly ten years following “year one” of powerful AI—of five key sectors, owing to their direct influence on “the quality of human life”: biology and physical health; neuroscience and mental health; economic development and the fight against poverty; peace and governance; work and meaning. In each of these domains he imagines the hypothetical world of a humanity transformed in the age of powerful AI, within a context of presumed technological maturity characterized by the deliberate, supervised, and regulated deployment of millions of AI instances oriented toward the pursuit of a common good presented as collectively desirable.
From the Adulthood to the Adolescence of Technology. — In a more recent essay published on his blog in January 2026, The Adolescence of Technology, which presents itself as the negative counterpart to Machines of Loving Grace, Amodei invokes the idea of scaling laws, but above all that of a new phenomenon already mentioned earlier, namely the feedback loop, which together would underpin the current dynamic observed in the exponential performance of artificial intelligence models, right up to the fateful and imminent arrival of a powerful AI as defined in Machines of Loving Grace. According to the first idea—which should be understood as regularities observed by model designers over time rather than as “immutable scientific laws” in the strict sense of the term—there would exist a remarkable, empirically established correlation between the increase of certain resources—quantity of data, computing power, number of parameters, and so forth—and the performance of models. The more recent concept of the “acceleration loop,” or “feedback loop,” for its part conveys the idea that AIs would themselves become increasingly involved in the construction of new-generation AIs, to the point of carrying out an ever more substantial share of the work ordinarily performed by developers, or even by researchers.
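The scaling-law idea—an empirical power-law relation between resources and performance—can be illustrated with a small fit. The data points below are invented for the purpose of the sketch; only the functional form L(C) = a · C^(−α), a straight line in log-log space, reflects what the scaling-law literature actually reports.

```python
import math

# Hypothetical (compute budget, model loss) pairs, invented for illustration.
data = [(1e18, 3.2), (1e19, 2.6), (1e20, 2.1), (1e21, 1.7)]

# In log-log space a power law L = a * C**(-alpha) becomes a straight line:
# log L = log a - alpha * log C, so fit it by ordinary least squares.
xs = [math.log(c) for c, _ in data]
ys = [math.log(l) for _, l in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
alpha = -slope                       # positive exponent of the power law
a = math.exp(my - slope * mx)        # scale constant

def predicted_loss(compute: float) -> float:
    """Extrapolate the fitted power law to a new compute budget."""
    return a * compute ** (-alpha)
```

The "law" is thus nothing more than an observed regularity of exactly this kind, extrapolated forward, which is why the text is right to insist that these are empirical trends noted by model designers rather than immutable scientific laws.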
Thus, over the last ten years, we would be witnessing a regular, exponential, almost predictable improvement in the cognitive capacities of AI models, such that, according to him, we would be closer than ever to an absolutely unprecedented moment of transition: a historical and transformative tipping point between what he calls an age of the adolescence of technology and an adulthood of technology. Whereas the latter describes a desired state of technological maturity, fully aligned with aspirations, values, and institutional norms, the former designates, by contrast, less a state than a transitional historical phase, characterized by an asymmetry between our capacity to understand and fully master the technical systems we design and the framework of governance that is ours today.
Thus, the CEO of Anthropic moves from a civilizational projection, close to a techno-utopian theory of the becoming of human societies developed in his first text, Machines of Loving Grace, to a form of techno-centered philosophy of history that he pursues in his second essay, The Adolescence of Technology, since it is technology—considered here at the moment of its quasi-ultimate stage of self-development—that constitutes the principal factor of transformation and progress of those same societies.
Indeed, with Amodei, as with many intellectual and entrepreneurial figures in Silicon Valley, there appears the new idea of an advanced, even ultimate, stage in the development of technology that would already have begun—an era of recursive self-improvement, to borrow the recent terms of Alexandr Wang—which can be characterized by the transition from technology as a set of means—tools, machines, instruments, industrial robots, and so forth—procedures, methods, and heteronomous forms of know-how (ultimately controlled by humans) implemented in order to transform nature and improve humanity’s living conditions, to technology as an autonomous motor (capable of making decisions and governed by its own laws) of acceleration and civilizational transformations without precedent. That is precisely what the concept of powerful AI expresses: at once the critical endpoint of the historical development of techniques and the starting point of a new era of prosperity.
Hence the central question running through Dario Amodei’s entire second essay: how are we best to negotiate the passage from this present “technological adolescence” toward the fundamentally positive future he had begun to sketch in Machines of Loving Grace? In other words, how can human societies collectively confront the risks inevitably contained within technical systems whose power and cognitive capacities are growing at a pace far more rapid than our own capacity to understand, frame, and govern them? For Anthropic’s CEO, indeed, five major categories of risk stand between us and the techno-utopian project he calls for. I will run through them briefly here, without entering into the detail of each, before bringing out a first fundamental characteristic of technology and technical progress that a joint reading of Amodei’s two texts allows us to identify.
Autonomy risks: these are the risks linked to the loss of control over an AI that would no longer behave in accordance with human interests and values, or might even decide to act with the aim of harming humanity (the classic problem of value alignment). The author refers to the now documented example of unpredictable AIs reproducing behaviors that have already been observed, more or less covertly, such as obsession, excessive flattery, deception, cheating, or even blackmail. The use of terms borrowed from psychology, which in some way assimilates the process of training an AI to the process of developing an individual’s behavior, allows Amodei to insist on the unconventional nature of an AI system, as opposed to a “standard” technical system, whose full range of future functionalities and behaviors can, logically, be anticipated and circumscribed with precision. In other words—and this is a central point to take into account in the establishment of a risk-management system, all the more so when associated with powerful AIs—AI constitutes a kind of object or technical system whose actions and reactions (intentions, consciousness, emotions, and so forth) are not entirely deterministic. Such systems may therefore adopt undesired behaviors, even when they have been trained to conform to explicitly stated ethical rules and values. Hence the need, according to Amodei, literally to form the “identity” and “character” of an AI so that it may behave as a “virtuous person” would—a tacit reference to a virtue ethics such as that developed by Aristotle in the Nicomachean Ethics—in a wide variety of new situations.
Misuse risks: these are no longer linked to the internal and unpredictable functioning of AI models, but to the malicious uses that may be made of these very models, by virtue of the increase in capacities they confer on those who use them. In other words, generalized access to an amount of cognitive capacity unmatched in the whole history of humanity simultaneously removes the barriers of competence that once reserved certain domains to a minority of qualified experts and, correlatively, increases the risks of malevolent exploitation of AI in especially sensitive sectors, such as the manufacture of chemical, biological, or nuclear weapons. It is rather as if we all had, in our pocket, “little geniuses” transforming each of us into a potential expert in any field whatever, for better and for worse. The democratization of access to knowledge and know-how, which at first sight may be regarded as a genuine civilizational gain, turns out at the same time to be a quasi-generalized expansion of everyone’s destructive capacities and raises the sensitive question of the very possibility of an institutional governance of risk. In other words, the responsible and ethical use of such a capacity to harm, whose “marginal cognitive cost” tends toward zero, is relegated to the private sphere and to the moral conscience of each individual. But are we, as a society, sufficiently morally mature to make responsible use of such cognitive power, which simultaneously constitutes an increase in capacities to act and to harm?
Risks of authoritarian consolidation: from the risks of malicious uses at the individual level, Amodei moves on to malicious and abusive uses carried out by collective actors of an institutional type, endowed with a form of legal legitimacy, such as states, military organizations, or large corporations. The latter might indeed use powerful AI to consolidate their power, extend their hold over individuals, or even control access to certain key resources such as data centers, computing power, or technical expertise. The author is thinking first and foremost of the use that autocratic regimes might make of such technological power, not only to control and surveil their populations by compensating for the human limits of organizing repression—Amodei mentions in particular the not entirely manipulable character of human beings, who can at any moment disobey the orders of a hierarchy, or even denounce them—but also to dominate other states lacking comparable technological capacities. Anthropic’s CEO thus identifies four key means of what might resemble a form of “AI-assisted autocracy” or “augmented autocracy”: (1) autonomous lethal weapons, in the form of fully automated swarms of armed drones not controlled by humans; (2) permanent surveillance of the digital ecosystem by AI, illustrated by the idea of the panopticon that Amodei borrows from the philosopher and jurist Jeremy Bentham; (3) automated propaganda and large-scale manipulation of minds, made possible by the intimate knowledge these systems might acquire of individuals; (4) finally, AI-assisted strategy in the military, diplomatic, and economic domains, for the purposes of geopolitical domination. 
What Dario Amodei is pointing to here is less the risk of an instrumental use of techniques for political domination—a phenomenon already well known and widely documented by historians specializing in authoritarian regimes—than the risk of an asymmetrical and unbalanced appropriation of new means of control and repression that would contribute to amplifying the harmful power of autocratic states.
Risks of economic disruption: from security risks—corresponding to the first three categories—Amodei moves on to the risks relating to the disruptions caused by the deployment of powerful AI in the economic sphere of production, labor, and employment. Anthropic’s co-founder develops the paradoxical idea of a sort of golden age of exceptional wealth creation—he advances the hypothesis of an exponential acceleration of annual GDP growth of 10 to 20%, without ever truly justifying figures that are nevertheless between two and four times higher than those experienced by industrialized countries during the postwar boom—which would simultaneously have the consequence of destroying jobs on a massive scale through a process of generalized task automation. This would lead, in the absence of corrective mechanisms on the part of public authorities, to an increased concentration of economic power and, in turn, of political power in the hands of an ultra-rich elite. Amodei justifies this prospect with four arguments intended to show how the current technological revolution would differ from earlier industrial revolutions: (1) the brutality of the transition and the unprecedented speed at which it should unfold, rendering almost impossible the parallel adaptation of skills, institutions, public policies, and organizations; (2) the generalized substitution that the AI revolution operates at once on the intellectual and physical dimensions of human labor; (3) the deepening of inequalities that AI would tend to amplify by “selecting” the individuals best qualified or most adaptable to change; (4) the observed exponential increase in AI performance, which would progressively but rapidly accelerate the process of substituting machine labor for human labor.
Indirect destabilizing risks: these designate, finally, in a sort of catch-all category that is never truly specified, the whole set of indirect and unforeseeable effects that powerful AI might produce, assuming that all the direct risks previously enumerated by Amodei had been brought under control. They are by essence unknown, precisely because they are impossible to name and circumscribe, by reason of the fundamentally and partially unforeseeable character of the impacts that a new technology—as an inseparable component of a larger whole that may be described as a “socio-technical system”—is liable to exert on the existing socio-technical system that it inevitably comes to disturb: individuals, skills, cognitive capacities, behaviors, group dynamics, modes of socialization, organization of work, emergence of new values, regulation, institutions, the natural environment, and so forth. Given the complexity of reality, it is impossible to provide an exhaustive list of the unforeseeable effects that powerful AI might produce on individuals and on society; Amodei nevertheless puts forward, hypothetically, a series of undesirable effects likely to appear in a world that had nonetheless reached full technological maturity: a world of “compressed progress,” in which the beneficial effects of powerful AI, as he describes them in his first essay Machines of Loving Grace, would begin to manifest themselves, and in which none of the direct risks identified in The Adolescence of Technology would have materialized. The undesirable effects of which he can provide only a few illustrative examples—psychological disorders, loss of cognitive capacities, dependence and addiction, loss of meaning and bearings, and so forth—thus appear as the consubstantial counterpart to the beneficial effects that Amodei himself theorized in his first essay.
They illustrate two central ideas to which we shall return: (1) the fundamentally unforeseeable character of the effects of technical progress; (2) the intrinsic ambivalence of that progress, which simultaneously produces—and without it being possible to anticipate them fully—beneficial effects and harmful consequences.
Technology as Pharmakon. — The confrontation between Dario Amodei’s two texts thus illustrates in exemplary fashion one of the most fundamental characteristics of technology: its ambivalence. This ambivalence was masterfully illustrated in a famous passage from one of Plato’s most celebrated dialogues: the Phaedrus. Any teaching on the ethics of technology, and more generally on the ethics of artificial intelligence and algorithms, ought to begin with a reflection on the passage in question, known as the myth of Theuth (Phaedrus, 274b–276b), a divinity to whom the Egyptians attributed the invention of several sciences and techniques, one of which is precisely the subject of the passage: writing. Whether the legend recounted by Socrates is true or false, whether or not it is an invention of Plato, matters little. What is essential, as always in Platonic myths, lies elsewhere. It is a matter of illustrating in imagistic form what argumentative discourse is less suited to address when the issue concerns a fundamental philosophical problem.
By taking the example of a particular technique, namely writing, the central question that Plato indeed addresses, through the mouth of Socrates, and that hyper-technified societies seem to have obscured, is the following: does every technical invention constitute a progress for human societies? In Plato’s texts, the Greek term technè is generally translated by the word “art,” which must above all not be understood in the modern sense of “fine arts,” but rather as a know-how, or an acquired and regulated competence, aimed at producing something methodically. Among the Greeks, technique therefore designates above all a know-how, a means, or a skill for producing something, rather than a certain type of objects (tools, machines, devices, utensils, instruments, and so forth).
The example of writing, taken by the author of the Republic, is moreover not insignificant for us, since historians agree in holding that it is one of the most important inventions in universal history, bringing the period of prehistory to an end and thereby opening, at the same time, the age of history properly speaking. Like powerful AI for Anthropic’s CEO, writing thus introduces a civilizational rupture of which, until recently, we were still the privileged heirs, but which the advent of generative AI now in some sense calls into question, by contesting a capacity that until now seemed constitutive of our humanity. For us, contemporaries of ChatGPT, Claude, and Gemini, the myth of Theuth suddenly acquires a new resonance that it did not yet possess before the public release of ChatGPT, then powered by GPT-3.5.
Plato naturally introduces it in an altogether different context. Yet the lessons he draws from it are, more than ever, timely. To the question whether writing is an adequate substitute for the living speech of the philosopher, Socrates—the principal character of Plato’s dialogues—responds with a legend that he says he once heard. It tells the story of a divinity named Theuth who comes to visit the king of Egypt, Thamus, in order to submit to his judgment the usefulness of each of the arts—in the sense of the word technè recalled above—of which he was the inventor. Writing, which the god reveals last, is presented by him as a miraculous remedy against forgetting, for the preservation of memory and the acquisition of knowledge. To this Thamus replies that the remedy so highly praised by Theuth, far from being an antidote preserving memory through the external conservation of writings on material supports, might in reality prove to be a poison for the inner memory of persons, even to the point of discouraging them from exercising it.
In other words, what appears at first sight as a remedy (Theuth) also constitutes, at the very same moment and under another aspect, a poison (Thamus). So it is with writing, as with technique in general (the web, the smartphone, artificial intelligence, and so forth), which is at once good and bad. This is exactly what the ancient Greek term pharmakon means: namely—to borrow the definition given on the website of the association Ars Industrialis—a curative power (remedy), but also a destructive power (poison). Hence the ambivalent character of every technical device, which a parallel reading of Amodei’s two texts recalls in an almost paradigmatic way, since AI can prove, by turns—but in an inseparable manner—to be an instrument of emancipation and progress, but also of alienation and civilizational regression.
Now if every technical object or system is by essence pharmacological, in the sense that it possesses intrinsically and structurally the ambivalent character of a pharmakon—that is, remedy and poison at once—then any investigation and apprehension of such objects or systems (car, hammer, computer, web, search engine, television, smartphone, etc.), a fortiori digital, algorithmic, and artificial-intelligence systems, will take the metaphorical form of a pharmacology understood this time as a method and a way of thinking them, one that avoids two pitfalls: (1) techno-optimism, which consists in regarding technology as that which inevitably saves (remedy); (2) technophobia, which consists in systematically regarding technology as that which harms (poison) individuals, society, or the environment.
As the Ars Industrialis website further reminds us, the “at once” that characterizes both pharmacology and the irreducibly ambivalent character of every technical object points to the imperative necessity “to apprehend in the same gesture the danger and that which saves.” This is precisely what Plato illustrates in the Phaedrus through the two inseparable figures of Theuth and Thamus. This opposition strangely echoes, nearly 2,400 years after the dialogue was written, the profound transformations that AI technologies are liable to bring about in human cognitive faculties: attention, memory, the motivation to learn and acquire knowledge, analysis, critical thinking, and even cognitive autonomy. It would even seem probable—as an ultimate resonance with Plato’s text—that we may also lose our capacity to write and compose texts, without this loss at the same time representing—contrary to what the author of the Phaedrus hoped—a gain from the point of view of living speech and contradictory debate, which precisely require the cognitive faculties that generative AIs threaten today.
What may well disappear—or at least progressively weaken before our very eyes with the rise of generative AI—is not so much writing itself and the product that accompanies it—since we should, on the contrary, witness a veritable deluge of written texts that the technology suddenly makes possible at a cognitive cost approaching zero—but rather the human cognitive faculties that the very act of writing mobilizes and develops. Just as writing represents, in Plato’s text, a form of externalization of memory that leaves the rational part of the soul—to adopt Plato’s terminology—in favor of material supports such as papyrus scrolls, generative AI operates a comparable movement of externalization of certain cognitive operations usually entrusted to biological neural networks, now transferred to artificial neural networks. We thus, in a sense, pay for what we gain on one side (remedy)—increased productivity, automation of repetitive tasks, instant access to knowledge, multilingual translation, synthesis of information, data analysis, programming, acquisition of new skills, and so forth—by what we lose on the other: saturation of the informational space, deepfakes, disinformation, the weakening of the exercise of certain skills, technological unemployment, the digital divide, widening inequalities, and perhaps even the death of philosophy and literature.
The same type of reasoning may be applied to powerful AI as defined by Amodei. As a pharmakon, it reproduces the same structure of ambivalence that characterizes, more generally, every technical object or system. The extraordinary remedies it promises in addressing certain fundamental problems of humanity—thanks to an unprecedented acceleration in the pace of research and scientific discovery—such as the elimination of a large share of cancers, the general improvement of human health, the possible doubling of life expectancy, the prevention of certain psychiatric pathologies, or the drastic reduction of global poverty, are accompanied, correlatively, by the emergence of unprecedented risks: the loss of control over extremely powerful technical systems, major disruptions to the labor market, the strengthening of autocratic regimes, the concentration of economic and technological power, or malicious uses that the deployment of such a technology might facilitate.
A pharmacological analysis of AI thus seems to orient any normative evaluation of technology toward a consequentialist approach, consisting in taking into account the ambivalent dimension of the device under consideration—remedy and poison at once—and in identifying the whole set of direct and indirect transformations, societal, anthropological, and civilizational, that it may bring about within the socio-technical system in which it is embedded (infrastructures, markets, users, social practices, institutions, regulation, relations of power, and so forth). This type of reasoning in terms of costs and benefits, typical of engineering thinking, assumes that it is possible to weigh, in the most comprehensive and impartial way possible, the beneficial as well as the undesirable effects that the deployment of an AI system might produce. Such an approach most often proceeds through a form of evaluative calculation aimed at determining the maximal net benefit for society as a whole. It consequently requires that one be able to decide on the basis of explicitly defined ethical and political criteria of evaluation; to assess the expected consequences; to determine the type of society collectively deemed desirable; and also to identify the system and scale of governance most appropriate for orienting and framing the development of technical progress.
Above all, it presupposes a voluntarist and partially controlled conception of the development and evolution of technical systems and, by extension, of the history of societies, grounded on two essential ideas: (1) that the capacity to conceive and fabricate artifacts is constitutive of, and inseparable from, human nature (technology as an anthropological rather than historical category); and (2) that technological innovations constitute the principal factor in the overall evolution of socio-economic, institutional, and cultural structures. It is this vision—one that is not entirely deterministic regarding the becoming of techniques, and according to which it would be possible to orient and guide their uses in accordance with morally justified norms and values—that seems to animate Dario Amodei’s two essays. It is this vision, above all, that makes possible the idea of a governability of technology defended by the author.
This decidedly technophile and humanist conception, in the sense that it is the human being, as subject, who ultimately decides the trajectory that ought to be given to the development of techniques and to their use, stands in sharp opposition to a deterministic conception of the evolution of techniques, which holds, on the contrary, that technology now constitutes an autonomous sphere of evolution that shapes human beings far more than it is shaped by them. The principal author associated with this conception of the technological world is Jacques Ellul (1912–1994), whose thought will now serve as the framework within which I will examine Amodei’s techno-utopian project, which in many respects is representative of the intellectual outlook of Silicon Valley.
A Critical Reading of Amodei through Ellul. — I would indeed like to conclude this presentation—which, I hope, betrays as little as possible the two highly stimulating essays by Dario Amodei—by evoking the philosophy of technology that the two texts do not think explicitly but clearly suggest, and by confronting the latter with the philosophy of technology as developed by Jacques Ellul, notably in Chapter II of La technique ou l’enjeu du siècle, entitled: Caractérologie de la technique. For a detailed presentation of the characteristics of technology as they began to emerge in the eighteenth and nineteenth centuries, I refer the reader to my article: The Six Characteristics of the World of Technology According to Jacques Ellul.
At the beginning of his essay Machines of Loving Grace—as well as in a passage that seems at first sight secondary and digressive in relation to the rest of the text—Dario Amodei explains why he and his company Anthropic insist—and will continue to insist—more on the risks of AI than on its potential benefits. He invokes in this respect four reasons, the first of which is entitled: “Maximize Leverage,” which he explains as follows (I leave aside the other three):
“The fundamental development of AI technology and a large share of its benefits (but not all) seem inevitable—unless the risks derail everything—and are largely driven by powerful market forces. By contrast, the risks are not predetermined, and our actions can greatly change their probability.”
The notion of “leverage,” borrowed from the world of finance, which in turn borrows it from the world of physics, conveys the idea of an amplifying effect whereby a small initial action produces a much larger result. In finance, this may refer to an increased return on capital initially invested thanks to the contribution of borrowed money; in physics, it may refer to the application of a reduced force at one precise point in order to produce a greater effect at another. In other words, to “maximize leverage” means to multiply a final result in relation to a smaller initial effort, provided that effort is appropriately targeted.
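The financial sense of the metaphor can be made concrete with a small worked example. The figures below (a 3:1 ratio of borrowed to own capital, a 10% asset return, a 4% borrowing cost) are purely hypothetical, chosen only to show the amplification.

```python
# Hypothetical illustration of financial leverage: the same asset return,
# amplified by borrowed capital relative to the investor's own stake.

def levered_return(own_capital: float, borrowed: float,
                   asset_return: float, interest: float) -> float:
    """Return on own capital when (own + borrowed) is invested at
    asset_return and the loan costs `interest`."""
    total = own_capital + borrowed
    gain = total * asset_return - borrowed * interest
    return gain / own_capital

# Unlevered: 100 invested at 10% yields 10% on own capital.
# Levered: 100 own + 300 borrowed at 4% yields 28% on the same stake --
# a much larger result from the same initial effort.
print(levered_return(100, 0, 0.10, 0.04))    # 0.10
print(levered_return(100, 300, 0.10, 0.04))  # 0.28
```

The same structure underlies Amodei’s usage: a targeted intervention at the right point (here, cheap borrowed capital; in his argument, well-chosen safety measures) multiplies the final effect of a modest initial effort.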
Within the framework of artificial intelligence development as envisaged by Dario Amodei, “maximizing leverage” would consist in identifying the most decisive control points and risk factors over which we are likely to have the greatest influence—such as regulation, the establishment of safety standards, the transparency and explainability of algorithms, or forms of international governance—and in acting accordingly so as to ensure that the totality of actions undertaken maximizes the probability of a beneficial development of AI while minimizing the associated risks.
The passage cited above contains above all a paradox that any attentive reader cannot fail to notice: on the one hand, the assertion of the deterministic character of technological evolution and of the beneficial effects of powerful AI—they seem to be, for the most part, inevitable—and, on the other hand, the assertion of the indeterministic, or non-predetermined, character, to retain Amodei’s terminology, of the risks and negative effects of powerful AI. It is almost as if Anthropic’s co-founder were dividing, in a somewhat artificial fashion, and in order to be able at all costs to uphold the thesis of the governability of AI, the world of technology into two parts: one would function according to internal laws over which we have no hold whatsoever—since they are notably driven by the blind forces of the market—whereas the other, whose course could be influenced by human action, would therefore not be subject to the same constraints. It would thus suffice “to exert leverage” upon the latter, which would be steerable and controllable, in order to amplify the former, which would in any case already be inscribed in the “natural” development of AI.
At first glance, one might think that Amodei’s claim stands in contradiction with the principle of the ambivalence of technology masterfully brought to light by Plato’s text. This point is nevertheless largely debatable depending on how one interprets the Greek philosopher’s text. Dario Amodei does not in fact deny the pharmacological character of technology, since his two texts, Machines of Loving Grace and The Adolescence of Technology, consist precisely in thinking both the positive and the negative aspect of powerful AI. What the quotation mentioned above does assert, however, is that it would be possible, in a sense, to neutralize technology insofar as it is poison, in order to preserve only technology insofar as it is remedy. Yet the myth of Theuth may be interpreted in two different ways: (1) the first position consists in affirming that the only way to extirpate the poison is to destroy the remedy, the two being inseparably linked. In short, either we put an end to technical progress, or we accept to bear the risks inherent in every technical development on account of the benefits we may derive from it elsewhere; (2) the second position, more nuanced, affirms on the contrary that it is possible, without denying the fundamentally pharmacological nature of technology, to inflect its uses in such a way as to maximize its beneficial effects while limiting its destructive ones.
The second position, which is Dario Amodei’s in its least nuanced version, comes down to distinguishing, on the one hand, technical development, which would be inevitable and intrinsically determined, and, on the other hand, the consequences of technical progress, which would be governable. But can the two legitimately be separated, since, precisely as a unitary and global phenomenon, the consequences—whether negative or positive—are themselves intrinsic to technical development? This is precisely what the myth of Theuth already illustrates perfectly, for it is not possible to modulate at will the fundamentally ambivalent character of technology. If technology constitutes a global system—which Plato, in my view, affirms and Amodei denies—then its effects are not external to its development, but proceed directly from it. This, in any case, is the thesis defended by one of the most important thinkers of technology, Jacques Ellul.
In an article published in 1965 and entitled precisely Réflexion sur l’ambivalence du progrès technique, the author of La technique ou l’enjeu du siècle takes up once again the exceedingly important question of “the excellence or the danger of technical progress.” From the very outset, he insists on what he considers “one of the most important characteristics of the latter: its ambivalence.” Though partly ideological and difficult to settle by exclusively scientific arguments, the critical position Ellul asserts seems to stand in exact opposition to the one defended by Dario Amodei and to the often oversimplified discourses conveyed by most Silicon Valley tech companies, which attribute the potential risks of a technology solely to its uses. Yet, recalling the Platonic notion of the pharmakon, Ellul writes, with regard to the ambivalent nature of technology:
“What I mean by this is that the development of technology is neither good, nor bad, nor neutral—but that it is made up of a complex mixture of positive and negative elements (…). I further mean by this that it is impossible to dissociate these factors in such a way as to obtain a purely good technology, and that it absolutely does not depend on the use we make of technical equipment to obtain exclusively positive results.”
Moreover, how could we even judge in full autonomy and objectively whether technical progress is beneficial or not, since our behaviors and our thoughts are themselves constantly oriented by technical devices that, in a sense, command us to adapt psychologically to them? In other words, it is impossible to step outside the technical universe, while the evaluation we are able to make of it is already shaped by that very universe.
Some may argue that we are free, at any moment, as individuals, in the use we make of a technology. But such behavior, however virtuous it may be, changes nothing whatsoever in technicized civilization taken as a whole. For the latter advances according to an autonomous and “blind” dynamic, evacuated of any finality by which the excellence or otherwise of technical progress might be determined. This logic of operation, internal to the world of technology, which Ellul elsewhere calls the “self-augmentation of the technical phenomenon” (see The Six Characteristics of the World of Technology According to Jacques Ellul), radically disqualifies above all any discourse of a teleological nature (from the Greek telos, “end,” “goal,” and logos, “discourse,” “science”), that is, any discourse that assigns normative, political, or moral ends to technological progress in order to orient its future course. This is precisely what Amodei proposes in Machines of Loving Grace, when he describes the potential civilizational benefits we might derive from powerful AI.
Within this framework, Dario Amodei’s proposal, however laudable it may be at the level of intentions—and however much everyone would like to believe in it as a moral agent enamored of good sentiments—rests upon an error of judgment regarding the very real functioning of the world of technology in general, which evolves according to a causal progression that orients from within “already existing possibilities of growth.” This is exactly what the term “self-augmentation” expresses, namely a phenomenon of continuous and autonomous increase and development, without the idea of a final cause—here, values or norms—that would come from outside to orient the overall evolution of what Ellul would later call the technological system. According to an Ellulian reading, powerful AI as well as its uses, whether good or bad—and without any one of us, at the individual level, being able in the slightest to inflect the movement of continuous growth of which he himself is a part—are already potentially inscribed in the current state of evolution of the technological system.
Now the autonomous self-augmentation of the technical system—which is generally identified with what is commonly called “technical progress”—has, according to Ellul, the mechanical effect of intensifying its ambivalent character and therefore of making the relations between the positive and negative aspects of technical development increasingly inextricable. It is precisely this simultaneous increase in both the possibilities and the dangers of technical progress that Ellul illustrates through four key theses, which may be interpreted as four fundamental characteristics of technical progress. These themes are developed more extensively in a chapter of his third and final major work devoted to the critique of technology, The Technological Bluff (1988). I will first cite them as they appear in his 1965 article before briefly presenting them: (1) All technical progress has a cost; (2) Technical progress raises more problems than it solves; (3) The harmful effects of technical progress are inseparable from its beneficial effects; (4) All technical progress produces a large number of unpredictable effects.
Technical progress has a cost (1). — This is an aspect of technical progress that we have already encountered in the analysis of the myth of Theuth in Plato’s Phaedrus. It reminds us of the non-absolute character of technical progress, which inevitably carries with it both gains and losses. This thesis, although apparently self-evident, is nevertheless widely overlooked when attempts are made to evaluate technical progress, evaluations that tend most often to highlight the gains while underestimating the losses. Ellul does not seek to minimize the undeniable value of progress and innovation—such as improvements in living comfort, the extension of life expectancy in good health, or the increased capacity for action afforded to human beings—but rather to expose the biased character of such evaluations, which is symptomatic of a civilization that is itself highly technicized. He also emphasizes the difficulty of such evaluations, due to the asymmetry between gains and losses and the diversity of the goods at stake.
Technical progress does not merely transform what belongs, in general terms, to the realm of material goods. It also reshapes the forms of social life to which individuals are attached, human relationships, modes of thought, and even ecological balances. How, in such a context, can we evaluate what is gained and what is lost when the two are not of the same nature and when every technical advance simultaneously transforms the balance of values within a civilization? The ambivalence of technical progress therefore makes it impossible to determine objectively the net balance of gains it produces, because what is being compared is fundamentally heterogeneous.
It raises more problems than it solves (2). — The second characteristic of technical progress concerns the local nature of the solutions it provides, whose deployment subsequently generates global repercussions across the entire social body — often in ways that cannot be fully anticipated. In other words, the specific and localized problem that a given technique resolves according to its initial purpose produces, over time, new and more complex problems according to a recursive and continuous dynamic. To solve a problem is therefore, in many cases, to create others of greater magnitude. Ellul illustrates this point with the technical advances of the nineteenth century in the fields of the division of labor and mechanization, which were originally intended to address basic material needs but ultimately generated large-scale problems of proletarianization and exploitation, subjecting the working class to alienating working conditions.
The contemporary example of generative AI clearly illustrates what Ellul means by this second thesis. The automation of certain cognitive tasks and the productivity gains it promises initially appear as localized solutions to challenges of organizational efficiency. Yet they are likely, over time, to produce systemic consequences of considerable scope — economic, informational, cultural, and geopolitical. According to Ellul, the technological system seems animated by a principle of continuous self-generation because of the double function it performs within the social system that it increasingly tends to encompass entirely: on the one hand, solving human problems in the most efficient way possible—what is today often called technological solutionism—and on the other, generating through this very promise new problems that it will in turn seek to solve by systematically reducing all questions to technical ones.
Its positive and negative effects are inseparable (3). — This third characteristic of technical progress is perhaps the one that most directly reveals the fragility of the techno-utopian project proposed by Dario Amodei and the teleological illusion upon which it rests. This illusion consists in believing that it would be possible to isolate the positive effects of technical progress from its negative ones and, on that basis, classify techniques according to whether they contribute to civilizational advancement or not. According to such a conception, techniques would be morally neutral, while human intentions alone would determine whether they are used for good or ill. In this perspective, the golden age of technical progress would depend on our technological maturity in deploying and using such progress. This is, in essence, one of the central ideas underlying Amodei’s essay The Adolescence of Technology.
Yet, according to the author of The Technological System, such a view ignores the intrinsically systemic and unified nature of technology. Once again, the example of generative AI illustrates Ellul’s argument particularly well. Its advances simultaneously produce beneficial and problematic consequences that we can neither fully foresee nor isolate from one another in order to retain only the positive aspects. The acceleration of content production and the productivity gains it enables on the one hand — as promises of efficiency — require in return the construction and deployment of new technical infrastructures that further complexify the technological system as a whole: extraction of raw materials, energy consumption, cloud infrastructures, model training and deployment, new forms of work organization, acquisition of new skills, recomposition of value chains and power relations, and so forth. These developments generate technological dependencies between firms and states, major disruptions in labor markets, increased pressure on raw material supply and energy production, and even the potential degradation of certain cognitive capacities.
It produces unpredictable effects (4). — The fourth and final aspect of technical progress emphasized by Ellul challenges the widespread belief that technological development can be deliberately directed toward a desirable and predefined state. Once again, Ellul’s analysis resonates strikingly with the texts of Amodei, almost anticipating their critique. According to those who hold an idealized view of technical progress, since technology is merely a set of means, it should suffice to assign elevated goals to its development in order to guarantee its beneficial effects. It is precisely in order to dismantle this illusion of technology as a simple tool that Ellul offers a nuanced analysis of the consequences of technical progress. Every technical innovation, he argues, generates three types of effects: (1) intended effects; (2) foreseeable but unintended effects; and (3) unforeseeable effects. This tripartite classification is designed above all to highlight an underlying principle: there exists a strong correlation between the expansion of the technological system and the growing unpredictability of the effects of technical progress.
The first category — the least problematic and the most obvious—corresponds to the primary function of any technical innovation: its initial purpose, namely the efficient resolution of a particular problem. In other words, every innovation is directed toward a specific objective that corresponds to the result it is meant to achieve: curing a disease, increasing labor productivity, protecting a building from the cold, and so forth. At this level of intended outcomes, technical solutions are usually highly effective because they correspond directly to the purposes for which they were designed.
This is not the case for the second category of effects, whose foreseeable character does not make them desirable. These are effects that are predictable, undesirable, and inevitable, yet nonetheless accepted as such—for example, the increase in stress caused by new production methods, or industrial pollution resulting from the construction of a factory. Ellul notes that such effects are nevertheless frequently ignored when the benefits of technical progress are evaluated. Yet any serious evaluation should take into account all foreseeable consequences, whether positive or negative.
It is the third category, however, that particularly concerns Ellul: the “completely unpredictable” effects of technical progress, which he further divides into two types.
Unpredictable but expected: effects whose occurrence is anticipated but whose precise form cannot be foreseen because of the complexity of the transformations they may generate within the social body. One may cite as an example the profound economic, sociological, and psychological transformations likely to result from the increasing deployment and use of artificial intelligence systems. We know that major changes will occur, yet we are unable to predict their exact nature. We may formulate hypotheses and construct scenarios, but the complexity of the phenomenon prevents us from affirming that any given projection is more certain than another.
Unpredictable and unforeseen: effects that are entirely unexpected, because engineers and researchers cannot explore all possible conditions of use of a given technology. Certain consequences of technical progress therefore become visible only after a technology has been deployed. Once diffused through its interaction with individuals, organizations, and society as a whole, a technical innovation generates emergent effects that no one could anticipate beforehand. It is therefore only retrospectively that undesirable consequences can sometimes be corrected or mitigated—and in some cases we can only observe the irreversible effects of a technology once it has already spread on a large scale.
One would account only very partially for the undeniable contributions of the concept of the ambivalence of technology, and for the key characteristics of technological progress identified by Ellul, if one failed to highlight their connection with his broader description of the general structure of the technological world. In other words, for Ellul the evaluation and the governance of technological progress can be meaningfully undertaken only once the structural characteristics and the logic of operation of what he would call, in the second volume of his trilogy on technology, The Technological System (1977), have first been brought to light. Indeed, how could one claim to govern something, or even raise the more radical question of the governability of technological progress, and a fortiori of AI, without first identifying the laws governing the functioning of what one seeks to govern?
According to Ellul, however, we have reached a stage of historical development in which technology has assumed such magnitude that it must now be understood both as a milieu—that is, the global environment within which human societies evolve—and as a system, insofar as the various techniques have become increasingly interdependent and now evolve according to an autonomous dynamic that largely escapes any form of human intervention. This is the central idea he develops as early as 1954 in The Technological Society, through the six fundamental characteristics of the technological world that I have presented elsewhere. To believe that we still retain full control over the development of the technological system is, at best, a reassuring illusion and, at worst, a form of narcissistic self-deception.
Modernity may therefore be characterized under a dual aspect: first, as an ever-increasing proliferation of techniques of all kinds; and second, as a progressive extension of the technical phenomenon to the entire field of human activity, now subjected to the single criterion of the unique and optimal way of accomplishing each task (“the one best way,” in Ellul’s own words). As a result, virtually no social practice today escapes technology understood as the generalized implementation of a particular form of rationality—instrumental rationality—which tends to extend its domain to every sphere of human existence. It is in this sense that one must understand the fundamentally “totalizing” character of the technical phenomenon: not only does it regulate an ever-growing number of aspects of both public and private life, but it also progressively narrows the range of possible choices, insofar as every option increasingly becomes subject to calculation and to the imperative of efficiency.
How can one fail to see, in what Alexander Wang interprets as the advent of a new era of recursive self-improvement, as well as in the technical descriptions, operational principles, and fields of application that Amodei attributes to powerful AI, an indirect and a posteriori confirmation of Ellul’s theses? This way of describing the current evolution of artificial intelligence corresponds almost point by point to the typically Ellulian hypothesis of the self-augmentation of the technological system, whose process of autonomization now seems to be reaching a kind of climax, to the point of suggesting a gradual transfer of tasks that aims, at least for the moment, at partially automating scientific research and the development of AI agents.
Within such a framework, powerful AI could go so far as to reconfigure the relationship, historically established since the nineteenth century, between science on the one hand and technology on the other. It might even complete the process by which science becomes subordinated to technology—generally referred to as technoscience—since technology itself would become the increasingly autonomous driving force behind the production of knowledge and new techniques. In other words, science would in some sense become a by-product of technology, which would thereby appropriate what had until now been regarded as one of the most eminent prerogatives of human beings: the production of knowledge.
Beyond the considerable persuasiveness of Ellul’s theses in light of the ever-greater role technologies play within modern societies, as well as the generalized automation promised by the project of powerful AI, the conclusions of the author of The Technological Bluff regarding the dynamics of the technological system raise, above all, profound questions about the very possibility of governing artificial intelligence—a question that lies at the heart of Amodei’s two essays. This is all the more the case since powerful AI is also likely to have a clear amplifying effect on the four theses Ellul formulates concerning the ambivalence of technological progress, thereby making the governability of artificial intelligence even more problematic.
By participating, under the hypothesis of the “compressed century,” in the exponential self-augmentation of the technological system, powerful AI would, in parallel with the gains promised by Amodei, significantly increase the cost function of technological progress (1) as well as its repercussions throughout the social body (2 and 3). Moreover, since—according to Ellul’s fourth and final thesis—there exists a strong positive correlation between the growing development of the technological system and the level of unpredictability of the effects of technological progress, unpredictable effects—both positive and negative, since the two are inseparable—whether anticipated or not, are likely to increase significantly. In other words, by accelerating the growth of the technological system, powerful AI simultaneously intensifies its ambivalence and its level of unpredictability, thereby making its governance even more difficult.
I believe that Amodei would largely agree with the conclusions that may be drawn concerning the amplifying effect that powerful AI is likely to have on the four aspects of the ambivalence of technological progress. For both Ellul and Amodei, technology—more than economics—constitutes the key to interpreting modernity. Both adopt a technocentric vision of civilizational and anthropological transformations, in which technology ultimately functions as the essential infrastructural driver of change.
Yet Amodei and Ellul do not share exactly the same view of the general functioning and development of the technological system, nor of the way in which it evolves. This difference stems largely from the standpoint from which they write. It is worth recalling that, in Amodei’s case, this question never becomes the object of an explicit theory, whereas it lies at the very heart of Ellul’s work. Nevertheless, it appears more or less implicitly throughout the CEO of Anthropic’s two essays in the very idea that technological progress could, according to him, be oriented in a direction consistent with norms and values. It is precisely here that the vision of a regulated evolution of AI he proposes deserves to be questioned.
Where Ellul recognizes only efficient causes, Amodei paradoxically reintroduces the possibility of final causes, intended to neutralize as far as possible the negative effects of technological progress in order, as he himself puts it, to leverage its positive effects. From this perspective, the readings of Plato and then of Ellul shed light, in my view, on Amodei’s two essays, as well as on current debates concerning the governance—and indeed the very governability—of technology in general and of AI in particular. This justifies, I believe, bringing into dialogue texts written from radically different standpoints, and serves as a reminder that the issue is fundamentally transdisciplinary and should not be left solely to engineers or technology companies. At the risk of somewhat oversimplifying the matter, three positions appear to emerge:
Technological determinism: First, there are those who, like Jacques Ellul, believe that technology has reached such a level of development that it has long since entered into a process of autonomous self-augmentation “without a subject” (or without a pilot, to use a more accessible expression), a process that recent developments in AI merely accelerate in an almost paroxysmal way. This position—one that may be described as technological determinism—leads to a skeptical view regarding the very possibility of a global governance of the technological system. At best, we may slow down certain local excesses that will have little impact on the overall functioning of the system. This form of tragic lucidity, grounded in a rigorous analysis of technological logic, carries a significant psychological cost and is therefore difficult to defend, since it ultimately leads to a kind of passive acceptance of the course of events.
Moderate techno-utopianism: Next, there are those who largely acknowledge Ellul’s highly convincing analyses without fully accepting the conclusions they imply for action. This position—arguably less rigorous philosophically but psychologically and ethically more acceptable—maintains that it is indeed possible to steer and regulate technological development—particularly that of AI—in a direction aimed at minimizing its negative effects and maximizing its positive ones. This stance, which might be described as a form of “soft determinism,” corresponds to Amodei’s position and, more broadly, to numerous initiatives emerging since the 2010s from major actors in the AI industry, sectoral organizations, universities (notably in Montréal), and various expert groups seeking to establish ethical guidelines and principles to frame the development and deployment of artificial intelligence systems in ways beneficial to humanity.
Radical techno-utopianism: Finally, there are those who tend to minimize the pharmacological nature of technology and who regard it as the primary driver of improvements in the human condition. In an article published on April 14, 2024, The TESCREAL Bundle: Eugenics and the Promise of Utopia through Artificial General Intelligence, Timnit Gebru and Émile P. Torres introduced the acronym “TESCREAL” to designate a cluster of technofuturist ideologies that they argue motivate aspirations toward the construction of artificial general intelligence (AGI): transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism.