
The Generative AI Value Chain: General Framework. (1)

  • Writer: Franck Negro
  • Feb 9
  • 9 min read

Value chain or ecosystem. - November 30, 2022 marks an important date in the history of AI with the launch of ChatGPT. One week later, in its December 6 edition, the newspaper Le Monde referred to “a small company in San Francisco”—OpenAI—which had already caused a sensation a few months earlier with DALL-E 2, an AI model capable of generating ultra-realistic images from simple textual descriptions. While some were already worried about the conversational robot’s potential dangers—erroneous, inappropriate, or biased answers; hallucinations; and so on—tech specialists preferred above all to emphasize that this was the first genuine chatbot in history, while wondering about the threat it might pose to Google’s search engine and to the future of the Web as it emerged in the early 1990s. Other observers, meanwhile, began to question the future of occupations whose primary vocation is content production, foremost among them the creative professions, writers, and, more generally, knowledge workers. Two central questions then emerged: “How will this application be received by the general public?” and “What concrete use cases can the corporate world expect?”


While the answer to the first question became known within a few weeks, the second remains fully relevant today. Meanwhile, the new geopolitical context ushered in by Donald Trump immediately after his official inauguration on January 20, 2025 raises another important issue: the near-absolute dependence of European economies on American technologies—particularly in AI—and the resulting loss of sovereignty. Yet it is difficult to grasp the economic, political, and geopolitical stakes currently at play without, at the very least, a general representation of the generative AI value chain.


It is in this context that, on June 28, 2024—barely a year and a half after the launch of ChatGPT—the French Competition Authority published an opinion—No. 24-A-05—devoted to the competitive functioning of the generative AI sector. Without claiming to provide an in-depth technical analysis of generative AI systems, this opinion aimed to assess whether competitive conditions were likely to foster innovation, diversity of actors, and effective market access, or whether, on the contrary, certain economic and industrial characteristics of the sector were liable to lead to an excessive concentration of market power. The authors of the opinion also had the particularly fruitful idea of preceding the competitive analysis proper with a proposal for a detailed mapping of the generative AI value chain and its principal actors, before concluding with a series of recommendations addressed to public authorities and economic decision-makers.


The use of the term “value chain,” rather than “ecosystem”—a notion frequently invoked in the field of corporate strategy—is deliberate. It refers first of all to a major figure in strategic management, Michael Porter, who introduced the notion in his book Competitive Advantage: Creating and Sustaining Superior Performance, published in 1985. In Porter’s framework, a value chain designates a set of distinct but interconnected activities—essentially operational functions—that contribute to value creation within a given firm. The processes analyzed are therefore, strictly speaking, internal to an organization. By contrast, the concept of “value chain” as used in industrial economics has a more generic scope. It encompasses all the productive stages, as well as the actors who take part in them, from upstream to downstream, in the production of a good or a service. In this sense, the expression “value chain” is largely synonymous with the more common notion of an industrial “sectoral chain” or “industry stream.”


Another term, which in my view would have been more apt, could have been used: that of “business ecosystem,” in the broad sense given to it by James F. Moore in his foundational article “Predators and Prey: A New Ecology of Competition,” first published in the May–June 1993 issue of the Harvard Business Review. Borrowed from ecology—understood as the study of interactions between living beings and their environment—and in contrast to the notion of a value chain, the expression “business ecosystem” designates a broad set of heterogeneous actors—not solely economic ones—gravitating around a technological innovation or a shared value proposition, whose relations follow a systemic and co-evolutionary logic rather than a linear and sequential one.


Unlike a value chain, which primarily analyzes economic actors engaged in productive and transactional relations, the business ecosystem also includes state, institutional, and social actors who contribute to structuring and shaping the evolution of the value-creation system. From this perspective, the business ecosystem of generative AI would include not only private or public companies, but also states, international organizations, financial institutions, public authorities, regulators, standards bodies, universities, research centers, developer communities, associations, and so on. Yet it is within a value-chain approach—characterized by the decomposition of the production cycle into three moments, from upstream to downstream, and by the identification of dominant actors at each level—that the generative AI industry is apprehended in the work of the Competition Authority.


The actors in the generative AI chain. - During a public exchange with Larry Fink, CEO of the American asset manager BlackRock, held at the World Economic Forum in Davos on January 21, 2026, Jensen Huang, cofounder and CEO of Nvidia, described the current phase of AI development as a platform shift, by which he means a major technological transition marked by the emergence of a new generation of so-called foundational technologies capable of profoundly transforming the existing computing architecture, along with the uses and applications that flow from it.


In a very didactic manner, he compares the current generative AI revolution to the three previous platform shifts that have punctuated the recent history of computing: (1) the personal computer (PC) revolution, which not only brought forth a new type of machine contrasting with mainframe environments, but also gave rise to a new application ecosystem symbolized in particular by the Microsoft Windows operating system and office productivity applications; (2) the rise of the Internet, understood as a global computing platform hosting a multitude of interconnected services and applications—web, search engines, messaging services, wikis, discussion forums, blogs, e-commerce, and social networks; and finally (3) the convergence of mobile—smartphones, tablets, app stores—and cloud computing, with the relocation of storage and compute power to remote data centers via so-called SaaS (Software as a Service) architectures and the associated business models, which once again redefined the underlying technical architectures. With each platform shift, the computing infrastructure was profoundly rethought, and a new application ecosystem gradually took shape.


From a more strictly industrial perspective, Nvidia’s CEO offers a representation of generative AI platforms in the form of a “five-layer cake,” allowing the value chain to be decomposed into distinct functional levels that are nonetheless strictly interdependent. At the base of this technical stack lies the energy layer, indispensable to the functioning of systems capable of processing, in real time, massive volumes of unstructured data. The second layer corresponds to hardware compute components, and more particularly to specialized processors and massively parallel compute units—a segment in which Nvidia occupies a central position, alongside other actors such as AMD and Intel, but also TSMC or ASML.


The third layer consists of cloud-computing infrastructures, which provide the capacities necessary for the design, training, and deployment of large-scale AI systems. It corresponds in particular to the core business of major cloud service providers such as Amazon, Google, or Microsoft. The fourth layer brings together the AI models themselves, to which public debate today tends to reduce AI as a whole. It is at this level that specialized companies such as OpenAI, Anthropic, Mistral AI, xAI—officially absorbed by SpaceX on February 2, 2026—or Cohere operate.


Finally, the fifth and last layer—which Jensen Huang considers the most decisive in the long run—corresponds to the application layer. Here one finds both conversational agents such as ChatGPT or Claude and the whole range of business and sectoral applications built on generative AI, which constitute the most visible part of these technologies for the general public. This application layer rests closely on all the underlying layers, without which no effective deployment would be possible. It is also at this level that, according to Nvidia’s CEO, the strongest expectations are concentrated in terms of economic gains and societal impacts.


Nvidia’s CEO also believes—and this is where his entire analysis of the historical and industrial dynamics of generative AI fully comes into focus—that humanity is on the threshold of the largest deployment of infrastructure ever undertaken. This would entail unprecedented capital requirements, in order to sustain both the increase in energy production capacities, the design and manufacture of semiconductors, and the expansion of cloud-computing infrastructures necessary for the development and diffusion of AI systems on an unprecedented scale.


The generative AI value chain, as analyzed by the Competition Authority, takes up—in a more linear reading—a large part of the five layers described by Jensen Huang. It nonetheless partially regroups them by bringing together the second and third layers within a single category described as the infrastructural component, encompassing both hardware compute resources and cloud-computing infrastructures. In this slightly simplified version, the AI value chain is thus structured, from upstream to downstream, around three major complementary components, which can in turn be subdivided into more elementary elements.


  • Infrastructure: the first component, understood here in the broad sense of the term, insofar as it integrates all the material, human, and organizational resources required for the training and deployment of generative AI models. It includes: (1) compute power (IT hardware, cloud services, and public supercomputers); (2) data, from collection through cleaning and processing. Data may be public and freely accessible; proprietary, when held by a company or private organization; or third-party and exploited under license, via specialized data providers or aggregators; and finally (3) skilled labor.

  • Modeling: the second component, covering the process of designing and producing the model proper. It includes, in order: (1) the development and training of so-called “general-purpose” foundation models, which may be proprietary—closed and fully controlled by the model publisher—or open, accessible and modifiable under certain licensing conditions (open source); (2) the specialization or adjustment of foundation models so that they perform more specific and controlled tasks (what English-language authors call fine-tuning, translated into French as “réglage fin”).

  • Deployment: the third and final component, the stage of putting pretrained or specialized models into production in a given application environment and of commercializing services addressed to end users, whether individuals or companies. Generative AI models can be deployed and made available according to four main modalities: (1) in the form of conversational agents—ChatGPT, Gemini, Claude, Le Chat, and so on—with which an end user can interact directly, in writing or by voice, via a minimalist interface; (2) through API (Application Programming Interface) connections, allowing companies to integrate the model’s capabilities into their own products and services; (3) by deploying these models on cloud infrastructures such as Microsoft Azure or Google Cloud Platform for internal organizational uses; (4) by integrating generative AI models within existing applications such as image-editing software (for example Adobe Photoshop), an office suite (Microsoft 365), or a CRM (Customer Relationship Management) system.
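Of these four modalities, the API connection is the most tangible from a developer’s standpoint. As a minimal sketch—assuming a generic, OpenAI-style chat endpoint whose URL, model name, and field names are purely illustrative rather than any provider’s actual API—the integration amounts to assembling a structured request that the company’s own software sends to the model publisher:

```python
import json

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble the JSON payload an application would POST to a
    generative-AI provider's chat endpoint (illustrative schema only)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        # Lower temperature gives more deterministic, "controlled" output
        "temperature": 0.2,
    }

payload = build_chat_request("Summarize this contract clause.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that the integrating company never touches the model itself: it only exchanges structured messages with it, which is precisely what makes this modality so easy to embed in existing products and services.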


One may also observe that the authors of the opinion make no reference whatsoever to the energy layer mentioned by Jensen Huang, even though it now constitutes one of the central concerns of the large technology companies engaged in developing and deploying generative AI. During a public exchange at the World Economic Forum in Davos with Larry Fink on January 22, 2026—the day after Nvidia’s CEO spoke—Elon Musk stated that electrical energy was, in his view, the most critical limiting factor for large-scale deployment of AI systems. His reasoning rested on a simple observation: a growing mismatch between, on the one hand, the rapid—indeed potentially exponential—increase in the production capacity of AI-dedicated chips and, on the other hand, the relatively modest growth of global electricity production, estimated at about 3 to 4% per year. In other words, AI’s rapid advances are now running up against a major structural constraint: the available energy capacity.
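The scale of that mismatch is easy to make concrete with a back-of-the-envelope compounding calculation. Only the 3–4% electricity figure comes from Musk’s remarks; the 40% annual growth rate for chip capacity used below is a purely illustrative assumption standing in for “potentially exponential”:

```python
def compound(initial: float, annual_rate: float, years: int) -> float:
    """Value of a quantity growing at a fixed annual rate (compound growth)."""
    return initial * (1 + annual_rate) ** years

years = 10
# Electricity supply growing ~3.5%/year (midpoint of the 3-4% cited above)
electricity = compound(1.0, 0.035, years)
# AI chip production capacity growing at an assumed, illustrative 40%/year
chips = compound(1.0, 0.40, years)

print(f"After {years} years: electricity x{electricity:.2f}, chips x{chips:.1f}")
```

Under these assumptions, electricity supply grows by roughly 40% over a decade while chip capacity multiplies nearly thirtyfold—which is the structural gap Musk is pointing at, whatever the exact chip-growth figure turns out to be.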


It is precisely in this context that one must understand the announcement made on February 2, 2026, of the absorption of xAI—the AI company founded by Elon Musk, also owner of the social network X (formerly Twitter)—into his space company SpaceX, with a view to an IPO planned for 2026. Musk’s explicitly stated objective is the following: “to build giant solar power plants, more efficient than on Earth, combined with AI data centers” (source: Le Monde, February 3, 2026). This would be, in his view, the only way to meet, in the long term, the world’s electricity demand induced by the production of large-scale AI systems.


The platform approach defended by Jensen Huang, combined with Elon Musk’s remarks on the energy cost of scaling AI, thus clearly underscores the necessity of taking into account the strategic positioning of the different actors within the value chain, as well as the logics of dependence—both technical and industrial—that structure it. While certain actors such as Alphabet (Google) and Microsoft are present across the entire generative AI value chain, others, more specialized, such as Amazon, Meta, Nvidia, or Anthropic, intervene at more circumscribed levels, depending on their strategic positioning and their technological, human, and financial capacities.


A first part will therefore be devoted to the general introduction and to an analysis of the role of the major historical digital players, whose inherited dominant positions profoundly structure the development of the generative AI market. A second part will address the emergence of new, more specialized actors, whose innovations contribute to a reconfiguration of power relations, but also of the sector’s overall strategic orientations, in a context of platform shift. Finally, a third part will aim to situate all these actors within a more classical value-chain reading, distinguishing upstream and downstream positions, and aligning, to a certain extent, with the industrial perspective introduced by Jensen Huang.
