
The Generative AI Value Chain: Historical Actors. (2)

  • Writer: Franck Negro
  • Feb 9

The Historical Giants of the Digital Industry. — With the exception of Nvidia, a company that remained relatively unknown to the general public before the spectacular rise of generative artificial intelligence, the principal actors currently dominating this sector are the large historical firms that emerged from the digital revolution of the 1990s and 2000s. What they share is a long-standing and structuring presence across several key digital markets, as well as the capacity to mobilize considerable resources in terms of infrastructure, data, talent, and capital. In other words, the hegemonic position they have built over the past fifty years—through successive waves of innovation involving the transistor, microprocessors, the personal computer, software, the web, browsers, search engines, e-commerce, social networks, Software as a Service (SaaS), cloud computing, mobile technologies, and more—has made these giants of computing and digital technology the natural candidates for the development and diffusion of artificial intelligence technologies.


The centrality of historical digital actors within the generative AI ecosystem appears particularly clear in the case of Google, which is undoubtedly among the companies that have benefited most from the current momentum, if one takes as an indicator the evolution of its market capitalization over the past two years. Founded in 1998 with the launch of its search engine, the company became, in 2015, a core component of Alphabet, a holding structure created to consolidate and organize the group’s strategic activities. These span an exceptionally broad spectrum, ranging from the Android operating system to the Google Ads advertising platform, as well as cloud computing services delivered through Google Cloud Platform.


As part of this vertically integrated strategy, Alphabet has also developed its own specialized processors—the Tensor Processing Units (TPUs)—and, on October 23, 2025, entered into a strategic partnership with Anthropic aimed at deploying nearly one million of these chips to accelerate the development of its artificial intelligence models. Initially designed for internal use within its data centers beginning in 2015, these processors have been commercially available since 2018 and are now used both for inference and for training Google’s foundation model, Gemini, whose version 3 was officially launched on November 18, 2025.


Thanks to its search engine and its video streaming platform YouTube, Alphabet possesses the largest web index and one of the world’s largest video databases, constituting a decisive advantage in terms of access to diverse forms of data—textual, video, image, audio, metadata, and more. The group also relies on a leading artificial intelligence research division, notably strengthened by the acquisition of DeepMind in 2014. Founded by Demis Hassabis, recipient of the 2024 Nobel Prize in Chemistry, the company is known in particular for developing programs such as AlphaGo (2016) and AlphaFold (2018). In 2023, DeepMind merged with Google Brain—originator of the TensorFlow framework, one of the most widely used open-source libraries for building and training machine learning models, alongside Meta’s PyTorch—thereby reinforcing, under Hassabis’s leadership, the integration of Alphabet’s research and development capabilities.


Google is also the publisher of the conversational agent Gemini—initially launched under the name Bard in March 2023—which is now integrated across much of its application ecosystem, including its search engine through the “AI Overviews” feature, its productivity tools within Google Workspace (Gmail, Google Docs, Google Sheets, Google Slides, etc.), its Chrome browser, and its Android operating system. This platform logic and deep integration within a globally deployed application ecosystem allow Google to leverage a vast active user base to accelerate the adoption of its generative AI models at scale.


It is also important to recall Google’s central role in the development of generative AI models and the enthusiasm they have generated since the release of OpenAI’s conversational agent ChatGPT, built on GPT-3.5. At the foundation of the generative AI revolution lies a major research paper published in 2017 by Google researchers, Attention Is All You Need. This work introduced the Transformer architecture, a neural network design capable of capturing the meaning of words and sentences by focusing on the most relevant elements of a sequence while accounting for positional relationships across increasingly long contexts. This architecture enabled the rise of Large Language Models (LLMs) and the democratization of AI at scale, thanks to their ability to produce multimodal content—text, images, videos—of unprecedented quality. The “T” in the acronym GPT refers precisely to this architecture, standing for Generative Pre-trained Transformer.
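The core operation the 2017 paper introduced, scaled dot-product attention, can be sketched in a few lines. The following is a minimal illustration in NumPy, not the paper's released code; the array sizes and variable names are chosen purely for demonstration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention, the building block of the Transformer.
    Q, K, V: arrays of shape (sequence_length, d_k)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relevance between tokens
    # Softmax over each row: attention weights sum to 1 per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output vector mixes the most relevant values

# Toy example: a sequence of 3 tokens, each represented in 4 dimensions
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4): one contextualized vector per token
```

The key property is that every token attends to every other token in a single matrix operation, which is what lets the architecture capture long-range relationships and, as discussed below, maps naturally onto massively parallel GPU hardware.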


Amazon represents another major actor within the generative AI value chain, less generalist than companies such as Google or Microsoft, and more strongly focused on providing cloud services to enterprises. Founded in 1994 by Jeff Bezos and led by Andy Jassy since 2021, the company initially developed around e-commerce before massively diversifying into cloud computing through Amazon Web Services (AWS). Since 2018, AWS has made specialized chips available to its clients for training machine learning and deep learning models (Trainium) as well as for inference at deployment time (Inferentia). Amazon also offers a comprehensive suite of tools and services known as Amazon SageMaker—complementary to solutions such as TensorFlow and PyTorch—directly integrated into the AWS offering, enabling support across the entire model lifecycle, from design and training to deployment.


At the same time, the group positions itself as a developer of AI models through its Titan family, accessible via the Amazon Bedrock platform alongside models developed by third parties. Its data marketplace, Amazon Data Exchange (ADX), allows AWS customers to access third-party datasets, both public and commercial, including industry-specific data intended for AI model training. Finally, Amazon integrates generative AI into several of its products and services, including Alexa (voice assistant), Rufus (a conversational shopping assistant integrated into Amazon’s website and app), and Amazon Q (an AI assistant enabling developers to search knowledge bases and receive precise answers in natural language).


Apple, primarily known for designing and marketing consumer electronics such as the iPhone, iPad, and Mac, appears more restrained than other major actors in the field of generative AI—a position often interpreted as a sign of weakness and viewed negatively by many financial analysts. Yet this has not prevented the company from surpassing a market capitalization of four trillion dollars on October 28, 2025, even as it lost its position as the world’s most valuable company to Nvidia and Google, whose stock prices more than doubled over the previous two years.


Nevertheless, Apple announced in the first half of 2024 its first proprietary multimodal AI models—capable of processing and combining text and images—under the name MM1, along with an open-source model family called OpenELM, primarily designed to run directly on mobile devices and perform text generation tasks. During its annual developer conference on June 10, 2024, Apple introduced new generative AI features grouped under the label “Apple Intelligence,” intended to be integrated directly into its products, and announced a non-exclusive strategic partnership with OpenAI. Yet it was ultimately with Google that the company, led by Tim Cook since August 2011, chose to collaborate in order to provide, over time, an AI assistant integrated into its ecosystem of devices, including the Siri voice assistant (Le Monde, January 12, 2026).


Despite notable advances, Apple remains a secondary player in generative AI compared to historical actors such as Microsoft, Google, and Amazon. Its principal contribution lies in offering a premium distribution platform for AI technologies developed by other companies, allowing them to integrate their models natively into Apple’s hardware and software ecosystem and benefit from its massive global installed base. Within this framework, Google signed a historic agreement worth 20 billion dollars per year with Apple to make its search engine the default on iPhones.


Founded in 2004 by Mark Zuckerberg under the name Facebook, the company became Meta in October 2021, signaling its strategic orientation toward the metaverse. Today, Meta operates several major social platforms: Facebook (2004), Instagram (2010, acquired in 2012), and WhatsApp (2009, acquired in 2014). This position grants the company privileged access to immense volumes of data, particularly images and videos.


Although Meta possesses extensive computing infrastructure and data centers conducive to the development of advanced AI models, its relatively late strategic pivot toward generative AI illustrates the volatile and unpredictable nature of today’s technology market. During the Facebook Connect—now Meta Connect—conference on October 28, 2021, barely a year before the launch of ChatGPT, the company had decided to orient massive investments toward the metaverse. While initially coherent with its transformation strategy, this direction had to be reconsidered—or at least postponed—following the sudden enthusiasm and economic expectations generated by generative AI. This initial strategy, disrupted by an unforeseen technological shift, resulted in a significant lag behind previously little-known competitors such as OpenAI (ChatGPT) and Anthropic (Claude), as well as established actors like Microsoft (Copilot) and, especially, Google (Gemini).


Yet Facebook AI Research (FAIR), which grouped the company’s fundamental AI research labs, contributed to the development of language models such as LLaMA, whose first version was released in February 2023. These models were distributed according to an open-weights approach—foundation models whose parameters are accessible—under licenses permitting commercial reuse. In April 2024, Meta also launched Meta AI, a conversational agent based on the LLaMA family, and integrated generative AI tools across its platforms. Although this open strategy was praised by the open-source community, it never achieved the same commercial and media impact as those of competitors such as OpenAI, Anthropic, or Google.


Within this context of intense competition, Meta finalized on June 13, 2025, a 49% stake acquisition in the startup Scale AI, amounting to approximately 14.3 billion dollars; officially created, on June 30, 2025, the Meta Superintelligence Lab (MSL), bringing together multiple teams working on foundation models, including projects originating from FAIR; appointed Scale AI founder and former CEO Alexandr Wang to lead the laboratory; and accelerated its investments in AI with the ambition of becoming the first company to develop and deploy a superintelligence—defined by Zuckerberg as an AI that would “surpass human intelligence in every respect.” The group also announced, on December 30, 2025, the acquisition of Manus, a startup developing AI agents capable of autonomously executing complex tasks, for an estimated two billion dollars, thereby strengthening Meta’s capabilities in the field of agentic AI.


Although these initiatives unfold within a context marked by the departure of major historical figures such as Yann LeCun, who joined Facebook (now Meta) in 2013 to found and lead FAIR, they nevertheless reflect Meta’s determination to close the gap with other generative AI actors.


Among the great pioneers of the microcomputer and digital revolution, Microsoft occupies a singular position at the intersection of several key segments of the generative AI value chain. A historical actor in operating systems through Microsoft Windows and productivity software through Microsoft 365—including applications such as Word, Excel, PowerPoint, and Outlook—the company founded by Bill Gates and Paul Allen in 1975 also operates the Bing search engine and ranks among the world’s leading cloud providers through its platform Microsoft Azure, which offers an AI model catalog and relies on specialized chips optimized for its own infrastructure, known as Maia and Cobalt.

Microsoft also publishes the Microsoft Edge browser, which officially replaced Internet Explorer in June 2022, and has integrated generative AI capabilities through Bing Chat. As early as February 2023, the company transformed its Bing search engine by augmenting traditional search with conversational agent features, allowing users to ask questions in natural language rather than enter simple keywords. It had already launched GitHub Copilot in 2021, a programming assistant based on GPT-type models, designed to assist developers in writing and completing code across multiple programming languages, including Python and JavaScript.


Complementing its strategic partnership and equity investment in OpenAI—13 billion dollars invested since 2019—the Redmond-based firm has progressively integrated generative AI tools into its historical products and services through its Copilot assistant, which relies on OpenAI’s GPT architecture. This wide application landscape, spanning front-office tools (Word, Excel, PowerPoint) and back-office systems (ERP and CRM), combined with a vast installed base, creates an exceptionally favorable environment for the rapid deployment of generative AI functionalities across numerous professional use cases.


Finally, Nvidia occupies a central position at the upstream end of the value chain. Founded in 1993, the company is widely regarded by analysts as the primary beneficiary of the deep learning boom since the early 2010s and, above all, of generative AI since the launch of ChatGPT on November 30, 2022. Initially, however, nothing seemed to predestine this meteoric rise: at its creation, the company’s objective was to design and sell chips capable of calculating real-time 3D environments, at a moment when video games and graphical applications were evolving toward more immersive digital worlds. The first major milestone in Nvidia’s history came with the release, in 1999, of the GeForce 256, the first chip marketed as a GPU—a technological foundation that would later propel the company into AI beginning around 2012.


The second key moment was the launch, in early 2007, of the CUDA development platform—now the standard for GPU programming—which allowed GPUs to extend beyond graphics into other computation-intensive domains such as scientific simulation, climate modeling, and financial risk analysis. Yet it was only in 2012, with breakthroughs in deep neural networks, that Nvidia’s GPUs, supported by CUDA, suddenly found a massive market in artificial intelligence. The neural network AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, which won the ImageNet competition in 2012, was trained on two Nvidia GTX 580 GPUs using CUDA. This decisive moment demonstrated that combining deep neural networks with large datasets and powerful computation could transform AI development, opening the path to deep learning and positioning Nvidia at the core of AI infrastructure.
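The reason GPUs and CUDA suddenly found a market in deep learning can be made concrete: a neural-network layer reduces to a matrix multiplication, a uniform, data-parallel computation. The sketch below illustrates this in plain NumPy (whose optimized kernels stand in here for the CUDA kernels a GPU would dispatch); the shapes and names are illustrative:

```python
import numpy as np

# A single dense neural-network layer is just a matrix multiplication:
# exactly the uniform, massively parallel arithmetic GPUs excel at.
rng = np.random.default_rng(42)
batch = rng.normal(size=(256, 512))    # 256 inputs of dimension 512
weights = rng.normal(size=(512, 128))  # layer mapping 512 -> 128 features

def naive_forward(x, w):
    """One output element at a time: how a single CPU core would proceed."""
    out = np.zeros((x.shape[0], w.shape[1]))
    for i in range(x.shape[0]):
        for j in range(w.shape[1]):
            out[i, j] = np.dot(x[i, :], w[:, j])
    return out

# The vectorized form hands the entire computation to optimized parallel
# kernels in a single call (BLAS here; CUDA kernels on a GPU).
fast = batch @ weights
slow = naive_forward(batch, weights)
print(np.allclose(slow, fast))  # True: same result, radically better hardware utilization
```

Each of the 256 × 128 output values can be computed independently, which is why thousands of GPU cores working in parallel, as AlexNet demonstrated in 2012, collapse training times that would be prohibitive on sequential hardware.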


Driven by the generative AI boom, Nvidia became, in July 2025, the first company in the world to reach a market capitalization of four trillion dollars. At the end of November 2022, at the time of ChatGPT’s launch, its valuation stood at only 420 billion dollars, with revenue of 26.91 billion dollars. Three years later, revenue had multiplied by 4.8—reaching 130.5 billion dollars in fiscal year 2025—while its valuation surpassed 4.5 trillion dollars. The extraordinary growth in revenue and net income—73 billion dollars—has been primarily driven by the Data Center segment and the massive investments of the three hyperscalers, Microsoft, Amazon, and Google, themselves propelled by the generative AI wave and the growing computational demands of model developers such as OpenAI, Anthropic, xAI, or France’s Mistral AI.


In this context, the large-scale sale of flagship GPUs such as the A100 and then the H100 has confirmed Nvidia’s dominant position in the market for high-performance processors, particularly in generative AI. According to TrendForce, the company captured approximately 70% of the AI chip market in 2025, including 95% of the GPU segment. The term “AI chip market” refers here to processors dedicated to artificial intelligence—including GPUs, ASICs, and other specialized units designed for training and inference.


Nevertheless, Nvidia faces an increasingly competitive environment from two main categories of actors: direct competitors offering similar GPU solutions, notably AMD and, to a lesser extent, Intel; and more structural competition from companies such as Google, Amazon, and Microsoft, which seek to reduce dependence on Nvidia by developing specialized accelerators such as TPUs or ASICs, alongside the emergence of AI hardware startups operating in niche markets, including Cerebras and Graphcore.


To properly understand Nvidia’s competitive environment, it is essential to keep in mind the overall structure of the semiconductor market, which can be organized around four major families of chips—CPUs, GPUs, AI accelerators, and ASICs—each of which encapsulates an entire chapter of computing history over the past sixty years, with its key milestones, use cases, devices, and leading suppliers:


  • CPU (Central Processing Unit): These are the oldest processors, which first appeared in the early 1970s with the development of the first microcomputers. Located at the heart of virtually all computing systems, CPUs still power most laptops and desktop computers, as well as enterprise servers and workstations. The two main historical suppliers are Intel and AMD, to which one can add players such as Apple, Qualcomm, or Samsung, which design chips based on Arm architectures—widely used in mobile devices and embedded systems (cars, game consoles, medical devices, connected objects, etc.)—and which are rapidly gaining ground in the laptop market.

  • GPU (Graphics Processing Unit): GPUs emerged in the late 1990s to solve 3D video game graphics rendering problems. Unlike CPUs, they can perform billions of calculations in parallel, making them particularly well suited to applications requiring intensive computation on very large volumes of data. They truly found their full purpose with the rise of AI in the 2000s, and especially deep learning in the early 2010s. They can be integrated both into high-end graphics workstations and into AI-dedicated data centers at very large scale. The undisputed market leader today is Nvidia, challenged mainly by AMD and, to a lesser extent, Intel.

  • AI accelerators: The term “AI accelerators” refers to a broad set of chips more specialized than GPUs, designed to boost—hence the term “accelerator”—the computations required by machine learning and deep learning applications, both during training and inference. They emerged in the mid-2010s in the context of deep neural network–based applications, and they now sit at the core of modern AI compute architectures (cloud architectures). Google’s TPUs (Tensor Processing Units), designed for specific deep learning computations, are a particular type of AI accelerator. Major suppliers include Nvidia, AMD, Intel, the large cloud infrastructure providers (Google, Microsoft, Amazon), as well as AI startups such as Graphcore, Groq, or Cerebras.

  • ASIC (Application-Specific Integrated Circuit): Finally, ASICs could arguably be grouped with AI accelerators given how specialized their circuits are, since they are designed to carry out extremely specific tasks—such as particular AI workloads. The most frequently cited ASICs are Google’s TPUs, used to optimize neural network computations. Although this type of chip first appeared in the 1970s and 1980s, it is only recently—driven by the demand for intensive computation induced by AI—that ASICs have taken an increasingly important place in cloud infrastructures. The main suppliers are therefore Google, Amazon, and, in some cases, AI startups.


While all the chip families just described are necessary for AI development—an AI server typically combines several of them to handle different, complementary tasks—it is especially GPUs, and to a lesser extent ultra-specialized chips (AI accelerators, TPUs, ASICs), that have captured the market’s attention in recent years, given the foundational role they play in the evolution of hardware architectures and the exponential demand for compute power required for training and inference in AI models. This largely explains the exceptional financial performance of companies such as Nvidia—and also AMD.

With its AMD Instinct data center AI lineup, which includes high-performance GPUs specifically designed for artificial intelligence, servers, and data centers, the American company AMD (Advanced Micro Devices) is now widely regarded as Nvidia’s most serious rival. A historical competitor of the other American giant, Intel, in the CPU segment since the 1980s—within a market driven primarily by the rise of microcomputing—AMD has gradually made AI a strategic priority under the leadership of its current CEO, Lisa Su.


She took office in October 2014 at a time when the company faced commercial difficulties (loss of ground in CPUs), financial strain (negative results, low market capitalization), and strategic uncertainty (lack of a clear direction). This semiconductor giant, founded in 1969 in Silicon Valley by former Fairchild Semiconductor engineers—like Intel—saw its market capitalization rise from $2.1 billion to nearly $400 billion in a decade.


The first major turning point came in 2017 with the launch of its data center GPU family, branded Instinct. However, it was only from 2022 onward that AMD positioned AI as a central pillar of its roadmap, entering a phase of active competition with Nvidia. Its promise: to offer integrated solutions comparable to those of its rival at a more competitive price, in a context of exponentially growing demand for intensive compute driven by the rise of generative AI models.


Intel, for its part—also founded by two former Fairchild Semiconductor engineers in 1968—long the undisputed leader in CPUs and the pioneer of the very first commercial microprocessor in history in 1971 (the Intel 4004), has never truly managed, to this day, to establish itself as a major GPU player. The Californian company, whose name long symbolized excellence and innovation in processors and semiconductors, now finds itself squeezed between Nvidia’s near-monopolistic dominance in the high-potential GPU segment and the competition of its historical rival AMD, both in CPUs and in GPUs dedicated to intensive compute and AI.


In an article published in Le Monde on September 23, 2024, the economic columnist Philippe Escande referred to Intel’s “three missed turns” and the reasons behind its decline: (1) the mobile shift, which saw the emergence of a major competitor, Qualcomm, and its fabless model; (2) the choice of an integrated model—from design to in-house manufacturing of microprocessors—rather than specialization, unlike the dominant actors in the chip and semiconductor value chain; and (3) the missed AI shift with the rise of GPUs, indispensable for scaling data center compute capacity, leaving room for its competitor and compatriot Nvidia, which succeeded in anticipating these changes.


Recent efforts by the microprocessor giant nevertheless show a real willingness to position itself in the high-performance compute and AI acceleration market alongside Nvidia and AMD. This includes the announcement and launch of the Intel Arc family in 2022—primarily targeting consumers and gaming—as well as, more importantly, projects more directly oriented toward AI data centers, such as an AI accelerator called “Gaudi 3,” dedicated to training and inference for large AI models. A number of initiatives aimed at catching up in GPUs and AI platforms have been relatively well received by analysts and financial markets. After a sharp decline in market capitalization and revenues since 2022—revenues stabilizing around $53 billion since 2023—Intel’s valuation rebounded to around $240 billion in early 2026, compared to $105 billion at the end of 2022.


This overview of the chip market and its central role in the generative AI value chain would remain incomplete, however, without mentioning another decisive category of actors—no longer responsible for chip design, but for manufacturing. Companies such as Nvidia and AMD, unlike an exceptional case such as Intel (which historically chose to internalize the manufacturing of its chips under an Integrated Device Manufacturer—IDM—model), operate under a fabless model. This model relies on outsourcing chip fabrication to specialized foundries (pure-play foundries), foremost among which is the Taiwanese company TSMC (Taiwan Semiconductor Manufacturing Company).

Within the semiconductor industry, two complementary stages are classically distinguished:


  • The design stage, primarily handled by actors such as Nvidia and AMD, which define and design—in the context of AI—specialized hardware architectures (electronic circuits and massively parallel processors) optimized for intensive compute workloads related to deep learning and neural networks.

  • The manufacturing stage, based on the operation of extremely capital-intensive fabrication plants (fabs). The equipment required to produce advanced chips is particularly costly and involves investments that can reach tens of billions of dollars. This stage consists in transforming silicon wafers into integrated circuits through photolithography processes—techniques that etch extremely fine electronic patterns, on the order of a few nanometers (a nanometer being 10⁻⁹ meters), using ultra-high-precision light sources.


In this context, two companies that remain relatively unknown to the general public occupy an absolutely central position in the chip manufacturing chain. The first is TSMC, now the world’s leading semiconductor foundry, far ahead of competitors such as Samsung Foundry, GlobalFoundries, or UMC, with market shares approaching 70% in the most advanced segments. The second is the Dutch group ASML (Advanced Semiconductor Materials Lithography), specialized in the design and manufacturing of photolithography equipment for the semiconductor industry. ASML’s main clients are precisely the major advanced-chip foundries, foremost among them TSMC and Samsung Foundry.


ASML today holds a near-monopolistic position in the global market for advanced lithography equipment, particularly EUV (Extreme Ultraviolet) systems, which are indispensable for manufacturing next-generation chips, especially those destined for artificial intelligence applications. These technologies make it possible to etch an ever-growing number of transistors onto a single chip, resulting both in significant gains in energy efficiency and a substantial increase in compute power.
