Environmental challenges of AI and the problem of measuring its ecological footprint.
- Franck Negro

- Jan 5, 2025
- 5 min read
Among the central questions in AI ethics are those concerning its ecological consequences. Current research focuses primarily on evaluating Machine Learning and Deep Learning systems—often grouped under the label of connectionist AI, as opposed to symbolic AI. The design and deployment of large models rely on massive volumes of data (Big Data) and considerable computing resources, leading to undeniable environmental impacts. The computing power required to train such models has grown exponentially. Two related measures are in play here: the total compute of a training run, counted in FLOPs (floating-point operations), and hardware throughput, counted in FLOPS (floating-point operations per second), that is, the number of floating-point calculations a computer, especially a supercomputer, can perform in one second. The higher the throughput, the faster a system can train complex models on vast datasets. Yet this performance comes at a cost: it directly increases the energy footprint of AI.
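The distinction between total training compute (FLOPs) and hardware throughput (FLOPS) can be sketched with a rough calculation: dividing one by the other gives a lower bound on training time. The run size, per-device throughput, cluster size, and utilization rate below are illustrative assumptions, not figures from the text.

```python
# Rough training-time estimate: total compute (FLOPs) divided by
# sustained cluster throughput (FLOP/s). Real runs rarely reach peak
# hardware throughput, so a utilization factor is applied.

def training_days(total_flops: float, peak_flops_per_device: float,
                  num_devices: int, utilization: float = 0.4) -> float:
    """Lower-bound training time in days under the stated assumptions."""
    sustained_flops_per_second = peak_flops_per_device * num_devices * utilization
    seconds = total_flops / sustained_flops_per_second
    return seconds / 86_400  # seconds per day

# Hypothetical example: a 1e23-FLOP training run on 1,000 accelerators,
# each peaking at 1e15 FLOP/s, running at 40% utilization.
print(round(training_days(1e23, 1e15, 1000), 1))  # ≈ 2.9 days
```

The same total compute on a single device would take a thousand times longer, which is why large training runs concentrate in data centers rather than on individual machines.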
Artificial intelligence, however, should not be seen solely as a source of environmental pressure. Based on prospective scenario analyses conducted by various organizations such as ADEME, AI can also be viewed as a tool for combating climate change—through climate monitoring, energy optimization, or improved transportation systems. Numerous reports seek to promote AI as part of the solution to environmental challenges, including those produced by the European Parliament (The Role of Artificial Intelligence in the Green Deal) and the IPCC (Climate Change 2022: Mitigation of Climate Change). Both emphasize the ambivalent nature of the technology, which can serve environmental goals while simultaneously contributing to an increase in carbon emissions.
A dilemma therefore emerges between, on the one hand, the promises of AI as an instrument of ecological transition and, on the other, its growing environmental impacts linked to the training and deployment of ever more powerful models. Over the past few years, an entire field of research devoted to assessing the environmental footprint of AI systems has taken shape. How can we accurately evaluate the global carbon, energy, and environmental impact of these technologies? Which indicators should be prioritized to account not only for electricity consumption but also for the full life cycle of the equipment involved? To what extent can AI genuinely contribute to mitigating climate change without itself becoming an aggravating factor? And finally, how might we establish an ethical and systemic governance framework capable of reconciling technological innovation with environmental sustainability?
Measuring impact: a basic approach. – A naïve method for calculating the carbon footprint would consist simply of measuring the electricity consumption of a computer running an AI model over a given period. On this view, electricity consumption expressed in kWh, associated with greenhouse-gas emissions, would allow us to estimate the carbon footprint, that is, the CO₂ emissions produced by a human activity. Electricity consumption is converted into carbon impact through emission factors, which vary from one country to another depending on the energy mix. In France, for example, the emission factor is around 101 grams of CO₂ equivalent per kWh, thanks to a largely low-carbon energy mix, whereas in Norway, where electricity production is almost entirely renewable, the same consumption would correspond to roughly 22 grams of CO₂ equivalent per kWh. In the context of AI, this means that the carbon footprint of a model depends not only on its energy use but also on the geographic location of its training infrastructure. The siting of data centers thus becomes a decisive lever for climate mitigation.
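The conversion described above amounts to a single multiplication: energy in kWh times the local emission factor in gCO₂eq/kWh. A minimal sketch, using the two illustrative factors cited in the text:

```python
# Estimate CO2-equivalent emissions from electricity use:
# emissions (gCO2eq) = energy (kWh) x emission factor (gCO2eq/kWh).
# Factors below are the illustrative values cited in the text; real
# factors vary over time with each country's energy mix.

EMISSION_FACTORS = {  # gCO2eq per kWh
    "france": 101,
    "norway": 22,
}

def carbon_footprint_g(energy_kwh: float, country: str) -> float:
    """Convert electricity consumption into grams of CO2 equivalent."""
    return energy_kwh * EMISSION_FACTORS[country]

# The same 1,000 kWh training run emits very different amounts
# depending on where the data center is located.
print(carbon_footprint_g(1000.0, "france"))  # 101000.0 g, i.e. ~101 kg CO2eq
print(carbon_footprint_g(1000.0, "norway"))  # 22000.0 g, i.e. ~22 kg CO2eq
```

The nearly fivefold gap between the two results is exactly the siting effect the text describes: identical workloads, very different footprints.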
Measuring the full range of impacts. – Yet this approach remains far too simplistic to capture the totality of environmental effects generated by AI systems. A computing center includes not only processing servers but also storage systems, network infrastructure, cooling equipment, and backup generators. A simplified calculation that considers only the computing servers ignores the indirect electricity consumption of these essential components. To approximate this global consumption, analysts use the PUE (Power Usage Effectiveness), which measures the ratio between the total energy consumed by a data center and the energy used solely by computing equipment. It is through such metrics that more realistic assessments of environmental impact become possible. One of the earliest studies to adopt this approach was Strubell et al. (2019), Energy and Policy Considerations for Deep Learning in NLP, which analyzed the energy consumption associated with training several NLP models. Subsequent research has explored the relationship between model accuracy and energy expenditure, raising a crucial question: does the marginal gain in performance truly justify the marginal increase in energy consumption?
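The PUE adjustment is itself a single ratio applied to measured server consumption; the server energy and PUE value below are hypothetical, chosen only to show the arithmetic.

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A PUE of 1.0 would mean every kWh goes to computing; real data centers
# sit above that because of cooling, networking, storage, and power
# distribution losses.

def total_facility_energy_kwh(it_energy_kwh: float, pue: float) -> float:
    """Scale measured server consumption up to whole-facility consumption."""
    return it_energy_kwh * pue

# Hypothetical example: servers draw 500 kWh for a training run, in a
# facility with an assumed PUE of 1.6.
print(total_facility_energy_kwh(500.0, 1.6))  # 800.0 kWh
```

A footprint estimate that stopped at the 500 kWh of server draw would miss the extra 300 kWh consumed by the surrounding infrastructure, which is precisely the blind spot PUE is meant to correct.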
Life cycle of equipment and models. – Even these improved studies often fail to account for the complete life cycle of hardware and models. Life-cycle analysis encompasses impacts from raw-material extraction through manufacturing, distribution, use, and end-of-life stages, each involving energy, water, and resource consumption. A 2021 study by Facebook (Wu et al.) estimates that the production phase of an AI model accounts for roughly 20–40% of carbon emissions, while the usage phase represents approximately 60–80%. Evaluation must therefore consider five categories of tasks, each associated with different devices (sensors, computers, servers, and smartphones): data acquisition, data processing, data storage, model training, and inference. Yet these analyses still capture only first-order impacts, those directly tied to digital infrastructures.
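The life-cycle accounting described above can be sketched as a simple aggregation: embodied emissions (production, transport, end of life) plus operational emissions broken down across the five task categories. Every figure below is a placeholder chosen for illustration, not a value from the Wu et al. study.

```python
# Illustrative life-cycle accounting for an AI system:
# total footprint = embodied emissions + operational emissions,
# with the operational side split across the five task categories
# named in the text. All numbers are placeholders.

OPERATIONAL_PHASES = ("data_acquisition", "data_processing",
                      "data_storage", "training", "inference")

def lifecycle_footprint_kg(embodied_kg: float,
                           operational_kg: dict[str, float]) -> float:
    """Sum embodied and per-phase operational emissions (kg CO2eq)."""
    unknown = set(operational_kg) - set(OPERATIONAL_PHASES)
    if unknown:
        raise ValueError(f"unknown phases: {unknown}")
    return embodied_kg + sum(operational_kg.values())

# Placeholder breakdown for a hypothetical deployed model.
usage = {"data_acquisition": 5.0, "data_processing": 10.0,
         "data_storage": 8.0, "training": 120.0, "inference": 200.0}
total = lifecycle_footprint_kg(embodied_kg=150.0, operational_kg=usage)
print(total)  # 493.0 kg CO2eq (placeholder numbers)
```

Even with made-up numbers, the structure makes the point of the section: a server-only estimate would count the training term alone and miss both the embodied share and the long tail of inference.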
A comprehensive perspective must also include second-order impacts (for instance, positive efficiency gains in sectors such as buildings or transport) and third-order impacts, which are broader and systemic, such as behavioral changes induced by digital technologies (Kaack et al., 2021).
Toward a systemic framework. – At the conclusion of a conference devoted to AI’s environmental challenges, Anne-Laure Ligozat, professor and researcher in computer science at ENSIIE, calls for a systemic and interdisciplinary evaluation of AI’s impacts. She draws on the framework proposed by Lynn H. Kaack in the article Aligning Artificial Intelligence with Climate Change Mitigation (2021), which distinguishes three major categories of effects of Machine Learning on greenhouse-gas emissions: impacts linked to computing infrastructures, immediate impacts related to applications, and system-level impacts.
According to Ligozat, most existing studies adopt too narrow a perspective. They focus on isolated aspects—carbon footprint derived from energy consumption, performance of data centers, or the technical phases of training and inference—without considering the broader context. What is needed is a comprehensive assessment capable of encompassing the entire life cycle of AI systems, from production to use and end-of-life, while integrating both direct ecological indicators and indirect effects that are often overlooked but equally decisive.
Impacts of digital equipment and AI. – The first dimension concerns the life cycle of all equipment necessary to develop and deploy an AI system: extraction of raw materials, manufacturing, transport, usage, and disposal. For AI specifically, the assessment must also include data acquisition, processing, storage, model training, and inference phases.
Environmental, behavioral, economic, and societal impacts. – Much more ambitious and difficult to quantify, the second dimension aims at a global environmental assessment, taking into account qualitative effects such as obsolescence (accelerated renewal of equipment), direct rebound effects (increased usage enabled by efficiency gains), indirect rebound effects (economic gains reinvested in high-impact activities such as air travel), and broader social transformations affecting mobility patterns, biodiversity, and ecosystems. Ultimately, the environmental question raised by AI is not reducible to a simple calculation of electricity consumption. It requires a systemic approach capable of articulating technological performance, economic incentives, and societal transformations within a coherent ethical framework—one that reconciles innovation with ecological responsibility.