AI and Disinformation: Are We Facing a Global Information Crisis?
- Franck Negro

- Sep 22, 2025
Disinformation, echo chambers, and polarization. – Disinformation, defined as false information deliberately propagated with the intent to deceive or cause harm, has become a significant threat to our democratic institutions and social stability. The development of artificial intelligence over the past decade, and in particular the rise of generative AI since the public launch of ChatGPT in late 2022, has considerably expanded the possibilities for spreading disinformation, with all the risks this poses to our democracies.
The use of recommendation systems based on AI algorithms makes it possible, for example, to target certain populations with extreme precision and to feed them the content to which they are most receptive. Distributed on platforms such as social networks, this targeted content can be propagated massively and almost instantaneously, with the aim of influencing voters' thoughts and behaviors during national or regional elections. It thus helps shape public opinion and undermine trust in institutions. This was the case, for example, in the Cambridge Analytica scandal: this political consulting firm, founded in 2013, exploited the data of millions of voters without their consent in order to build psychological profiles intended to influence their electoral behavior. The company played an important role in the Brexit campaign as well as in Donald Trump's 2016 presidential campaign. The scandal, revealed in 2018, led to the company's closure that same year.
Yet the algorithms used by digital platforms, especially the social networks now favored by younger generations as a source of information, do not foster the dissemination of reliable, vetted information. Above all, they prioritize user engagement by promoting emotionally charged content, which studies show spreads far faster than any other type of content. The AI algorithms deployed by platforms such as Facebook, YouTube, or X (formerly Twitter) thus help create, automatically and at scale, digital environments in which extremist and divisive narratives proliferate, precisely because they generate high levels of interaction. This mechanism favors the formation of what researchers in media sociology and communication studies have, since the rise of social networks, called "echo chambers." The expression refers to the fact that users of these platforms are mainly exposed to information or opinions that reinforce their existing beliefs and convictions, rather than being confronted with divergent points of view. In other words, they find themselves confined within an informational bubble, with limited capacity to open themselves to other perspectives.
Personalization, recommendation engines, and algorithmic filtering therefore do not merely reinforce individuals' exposure to ideas close to their own; they also shield them from the intrusion of contradictory opinions, which are nonetheless indispensable to the construction of critical thinking. These mechanisms, deployed at scale by ever more sophisticated algorithms, thus generate heightened risks of polarization in public debate and radicalization of opinions, hardly conducive to democratic dialogue and deliberation.
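The feedback loop described above can be illustrated with a deliberately simplified toy model (real platform rankers are proprietary and vastly more complex; every name and number below is a hypothetical construction for illustration). Items carry an opinion score in tenths, from -10 to +10; the "feed" simply surfaces the items closest to the average of what the user has already engaged with, so the window of exposure locks in and never widens:

```python
# Toy sketch of an engagement-weighted feed (illustrative only).
# Opinion scores are integers in tenths, from -10 (one pole of the
# spectrum) to +10 (the other).

def build_feed(items, engagement_history, feed_size):
    """Rank items by closeness to the mean of past engagement."""
    centre = sum(engagement_history) / len(engagement_history)
    return sorted(items, key=lambda item: abs(item - centre))[:feed_size]

# A catalogue of content spanning the whole opinion spectrum.
catalogue = list(range(-10, 11))

# A user who has so far engaged with two mildly positioned items.
history = [3, 5]

for step in range(3):
    feed = build_feed(catalogue, history, feed_size=3)
    history.extend(feed)  # the user engages with everything served
    print(f"step {step}: feed = {feed}")
```

Even though the catalogue spans the full range from -10 to +10, every feed after the first engagement contains only items clustered around the user's starting position; the narrowing is self-reinforcing because each round of engagement re-centres the next ranking on the same point. This is the echo-chamber dynamic in miniature, absent any deliberate manipulation.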
The use of generative AI during elections. – To these well-known phenomena is added the recent use of generative AI, whose effects on the global information sphere increasingly worry researchers in the social sciences. The risks these technologies may pose to our democracies, however, still remain to be documented. To that end, in May 2025 a collective of experts affiliated with the IPIE (International Panel on the Information Environment) published a report entitled Generative AI in Electoral Campaigns, whose ambition is to provide the first global analysis of the use of generative artificial intelligence during the elections held in 2024.
That year indeed represents a privileged research terrain, since more than half of the world's population was called upon to vote in national elections. Yet in each of these elections, the report's authors remind us, generative AI tools were mobilized to influence voters, disseminate false information, and disrupt democratic processes. The study relies on an open database listing 215 incidents in 50 countries, with a double objective: 1) to provide a global account of the use of generative AI in an electoral context, and 2) to provide public decision-makers and actors in the technology sector with recommendations on governance and transparency, so as to strengthen citizens' trust and the resilience of electoral systems.
The synthesis document intended for policy-makers highlights four main findings, available in its English version on the IPIE website:
Generative AI is involved in most elections. – Indeed, 80% of countries that organized national elections in 2024 experienced some use of generative AI during the campaign. Among the fifty countries analyzed, eight recorded ten or more incidents, with thirty in the United States and thirty in India. Romania's case drew particular attention: according to the conclusions of the Constitutional Court, the risks of destabilization and information manipulation weighing on electoral integrity led to the annulment of the results of the first round of the presidential election of November 24, 2024. One of the major triggers was the disclosure of documents suggesting that a large-scale operation to manipulate public opinion had been conducted via TikTok, a platform particularly popular in Romania, in favor of the independent candidate Călin Georgescu.
Generative AI is mainly used for content production. – In 90% of the recorded incidents, generative AI served to rapidly produce audio, visual, or textual content—especially doctored videos or deepfakes—and to integrate it into systems enabling the targeting of specific publics through personalized messages. It thus most often forms part of broader influence operations. The report cites, for example, a synthetic video falsely showing, during elections in Bangladesh, the Assembly candidate Abdullah Nahid Nigar announcing her withdrawal from the race. The AI-generated video quickly spread on social networks. Such examples show how generative AI, combined with mass-dissemination platforms, enables actors to shape political narratives at will, impersonate public figures, and manipulate democratic debate.
Most users remain unknown. – In nearly half of the cases (46%, to be exact), the origin of the AI-generated content could not be determined: no author, producer, or organization could be identified. This anonymity, which most often serves malicious or strategic ends, greatly complicates attribution, accountability, and platform regulation. In the long run, it represents a major challenge for the transparency and quality of political communication. The report's authors therefore recommend strengthening detection capacities, establishing disclosure norms, and putting effective oversight mechanisms in place.
Most uses are harmful during elections. – Finally, the authors classified all recorded incidents in the database into three categories according to their electoral implications: beneficial, harmful, or uncertain. Fully 69%, more than two thirds, proved to have had a harmful impact on the electoral process. Among the cases attributed to foreign actors, all had a harmful purpose, which testifies, the authors emphasize, to external entities' persistent interest in interfering with electoral processes and manipulating voters' perceptions.
While generative AI can, in theory, be mobilized to produce positive—even inspiring—content likely to serve voters’ civic education and help them fully exercise their duties as citizens, the report shows that, in practice, it is most often used for malicious purposes aimed at manipulating public opinion and influencing electoral behaviors. Its capacity to rapidly generate varied content is reinforced by the ease with which it can be integrated into other dissemination technologies, especially social media platforms. This coupling offers malicious actors—whether local or foreign—the possibility of placing generative AI at the heart of mass communication and personalized targeting strategies, thereby transforming these technologies into powerful instruments of propaganda and disinformation in electoral periods.
A global information crisis – But disinformation does not affect only the sphere of public and democratic debate: it can also have direct repercussions on production, the economy, and financial markets. Economic decision-makers and investors base part of their choices on the information disseminated daily by digital media. It is therefore these same information flows that profoundly influence investment behaviors and can, if manipulated or altered, trigger instability in financial and stock markets. In other words, information—and the level of trust we accord it—constitutes an essential pillar not only for the proper functioning of our democracies, but also for the prosperity of our economies. This crucial point is forcefully recalled by an op-ed published in Le Monde on September 22, 2025.
In this text, a collective of eleven internationally renowned economists, among them Daron Acemoglu, Joseph Stiglitz, and Philippe Aghion, warns of what it calls an unprecedented “global information crisis,” threatening economic prosperity and human progress. Indeed, while most governments seem to have developed a passion for artificial intelligence, they neglect to invest in what remains, according to the authors, one of the essential pillars of the twenty-first-century economy: information. “Without reliable information,” they remind us, “it is impossible to meet the most urgent economic, social, and environmental challenges of our time.”
The authors highlight the considerable risks linked to the weakening of independent, verified information, as well as the threats that disinformation poses to economic and financial stability. Thus, in 2024, no fewer than 90 countries are said to have been targeted by “attempts to manipulate information by foreign states.” The rise of technologies such as generative artificial intelligence risks, in their view, further amplifying this phenomenon and making the fight against the spread of false information online ever more difficult.
They stress that market forces alone are incapable of protecting this essential public good that is information. Market forces channel revenues toward the dominant business models of the major platforms, which directly weakens traditional and independent media. Added to this are the massive investments of countries such as Russia in disinformation and propaganda, while the major democracies struggle to mobilize sufficient resources to support free media.
In light of this diagnosis, they call for “decisive public action,” structured around two priorities: 1) protecting public-interest media and 2) shaping the information markets of tomorrow. The first measure would involve public investment in “new models necessary to promote, support, and protect free and independent media,” notably through the creation of an International Fund for Public Interest Media. The second would consist in instituting a genuine “industrial policy for information,” intended to foster the emergence of a viable and independent media ecosystem, while also putting in place regulation adapted to an economy transformed by AI.
What is at stake, the authors insist, is the viability of an informational space undergoing rapid degradation, whose fragility compromises both the proper functioning of economies and the benefits we expect from the artificial intelligence revolution.