For a “Socratic” AI.

  • Writer: Franck Negro
  • Aug 31, 2025
  • 5 min read

Drawing on the experience acquired over the past few years, and in light of an ever-growing body of studies on the effects of digital tools on human cognition, some observers are now warning of a progressive loss of autonomy that the generalized use of artificial intelligence tools, especially generative AI, may bring about. In other words, by increasingly delegating to artificial systems tasks that humans previously carried out, we would become ever more dependent on these tools and, as a result, lose abilities as fundamental as attention and memory, but also, and above all, the ability to reason and to argue. In short, we would be less and less capable of thinking for ourselves. It is thus the whole process of searching for information, and more broadly our relationship to knowledge, that is being called into question, as Ngaire Woods reminds us in an op-ed published in Le Monde on September 2, 2025.


Indeed, the advent of the Web in the early 1990s, followed shortly thereafter by the development of search engines, the so-called “Google moment” of the end of that decade, had already profoundly transformed the way we accessed information and, in so doing, the way we built our knowledge of the world. The Web and the tools it brought with it lent credibility to the idea, promoted by its founders, of quasi-unlimited, low-cost access to a stock of knowledge without equivalent in the history of humanity. Initially dazzled by this promise, we did not yet foresee what the Web would gradually become from the early 2000s onward, with in particular the rise of the first social networks: the launch of LinkedIn in 2003, and then, above all, of Facebook in 2004.


It is at that moment, historians tell us, that we moved from the original Web, mainly static and based on consulting content produced by professionals or institutions, the so-called “Web 1.0,” to a Web described as “2.0,” marked by the rise of blogs and forums. These now allowed internet users not only to consume content passively, but also to produce it, comment on it, and, above all, share it. It is also during this period that a notion now part of everyday language was propelled to the forefront: “Big Data.” The term first appeared in scientific articles published by the Association for Computing Machinery (ACM) in October 1997, to designate datasets so vast, so varied, and generated at such speed that they could no longer be analyzed using traditional tools and methods for data management and analysis, such as relational SQL databases or centralized servers with limited storage and computing capacity; hence the famous triptych of “volume,” “velocity,” and “variety,” to which it is now customary to add “veracity” and “value.”


In this context, one application would play an absolutely central role for the internet users we had all become: the search engine. It was indeed indispensable, as the mission statement of Google (founded in 1998) recalls, “to organize the world’s information and make it universally accessible and useful.” In other words, the task was to facilitate access to knowledge for the greatest possible number of people, regardless of the language in which queries were formulated. The iPhone, unveiled at the beginning of 2007, would further amplify this phenomenon by making it possible to consult the Internet anywhere, at any moment. We could then truly speak of access to information that was at once universal, immediate, and mobile.


What Ngaire Woods’s column suggests, in the background, is the consubstantial link between the way we access knowledge and the cognitive capacities this activity mobilizes. In other words, how we gain access to knowledge directly influences which cognitive faculties are called upon and, consequently, whether certain competencies develop or erode. It is precisely this link that the rise and growing use of generative AI tools such as ChatGPT, Gemini, or DeepSeek calls into question. When we searched for information on the Web, we had to make a cognitive effort: exploring the dozens of pages returned as “blue links” by search engines, assessing the reliability of sources, selecting the most relevant ones, then reading and cross-checking their contents in order to develop our own synthesis.


Yet it is precisely this process that generative AIs, and new search experiences such as Google’s SGE (Search Generative Experience), are now calling into question. It is enough to formulate a question in natural language to receive a synthetic answer that supposedly brings together all the relevant information, without the user having to search for it, collect it, or prioritize it. In other words, the machine takes charge of the entire processing of information, so that the user no longer has to make the effort of analysis and mental synthesis that traditional search engines demanded. By simplifying access to information and its processing to an extreme degree, and by making that access ever more frictionless, we run the risk of delegating an ever-growing number of cognitive functions to AI systems, of becoming dependent on these tools, and of gradually losing capacities essential to the exercise of critical thinking.


To this is added the intrinsic sycophancy of generative AIs, which are above all trained, as Ngaire Woods emphasizes, “to please and to seek users’ approval,” so as to reduce as much as possible the “cognitive frictions” that are nevertheless indispensable to sharpening our intellectual faculties. For a generative AI, each interaction with a user constitutes feedback that may influence its future answers. Yet these systems possess neither a semantic understanding of the contents they process nor a conception of truth and falsity of their own. They produce text by relying on statistical correlations between words and linguistic sequences. A model can thus treat as plausible, because it is coherent with its training corpus, information that is factually false. In this sense, it tends to align itself with what is statistically in the majority, independently of whether the statements it produces are true.


Of course, these models will very probably continue to improve over time. Perhaps researchers will succeed in integrating ever more sophisticated verification mechanisms, making AIs less sycophantic toward users and more rigorous about the accuracy of their answers, provided, however, that a strong commitment to truth is maintained.


We could then imagine a “Socratic” AI which, like the Socrates staged by Plato, would ceaselessly ask new questions and compel us, through an exercise in maieutics, to interrogate our most deeply entrenched beliefs and certainties. In short, an AI to which we would not surrender our cognitive capacities and our critical mind, but which would, on the contrary, help to strengthen and develop them.
