This critical analysis examines information disorders in the age of artificial intelligence. By tracing the evolution of manipulation strategies from ancient rhetoric to modern state-backed computational propaganda, it shows that while persuasion techniques are not new, their automation, multiplication and personalisation through digital technologies constitute an intensification on an unprecedented scale.

The manipulation of information is as old as human society itself and has taken different forms across historical, cultural and political contexts. Since ancient times, thinkers such as Aristotle, in his ‘Rhetoric’, have analysed the tools for influencing opinion, such as persuasion, figures of speech, and emotional appeal.
Disinformation was originally used as a military strategy to deceive the enemy and gain the upper hand. For centuries, the manipulation of information has been a tool of power, shaping beliefs and behaviours. It was only in the first half of the 20th century, however, that rumour and propaganda became subjects of study: rumour was considered a social symptom revealing the tensions, fractures and imaginations of an era, while propaganda was framed as a formidable technique for mass manipulation. Propaganda is used not only to convey a message, but also to shape collective emotions, saturate public spaces, rewrite history and construct an imaginary that aligns with the transmitter’s objectives.
These persuasion and manipulation techniques are effective because they exploit our emotions, reflexes and cognitive biases. As early as the 1930s, figures such as Joseph Goebbels in Nazi Germany had already systematised highly effective techniques for mass manipulation. Goebbels made this very clear: to convince the masses, you must touch the heart before touching the mind. You must arouse fear, fuel hatred, flatter national pride and create a sense of unity in the face of a designated enemy.
Emotion, repetition, polarisation and saturation are the fundamental drivers of information disorder today. These dynamics have reached unprecedented proportions in the contemporary digital ecosystem, where the logic of virality largely prevails over that of verification. It is not the quality of the content that determines its reach, but its ability to provoke a reaction. Well-formulated rumours, images taken out of context, and simplistic yet emotionally powerful stories can now cross borders and influence millions of people in a matter of hours. Previously, it took days or weeks and a cumbersome infrastructure to reach a mass audience.
The many faces of information disorder
Claire Wardle and Hossein Derakhshan conceptualised the notion of information disorder, which encompasses the various phenomena that disrupt the circulation, reception and understanding of information in the public sphere. Their typology distinguishes three main forms. Misinformation is the unintentional sharing of false information, often due to misunderstanding or a failure to verify. For instance, individuals may circulate inaccurate news believing it to be factual. Disinformation, by contrast, involves the deliberate creation and dissemination of false or misleading content, usually with the intention of manipulating, deceiving, or causing harm. Finally, malinformation involves the use of genuine information that has been taken out of context or disclosed in a way that is designed to damage individuals or institutions.
Among the many forms of information disorder, conspiracy theories occupy a central position. In social psychology, they are defined as explanations of events based on the supposed secret and coordinated action of a malevolent group. Such narratives are far from mere fiction; they meet deep psychological needs by offering a sense of control, reducing uncertainty and strengthening group identity against a perceived threat. Periods of crisis, marked by instability and declining trust in institutions, provide particularly fertile ground for their spread.
The idea of ‘alternative facts’, popularised during Donald Trump’s presidency, is closely related. These are statements that cannot be objectively verified, yet are accepted as true within certain groups based on political or emotional identification rather than evidence. The important thing is not factual accuracy, but belief and loyalty to the source. The assault on the U.S. Capitol on 6 January 2021 was a striking example of this: many participants acted on the conviction that the 2020 election had been stolen, a claim that was relentlessly amplified by partisan media and online networks.
Finally, propaganda is another major form of information disorder. It consists of deliberate, systematic communication strategies employed by powerful individuals or organisations to influence attitudes, opinions and behaviours. Its aim is not merely to convince, but to reshape perceptions of reality by blurring the line between truth and falsehood. Contemporary examples abound: Russian propaganda, particularly in the context of the war in Ukraine, seeks to legitimise state narratives and sow confusion abroad, while Chinese state propaganda combines censorship, algorithmic control and international media outreach to promote a vision of stability and unity.
Artificial intelligence, an amplifying agent
All forms of information manipulation, whether local or transnational, now take place in a digital ecosystem that is profoundly shaped by artificial intelligence (AI). Although the term is often used broadly, AI refers to a set of technologies capable of automating tasks and decision-making processes by imitating certain human cognitive functions. These technologies structure the online information landscape, influencing what we see, believe and share.
On commercial platforms such as Amazon, recommendation systems use AI algorithms to analyse user data and suggest personalised content. They convert traces of online behaviour — searches, clicks and purchases — into tailored suggestions. Although these algorithms are designed to enhance the user experience, they can also amplify biased or misleading content, thereby reinforcing certain ideas and limiting exposure to diverse viewpoints. For example, a 2023 study by Check First and AI Forensics found that 72% of Amazon search results for ‘Covid’ featured books by authors who had previously been known to spread misinformation. Thus, recommendation systems can unintentionally become powerful vectors of falsehood.
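To make this mechanism concrete, here is a minimal Python sketch of a behaviour-based recommender. The catalogue, scores and the `recommend` function are entirely hypothetical illustrations of the general logic described above, not Amazon's actual system: items are ranked by similarity to past behaviour and by engagement, and nothing in the objective measures whether the content is reliable.

```python
# Minimal, illustrative sketch of a behaviour-based recommender.
# Hypothetical catalogue and scores; not any platform's real system.
from collections import Counter

CATALOGUE = {
    "book_a": {"topic": "covid", "engagement": 0.91},   # dubious but popular
    "book_b": {"topic": "covid", "engagement": 0.45},   # same topic, also dubious
    "book_c": {"topic": "history", "engagement": 0.60}, # unrelated, well-sourced
}

def recommend(user_history, top_n=2):
    """Rank unseen items by topical similarity to past behaviour plus raw
    engagement; note that accuracy is never part of the score."""
    topic_counts = Counter(CATALOGUE[item]["topic"] for item in user_history)

    def score(item_id):
        meta = CATALOGUE[item_id]
        # Past interest in a topic and engagement drive the ranking.
        return topic_counts[meta["topic"]] + meta["engagement"]

    candidates = [i for i in CATALOGUE if i not in user_history]
    return sorted(candidates, key=score, reverse=True)[:top_n]

# A user who clicked one dubious 'covid' title is steered towards more of the
# same before anything else.
print(recommend(["book_a"]))
```

Even in this toy version, the feedback loop is visible: one click on a questionable title is enough to push similar titles ahead of everything else.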
AI-driven recommendation mechanisms on social media pose similar risks. Algorithms prioritise content that generates strong emotional engagement, such as likes, comments and shares, regardless of accuracy. This logic fosters filter bubbles and echo chambers, where users primarily encounter content that aligns with their existing beliefs, thereby reinforcing confirmation bias and polarisation. In such an environment, political disinformation campaigns and foreign propaganda find fertile ground. Platforms operating within an attention economy are designed to maximise time spent online and advertising revenue, favouring emotionally charged or divisive posts over verified information.
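The engagement logic itself is simple enough to sketch. The following Python fragment uses invented weights and field names (not any platform's real ranking function) to show why a feed optimised for reactions surfaces the divisive but false post first: accuracy never enters the objective.

```python
# Illustrative sketch of engagement-driven feed ranking.
# Hypothetical weights and fields; not any platform's real algorithm.
posts = [
    {"id": 1, "likes": 120, "comments": 40, "shares": 85, "verified_accurate": False},
    {"id": 2, "likes": 30,  "comments": 5,  "shares": 2,  "verified_accurate": True},
]

def engagement_score(post):
    # Shares and comments weigh most because they keep users on the platform;
    # the 'verified_accurate' field plays no role in the score at all.
    return 1.0 * post["likes"] + 2.0 * post["comments"] + 3.0 * post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the emotionally charged, false post ranks first
```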
Search engines and content aggregators play a crucial role in how users access information, but their internal mechanisms can perpetuate misinformation. Google, for example, indexes and ranks billions of web pages according to their relevance and reliability. However, Aslett et al. (2023) have shown that users attempting to verify dubious claims online can sometimes become more convinced of the falsehoods due to ‘data voids’ — areas of the web dominated by low-quality sources that cite and validate one another. This creates a closed informational ecosystem that simulates legitimacy while circulating falsehoods.
Aggregators such as Google News and MSN further exemplify this issue. By compiling articles from various outlets into unified feeds, they speed up the flow of information, but they also reproduce editorial biases and prioritise visibility over veracity. Algorithms often highlight what is most popular or engaging, rather than what is accurate. A striking example of this occurred in the summer of 2024, when an anonymous X account falsely accused Tim Walz, who was then Kamala Harris’s running mate, of sexual assault. The unverified story was picked up by the Hindustan Times and automatically relayed by MSN’s algorithmic aggregation system, giving it an air of credibility. Investigations later revealed that this was part of a Russian disinformation campaign targeting the US election.
Finally, voice assistants such as Alexa use AI, speech recognition and natural language processing to answer user queries. However, their reliance on online sources leaves them vulnerable to the same risks. Alexa, for example, has repeatedly been criticised for relaying false information and conspiracy theories, a reminder that such assistants are optimised to provide rapid answers rather than accurate ones.
The case of generative artificial intelligence
Many citizens now confuse artificial intelligence with generative AI. The arrival of ChatGPT on the market at the end of 2022 sparked considerable enthusiasm for this previously little-known technology. However, behind this fascination lie significant concerns regarding disinformation. First, the data used to train these systems can contain biases, errors or problematic content. Second, generative models can produce coherent and credible-sounding text that is factually inaccurate, a phenomenon known as ‘artificial hallucination’.
Large language models (LLMs), one application of generative AI, are trained on vast amounts of data from a variety of sources, including user publications, collaborative encyclopaedias such as Wikipedia, partisan blogs, forums and ideologically charged websites. While this diversity promotes linguistic richness, it also introduces significant biases, both cultural and linguistic, into the training data. These biases are not merely reproduced; they can be amplified, producing false or slanted answers.
The models’ fluid language, empathetic tone and ability to tailor their responses to user expectations create an illusion of understanding and dialogue. This anthropomorphisation, combined with the implicit flattery of personalised discourse, establishes a climate of trust that makes users more receptive — and therefore more vulnerable — to inaccurate or manipulated content. This power of conviction, which is based not on truth, but on plausibility, marks a significant change in how disinformation can infiltrate our digital interactions today.
The integration of LLMs into search engines is transforming our relationship with information. Tools such as Google Gemini and Perplexity now provide written, synthesised answers rather than just a simple list of results. While this makes knowledge more accessible, it also raises questions about the reliability, transparency and quality of the content generated.
So-called ‘augmented’ search engines have several major weaknesses. Firstly, there is the issue of context: the user’s true intention can be misinterpreted, which can result in an inaccurate response. Secondly, there is a lack of transparency; it is often unclear what criteria are used to weight, prioritise or exclude sources. The algorithms themselves can introduce biases, subtly influencing the way results are presented. Furthermore, the syntheses produced tend to be overly generalised, erasing nuances, uncertainties and contradictions between sources. Citation reliability is another critical issue. A study published earlier this year showed that over 60% of responses generated by AI-powered search engines contained inaccurate references.
From apparent neutrality to instruments of disinformation
Generative AI is not a neutral technology. The content these systems produce is shaped by the technical, ethical and ideological choices underlying their operation, by their training data and design parameters, and by the intentions of their creators. They do not reflect an objective or universal truth, but rather a worldview encoded in their architectures. Each model embodies implicit norms, value hierarchies and cultural assumptions that influence its tone, style and the limits of what it can express. Therefore, GenAI functions not merely as a computational tool, but also as an ideological device that mediates access to information, knowledge and meaning in ways that are aligned with its underlying design philosophy.
The spread of GenAI has profoundly transformed the dynamics of online information. These technologies can produce text, images, sounds and videos that are increasingly indistinguishable from authentic content, making them powerful instruments for communication — but also for manipulation. Their ability to generate credible and coherent material on a large scale enables the automation of disinformation. Artificially created news stories, modified images and simulated voices can fabricate persuasive yet false narratives. When disseminated through social networks or automated accounts, such content can reinforce emotional reactions, polarisation and mistrust. The distinction between genuine and synthetic information becomes blurred, undermining traditional verification processes and eroding public trust in reliable sources.
Part of the problem and part of the solution
While AI can amplify the spread of disinformation, it can also play a crucial role in its detection and regulation. This duality lies at the heart of the current debate about the ethical and strategic use of AI in the public sphere. As a growing proportion of online content is now generated by AI systems, concerns about the quality and authenticity of information are increasing. The increasing sophistication of synthetic media makes it harder to distinguish between genuine and fabricated content. Malicious actors use these technologies to target specific audiences with personalised falsehoods that exploit their beliefs and vulnerabilities. This personalisation blurs the boundaries between fact and fiction, contributing to a broader erosion of trust in the digital information environment.
However, the same technological advances that enable manipulation can also be used to counter it. Artificial intelligence can support the detection of disinformation through automated monitoring, content verification and identifying coordinated influence operations. Many fact-checking organisations already use AI to analyse online networks, detect emerging false narratives and trace their diffusion across platforms.
Machine learning models can flag inconsistencies, recognise synthetically generated content and assess the credibility of sources at an unprecedented speed and scale. These developments demonstrate the potential of hybrid systems that combine human expertise with computational power — an approach that strengthens our collective resilience against information manipulation while upholding ethical and democratic principles.
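As a rough illustration of such a hybrid pipeline, the Python sketch below pairs a toy text classifier with a human-review step. The training examples, threshold and `triage` function are invented for illustration under the assumption of a simple supervised approach; real fact-checking systems are far more elaborate and keep the final judgement with human experts.

```python
# Minimal sketch of AI-assisted fact-checking triage: a toy classifier flags
# posts resembling previously debunked claims and routes them to human review.
# Hypothetical data and threshold; not any organisation's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = previously debunked claim, 0 = verified reporting.
train_texts = [
    "miracle cure suppressed by doctors, share before it is deleted",
    "official figures released today by the national statistics office",
    "secret plan revealed, the election was rigged by a hidden elite",
    "researchers publish peer-reviewed study on vaccine efficacy",
]
train_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(post, threshold=0.5):
    """Send suspicious posts to human fact-checkers; the model only
    prioritises, it never decides what is true."""
    prob_false = model.predict_proba([post])[0][1]
    return "human review" if prob_false >= threshold else "no action"

# A post echoing debunked phrasing is likely to be queued for human review.
print(triage("shocking secret cure they don't want you to see"))
```

The design point is the division of labour: the model provides speed and scale in surfacing candidates, while humans retain the contextual judgement needed to decide whether a claim is actually false.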
Artificial intelligence did not invent propaganda or manipulation, but it has dramatically increased their scale, speed and subtlety. As Kranzberg’s first law reminds us, technology is neither good nor bad, nor is it neutral; its effects depend on human intent and societal use. The shift from an information society to a disinformation society highlights the pressing need for civic education, media literacy, and critical thinking.