Generative AI and journalism: A critical examination of epistemic authority and knowledge production

Laurence Dierickx

2026-04-03

Situated between pragmatic professional applications and broader structural transformations, the integration of generative AI into journalism introduces substantial epistemic and epistemological challenges. This post advances a critical analysis, tracing developments from emerging practices to forms of algorithmic mediation and their ethical consequences.

Artificial intelligence technologies have been used in journalism for almost two decades, often in discreet and largely naturalised forms. Search engines, recommendation systems, transcription tools, and machine translation have gradually transformed professional routines without directly challenging the epistemological foundations of journalism. To these were added forms of automation affecting the very mechanisms of news production and dissemination, fuelling then still relatively uncommon ethical debates about bias, the reliability and transparency of processes, and their impact on the quality of information.

With the rise of generative artificial intelligence, the widespread view is that it is no longer simply a matter of assisting journalistic work or taking over time-consuming tasks, but of intervening directly in the production of discourse, narratives, and factual reporting. This evolution entails a redefinition of the very conditions of knowledge, truth, and authority in the public sphere. However, empirical studies devoted to journalistic uses qualify the idea of generalised automation. They show that journalists primarily use generative AI as an assistance tool integrated into their routines: summarising documents, compiling news reports, helping with wording, reformulating, or adapting style.

These practices remain subject to heightened vigilance regarding risks to veracity, professional ethics, and journalistic authority. The emphasis is on optimising workflows rather than delegating the production of factual content, revealing a persistent tension between efficiency gains and the preservation of the journalist’s role as guarantor of meaning and truth.

The great misunderstanding

A major misunderstanding structures much of the debate. Generative AI is often perceived as a system capable of reproducing knowledge or even accessing reality. However, it is neither a database nor a knowledge system in the classical sense. It is a statistical language model trained on massive corpora to predict the probability of a linguistic sequence. It does not store facts, does not refer to events, and possesses neither historical memory nor a semantic understanding of the world.

Its performance relies on the ability to produce fluent, coherent, and contextually appropriate statements. This fluency is precisely what fuels an illusion of intelligence, and even more so, an illusion of epistemic authority. The responses produced are plausible, often convincing, but inherently independent of any requirement of truth or proof.
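The point that generation is driven by corpus statistics rather than by truth can be made concrete with a toy sketch. The following is not how a large language model is actually built (real models use neural networks over subword tokens), but a deliberately minimal bigram model over a hypothetical three-sentence corpus: it produces grammatical-looking sequences purely by sampling from observed word-transition frequencies, with no representation of whether the resulting claim is true.

```python
import random
from collections import Counter, defaultdict

# Hypothetical toy corpus: the model's entire "knowledge" is these strings.
corpus = (
    "the president announced a new policy . "
    "the president denied the report . "
    "the report was false . "
).split()

# Count bigram frequencies: how often word B follows word A.
bigrams = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigrams[word]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a "plausible" sentence: each step is a probability draw,
# never a lookup of a stored fact.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
    if word == ".":
        break
print(" ".join(output))
```

Run repeatedly, the sketch can emit "the president denied the report" or "the report was false" with equal fluency: both are statistically well-formed continuations, and neither is checked against anything outside the corpus. That, in miniature, is why fluency and veracity come apart.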

This fundamental characteristic has immediate epistemic effects. The models are trained on data of heterogeneous quality, largely sourced from the internet, that includes expert knowledge, opinions, biased content, extremist discourse, and disinformation. They necessarily reproduce and amplify these asymmetries: cultural and linguistic biases, social stereotypes, dominant worldviews. The phenomenon of hallucinations, far from being marginal, is structural.

Added to this is reinforcement training based on human feedback, which prioritises conformity, perceived satisfaction, and social acceptability over factual accuracy. Algorithmic sycophancy, defined as the tendency of models to flatter, approve, or validate the user’s premises, clearly illustrates this logic. It poses a particular risk in sensitive journalistic or informational contexts, where contradiction, critical distance, and verification are essential conditions for knowledge production.

These effects, however, should not be considered solely as technological deviations. They are part of a broader epistemological framework concerning how knowledge is produced, perceived, and legitimised. The speed and immediacy inherent in generative AI contribute to a radical reconfiguration of our relationship with time. Immediate response becomes the norm; waiting, a sign of inefficiency. This temporality favours the delegation of cognitive tasks to machines at the expense of practices based on doubt, deliberation, and reflexivity.

Human agency, understood as the capacity of an individual to recognise themselves as the author of their actions and decisions, is thereby weakened. The risk is not that journalists will be replaced by generative AI, but rather that they will be gradually dispossessed of certain dimensions of their cognitive work, including the production of meaning.

An epistemological reading of the Toronto school sheds light on this transformation. Following in the footsteps of Innis and McLuhan, the focus is not solely on analysing the content produced by technologies, but on understanding how the medium itself shapes structures of perception and thought. Generative AI is not simply a tool; it constitutes a cognitive environment. Its interface suggests a dialogic exchange, an understanding, and sometimes even intentionality. The medium blurs the message: not by necessarily producing false content, but by changing the conditions under which truth is perceived as such.

In this context, the concept of emergent fact is central. In traditional journalism, a fact rests on a requirement of truth, whether conceived as empirical, interpretive, or institutional. It is the result of a process of selection, verification, and narrative construction undertaken by an identifiable actor. In generative AI, a fact is a probabilistic product. It emerges from the interaction between training data, model architecture, technical parameters, and user instructions. It is not a pre-existing, revealed truth, but a probabilistic, adaptive, variable, and opaque outcome. Its linguistic coherence can mask the lack of empirical grounding. An emergent fact cannot be stable, since it is contingent and contextual.
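The "probabilistic, adaptive, variable" character of an emergent fact can be illustrated with one of the technical parameters mentioned above: sampling temperature. The sketch below uses hypothetical scores (logits) for three candidate continuations of the same prompt; it is an assumption for illustration, not a real model's output. The softmax function shown is the standard way such scores are turned into probabilities, and temperature rescales how concentrated or dispersed they are.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; temperature rescales the spread."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations of the same prompt, with made-up scores.
tokens = ["in 2019", "in 2020", "in 2021"]
logits = [2.0, 1.5, 0.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax(logits, temperature=t)
    # Same prompt, same model: the likelihood of each "fact" shifts
    # with a sampling knob the reader never sees.
    print(f"temperature={t}: "
          + ", ".join(f"{tok}: {p:.2f}" for tok, p in zip(tokens, probs)))
```

At a low temperature the top candidate dominates and the output looks stable and authoritative; at a high temperature the alternatives become nearly interchangeable, and repeated queries yield different dates. The "fact" the user receives is thus contingent on configuration, exactly the instability the notion of emergent fact captures.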

A centralisation of knowledge

The development of generative AI is inextricably linked to its industrial and political logic. Large-scale models are produced by a limited number of private actors, primarily American, embedded in specific economic, ideological, and geopolitical dynamics. The cases of ChatGPT, Grok, and DeepSeek illustrate distinct but converging approaches: ideological biases, political control, exploitation of unfiltered speech, or censorship of sensitive subjects.

The centralisation of data and discursive production capabilities intensifies the control of knowledge and standardises ways of telling the world's story. This homogenisation is hardly compatible with the democratic ideal of journalism, founded on pluralism, the diversity of voices, and editorial responsibility.

This dimension is all the more concerning given that generative artificial intelligence is increasingly integrated upstream in the information access process, particularly through search engines powered by large-scale language models. These systems no longer simply prioritise sources or guide content visibility, but synthesise, reformulate, and recompose information in response to user queries.

By blurring the line between research and discursive production, they tend to dilute the traceability of sources, obscure the chains of mediation, and weaken the epistemic autonomy of both users and professionals. The risk is therefore not only that of a visibility bias, but also that of a reconfiguration of facts, interpretive frameworks, and criteria of relevance upstream, according to algorithmic logics largely removed from editorial and democratic control.

Ambivalent news media discourse

Media discourse on AI itself contributes to this reconfiguration. Technological hype, the adoption of the narrative of inevitability, and the uncritical appropriation of Big Tech language all contribute to naturalising these technologies and making them unquestionable horizons of modernisation, a phenomenon sometimes described as AI-washing. This masks the technical limitations of the systems, their considerable environmental costs, the issues of digital sovereignty, and the epistemological risks they raise. This naturalisation is all the more effective because it relies on a rhetoric of fear: the fear of being left behind, of job losses, of obsolescence. Here, fear is mobilised as a tool to accelerate adoption rather than to foster informed debate.

Meanwhile, highly visible "expert" figures, often from the business world or consulting, occupy the public space to the detriment of more nuanced knowledge produced by computer science, the social sciences, or critical technology studies. Their discourse tends to oversimplify complex notions, to promote the idea of an "intelligent", almost autonomous AI, and to obscure the structural constraints of the models. This media selection of expertise promotes a performative, enchanted vision of AI, while weakening journalists' and the public's critical capacity.

That said, discourse becomes more cautious, even critical, when it directly concerns newsrooms: threats to employment, blurring of the lines between information and fiction, and the proliferation of misleading content. But this vigilance coexists with a pragmatic and sometimes enthusiastic adoption of these tools, often presented as neutral assistants that improve productivity. The result is a persistent tension between, on the one hand, a measured and controlled use of generative AI in professional routines and, on the other, media narratives that amplify its symbolic power and perceived authority.

These discourses also contribute to reinforcing powerful imaginaries: that of the machine that “understands,” “reasons,” and surpasses the human, or conversely, that of a miraculous tool of progress and efficiency. By fostering anthropomorphism and promoting fluid conversational interfaces, the media help blur the distinction between statistical computation and human intelligence. The risk, then, is not so much a naïve embrace of overvalued technologies, but rather a silent transformation of the cognitive frameworks through which knowledge, information, and truth are conceived.

The central question, therefore, is not only that of epistemic uses and risks, concerning the quality and diversity of information, but also that of epistemological risks, relating to the norms of truth, authority, and responsibility. It is also that of journalism’s capacity to preserve critical agency, to maintain the traceability of knowledge, and to resist the naturalisation of probabilistic calculation as a substitute for reality.

This ethical requirement is widely recognised in recommendations and normative frameworks for generative artificial intelligence, whether institutional, professional, or deontological. These documents generally do not adopt a monolithic conception of ethics, but explicitly articulate several complementary registers: an ethics of virtue, centred on the integrity, responsibility, and judgment of human actors; a consequentialist ethics, attentive to the social, democratic, and informational impacts of uses; and an ethics of duty, based on principles and rules aimed at guaranteeing transparency, accuracy, traceability, and accountability. However, while this ethical plurality is now well integrated at the normative level, in practice it encounters the complexity of technical and editorial environments, where decisions are increasingly distributed between humans and systems.

Translation made with the help of Google Translate