
Ten reasons why generative AI is not disruptive

Laurence Dierickx

2025-06-14

Generative AI is often portrayed as a disruptive force in journalism. However, this view overlooks how the industry actually works, as well as the fact that journalistic value extends far beyond simply generating text. Much of the misunderstanding stems from hype and technical misconceptions, along with an insufficient appreciation of what journalism entails. The following ten arguments challenge the claim that generative AI is genuinely disruptive.

1. The potential of generative AI is often overestimated due to hype, technical misconceptions and a lack of field-specific nuance. The real disruption comes from the expectations we place on generative AI, which are often detached from its real-world limitations.

2. The institutional coherence of journalism comes from verification, judgement, and accountability. Although tools like ChatGPT can replicate surface-level outputs, they do not challenge journalism’s deeper structure unless humans allow them to.

3. Current generative AI use cases are assistive, not substitutive. While transcription, summarisation and basic code support can enhance workflows, they do not replace core editorial or investigative labour.

4. Generative AI systems do not understand ethics, truth, consequences, intent or goals. Their output is entirely shaped by human input, corporate design and legal governance. Even in generative AI-augmented workflows, editorial standards and journalistic ethics remain the foundation of the field.

5. The idea that generative AI ‘complicates what it means to be a journalist’ is speculative. Most journalistic roles remain under human editorial control. As Barbie Zelizer has emphasised, scholarship all too often overlooks what remains stable in journalism amid change.

6. Institutions rely on stable norms, shared accountability and a sense of professional identity. However, generative AI systems such as ChatGPT and Sora are corporate products, not social actors or institutions.

7. Framing generative AI as an ‘institution’ rests on shaky conceptual ground. This approach overlooks generative AI’s reliance on human agency and reinforces a form of techno-mysticism that obscures the fact that power lies with tech corporations and platform owners.

8. Generative AI is not free from bias. It inherits and amplifies the biases present in its training data. Therefore, it cannot reliably meet the epistemic or ethical standards of responsible journalism.

9. It is misleading to equate experimentation with transformation. Early adoption of, and interest in, generative AI do not necessarily lead to institutional upheaval. Journalism has historically embraced new technologies while staying true to its core principles, and this moment is no exception.

10. Much of the current discourse is conceptual or speculative. What is needed are longitudinal, empirical studies that trace changes in newsroom practices, decision-making authority and audience trust. Without evidence, disruption remains merely a narrative.
