Since the launch of ChatGPT in November 2022, the role of AI in journalism has been the subject of renewed discussion, raising ethical concerns about its use, its accuracy and its impact on journalistic standards. While AI tools such as large language models (LLMs) offer potential, they cannot replace human judgement and, given their limitations, may contribute to misinformation.
Journalists use AI-based tools daily, from automated interview transcription to machine translation and advanced search engines, often without questioning the technology behind them. However, since the launch of ChatGPT in November 2022, discussions about the role of AI in journalism have taken a new direction, fuelled by hype and driven by marketing strategies that exaggerate the technology’s capabilities. This shift has sparked ethical reflection on the responsible use of AI, human agency and journalistic standards. While AI technology is not new to the news media sector, it has never been so widely discussed or scrutinised.
Despite their potential, large language models (LLMs) such as ChatGPT cannot replace the human judgement required to produce credible, nuanced stories. Instead, they are more likely to be used for secondary tasks due to their many drawbacks, including semantic noise, a lack of nuanced understanding, and the generation of inaccuracies that can contribute to misinformation. From a public perspective, using LLMs and other generative AI systems raises critical questions about factuality in the age of AI. Because AI can both inform and mislead, it challenges traditional notions of truth and can produce paradoxical outcomes, including the increasingly complex issue of trust and the growing rejection of AI-generated content as legitimate news. Trust is also central to the relationship between LLMs and journalists: research is beginning to show that trust is not always a prerequisite for use, as many professionals rely on the technology despite not fully trusting it or its results.
Technology has been part of the journalistic apparatus for years, and professionals have long had an ambivalent relationship with it, oscillating between dystopian and utopian visions. At this stage in the history of journalism, it would be misleading to think of AI as reshaping the profession. Rather, it should be seen as a ‘remix’ of the editorial process: AI provides tools that can augment existing workflows and routines while raising important questions about the role of journalists in AI-driven societies, where quality journalism serves as a shield against the growing flow of disinformation.
Effective integration of AI requires a clear understanding of the technology’s capabilities and limitations, together with risk-mitigation strategies that include AI literacy, human oversight and responsible practices. AI is not a silver bullet, and the hype surrounding it should not obscure the challenges the news media sector has faced for at least two decades, including the continued deterioration of working conditions. In this respect, AI does not offer solutions but creates new challenges. Furthermore, AI literacy should also be seen as a powerful tool for exploring the complexities of AI-driven societies without falling prey to bias. After all, tools are just tools, designed by humans to serve human purposes.
Explore the ethical implications and challenges of AI in journalism at the FARI AI Happy Hour event. This in-person event will take place on Monday 27 January at 17:30 at the FARI Test and Experience Center (Cantersteen 16, 1000 Brussels). More info can be found here.