This chapter is published as part of Histories of Digital Journalism (2024) and explores the historical development of artificial intelligence (AI) in journalism. Highlighting the ethical complexities and professional challenges associated with integrating AI into newsrooms, it underscores journalism’s ability to adapt and engage with evolving technologies.
The first conference of the European Fact-Checking Standards Network (EFCSN) took place in Brussels, marking a significant milestone for European fact-checkers as they unite to tackle the rapidly evolving challenges of disinformation.
AI is testing the boundaries of human intelligence. AI will shape your soul. This AI really wants to know you. AI loves you. We still know little about how AI thinks. AI is racist and sexist. AI lies. AI does not want to be regulated. AI doesn’t care about you. AI cooks perfect steaks.
This list illustrates the potential of Large Language Models (LLMs) in journalism and how they can be used under human supervision.
Results show that while AI technologies offer valuable functionalities, fact-checkers remain critical and cautious, particularly toward generative AI (GAI), due to concerns about accuracy and reliability. Despite acknowledging the potential of AI to augment human expertise and streamline specific tasks, these concerns limit its wider use. AI- and GAI-based solutions are framed as “enablers” rather than comprehensive or end-to-end solutions, recognizing their limitations in replacing or augmenting complex human cognitive skills.
This paper aims to bridge the gap between machine learning development and journalism by focusing on three key areas: assessing the quality of machine learning datasets, guiding the development of machine learning solutions through a data-centric approach, and promoting AI and data literacy among journalists. It also examines the development of machine learning systems in newsrooms from the perspective of trustworthy AI, highlighting the need for such systems to reflect ethical journalism standards and, as their corollary, high data quality.
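As a rough illustration of the first focus area, a minimal sketch of what a data-centric quality audit might involve is given below; the column names, labels, and imbalance threshold are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of a basic dataset quality audit for a fact-checking
# corpus; column names and the imbalance threshold are illustrative only.
import pandas as pd

def audit_dataset(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Report simple quality signals: missing values, duplicates, label balance."""
    label_dist = df[label_col].value_counts(normalize=True).to_dict()
    return {
        "rows": len(df),
        "missing_ratio": df.isna().mean().to_dict(),   # per-column share of missing cells
        "duplicate_ratio": df.duplicated().mean(),     # share of exact duplicate rows
        "label_distribution": label_dist,
        # Flag severe class imbalance, a common issue in claim-verification data.
        "imbalanced": max(label_dist.values()) > 0.8,
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "claim": ["Claim A", "Claim B", "Claim B", None],
        "label": ["true", "false", "false", "true"],
    })
    print(audit_dataset(sample))
```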
This study examines the challenges fact-checkers face when dealing with war propaganda and how their socio-professional contexts influence these obstacles.
Generative AI (GAI) systems have demonstrated their ability to support, augment or take over various tasks, including intellectual activities such as brainstorming and writing. However, the challenge lies in integrating journalistic values, as these systems may rely on biased, unbalanced or copyrighted data during training, which hinders their alignment with ethical journalistic standards.