Since the launch of ChatGPT in November 2022, the role of AI in journalism has sparked new discussions, highlighting ethical concerns about its use, accuracy, and impact on journalistic standards. While AI tools such as large language models (LLMs) hold promise, they cannot replace human judgment and may contribute to misinformation because of their limitations.
This paper investigates the use of multimedia verification, in particular, computational tools and Open-source Intelligence (OSINT) methods, for verifying online multimedia content in the context of the ongoing wars in Ukraine and Gaza.
Transparency is seen as a means to achieve accountability and credibility in reporting and as a tool to hold public figures accountable. However, transparency does not protect Nordic fact-checkers from criticism or harassment for delivering uncomfortable truths. Transparency is not without flaws, even in societies characterised by a culture of openness and transparency.
This chapter is published as part of Histories of Digital Journalism (2024) and explores the historical development of artificial intelligence (AI) in journalism. Highlighting the ethical complexities and professional challenges associated with integrating AI into newsrooms, it underscores journalism’s ability to adapt and engage with evolving technologies.
The first conference of the European Fact-Checking Standards Network (EFCSN) took place in Brussels, marking a significant milestone for European fact-checkers as they unite to tackle the rapidly evolving challenges of disinformation.
AI is testing the boundaries of human intelligence. AI will shape your soul. This AI really wants to know you. AI loves you. We still know little about how AI thinks. AI is racist and sexist. AI lies. AI does not want to be regulated. AI doesn’t care about you. AI cooks perfect steaks.
This list illustrates the potential of Large Language Models (LLMs) in journalism and how they can be applied under human supervision.
Results show that while AI technologies offer valuable functionalities, fact-checkers remain critical and cautious, particularly toward generative AI (GAI), due to concerns about accuracy and reliability. Despite acknowledging the potential of AI to augment human expertise and streamline specific tasks, these concerns limit its wider use. AI- and GAI-based solutions are framed as “enablers” rather than comprehensive, end-to-end solutions, in recognition of their limitations in replicating complex human cognitive skills.