Whether it’s maintaining transparency in journalism, integrating cutting-edge technology, or countering war propaganda, fact-checking involves innovation, resilience, and adaptation. These six studies, all published this year, demonstrate the dynamic engagement with disinformation research and interdisciplinary approaches at the University of Bergen through the Nordic Observatory for Digital Media and Information Disorder (NORDIS).
Chapter 1: The promises and challenges of using AI
The practice of fact-checking involves using technological tools to monitor online disinformation, gather information, and verify content. How do fact-checkers in the Nordic region engage with these technologies, especially artificial intelligence (AI) and generative AI (GAI) systems? Using the theory of affordances as an analytical framework for understanding the factors that influence technology adoption, this exploratory study shows that while AI technologies offer valuable functionalities, fact-checkers remain critical and cautious due to concerns about accuracy and reliability. Despite acknowledging the potential of AI to augment human expertise and streamline specific tasks, these concerns limit its wider use. Nordic fact-checkers show openness to integrating advanced AI technology but emphasize the need for a collaborative approach that combines the strengths of both humans and AI. As a result, AI and GAI-based solutions are framed as “enablers” rather than comprehensive, end-to-end solutions, recognizing their limitations in replacing or augmenting complex human cognitive skills. Available in open access.
Dierickx, L., Sirén-Heikel, S., & Lindén, C. G. (2024). Outsourcing, augmenting, or complicating: The dynamics of AI in fact-checking practices in the Nordics. Emerging Media, 2(3), 449-473.
Chapter 2: Three risk mitigation strategies for using LLMs in fact-checking
The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Between excitement about AI systems that no longer require the mastery of programming skills and the exploration of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. As LLMs pose ethical challenges, how can professional fact-checking mitigate risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three key complementary mitigation strategies related to education, ethics, and professional practice.
Dierickx, L., Van Dalen, A., Opdahl, A. L., & Lindén, C. G. (2024, August). Striking the balance in using LLMs for fact-checking: A narrative literature review. In Multidisciplinary International Symposium on Disinformation in Open Online Media (pp. 1-15). Cham: Springer Nature Switzerland.
Chapter 3: The Nordic approach to transparency
Transparency is more than a motto for professional fact-checkers; it is a professional requirement that permeates their daily practice. Although transparency has been theorised and critiqued extensively in journalism studies, there has been less research on its practical implications for news workers. This paper aims to fill this gap by focusing on fact-checking practices in the Nordic countries. Transparency is seen as a means to achieve accountability and credibility in reporting and as a tool to hold public figures accountable. However, it does not protect Nordic fact-checkers from criticism or harassment for delivering uncomfortable truths. Available in open access.
Dierickx, L., & Lindén, C. G. (2024). Transparency and fact-checking in open societies. Journalism.
Chapter 4: War propaganda and the human cost of seeking the truth
This study examines the challenges fact-checkers face when dealing with war propaganda and how their socio-professional contexts influence these obstacles. Using a mixed-methods approach, the research identifies common difficulties such as time constraints, resource limitations, and the struggle to find reliable information amidst language barriers and geographical distances. The findings highlight the impact of socio-professional contexts on investigative methods, ranging from traditional journalism to advanced open-source intelligence methods. The study underscores the importance of international cooperation and support networks in addressing these challenges and also in mitigating the impact that exposure to violent content and harassment has on well-being and professional integrity. Available in open access.
Dierickx, L., & Lindén, C. G. (2024). Screens as Battlefields: Fact-Checkers’ Multidimensional Challenges in Debunking Russian-Ukrainian War Propaganda. Media and Communication, 12.
Chapter 5: A closer look at OSINT practices
How do the Norwegian fact-checkers from Faktisk use multimedia verification and Open-Source Intelligence (OSINT) methods to verify online multimedia content in the context of the ongoing wars in Ukraine and Gaza? This case study showcases the effectiveness of diverse resources, including AI tools, geolocation tools, internet archives, and social media monitoring platforms, in enabling journalists and fact-checkers to efficiently process and corroborate evidence, ensuring the dissemination of accurate information. It also underscores the potential of currently available technology, highlights its limitations, and provides guidance for the future development of digital multimedia verification tools and frameworks. Available in open access.
Khan, S. A., Dierickx, L., Furuly, J. G., Vold, H. B., Tahseen, R., Lindén, C. G., & Dang-Nguyen, D. T. (2024). Debunking war information disorder: A case study in assessing the use of multimedia verification tools. Journal of the Association for Information Science and Technology.
Chapter 6: VeriDash, the AI-based and open-source prototype
This paper presents VeriDash, an open-source dashboard that integrates AI-based technologies to streamline the multimedia verification process for fact-checkers. VeriDash offers advanced features such as automated transcription and geolocation within an intuitive interface designed for ease of use. By incorporating a human-in-the-loop approach, VeriDash balances technological efficiency with human expertise, promoting trusted and responsible AI technology to support and enhance the fact-checking process. Available in open access.
Skivdal, J., Dierickx, L., & Dang-Nguyen, D. T. (2024, November). VeriDash: an AI-Driven, User-Centric Open Source Dashboard for Enhancing Multimedia Verification. In Norsk IKT-konferanse for forskning og utdanning (No. 2).