Balancing the scales: AI between verification and disinformation

Laurence Dierickx

2025-04-08

On 8 April, the EDMO Belux 2.0 workshop on disinformation in Belgium and Luxembourg brought together experts to reflect on the evolving landscape of information disorder. Where are we? Where are we going? Here is my contribution on the dual role of generative AI technologies.
In the fight against information disorder, AI-based technologies are both a tool to combat disinformation and a means to amplify it. They offer capabilities such as identifying patterns in content, assisting in the verification of claims, and supporting the dissemination of fact-checks, but they also enable the rapid creation and spread of false narratives. According to data from EDMO, the phenomenon is not yet widespread, but it continues to evolve and poses ongoing risks as generative technologies become more accessible and sophisticated. Nor should it be forgotten that AI-based disinformation campaigns are not fully automated: humans are behind them, with human intentions, so reducing the phenomenon to a purely technical problem would be misleading. Moreover, the technology’s potential to generate convincing misinformation at scale, together with its susceptibility to bias and error, complicates the role of AI technologies, especially large language models and other generative AI tools, in the mis- and disinformation landscape.
The integration of AI into fact-checking workflows remains uneven. While fact-checkers have begun experimenting with generative AI for data analysis and content support (summarisation, brainstorming), scepticism persists, especially among professionals concerned with accuracy and ethical use. Fact-checkers acknowledge the potential of AI but emphasise the continued importance of human oversight, given the limitations of current models in understanding context, nuance and credibility.

See also

Dierickx, L., Sirén-Heikel, S., & Lindén, C. G. (2024). Outsourcing, augmenting, or complicating: The dynamics of AI in fact-checking practices in the Nordics. Emerging Media, 2(3), 449-473.

Dierickx, L., Van Dalen, A., Opdahl, A. L., & Lindén, C. G. (2024, August). Striking the balance in using LLMs for fact-checking: A narrative literature review. In Multidisciplinary International Symposium on Disinformation in Open Online Media (pp. 1-15). Cham: Springer Nature Switzerland.

Moving forward means focusing on the responsible and collaborative development of AI tools: tools offer no magic solution, and the human factor remains essential in fact-checking. That involves strengthening the human-in-the-loop approach, fostering cross-sector partnerships, and promoting AI literacy, all of which are critical to ensuring that technology supports truth rather than undermines it. As AI-generated content becomes more widespread, the challenge is to build systems that assist and augment human fact-checkers while upholding trust, maintaining transparency, and adapting to evolving methods and techniques.

See also

Skivdal, J., Dierickx, L., & Dang-Nguyen, D. T. (2024, November). VeriDash: An AI-Driven, User-Centric Open Source Dashboard for Enhancing Multimedia Verification. In Norsk IKT-konferanse for forskning og utdanning (No. 2).