The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Amid the excitement over AI systems that no longer require programming skills, and the exploration of a new field of experimentation, academics and professionals alike foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness of the data used to train Large Language Models (LLMs), including the risks of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation. Given the ethical challenges that LLMs pose, how can professional fact-checking mitigate these risks? This narrative literature review explores the current state of LLMs in the context of fact-checking practice, highlighting three complementary mitigation strategies related to education, ethics, and professional practice.
This paper is part of the proceedings of the 6th Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM 2024), September 2-4, 2024 (Münster, Germany).
Access the paper: https://link.springer.com/chapter/10.1007/978-3-031-71210-4_1
Reference
Dierickx, L., van Dalen, A., Opdahl, A.L., Lindén, CG. (2024). Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review. In: Preuss, M., Leszkiewicz, A., Boucher, JC., Fridman, O., Stampe, L. (eds) Disinformation in Open Online Media. MISDOOM 2024. Lecture Notes in Computer Science, vol 15175. Springer, Cham. https://doi.org/10.1007/978-3-031-71210-4_1
The paper was also presented at the Democracy & Digital Citizenship Conference Series, September 3-4, 2024 (Odense, Denmark).