The launch of ChatGPT in November 2022 prompted reflection on the usefulness of generative AI (GAI) technology in supporting fact-checking workflows and practices. At the same time, critics have questioned the fairness and reliability of the data collection and training on which GAI systems are based. The well-known phenomenon of artificial hallucinations adds a further layer to these concerns, as does the fear that machine-generated content will be used to create and spread mis- or disinformation at scale. Given these ethical challenges and the inherent limitations of GAI systems, how can the risks be mitigated to foster responsible use among fact-checkers?
Poster presented at the conference "Large Language Models for media and democracy: wrecking or saving society?", Amsterdam, 23-24 April 2024.