While perceived as a tool to improve fact-checking, GAI technology raises concerns about data reliability and the spread of misinformation. Given these ethical challenges and system limitations, how can the risks be mitigated and the responsible use of GAI promoted among fact-checkers?
The results of this qualitative and quantitative study on the challenges of fact-checking the Russian-Ukrainian war were presented at the EDMO Scientific Conference 2024.
How do European news organisations and self-regulation bodies frame ethical practices? A comparative analysis of guidelines, recommendations, and principles published in 12 EU countries.
Fact-checking tools are part of the fact-checker's apparatus. In this research, we provide evidence on the conditions under which fact-checking tools are used, mobilizing a theoretical framework that combines the epistemology of use with concepts from user experience.
The development of AI technologies highlights the need for robust AI literacy, not only to use AI systems within newsrooms but also to investigate algorithm-driven societies. And it all starts with data.
Quantifying the qualitative. Generative AI, through large language models, has become a cheap and quick method of generating misleading or fake stories. However, producing false or inaccurate results is not always intentional, as machine-generated content is subject to “artificial hallucinations”.
The IDL Index is a human-based judgment metric for assessing the factuality of machine-generated content. It is a language-agnostic tool designed to evaluate content generated by large language model systems in the context of academic research. To what extent do generative AI tools adapt to facts or create “artificial hallucinations”?