Toolbox & Academic Notebook in English and French #data #tech #journalism #AI #ethics #UX #dataquality #factchecking

[Poster] Striking the Balance in Using Generative AI for Fact-Checking

Although generative AI (GAI) is perceived as a tool to improve fact-checking, the technology raises concerns about data reliability and the spread of misinformation. Given these ethical challenges and system limitations, how can we mitigate these risks and promote the responsible use of GAI among fact-checkers?

[Conference] Screens as Battlefields: Navigating the Challenges of Resources and Tools in Debunking Russian-Ukrainian War Propaganda

The results of this qualitative and quantitative study on the challenges of fact-checking the Russian-Ukrainian war were presented at the EDMO Scientific Conference 2024.

[Panel] The ethical challenges for using (generative) AI in journalism

How do European news organisations and self-regulation bodies frame ethical practices? A comparative analysis of guidelines, recommendations, and principles published in 12 EU countries.

Data. Image: CanStockPhoto

[Paper] Journalism and Fact-Checking Technologies: Understanding User Needs

Fact-checking tools are part of the fact-checker's apparatus. In this research, we provide evidence on the conditions of use of fact-checking tools, mobilizing a theoretical framework that combines the epistemology of use with user experience concepts.

[Keynote] Ethics and information quality in the age of (generative) AI

The development of AI technologies highlights the need for robust AI literacy, not only to use AI systems within newsrooms but also to be able to investigate algorithmic-driven societies. And it all starts with data.

[MASSHINE Conference] Assessing machine-generated content

Quantifying the qualitative. Generative AI, through large language models, has become a cheap and quick method for generating misleading or fake stories. However, producing false or inaccurate results is not always intentional, as machine-generated content is subject to “artificial hallucinations”.

[Tool] Testing the factuality of machine-generated content

The IDL Index is a human-based judgment metric for assessing the factuality of machine-generated content. It is a language-agnostic tool designed to evaluate content generated by large language model systems in the context of academic research. To what extent do generative AI tools adapt to facts or create “artificial hallucinations”?