
Toolbox & Academic Notebook in English and French #data #tech #journalism #AI #ethics #UX #dataquality #factchecking

[Paper] A data-centric approach for ethical and trustworthy AI in journalism

This paper aims to bridge the gap between machine learning development and journalism by focusing on three key areas: assessing the quality of machine learning datasets, guiding the development of machine learning solutions through a data-centric approach, and promoting AI and data literacy among journalists. It also examines the development of machine learning systems in newsrooms from the perspective of trustworthy AI, highlighting the need to reflect ethical journalism standards and their corollary, high data quality.
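
To make the data-centric angle concrete, here is a minimal, illustrative sketch of the kind of dataset quality checks such an approach might start from (completeness, duplication, label balance). The column names, the "label" field and the sample records are hypothetical placeholders, not material from the paper.

```python
# Illustrative sketch only: a few dataset quality checks that a data-centric
# workflow might run before training a newsroom ML model. Column names,
# thresholds and the "label" field are hypothetical placeholders.
import pandas as pd

def dataset_quality_report(df: pd.DataFrame, label_col: str = "label") -> dict:
    """Return simple completeness, duplication and label-balance indicators."""
    report = {
        # Share of missing cells per column (completeness).
        "missing_ratio": df.isna().mean().to_dict(),
        # Share of exactly duplicated rows (redundancy).
        "duplicate_ratio": float(df.duplicated().mean()),
    }
    if label_col in df.columns:
        # Relative class frequencies (balance / representativeness).
        report["label_distribution"] = (
            df[label_col].value_counts(normalize=True).to_dict()
        )
    return report

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"headline": ["a", "b", "b", None],
         "label": ["news", "opinion", "opinion", "news"]}
    )
    print(dataset_quality_report(sample))
```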


[Paper] Screens as Battlefields: Fact-Checkers’ Multidimensional Challenges in Debunking Russian-Ukrainian War Propaganda

This study examines the challenges fact-checkers face when dealing with war propaganda and how their socio-professional contexts influence these obstacles.


[Conference] Dealing with biases and hallucinations: The ethical uses of (G)AI tools in the European news media sector

Generative AI (GAI) systems have demonstrated their ability to support, augment or take over various tasks, including intellectual activities such as brainstorming and writing. However, the challenge lies in integrating journalistic values, as these systems may rely on biased, unbalanced or copyrighted data during training, which hinders their alignment with ethical journalistic standards.
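
As a purely illustrative sketch of the "unbalanced data" issue mentioned above, the snippet below quantifies how skewed a training corpus is across sources; the "source" field and the example records are assumptions, not material from the talk.

```python
# Minimal sketch, not the authors' method: quantifying imbalance in a training
# corpus by source, one of the data issues raised above.
# The "source" field and the example records are hypothetical.
from collections import Counter

def source_imbalance(records: list[dict], key: str = "source") -> dict:
    """Return each source's share of the corpus and a simple imbalance ratio."""
    counts = Counter(r.get(key, "unknown") for r in records)
    total = sum(counts.values())
    shares = {src: n / total for src, n in counts.items()}
    # Ratio between the most and least represented sources (1.0 = perfectly balanced).
    imbalance = max(counts.values()) / min(counts.values())
    return {"shares": shares, "max_min_ratio": imbalance}

if __name__ == "__main__":
    corpus = [{"source": "agency_a"}] * 80 + [{"source": "agency_b"}] * 20
    print(source_imbalance(corpus))
```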


Artificial intelligence and journalism: guidelines for ensuring information quality

The rise of generative artificial intelligence (GAI) technologies has led many news media and professional organisations to consider how to oversee the responsible journalistic use of these technologies, given the risks they pose to the quality and diversity of information. This study analyses 36 recommendations and guidelines published in Western and Northern Europe from a cross-ethical perspective on journalism and AI.


[Preprint (accepted)] News Aggregation

This paper explores the multifaceted landscape of news aggregation, highlighting its diverse practices and the resulting implications for information dissemination, narrative construction and public engagement.


[Paper] Striking the Balance in Using LLMs for Fact-Checking: A Narrative Literature Review

The launch of ChatGPT at the end of November 2022 triggered a general reflection on its benefits for supporting fact-checking workflows and practices. Caught between excitement over the availability of AI systems that no longer require mastery of programming skills and the prospect of a new field of experimentation, academics and professionals foresaw the benefits of such technology. Critics, however, have raised concerns about the fairness of the data used to train Large Language Models (LLMs), as well as the risk of artificial hallucinations and the proliferation of machine-generated content that could spread misinformation.
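
As a rough sketch of one possible safeguard in this space, the snippet below flags LLM-generated sentences that lack lexical support in the evidence documents supplied to the model, a crude guard against artificial hallucinations. The tokenisation and the 0.5 overlap threshold are simplistic assumptions of mine, not a method taken from the review.

```python
# Rough sketch, not a method from the review: flag answer sentences whose word
# overlap with the supplied evidence documents is too low, as a crude
# hallucination check for LLM-assisted fact-checking.
import re

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def unsupported_sentences(answer: str, evidence: list[str], threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose overlap with every evidence document stays below the threshold."""
    evidence_tokens = [_tokens(doc) for doc in evidence]
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = _tokens(sentence)
        if not words:
            continue
        # Best overlap ratio against any single evidence document.
        support = max((len(words & ev) / len(words) for ev in evidence_tokens), default=0.0)
        if support < threshold:
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    evidence = ["The summit took place in Geneva in May 2023."]
    answer = "The summit took place in Geneva in May 2023. It was attended by 40 heads of state."
    print(unsupported_sentences(answer, evidence))  # flags the unsupported second sentence
```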


[Poster] Striking the Balance in Using Generative AI for Fact-Checking

While GAI technology is perceived as a tool to improve fact-checking, it raises concerns about data reliability and the spread of misinformation. Given these ethical challenges and system limitations, how can we mitigate these risks and promote the responsible use of GAI among fact-checkers?