
[Conference] Dealing with biases and hallucinations: The ethical uses of (G)AI tools in the European news media sector

Laurence Dierickx

2024-09-26

The widespread integration of large language models into journalism has reignited the debate about the ethical implications of using artificial intelligence. Generative AI (GenAI) systems, which have broadened access to AI in newsrooms, have demonstrated their ability to support, augment or take over various tasks, including intellectual activities such as brainstorming and writing. However, the challenge lies in integrating journalistic values, as these systems may rely on biased, unbalanced or copyrighted data during training, which hinders their alignment with ethical journalistic standards. In addition, the tendency of large language models to produce content that is not grounded in real-world input (the phenomenon of artificial hallucinations) raises concerns about the potential impact on trust in journalism. GenAI systems may also reinforce existing biases or inadvertently spread disinformation, threatening the quality and trustworthiness of news processes.

Presented at ECREA 2024, the 10th European Communication Conference, "Communication & social (dis)order", September 25, 2024, Ljubljana, Slovenia

 
