
[Conference] Framing the ethical use of AI in journalism

November 7, 2023

This statement has been prepared for the "New Horizons in Journalism – Between Humans and Artificial Intelligence" conference (panel 2 on AI ethics), held on November 8, 2023, in Sofia, Bulgaria. The World Press Institute organised the conference in collaboration with the America for Bulgaria Foundation and the Association of European Journalists in Bulgaria.


The ethical use of AI-based systems in journalism can be approached from the perspectives of news gathering and news production. It also extends to news distribution, through news recommenders and news personalisation systems. These uses fall into two distinct periods, or waves: before ChatGPT and since ChatGPT.

What we can call the first wave of AI in journalism started more than a decade ago, when newsrooms began experimenting with automation, using rule-based systems of varying complexity to generate texts from structured data in specific domains such as sports, elections or financial reports. The idea was to support journalistic practices or to provide audiences with short texts that could be adapted to their characteristics, such as language or geographical location.
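To make this concrete, here is a minimal, hypothetical sketch of how such a first-wave system works: a simple rule (win or draw) selects a template, which is then filled with structured match data and adapted to the audience's language. The data fields, templates and team names are illustrative, not drawn from any actual newsroom system.

```python
# A minimal sketch of first-wave news automation: rule-based templates
# filled with structured data and adapted to the audience's language.
# Field names, templates and team names are hypothetical.

TEMPLATES = {
    ("en", "win"):  "{winner} beat {loser} {ws}-{ls} on {date}.",
    ("en", "draw"): "{home} and {away} drew {hs}-{as_} on {date}.",
    ("fr", "win"):  "{winner} a battu {loser} {ws}-{ls} le {date}.",
    ("fr", "draw"): "{home} et {away} ont fait match nul {hs}-{as_} le {date}.",
}

def report(match: dict, lang: str = "en") -> str:
    """Apply a simple rule (win or draw) and fill the matching template."""
    hs, as_ = match["home_score"], match["away_score"]
    if hs == as_:
        return TEMPLATES[(lang, "draw")].format(
            home=match["home"], away=match["away"],
            hs=hs, as_=as_, date=match["date"])
    winner, loser = ((match["home"], match["away"]) if hs > as_
                     else (match["away"], match["home"]))
    return TEMPLATES[(lang, "win")].format(
        winner=winner, loser=loser,
        ws=max(hs, as_), ls=min(hs, as_), date=match["date"])

print(report({"home": "Levski", "away": "CSKA",
              "home_score": 2, "away_score": 1,
              "date": "5 November 2023"}, lang="fr"))
# -> "Levski a battu CSKA 2-1 le 5 November 2023."
```

Even such a toy example shows why the first wave was ethically tractable: every output can be traced back to a rule and a data field.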

The ethical challenges at that time related to the accuracy of the results and the reliability of the data feeding the systems, and these were relatively easy to deal with. Another challenge was the need to inform audiences about the non-human nature of the author. Although the use of AI technologies in news production was not limited to content generation, ethical debates mostly ignored the other kinds of use, such as support for investigative journalism.

In the field of news recommendation and news personalisation, the main ethical challenges related to users' informed consent and to the ethical and legal requirements of privacy and security. There were also concerns about the danger of creating filter bubbles or echo chambers, at a time when the polarisation of public debate had never been so worrying.

With the rise of ChatGPT, new ethical challenges have emerged. It should be remembered here that ChatGPT has democratised access to AI-based tools: it is reportedly used in more than 50% of newsrooms worldwide.

The system itself cannot be considered ethical: it relies on copyrighted data used without attribution or compensation, on biased data, and on data annotated by low-paid workers, while the environmental costs of training such complex systems are compounded by the environmental costs of using them.

Large language models such as ChatGPT are also known to generate so-called 'artificial hallucinations': information or realistic-sounding outputs that do not match any real-world input. As such, they risk reinforcing existing biases or spreading misinformation, whereas trustworthy news requires accurate and reliable information.

At the University of Bergen, we analysed 34 guidelines and recommendations on the ethical use of AI in journalism in Europe. These texts came from 11 news media organisations and self-regulatory bodies, and the majority were published after the launch of ChatGPT. Overall, we found that transparency was at the forefront of ethical considerations: audiences should be informed whenever AI-based systems are used.

Transparency is critical to maintaining the accuracy, reliability and fairness of AI systems, which are the levers of accountability and trust. All decisions should therefore remain under human control and responsibility, meaning these technologies must be treated as part of the editorial process. However, opening the black box of journalism, or of AI-driven journalism, is not enough, because transparency does not equal explainability.

Many of these guidelines also emphasise the need to ensure accuracy, not only because of the phenomenon of 'artificial hallucinations' but also because AI-based systems are being used to create and disseminate false content. Moreover, it has never been easier to produce sophisticated fake content that is increasingly indistinguishable from real content. Although machine-generated content detectors exist, their results cannot be considered reliable because they produce many false negatives and false positives. Besides, what is the point of such detectors when LLMs can be used either to inform or to disinform?

In the AI Act, the European Commission adopted a risk-based approach to framing the development of AI-based technologies in Europe. The recommendations adopted by the French press council follow the same lines: low-risk technologies include tools that do not significantly impact news information, such as audio transcription. Moderate-risk systems are those likely to affect the quality of the news, including automatic translation, factual summaries and automated reporting. Finally, high-risk systems generate content whose realism may mislead audiences or that presents information contrary to the facts, and they should be avoided.
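As an illustration, this tiered logic can be sketched as a simple lookup. The tool categories below paraphrase the text above; the mapping is hypothetical, not an official or exhaustive classification.

```python
# Illustrative sketch of the risk-based classification described above.
# Categories paraphrase the recommendations; the mapping is hypothetical.

RISK_TIERS = {
    "low":      ["audio transcription"],          # no significant impact on news information
    "moderate": ["automatic translation",         # likely to affect the quality of the news
                 "factual summaries",
                 "automated reporting"],
    "high":     ["realistic generated content"],  # may mislead audiences; to be avoided
}

def risk_tier(tool: str) -> str:
    """Return the risk tier of a tool category, or 'unclassified'."""
    for tier, tools in RISK_TIERS.items():
        if tool in tools:
            return tier
    return "unclassified"

print(risk_tier("automatic translation"))  # -> moderate
```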

New guidelines and recommendations have been published since this research was conducted. As far as we know, the Finnish broadcaster Yle is the only news organisation to consider environmental issues.

In summary, AI-based systems offer great opportunities to improve journalistic workflows and support journalistic practices. However, their use requires a critical attitude towards technology, given that humans should and must have the last word.

Furthermore, these ethical challenges are only part of the equation, as ethical considerations also relate to working conditions, the division of labour between humans and machines, the phenomena of deskilling and upskilling, and the need for continuous training. They also concern increasing workloads and, ultimately, the question of whether AI augments, transforms or replaces journalism.

Other issues relate more to the need to promote data and AI literacy within newsrooms and digital media literacy among audiences. In all cases, approaching AI ethics in journalism requires a holistic approach, one that considers the interdisciplinarity of the field in order to tackle its socio-technological challenges.

# # #