I was honoured to be invited to the 4th Regional Meeting of Press Councils of South-East Europe and Türkiye, held in Ohrid, North Macedonia, on 19–20 May 2025. The event brought together press and media councils from across the region under the theme “Building Trust in Media in South-East Europe: Support to Journalism as a Public Good.” It forms part of an EU-funded project dedicated to promoting ethical journalism and strengthening media integrity in the region, with the Macedonian Press Council hosting the event and UNESCO providing key support.
As an invited expert, I had the opportunity to share my research at the crossroads of artificial intelligence, journalism, and fact-checking, carried out at the University of Bergen within the framework of the Nordic Observatory for Digital Media and Information Disorder (NORDIS), in a presentation titled “From facts to fake: Journalism & information integrity in the age of AI”. It was a valuable space to exchange perspectives with representatives of self-regulatory bodies on the evolving role of AI technology in newsrooms.
During this two-day meeting, the press councils adopted a regional declaration on the ethical use of AI in the media. The resulting Ohrid Declaration reaffirms a commitment to upholding editorial responsibility, human judgement and professional ethics as AI becomes increasingly integrated into journalistic workflows. The message is clear: AI should serve journalism, not replace journalists.
This meeting marked a step forward in regional collaboration and a meaningful moment to reflect on the values that uphold ethical AI-driven journalism. I’m grateful for the opportunity to have been part of these critical conversations and look forward to seeing how the principles outlined in the Ohrid Declaration will shape the future of responsible media in South-East Europe and beyond.
It is worth noting that the ethical press codes of Kosovo and Serbia were updated last year to include specific principles on AI, and other countries in the region are currently adapting their codes of ethics. From this perspective, the Ohrid Declaration represents a strong and timely regional commitment to ethical standards in the age of AI.
The Ohrid Declaration: 10 core principles for the ethical use of AI in journalism
The Ohrid Declaration outlines a shared vision for the responsible integration of AI in media, centred on transparency, accountability, and human oversight. The 10 principles are as follows:
- AI as a support, not a substitute: AI should assist journalists and newsrooms but never replace human judgement, editorial responsibility, or professional ethics.
- Transparency and labelling of AI-generated content: Media must disclose when AI creates, selects, or modifies content, especially when it significantly alters the message or form. Deepfakes and synthetic media must be labelled and used only with ethical justification and public context.
- Editorial responsibility and risk assessment: Human editors and journalists are ultimately responsible for media content. Risk assessments should be conducted before deploying AI in sensitive contexts such as elections or public crises.
- Credibility and fact-checking: AI-supported content must be held to the same standards of accuracy, verification, and ethical journalism as any other content.
- Media pluralism and protection of vulnerable groups: Journalists should be alert to the risk of AI reinforcing bias, stereotypes, or information silos that limit access to diverse perspectives.
- Accountability and internal oversight: Media organisations must develop clear internal guidelines to govern the ethical and transparent use of AI systems.
- Privacy and data protection: The use of AI must comply with data protection laws. Personal data must not be collected or processed without informed consent and a clear legal basis.
- Education and AI literacy: There is an urgent need to equip journalists, media professionals, and the public with the skills to understand, question, and safely use AI technologies.
- Modernising ethical frameworks: Effective mechanisms must be in place to handle complaints and violations relating to the unethical or misleading use of AI in journalism.
- Media as guardians of democracy: AI should be used to support journalism’s democratic role, enhancing transparency and accountability rather than obscuring it.
The declaration calls for upgrading ethical codes, improving cooperation among self-regulatory bodies, investing in continuous training, and promoting AI literacy for media professionals and the wider public. It’s a strong step towards ensuring AI supports journalism as a public good.