On June 5 and 6, the workshop “Reporting News in a Disbelieving Age” was held in Toronto, Canada, to examine journalism’s challenges in an era of widespread mistrust and misinformation. One of the featured panels, “How May Near-Term Tech Advances Foster the Authenticity of News?”, explored the promises and limitations of emerging technologies, particularly artificial intelligence (AI), in strengthening the credibility and authenticity of journalistic work. The discussion focused on how innovation can support core professional values such as transparency, accountability, and editorial independence while acknowledging these tools’ ethical and practical challenges. This blog post presents my preparatory notes.
The use of AI technologies in journalism is deeply rooted in the long-standing tradition of using computers and data to support journalism. However, the real push towards adopting AI in newsrooms began around twenty years ago, driven mainly by two factors. The first was the desire to automatically generate news stories from structured data using rule-based systems, known as news automation. The second was the use of recommender systems to support editorial marketing and boost audience engagement. Today, it is striking that many journalists already use AI tools in their daily work, such as search engines, transcription services and translation tools, often without realising that these are AI-powered systems.
What is AI?
Let’s start with something simple but often misunderstood: what AI is — and what it isn’t. AI isn’t one single thing. It’s an umbrella term that covers technologies designed to mimic certain aspects of human intelligence, such as recognising patterns, making predictions and generating language. When we talk about AI in journalism, we’re referring to tools such as machine learning and large language models, including ChatGPT, as well as simpler rule-based systems used to automate processes like summarising, tagging and personalising content.
According to the EU AI Act, AI is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
AI isn’t magic, nor is it human. It doesn’t understand context the way we do. It can’t check facts, verify sources or make ethical judgements. It works by detecting patterns in data and producing outputs based on probabilities. That also means that it is only as good — or as biased — as the data it is trained on.
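To make the ‘patterns and probabilities’ point concrete, here is a deliberately tiny sketch in Python. It is not how a modern language model is built (those use neural networks trained on vast datasets), but it illustrates the underlying principle: the system counts which word tends to follow which, and then picks a next word according to those observed frequencies, with no understanding, fact-checking or judgement involved.

```python
import random
from collections import defaultdict, Counter

# Toy corpus for illustration only; a real model is trained on vast amounts of text.
corpus = "the reporter checks the source the editor checks the facts".split()

# Count which word follows which (a simple bigram frequency table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Pick a plausible next word from observed frequencies, not from understanding."""
    counts = following.get(word)
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(following["checks"])   # Counter({'the': 2}): pure pattern counting
print(predict_next("the"))   # e.g. 'reporter', 'source', 'editor' or 'facts'
```

If the toy corpus were biased or wrong, the predictions would be too, which is exactly the point about training data above.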
Not everything that’s automated is AI, nor are all algorithms AI. This distinction matters, and it is why journalists need to be AI literate: understanding what these tools can and can’t do enables them to make informed decisions, identify risks and promote the responsible use of these technologies.
How are newsrooms using AI (reality-based vs what the audience thinks)?
That’s a complicated question, because AI can be used at almost every stage of news production and distribution and is often invisible to journalists and audiences alike. We tend to imagine AI as a disruptive force, yet it is already embedded in everyday tools: AI-powered search engines for background research, transcription tools like Whisper, automated translation, and systems for tagging and archiving content. AI can help generate alerts for breaking news, identify trends on social media, suggest headlines, personalise newsletters, and assist with fact-checking by recognising patterns or spotting inconsistencies.
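To give a concrete sense of one of these everyday uses, below is a minimal transcription sketch using the open-source Whisper model. The file name is a placeholder, and in practice newsroom workflows add speaker identification, human review and correction before anything is quoted.

```python
# Minimal transcription sketch with the open-source `openai-whisper` package
# (pip install openai-whisper). "interview.mp3" is a placeholder file name.
import whisper

model = whisper.load_model("base")        # smaller models are faster but less accurate
result = model.transcribe("interview.mp3")

print(result["text"])                     # full transcript as one string

# Time-stamped segments are handy for finding and checking quotes against the audio.
for segment in result["segments"]:
    print(f'{segment["start"]:.1f}s to {segment["end"]:.1f}s: {segment["text"]}')
```

The machine produces a draft; a journalist still verifies names, figures and quotes before publication.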
In most cases, the intention is not to replace journalists, but to solve practical problems: saving time, managing large volumes of data and tailoring content to different audiences. Audiences, however, often assume that AI in journalism means robots writing articles on their own, with no human oversight. The reality is much more hybrid and pragmatic. These tools are designed to support editorial goals, but transparency about when and how they are used is essential to maintaining trust. At the same time, audience reluctance to accept AI in news production can make openness a double-edged sword: disclosing AI use may itself put some readers off, even when the use is responsible.
What disqualifies AI tools or AI usage from journalistic work?
That’s also a complex question because the factors that disqualify an AI tool from journalistic use are related to the output and its impact on information quality. If a tool distorts facts, fabricates content or cannot be verified, this undermines journalistic integrity. Large language models are especially problematic in this respect, as they blur the line between fact and fiction. While they produce fluent text, they are not grounded in evidence, and their outputs can include hallucinations, biased narratives or factual errors.
Beyond the technical risks, we must also ask: who owns the tool? What values are embedded in the system? What is the ideological or political context behind it? Consider DeepSeek, a Chinese LLM: it avoids politically sensitive topics such as the Communist Party, stores user data, and is designed with built-in censorship. These are incompatible with journalism’s core values of independence, transparency, and public accountability.
Therefore, it’s not just about accuracy and automation; it’s also about trust, control, and whether these tools align with the ethical principles of journalism. If journalists don’t understand how a system works, what data it has been trained on, or who controls it, there is a risk of introducing invisible forms of bias, or even censorship, into their work.
Does trust matter (to us in newsrooms, the audience, and the AI)?
Trust is central, but it manifests very differently in journalism than in other fields. In technical circles, it’s a recognised principle that you shouldn’t use AI unless you trust it. But in journalism, this principle is constantly being questioned. Last year, I conducted research with European fact-checkers, most of whom said they use generative AI tools, not because they trust them but because they feel pressured to experiment or want to see how these tools might speed up the often time-consuming process of fact-checking. At the same time, they made it clear that they do not trust the outputs they receive or the tech companies behind the tools. Interestingly, the fact-checkers who had received training or were working in newsrooms with clear guidelines were also the most critical, highlighting the importance of AI literacy in promoting informed and responsible use.
Transparency is often invoked as a universal principle, but its limitations and consequences deserve careful consideration. While transparency is about making processes and tools visible, explainability is about making the underlying mechanisms understandable. Without explainability, transparency risks becoming mere disclosure without intelligibility. Explainability, by contrast, is considered essential for enabling the critical use of technology, because it empowers users to assess, question and challenge automated outputs.
Neither transparency nor explainability guarantees accuracy or reliability. Furthermore, they do not necessarily illuminate the normative or strategic choices embedded in system design. These include editorial priorities, data selection, training biases and the intended role of automation in newsroom workflows. Therefore, building public trust in AI-assisted journalism requires more than technical disclosures. It demands ethical reflection, participatory dialogue and reaffirmation of human editorial responsibility at every stage of news production.
The hype is real, but newsroom budgets are tighter than ever. Why are we paying for AI tools? Will our audience appreciate it or support these decisions financially?
The hype around AI is real, but so are the financial constraints in newsrooms. So the question ‘Why are we paying for AI tools?’ is not just about money; it is also about strategy. These tools can help us solve everyday practical problems such as transcribing, summarising, tagging archives, translating quickly and detecting trends. They can save time and enable smaller teams to achieve more. However, let’s not forget that there are still many misconceptions out there. The hype surrounding generative AI, and the way it is marketed, makes people believe it will save journalism and fix structural issues such as underfunding, burnout and shrinking newsrooms. But that’s a myth. AI won’t solve the crisis in journalism. While it can provide support, it cannot replace the need for public investment, editorial vision and human responsibility.
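As an illustration of the kind of practical, time-saving task described above, here is a minimal summarisation sketch using the Hugging Face transformers pipeline. The model name is one common open choice rather than a recommendation, the input text is a placeholder, and the output is a draft that still needs editorial checking.

```python
# Minimal summarisation sketch (pip install transformers torch).
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

article_text = (
    "Placeholder text standing in for a long report, press release or transcript "
    "that a newsroom might want condensed into a few sentences for a briefing."
)

summary = summariser(article_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])   # a draft summary, to be reviewed by an editor
```

Tools like this save minutes per item, not the profession; the editorial judgement about what matters and what is accurate stays with humans.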
Many people, including journalists, still don’t know what to do with these tools. Expectations are often greater than what the technology can deliver.
Whether audiences will support this financially depends on various factors. While people generally dislike the idea of AI replacing journalists, they are more accepting of AI assisting them. The key here might be explainability. Some news outlets have started publishing ‘This is how we use AI’ pages on their websites, which is a positive development: we are more likely to maintain audience trust if we clearly explain how and why AI is used and ensure that human editorial responsibility remains at the centre.