Journalists use AI technologies daily without necessarily understanding what’s going on behind the screen. From research to translation, transcription, and even automated content creation, AI is woven into the fabric of modern journalism. The question, then, is: What does it mean to train journalists on AI?

Training journalists on AI isn’t just about showing them how to use tools; it’s first and foremost about building literacy around the technology. This is where things get tricky, especially when organising practical workshops on AI. What can be taught without oversimplifying (most people can already use automated transcription or translation tools), and what is too specialised (how many journalism students are ready to learn R or Python for data or text mining)? We can all acknowledge that large language models (LLMs) are only one part of AI and are not suitable for every kind of task.
The aim is not to turn journalists into machine learning engineers. Their core focus is storytelling, critical thinking, and ethical reporting rather than coding complex algorithms, which demands labour-intensive learning. The most pragmatic approach is therefore to think about how to facilitate collaborations between journalists and data scientists or ML engineers, and how to engage critically with AI – even with tools that look simple and harmless. In a previous post, I explored the key skills journalists must develop in the age of AI, especially amid the current hype surrounding the technology.
AI was not born with ChatGPT; these technologies have been used in the news media sector for years to fulfil specific tasks. While AI tools have long been employed in content automation, data analysis, and personalised recommendations, they traditionally operated behind the scenes. Only in recent years, with the rise of more advanced AI systems, has the technology become a central part of augmenting journalism practices, from automated reporting and content generation to advanced data-driven storytelling and multimedia verification.
However, many professionals are still trying to figure out how to use ChatGPT and other LLMs effectively. They acknowledge that the technology, often marketed as a game-changer, is far from a silver bullet that will solve every problem journalists encounter. LLMs struggle to deliver quality content: bias, hallucinations, and an inability to verify facts or distinguish truth from falsehood make their integration into journalism a complex challenge. These issues are far from new, and far from solved.
What does it mean to train journalists on AI?
Understanding the fundamentals of AI is important for recognising both the advantages and limitations of these technologies. That involves knowing how to use AI effectively, assess it critically, and engage with it ethically while understanding its societal impacts – in other words, developing AI literacy. Training journalists on AI is therefore not only about getting hands-on with the technology; it’s also about helping journalists grasp how AI fits into their work, not as a replacement but as a tool or strategy to augment their practices or improve how news is disseminated.
Here’s what this kind of training might involve.
1. Understanding AI technologies
Journalists need a solid foundation in AI concepts such as machine learning, natural language processing (NLP) and computer vision. Understanding how these technologies work – and how they are used in journalism – allows them to make informed decisions about which AI technology or tool to use for specific tasks in the news production and distribution process, from automating routine tasks to enhancing content creation, verifying multimedia content and ensuring the responsible and critical use of AI-powered tools. Practical examples can support this kind of teaching, but translating these concepts into hands-on workshops can be challenging, as journalists do not always have the technical background to engage with complex AI tools that require specialist expertise. Is it realistic to expect journalists to master every aspect of the technology within just two years of a master’s degree? Many technical skills belong to specific jobs and specialist knowledge. Why not focus on helping journalists understand the basics so they can work effectively with specialists, without becoming experts themselves – or at least not immediately?
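A workshop demo along these lines does not need to be elaborate. As a minimal sketch (the headlines, labels and test sentence below are invented for illustration), a few lines of Python with scikit-learn can show that a machine learning model “learns” word patterns from labelled examples rather than following hand-written rules:

```python
# A minimal text classifier: the model learns word-label associations
# from labelled examples instead of following explicit rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set: headlines labelled by desk
headlines = [
    "Parliament votes on new budget bill",
    "Minister resigns after coalition talks collapse",
    "Striker scores twice in cup final",
    "Club confirms transfer of star midfielder",
]
labels = ["politics", "politics", "sport", "sport"]

vectoriser = CountVectorizer()          # turns text into word counts
X = vectoriser.fit_transform(headlines)
model = MultinomialNB().fit(X, labels)  # learns from the labelled examples

test = ["Midfielder scores in final minutes"]
print(model.predict(vectoriser.transform(test)))  # -> ['sport']
```

Even a toy example like this opens the discussion that matters most in a newsroom: change the labelled examples and the predictions change with them.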
2. Developing a robust data literacy
No data, no AI. Based on this principle, developing data literacy is just as essential. It is not only about understanding how data is collected, analysed, and used in AI models, but also about being properly trained in working with data, from cleaning to interpretation and reporting. Journalists need to grasp the basics of data sourcing, the importance of clean and unbiased data, and how data-driven decisions are made. Such knowledge is also valuable in AI because it allows journalists to evaluate AI tools and to understand that critical ethical issues accompany any project involving data. This is where workshops and hands-on training are most relevant: practical exercises help journalists move beyond theory and learn to work with data while adhering to best practices and standards – keeping in mind that in every data-driven journalism practice, ethical considerations extend to the data itself.
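A typical exercise might walk through the first pass of cleaning a dataset. The sketch below assumes a hypothetical grants.csv file with amount and recipient columns (both invented for illustration); the point is the workflow, not the specific dataset:

```python
# First-pass cleaning of a hypothetical dataset with pandas
import pandas as pd

df = pd.read_csv("grants.csv")  # hypothetical file for the exercise

# Normalise column names: strip whitespace, lowercase
df.columns = df.columns.str.strip().str.lower()

# Coerce the amount column to numbers; malformed values become NaN
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

# Drop exact duplicates and rows where the amount is missing
df = df.drop_duplicates().dropna(subset=["amount"])

# Sanity checks before any analysis or reporting
print(df["amount"].describe())
print(df["recipient"].value_counts().head(10))
```

Each step invites a journalistic question: why were values malformed, what was lost when rows were dropped, and who is over- or under-represented in what remains.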
3. Recognising bias and ethical challenges
Like all technologies, AI systems are shaped by the data on which they are trained. This data can be biased, reflecting societal inequalities that AI models can unintentionally perpetuate, and journalists need to be aware of these biases when using AI tools. Training journalists in the ethical use of AI ensures they understand the implications of using AI-generated content. Such an approach is rooted in philosophy and touches on core ethical issues of responsibility, fairness and accountability. However, it also has practical applications, such as understanding how recommendation algorithms work, how they shape the news we see, and the potential impact of algorithmic decisions on public opinion and trust. To make it practical, journalists could analyse the biases in AI-driven social media algorithms by tracking how different content is recommended to various profiles, or test generative models to see how readily they reproduce bias and stereotypes or generate made-up content.
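The second exercise can run in a few lines of code. As a minimal sketch (the prompts, the pronoun list, and the choice of GPT-2 via the Hugging Face transformers library are illustrative assumptions, not a validated audit method), one can compare which pronouns a model tends to associate with different occupations:

```python
# Probing a small generative model for gendered associations.
# GPT-2 and these prompts are illustrative choices, not a rigorous audit.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompts = ["The nurse said that", "The engineer said that"]
pronouns = {"he", "she", "they"}

for prompt in prompts:
    outputs = generator(
        prompt,
        max_new_tokens=20,
        num_return_sequences=20,  # sample several continuations
        do_sample=True,
    )
    words = " ".join(o["generated_text"].lower() for o in outputs).split()
    counts = Counter(w for w in words if w in pronouns)
    print(prompt, dict(counts))
```

Skewed counts across runs are a conversation starter about training data rather than proof of discrimination – which is itself a useful lesson in interpreting algorithmic evidence.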
4. Understanding the role of AI in society
AI technologies are shaping society and influencing many decision-making processes. From shaping public opinion through personalised content recommendations to enabling the rapid spread of misinformation, AI’s role is increasingly ambivalent. Training journalists on AI means helping them recognise the societal impact of the technology, both good and bad. While AI can improve the efficiency of journalism – automating tasks and boosting content creation – it can also fuel fake news and biased reporting and undermine trust in the media. In addition, AI is increasingly used in social security, justice, and human resources, where it can lead to biased or discriminatory outcomes.
Practical implications
AI tools have limitations: they lack the nuance, creativity, and critical thinking that journalists bring. Training, therefore, isn’t about mastering coding or AI but about understanding how the technology works and how it affects journalists’ work and society. Workshops should focus on developing critical thinking skills, such as recognising AI’s limitations, spotting bias, and addressing ethical issues. Practical exercises could include evaluating AI-driven content, testing algorithms, and discussing real-world case studies, all while keeping the content accessible and relevant.
Training should be approached as a comprehensive process, acknowledging that there’s much to absorb and understand. It should also be iterative, gradually building on concepts and allowing journalists to process and apply their learning over time. Workshops on using generative AI have their place: they can improve practices through prompting techniques and foster a critical approach attentive to how results change from one prompt to another and from one model to another. Nevertheless, they are only one part of a much more complex equation.