Transparency is a multifaceted term defined as seeing through, revealing the hidden, and making the invisible available and the visible accessible (Ball, 2009; Metelli, 1974; Michener & Bersch, 2013; Turilli & Floridi, 2009). It initially emerged as an alternative to the contested idea of objectivity (Ward, 2014) and later gained prominence as technological innovations were integrated into newsrooms and open journalism practices such as data journalism and fact-checking developed (Humprecht, 2020; Karlsson, 2010; Ekström & Westlund, 2019; Zamith, 2019). Often hailed as a core journalistic value (Koliska, 2021), transparency refers both to uncovering the factors that traditionally influence news production (Allen, 2008) and to a set of normative guiding principles (Gynnild, 2014). As an ideal, it serves as a lever to promote credibility, accountability, responsibility, trust and honesty (Phillips, 2013; Singer, 2007).
At the same time, transparency is regularly criticised for its potential drawbacks. Equating it with responsibility and accountability raises concerns about its broader societal implications and calls for a distinction between journalistic roles and individual responsibilities (Ward, 2014). Moreover, its application challenges established journalistic norms and raises questions about the handling of sources of information (Phillips, 2013). Susceptible to bias at both individual and institutional levels, transparency often shapes perceptions of relevance, newsworthiness and truth in journalism (Craft & Vos, 2021). Furthermore, the very act of being transparent can obscure the complexities underlying decision-making processes, potentially undermining audiences' understanding and appreciation of these critical decisions (Curry & Stroud, 2021).
Transparency is a critical part of debates around AI-driven journalism because it extends beyond purely professional practices to the algorithmic processes at work, which form the other part of the equation. In AI ethics, transparency is not synonymous with explainability (Burkart & Huber, 2021; Ferrario & Loi, 2022), which consists of explaining the internal mechanisms of AI systems and their decision-making processes, often seen as black boxes (Rai, 2020; Graziani et al., 2022). Algorithmic transparency and explainability are necessary not only to better integrate AI into professional practices but also as a matter of accountability. They also bear on the ethical principle of fairness: AI systems are not free of bias because the humans behind them are not (Diakopoulos & Koliska, 2017; Ananny & Crawford, 2018).
Transparency alone cannot address the intricacies of AI decision-making processes upstream and downstream of AI systems. First, the ideological association of transparency with unbiased or fully comprehensible processes contrasts sharply with the realities of AI. The overwhelming complexity, inherent biases and constant evolution of these systems stand in the way of translating transparency into understandable and trustworthy outcomes, even though adopting AI systems requires trust (Jacovi et al., 2021). Second, even when transparency and explainability are achieved together, the results are not necessarily trustworthy: audiences often lack the contextual knowledge needed to interpret technical disclosures, and this lack of contextual understanding contributes to misinterpretation and misunderstanding. Consequently, maintaining an acceptable level of transparency and explainability requires continuously training users on complex, evolving systems.
Transparency and explainability are considered critical in journalism-AI ethics, but they can create a false impression of control. Even when we think we have ‘explained’ how input A produces output B, the underlying pathways are often too complex to understand, even for the engineers who built the system. Rather than trying to comprehend black-box systems that will always escape full human understanding, a more pragmatic approach is to focus on what can actually be controlled: the data that feeds AI. Prioritising high-quality, well-curated datasets enables journalists and developers to meaningfully influence the quality of outcomes and uphold ethical standards. A step in this direction is to adopt data-centric approaches that prioritise the quality of input data over model complexity, although these are primarily applicable to ‘traditional’ AI systems.
Our study on the development of a data-centric approach in the context of AI-driven journalism demonstrated that implementing a data quality framework can mitigate bias and facilitate the integration of journalistic ethical principles into AI workflows (Dierickx et al., 2024). Such a framework allows for better control over the elements that feed the system, and this control is achievable in practice. In contrast, fully understanding or managing the behaviour of highly complex neural networks with billions of parameters remains largely out of reach. It is perhaps here that we can recognise the naivety of discourses advocating for the complete control and validation of AI technologies in journalism, as if AI behaviour could be fully understood and regulated.
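The logic of a data-centric workflow can be illustrated with a minimal sketch: quality rules are applied to records before they feed an AI system, and rejected records keep the reasons for their exclusion. The checks, field names and taxonomy below are hypothetical illustrations, not the framework described in Dierickx et al. (2024).

```python
# Illustrative sketch of data-centric quality control: records are
# screened against explicit, auditable rules before entering an AI
# pipeline. All rules and field names here are hypothetical examples.

def check_record(record):
    """Return a list of quality issues found in a single record."""
    issues = []
    if not record.get("text", "").strip():
        issues.append("empty text")
    if record.get("source") is None:
        issues.append("missing source attribution")
    if record.get("label") not in {"politics", "economy", "culture"}:
        issues.append("label outside the agreed taxonomy")
    return issues

def filter_dataset(records):
    """Split records into accepted and rejected, keeping the reasons."""
    accepted, rejected = [], []
    for record in records:
        issues = check_record(record)
        (accepted if not issues else rejected).append((record, issues))
    return accepted, rejected

records = [
    {"text": "Budget vote passes", "source": "parliament", "label": "politics"},
    {"text": "", "source": None, "label": "sports"},
]
accepted, rejected = filter_dataset(records)
print(len(accepted), len(rejected))  # prints "1 1"
```

Because every exclusion carries an explicit reason, the same rules that improve input quality also produce a record of editorial decisions that can later support accountability.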
Data-centric approaches have their own limitations with regard to transparency: to what extent should audiences be informed about the numerous data quality decisions made earlier in the process, from selection and labelling to cleaning and exclusion? While transparency alone is far from sufficient, multiplying disclosures can itself become problematic, overwhelming people with technical details that risk blurring the message.
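The tension between full disclosure and information overload can be made concrete: the same curation log can back both an exhaustive audit trail for experts and an aggregated, audience-facing summary. A minimal sketch, in which all step names, actions and reasons are hypothetical:

```python
# Illustrative sketch: recording data-curation decisions so they can be
# disclosed at different levels of detail. All entries are hypothetical.
from collections import Counter

curation_log = [
    {"step": "selection", "action": "excluded", "reason": "duplicate article"},
    {"step": "cleaning", "action": "modified", "reason": "stripped boilerplate"},
    {"step": "labelling", "action": "excluded", "reason": "ambiguous topic"},
    {"step": "selection", "action": "excluded", "reason": "duplicate article"},
]

def full_disclosure(log):
    """Every decision with its reason, for internal audit or expert review."""
    return log

def summary_disclosure(log):
    """Aggregated counts per (step, action), for audience-facing disclosure."""
    return Counter((entry["step"], entry["action"]) for entry in log)

print(summary_disclosure(curation_log))
```

Choosing which of the two views to publish is itself an editorial decision: the summary avoids drowning audiences in technical detail, while the full log remains available when accountability demands it.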
References
Allen, D. S. (2008). The trouble with transparency: The challenge of doing journalism ethics in a surveillance society. Journalism Studies, 9(3), 323–340.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989.
Ball, C. (2009). What is transparency? Public Integrity, 11(4), 293–308.
Burkart, N., & Huber, M. F. (2021). A survey on the explainability of supervised machine learning. Journal of Artificial Intelligence Research, 70, 245–317.
Craft, S., & Vos, T. P. (2021). The ethics of transparency. In The Routledge Companion to Journalism Ethics (pp. 175–183).
Curry, A. L., & Stroud, N. J. (2021). The effects of journalistic transparency on credibility assessments and engagement intentions. Journalism, 22(4), 901–918.
Diakopoulos, N., & Koliska, M. (2017). Algorithmic transparency in the news media. Digital Journalism, 5(7), 809–828.
Dierickx, L., Opdahl, A. L., Khan, S. A., Lindén, C. G., & Guerrero Rojas, D. C. (2024). A data-centric approach for ethical and trustworthy AI in journalism. Ethics and Information Technology, 26(4), 64.
Ekström, M., & Westlund, O. (2019). Epistemology and journalism. Oxford University Press.
Ferrario, A., & Loi, M. (2022, June). How explainability contributes to trust in AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1457–1466).
Graziani, M., Dutkiewicz, L., Calvaresi, D., Amorim, J. P., Yordanova, K., Vered, M., Nair, R., Abreu, P. H., Blanke, T., Pulignano, V., Prior, J. O., Lauwaert, L., Reijers, W., Depeursinge, A., Andrearczyk, V., & Müller, H. (2022). A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artificial Intelligence Review, 56(4), 3473–3504.
Gynnild, A. (2014). Surveillance videos and visual transparency in journalism. Journalism Studies, 15(4), 449–463.
Humprecht, E. (2020). How do they debunk “fake news”? A cross-national comparison of transparency in fact checks. Digital Journalism, 8(3), 310–327.
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021, March). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 624–635).
Karlsson, M. (2010). Rituals of transparency: Evaluating online news outlets’ uses of transparency rituals in the United States, United Kingdom and Sweden. Journalism Studies, 11(4), 535–545.
Koliska, M. (2021). Transparency in Journalism. In Oxford Research Encyclopedia of Communication.
Metelli, F. (1974). The perception of transparency. Scientific American, 230(4), 90–99.
Michener, G., & Bersch, K. (2013). Identifying transparency. Information Polity, 18(3), 233–242.
Opdahl, A. L., Tessem, B., Dang-Nguyen, D. T., Motta, E., Setty, V., Throndsen, E., … & Trattner, C. (2023). Trustworthy journalism through AI. Data & Knowledge Engineering, 146, 102182.
Phillips, A. (2013). Transparency and the new ethics of journalism. In The future of journalism (pp. 307–316). London: Routledge.
Rai, A. (2020). Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141.
Schauer, F. (2011). Transparency in three dimensions. University of Illinois Law Review, 2011(4), 1339–1357.
Singer, J. B. (2007). Contested Autonomy: Professional and popular claims on journalistic norms. Journalism Studies, 8(1), 79–95.
Turilli, M., & Floridi, L. (2009). The ethics of information transparency. Ethics and Information Technology, 11, 105–112.
Ward, S. J. (2014). The magical concept of transparency. In Ethics for digital journalists (pp. 57–70).
Zamith, R. (2019). Transparency, interactivity, diversity, and information provenance in everyday data journalism. Digital Journalism, 7(4), 470–489.