Presented at the 16th Dubrovnik Media Days, September 30, 2023.
Generative AI, through large language models, has become a cheap and fast way to produce misleading or fake stories. Yet false or inaccurate output is not always intentional, since machine-generated content is prone to “artificial hallucinations”. Determining whether the author is human or not therefore seems pointless, especially as detection methods remain limited. A different perspective is grounded in the tradition of human judgement methods developed in natural language processing (NLP) to assess the qualitative characteristics of machine-generated content. It consists of evaluating the system’s ability to stick to the facts through an adapted, language-independent metric. This tool helps to delineate the limits of generative AI and fosters reflection on what factuality is.