In a dynamic conversation at the STM Innovations Day held at the British Medical Association in December 2023, Sabine Louët, founder of SciencePOD, delves into scientific publishing fraud detection with Cyril Labbé, a prominent figure in the field. As a professor of computer science in the SIGMA team at the Grenoble Informatics Laboratory, Labbé offers valuable perspectives on the challenges and promise of integrating AI tools into the fight against fraudulent scientific publications.

Labbé acknowledges the growing threat that an increasing number of fake papers poses to the scientific literature. He points to the inadequacy of current 'integrity services' within the publishing community and the need for better tools to detect and address these issues efficiently. While such checks have traditionally focused on plagiarism, he notes that concerns are shifting towards image and text manipulation in scientific papers.

The interview sheds light on the reliance, in the pre-ChatGPT era, on paraphrasing tools to mask plagiarism. Labbé highlights the limitations of such tools and the new strategies they prompted for identifying problematic text, including the "Problematic Paper Screener" developed by Guillaume Cabanac and colleagues. This online tool identifies suspect publications and compiles a database of problematic papers, revealing trends across major publishers.
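One signal such screening relies on is the presence of "tortured phrases": awkward machine paraphrases of standard terminology, such as "counterfeit consciousness" in place of "artificial intelligence". The core idea can be sketched as a dictionary lookup; note that the phrase list below is a small illustrative sample, not the actual tool's curated dictionary, and the real screener uses far more sophisticated matching.

```python
# A minimal sketch of tortured-phrase screening. The phrase list is a
# hypothetical sample for illustration only.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "irregular woodland": "random forest",
    "colossal information": "big data",
}

def screen_text(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    lowered = text.lower()
    return [
        (phrase, expected)
        for phrase, expected in TORTURED_PHRASES.items()
        if phrase in lowered
    ]

hits = screen_text(
    "We apply counterfeit consciousness and profound learning to the data."
)
```

A real screener would also need fuzzy matching and context checks to limit false positives, since legitimate prose can occasionally resemble a paraphrased term.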

The conversation takes an insightful turn as Labbé discusses the transformative impact of ChatGPT on scientific paper generation. While recognizing the potential for unethical use by paper mills seeking to boost their output, he also acknowledges ethical applications, such as helping non-native English speakers polish their writing.

Labbé emphasizes the importance of moving beyond merely identifying ChatGPT-generated text towards assessing the scientific validity of the content. Current capabilities are limited to detecting obvious errors, such as artifacts left over from the generation process; human intervention is still required to judge whether the science itself is sound.
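The "obvious errors" mentioned here include chatbot boilerplate accidentally pasted into a manuscript, which simple pattern matching can catch. The fingerprint list below is an illustrative assumption, not an official or exhaustive set:

```python
import re

# A minimal sketch of generation-artifact detection. The fingerprint
# phrases are assumed examples of chatbot boilerplate, not a
# definitive list used by any particular tool.
ARTIFACT_PATTERNS = [
    r"as an ai language model",
    r"regenerate response",
    r"i cannot fulfill this request",
    r"my knowledge cutoff",
]

def find_artifacts(text: str) -> list[str]:
    """Return the fingerprint patterns present in the text (case-insensitive)."""
    return [
        pattern
        for pattern in ARTIFACT_PATTERNS
        if re.search(pattern, text, flags=re.IGNORECASE)
    ]
```

This only catches the crudest cases, which is precisely Labbé's point: fluent, artifact-free text that is scientifically wrong still requires a human expert to detect.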

As the interview unfolds, Labbé advocates a nuanced approach to the application of AI tools, urging the scientific community to explore ways to detect not just the origin of text but also potential hallucinations, fake content, and unreported elements. In this evolving landscape, Labbé underscores the critical role of human verification in ensuring the integrity of scientific publications.

More: https://sciencepod.net/detecting-fraud-in-scientific-publications/