In one of eleven sessions hosted by COPE during Publication Integrity Week 2023, COPE Council Member Mike Streeter leads a discussion between Dustin Smith (Hum) and Mohammad Hosseini (postdoctoral researcher and collaborator at the Institute for Artificial Intelligence in Medicine) about the state of Artificial Intelligence (AI) in peer review.
The speakers begin by discussing which functions AI is best suited to as a peer review tool. Critics have recently suggested that using AI for review purposes risks putting confidential information into the public domain. The discussion draws a distinction based on the type of model used: local models, which do not feed user input back into training datasets, and self-built models carry much lower risks in this respect. The most popular models, such as ChatGPT, should be treated with more caution, as they may leave users more vulnerable to having their data shared.
The speakers give quite different answers to the question of how AI use should be disclosed, ranging from full disclosure by reviewers and publishers to the view that AI use will soon be so pervasive that disclosure will scarcely be relevant.
Users also need to be alert to the ways AI can alter their own perceptions: for example, using AI to generate an initial review can introduce biases into a human reviewer’s assessment of the research. As a language-improvement tool, however, AI can reduce the risk of scholarly research being mis-evaluated and bring great benefits to the community.
