In a recent study, a team of scientists from the University of Cagliari in Italy identified potential risks associated with the use of artificial intelligence (AI) in medical research. The researchers used the GPT-4 model to generate fake data for a clinical trial comparing two methods of treating keratoconus, a common eye disorder.
According to Dr. Giuseppe Giannaccare, an eye surgeon at the university, GPT-4 produced a convincing yet entirely fabricated dataset supporting the superiority of one treatment over the other, even though genuine trials show no significant difference between the two approaches.
The study raises concerns about AI's ability to create datasets that appear genuine to an untrained eye, a capability the researchers say poses a threat to academic integrity and to the scientific community as a whole.
The paper, titled "Large Language Model Advanced Data Analysis Abuse to Create a Fake Data Set in Medical Research" and published in JAMA Ophthalmology, acknowledges that closer scrutiny of the data could reveal signs of fabrication. For instance, the analysis identified an unnatural prevalence of subject ages ending in the digits 7 or 8.
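To illustrate the kind of check the authors describe, the sketch below (not taken from the paper; the dataset and function name are hypothetical) runs a chi-square goodness-of-fit test on the terminal digits of subject ages. Honestly collected ages should have roughly uniformly distributed last digits, so a pronounced excess of ages ending in 7 or 8 would stand out as a red flag.

```python
# Illustrative sketch only: a terminal-digit uniformity check of the sort
# described in the article, applied to a made-up list of subject ages.
from collections import Counter

import numpy as np
from scipy.stats import chisquare

def terminal_digit_test(ages):
    """Return the chi-square statistic and p-value for last-digit uniformity."""
    last_digits = [int(age) % 10 for age in ages]
    counts = Counter(last_digits)
    observed = np.array([counts.get(d, 0) for d in range(10)])
    expected = np.full(10, len(ages) / 10)  # uniform expectation over digits 0-9
    return chisquare(f_obs=observed, f_exp=expected)

# Hypothetical dataset with too many ages ending in 7 or 8.
rng = np.random.default_rng(0)
suspicious_ages = np.concatenate([
    rng.integers(20, 70, size=100),                 # plausible background ages
    rng.choice([27, 37, 47, 57, 38, 48], size=60),  # over-represented endings
])
stat, p_value = terminal_digit_test(suspicious_ages)
print(f"chi-square = {stat:.1f}, p = {p_value:.4f}")  # a very small p hints at fabrication
```

A very low p-value does not prove fraud on its own, but it flags the dataset for the closer scrutiny the researchers recommend.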
Dr. Giannaccare suggests that as AI-generated output infiltrates legitimate studies, AI itself can play a crucial role in developing better methods of fraud detection. He stresses the importance of responsible AI use in scientific research: applied well, it can strengthen academic integrity, but vigilance against misuse remains essential.
