Following the rapidly evolving use of artificial intelligence (AI) tools such as large language models (LLMs) and generative chatbots, JAMA and the JAMA Network journals released guidance on the responsible use of these tools by authors and researchers in scholarly publishing. These policies preclude the inclusion of nonhuman AI tools as authors and require transparent reporting of the use of such tools in preparing manuscripts and other content and when used in research submitted for publication. In addition, the submission and publication of clinical images created by AI tools are discouraged, unless part of a formal research design or methods. In all such cases, authors must take responsibility for the integrity of the content generated by these models and tools.
To assist authors, and to remind them of these new policies, JAMA and the JAMA Network journals will ask authors to address this question in the manuscript submission systems:
Did you use AI, a language model, machine learning, or similar technologies to create or assist with creation or editing of the content in this submission (eg, text, tables, figures, video)? (Note: this does not include basic tools for checking grammar, spelling, references, etc.)
Authors who answer yes to this question will be prompted to address 2 follow-up questions:
Please provide a description of the AI-generated content that is included in this submission and the name of the model or tool used, version and extension numbers, and manufacturer in the space below.
Please confirm that you take responsibility for the integrity of the content generated by these tools and that you have provided a description of such generated content and the name of the model or tool used, version and extension numbers, and manufacturer in the Acknowledgment or Methods section of the manuscript.
Many have been experimenting with generative chatbots and other AI tools during the peer review process, with some disconcerting experiences. For example, one author entered their preprint into a chatbot to create a peer review. The chatbot generated a good summary of the work and provided some standard assessment of the writing style and readability. However, when asked to suggest more specific improvements, the chatbot offered seemingly specific but generic comments that had no bearing on the text, erroneously suggested that the statistical tests were not appropriate for the data, and, when asked for additional references, provided citations for nonexistent articles.
Hosseini and Horbach recently reviewed the potential use of LLMs in the peer review process following Tennant and Ross-Hellauer’s 5 core themes about peer review: the reviewers’ role, the editors’ role, functions and quality of peer reviews, reproducibility, and the social and epistemic functions of peer reviews. They identified potential benefits of these tools to improve efficiency and productivity in the editorial process and peer review and help with reviewer fatigue. However, they concluded that “the fundamental opacity of LLMs’ training data, inner workings, data handling, and development processes raise concerns about potential biases, confidentiality and the reproducibility of review reports.”
JAMA and the JAMA Network journals have extended the policy on use of AI tools to peer reviewers, with recommendations for responsible and accountable use and reminders of the confidential nature of submitted manuscripts and the peer review process. These journals use a single-anonymized review process, in which peer reviewer identities are kept confidential (unless reviewers choose to reveal their names in their formal reviews), author identities are made known to reviewers, and submitted manuscripts and other content are kept confidential unless and until publication. Instructions provided to peer reviewers after they accept an invitation to review a manuscript now include the following:
Entering any part of the manuscript or abstract or the text of your review into a chatbot, language model, or similar tool is a violation of our confidentiality agreement.
If you used an AI tool as a resource for your review in a way that does not violate the journal’s confidentiality policy, you must provide the name of the tool and how it was used.
Please remember that you are ultimately responsible for all the content of this review.
The peer reviewer form also includes the following:
Did you use a chatbot, language model, or similar tool as a resource during your review? As a reminder, our confidentiality policy prohibits the entering of any part of the manuscript or your review into a chatbot, language model, or similar tool.
If yes, please provide a description of the content that was created and the name of the language model or tool, version and extension numbers, and manufacturer in the space below. (Note: this does not include basic tools for checking grammar, spelling, references, etc.)
These policies are aligned with those of the International Committee of Medical Journal Editors. The Committee on Publication Ethics (COPE) has provided additional guidance for the use of AI tools in decision-making in scholarly publication, including the need for accountability and human oversight. The editors of JAMA and the JAMA Network journals are not using AI tools to make specific editorial decisions on manuscripts but do have a collection of AI-like tools to help inform their editorial assessments. Peer-reviewed medical journals and publishers have been using AI-like tools during the manuscript submission, peer review, and publication processes for some time. Common uses include checking for duplicate, highly similar, plagiarized, or fake manuscripts; parsing metadata from submitted manuscripts to fill submission screens and forms; identifying key words to aid in the editorial process and to improve discoverability of published content; recommending peer reviewers based on keywords or other metadata; technical and format checking of manuscripts at submission and during editing; validating references; checking image integrity; creating translations; and creating summaries of content and transcripts of multimedia. Each of these processes also requires some level of human review and oversight.
As AI technologies continue to evolve rapidly and to be used and tested, important concerns about potential biases, ethical issues, and intellectual property rights of content generated by these tools have not yet been adequately addressed. That said, we fully recognize that these evolving technologies are rapidly changing the nature of content creation, generation, review, and assessment; they will likely facilitate efficiencies for authors, reviewers, and editors and continue to transform scholarly publication. JAMA and the JAMA Network journals aim to use these technologies responsibly and will continue to provide authors and reviewers with guidance on accountable and transparent use of such tools.