During the COVID-19 pandemic, preprints—unreviewed manuscripts posted online—were an important venue for biomedical researchers to quickly share with colleagues findings that might help curb the disease. At the same time, some scientists worried about whether and how to responsibly convey these unvetted findings to a public desperate for information.
Two recent studies support this concern. Even after reading a news article that acknowledges its preprinted findings are unreviewed—a practice media organizations adopted for some stories during the pandemic—many nonspecialist readers don’t understand how a preprint differs from a journal article. And being told the findings came from a preprint doesn’t affect how credible readers find them.
The new analyses do not cast doubt on the value of preprints, which continue to be a popular way for scientists to quickly share results with colleagues before the findings appear in a journal, emphasizes Alice Fleerackers, a co-author of both studies and a social scientist at the University of Amsterdam. “Lots of preprints are fine,” she says. “Some are arguably better than many journal articles.”
Still, as social psychologist Tobias Wingen of the University of Hagen notes, “With the increasing prominence of preprints in scientific communication, it is essential to know whether nonscientists can grasp this concept.” The results from Fleerackers’s work “are both intriguing and concerning,” adds Wingen, who has performed similar research but was not involved in the study.
For one of the studies, Fleerackers and colleagues asked 1702 U.S. adults to read modified versions of two real news articles describing preprinted findings; one discussed a COVID-19 “superspreader” event, for example. Both original articles mentioned the study was a preprint, and one defined the term, noting the research had not yet been peer reviewed. For the experiment, the team created two modified versions of each article. Some respondents read a version that defined a preprint, explaining that its findings had not been peer reviewed or published in a scientific journal. Others read a version from which the researchers had removed all references to a preprint. Both groups then answered questions about the research being reported on, plus a final open-ended question: “When you see the term ‘preprint’ in a scientific news article, what do you interpret it to mean?”
Overall, only about 30% of all respondents defined preprints in a way consistent with how scholars define them—that they are unvetted by independent experts, “preliminary,” or “uncertain”—the authors reported in Public Understanding of Science in October 2024. For many participants, receiving the definition of preprints didn’t seem to help; readers of either version of the article were equally likely to give an inaccurate definition. (An exception was college students; for this group, inaccurate responses were less common among those who received the extra information.) Incorrect descriptions included saying a preprint is “like a trailer for a movie,” previewing a more complete version to be published later, or like an unproofread version of a news story.
In another study, the team also explored whether readers found preprint content trustworthy. They gave 415 U.S. adults one of several versions of a news article describing a preprint about COVID-19 vaccines. The original version said the article was based on a preprint that hadn’t been evaluated by outside experts. A modified version described the findings but didn’t mention that the study was a preprint. Yet another version retained the explanation of the preprint and added strong, emphatic language that reduced or eliminated hedging about the research findings. The research team then used a standardized scale to measure how credible the respondents found the scientists and the reported findings.
Simply mentioning that the study was preprinted didn’t make a difference; the respondents who were told this rated the findings as credible as did those who weren’t told, the team—led by social scientist Chelsea Ratcliff of the University of Georgia—reported in a study published in the April 2024 issue of Health Communication. But the hedging language decreased credibility. (Conspiracy theories about COVID-19 may have primed readers to be especially “ambiguity averse,” the research team speculates.) Separate research by Wingen and colleagues also suggests providing a longer definition of a preprint tends to lead readers to view preprinted results with more skepticism.
Such hesitation about preprints is not always warranted, Fleerackers suggests. Studies comparing the preprinted and peer-reviewed, journal-article versions of the same work indicate that differences are often minimal, she notes, and peer review itself can be biased or perfunctory. To improve the quality of preprints, a small but growing movement, spurred in part by the COVID-19 pandemic, aims to review them quickly, regardless of whether the authors later submit them to refereed scholarly journals, which can take months to publish a paper after commissioning their own peer reviews. Most preprints still go unreviewed before being submitted to a journal, and some never appear in any journal. But Fleerackers notes that a select few attract comments on the preprint server presenting substantial critiques, offering a resource for journalists reporting on them.
Regardless of whether scientific findings appear in preprints or peer-reviewed journal articles, Fleerackers encourages journalists to describe study uncertainties and whether the work has been vetted independently. Research by other teams has suggested such transparency can lead readers and viewers to regard a news story and the scientists interviewed as more credible, at least about topics other than COVID-19. But only about half of news articles about COVID-19 preprints posted in early 2020 acknowledged they were unreviewed or otherwise unverified.
“I would love it if science communicators and journalists could start to build some [public] understanding of what peer review does and doesn’t do,” Fleerackers says, “and more broadly, how science does and doesn’t function well, so that people can make their own decisions.”
