The US Supreme Court has ruled that the government can continue communicating with researchers and social media platforms to reduce misinformation on subjects such as elections and vaccines. The decision stops short of declaring these activities protected free speech under the US Constitution, but it represents a win for researchers facing lawsuits that allege they collaborated with the government to suppress conservative opinions.

“This ruling is a major victory for independent research,” says Rebekah Tromble, who leads the Institute for Data, Democracy, and Politics at George Washington University in Washington DC. “In rejecting the conspiracy theories at the heart of the case, the Supreme Court demonstrated that facts do still matter.”

Over the past four years, US researchers have run two high-profile rapid-response projects—jointly led by the Stanford Internet Observatory in California and the University of Washington in Seattle—to track, report, and counter disinformation (deliberately misleading information) and misinformation (inaccurate information without malicious intent). Researchers flagged virally spreading falsehoods to social media companies and US agencies, releasing their reports publicly. The federal government also pointed out problematic content to platforms like Facebook.

Many conservative activists and politicians believed these efforts targeted Republican voices, including those who falsely claimed the 2020 US presidential election was rigged. They launched congressional investigations and multiple lawsuits, including the one the Supreme Court decided today. This case was originally filed in May 2022 by plaintiffs including the then-attorneys-general of Missouri and Louisiana, both of whom had also challenged whether US President Joe Biden actually won the 2020 election.

In their ruling, the Supreme Court justices noted that social media platforms are private entities that began making their own decisions about content moderation well before the US government contacted them about misinformation. The court found no specific evidence that government pressure unduly influenced those decisions or caused harm.

The Supreme Court has yet to rule on a related case focused on US state regulations that attempt to limit social media companies’ ability to regulate conversations on their platforms.

It remains to be seen how today’s ruling will impact lawsuits against scientists, but legal scholars and misinformation researchers contacted by Nature say it represents a clear win for academic freedom. “Elated. And vindicated,” was the response from Kate Starbird, a misinformation researcher at the University of Washington in Seattle, who was involved in both academic efforts to combat misinformation.

Issuing Tickets
One of the rapid-response projects connected to today’s decision, the Electoral Integrity Partnership, kicked off in early September 2020 and ran for 11 weeks. Researchers scoured social media platforms for misinformation about the 2020 US presidential election, generating more than 600 ‘tickets’ to flag questionable posts. Most of the false narratives identified were associated with the political right, and around 35% were later labeled, removed, or restricted by the companies.

Researchers then adapted this model to create the Virality Project, which ran from February to August 2021. They identified more than 900 instances of problematic social media posts relating to the COVID-19 pandemic, focusing mainly on falsehoods about vaccines. Of those, 174 were referred to US health authorities and social media firms for potential action.

Despite today’s ruling, conservative activists have continued to file public-records requests and other suits against researchers, accusing them of colluding with the federal government to curtail free speech. Researchers and legal scholars argue there is no evidence for these accusations and maintain that academics have a right to free speech, which they exercised by studying and flagging falsehoods online. Tromble notes that the Supreme Court’s dismantling of the plaintiffs’ arguments “should certainly help researchers being sued by individuals and organizations making similarly distorted claims.”

What’s Next?
The lawsuits by conservative activists have sparked fear among misinformation researchers, and changes to the online environment have made their work harder. For instance, after Elon Musk bought Twitter (now called X), he instituted policies that cut off academics’ access to data. Many other social media companies have also backed away from content moderation efforts.

These developments have discouraged efforts by government agencies and academics to identify and counteract false narratives, says Gowri Ramachandran, deputy director at the Brennan Center for Justice in New York City. This is particularly concerning as the United States heads into its next presidential election in November, when Biden and Trump are expected to face off again.

Stanford University has ended its rapid-response work on misinformation projects, laying off two staff members involved, although researchers will continue to work on election misinformation this year. The decision had nothing to do with fear of litigation or congressional investigations, says Jeff Hancock, director of the Stanford Internet Observatory, who cited fundraising challenges and a shift in research interests as the reasons for halting the rapid-response work. Nevertheless, conservative lawmakers celebrated the news.

With Stanford stepping aside, Kate Starbird and her team at the University of Washington will now lead the rapid-response work. In 2021, the US National Science Foundation awarded Starbird and her partners $2.25 million over five years to continue their work on disinformation. This year’s project will not include reporting to social media platforms—a task previously conducted by Stanford—and will focus more on platforms like Instagram and TikTok due to the loss of access to data on X. But the work will continue, Starbird says.

More: https://www.nature.com/articles/d41586-024-01766-2