You might know someone who thinks the Moon landing was faked or COVID-19 vaccines are full of microchips. Believers cling tenaciously to such conspiracy theories, which have little basis in reality, even when presented with contrary evidence. But according to research published today in Science, some people do change their minds when fact-based arguments are delivered by an artificial intelligence (AI) chatbot instead of another human being. Personalized conversations with this “debunkbot” can turn even hardcore conspiracy theorists into budding skeptics, the researchers report.

“It’s really promising to see how AI can play a role in combating misinformation and conspiracy theories,” says Jan-Willem van Prooijen, a behavioral scientist at Vrije Universiteit Amsterdam who wasn’t involved in the new study. Generative AI is notorious for spreading falsehoods, most notably through deepfakes, so Van Prooijen finds it “refreshing” to see it used as a force for good.

Be it the belief that the CIA assassinated President John F. Kennedy or that Area 51 houses alien corpses, nearly half the U.S. population believes in one conspiracy theory or another, according to some estimates. Many psychologists think these beliefs help fulfill underlying psychological needs, such as the desire for security. But hypotheses about such “subterranean motivations” are hard to test, says Thomas Costello, a psychologist at American University and lead author of the new study. The new findings provide “one of the first really strong pieces of evidence that they’re not the whole story,” he says, “or maybe, in fact, that they’re totally wrong.”

When debating conspiracies in real life, believers will often attempt to overwhelm naysayers by quickly presenting as many arguments as possible—a technique known as the Gish gallop. Whereas no human can address all those claims at once, an AI program conceivably could. Costello and his colleagues wanted to know whether large language models (LLMs) such as GPT-4 Turbo, which process and generate huge amounts of information in seconds, could debunk conspiracy theories with what Costello describes as “tailored persuasions.”

The team recruited more than 2,000 participants who professed a belief in at least one conspiracy theory, which the researchers defined as the belief that important events or situations, such as the Kennedy assassination or the COVID-19 pandemic, were secretly orchestrated by powerful people or organizations. Next, they had these people engage in a brief conversation with an LLM chatbot. Each person shared with the AI what they believed and the evidence they felt supported it, then rated how confident they were that the theory was true. The chatbot, trained on a wide range of publicly available information from books, online discussions, and other sources, refuted each claim with specific, fact-based counterarguments. These conversations reduced people’s confidence in their conspiracy theories by 20%, on average.
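The paper doesn’t include code, but the protocol is easy to picture. Here is a minimal sketch, assuming the OpenAI Python client and GPT-4 Turbo (the model the study used); the prompt wording, the `debunk_conversation` name, and the fixed participant probe are illustrative stand-ins, not the authors’ actual materials:

```python
# Illustrative sketch of the study's conversation protocol (not the authors' code).
# Assumes the OpenAI Python client; prompts and the probe reply are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def debunk_conversation(belief: str, evidence: str, rounds: int = 3) -> list[str]:
    """Run a short multi-turn exchange in which the model rebuts a
    participant's stated conspiracy belief with fact-based counterarguments."""
    messages = [
        {"role": "system",
         "content": ("You are a respectful conversation partner. Using only "
                     "specific, verifiable facts, address each piece of evidence "
                     "the user offers for their belief.")},
        {"role": "user",
         "content": f"I believe: {belief}\nMy evidence: {evidence}"},
    ]
    replies = []
    for _ in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the study used GPT-4 Turbo
            messages=messages,
        )
        reply = response.choices[0].message.content
        replies.append(reply)
        messages.append({"role": "assistant", "content": reply})
        # In the real experiment the participant would respond here;
        # a fixed probe keeps this sketch self-contained.
        messages.append({"role": "user", "content": "What about my strongest point?"})
    return replies
```

In the study itself, participants rated their confidence in the theory before and after the exchange; the 20% figure comes from comparing those two ratings.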

These reductions were remarkably persistent, lasting at least 2 months, and appeared to hold across a wide variety of theories. “The fact that it worked so well for so long is what stood out to me,” says Ethan Porter, a political scientist and disinformation researcher at George Washington University who wasn’t involved in the study.

Part of the reason debunkbot is so successful, Van Prooijen reasons, is that it remains “very polite,” whereas human conversations about similar topics can easily get “heated and disrespectful.” And whereas someone might worry about friends or family members judging them for altering their beliefs, it’s impossible to “lose face” in front of an AI model, he adds.

When Costello and his colleagues repeated their experiment with a chatbot that engaged with participants but didn’t formulate fact-based counterarguments, they saw no effect, suggesting the presentation of evidence was critical. “Without facts, it couldn’t do its job,” Costello says.
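One way to picture that control condition, assuming the same conversation loop as the sketch above, is to swap only the system prompt so the model still engages but withholds counterevidence. The wording below is a hypothetical illustration, not the study’s actual instructions:

```python
# Hypothetical control condition for the ablation described above: identical
# conversation loop, but the system prompt forbids factual rebuttal, so any
# persuasion effect attributable to evidence should disappear.
CONTROL_SYSTEM_PROMPT = (
    "You are a friendly conversation partner. Chat with the user about their "
    "belief and related topics, but do not offer facts, evidence, or arguments "
    "for or against it."
)
```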

Still, the rhetoric involved may be critical to persuasion, says Federico Germani, a disinformation researcher at the University of Zürich. Because LLMs train on real conversations, he explains, they pick up on subtle rhetorical strategies that make their arguments more persuasive, even when a prompt has instructed them to rely purely on facts. “The authors are probably underestimating that, in between the lines, the AI is very good at manipulating,” he says.

Psychologists Aleksandra Cichocka, Robbie Sutton, and Karen Douglas of the University of Kent, writing in a joint statement, also question whether the findings upend the prevailing idea that conspiracy theories fulfill unmet psychological needs. Because the study’s authors didn’t directly measure whether participants felt their needs were satisfied after conversing with the chatbot, they argue, it’s impossible to know whether that influenced their change of mind. Indeed, participants remained fairly confident in their theories even after the AI tried to debunk them, suggesting these underlying motivations still played a powerful role in their beliefs.

Although conspiracy theorists are unlikely to engage with debunkbot voluntarily, Germani and Van Prooijen note the AI could potentially bolster existing technological responses. Many social media sites already have strategies in place to flag potential misinformation, such as the Community Notes feature on X, and this new model could provide additional information refuting it.

People could also use debunkbot to quickly and thoroughly fact-check new claims they’ve heard, cultivating a healthy level of skepticism and making it less likely that they will fall down the misinformation rabbit hole in the future, Costello notes. “You can almost think of these chatbots as a form of epistemic hygiene,” he explains, “like brushing your teeth, but for your mind.”

More: https://www.science.org/content/article/ai-chatbot-shows-promise-talking-people-out-conspiracy-theories