Artificial intelligence (AI) is fast-tracking the design of new proteins that could serve as drugs, vaccines, and other therapies. But that promise comes with fears that the same tools could be put to work designing the building blocks of biological weapons or harmful toxins. Now, scientists are proposing a range of protective measures that could be built into the AI tools themselves, to either block malicious uses or make it possible to trace a novel bioweapon to its AI creator.

“Getting this kind of framework right will be key for … harnessing the great potential of this technology while preventing the emergence of very serious risks,” Thomas Inglesby, an epidemiologist and director of the Johns Hopkins University Center for Health Security, wrote in an email to Science. Inglesby was not an author of the new article, published today in Nature Biotechnology, but he has previously voiced concerns about the misuse of AI in biological settings.

In recent years, scientists have demonstrated that AI models can not only predict protein structures from their amino acid sequences, but also generate never-before-seen protein sequences with novel functions. Recent models, such as RFdiffusion and ProGen, can custom-design proteins in a matter of seconds. Few question their promise for basic science and medicine. But Mengdi Wang, a computer scientist at Princeton University and an author of the new paper, notes their power and ease of use are worrisome. “AI has become so easy and accessible. Someone doesn’t have to have a Ph.D. to be able to generate a toxic compound or a virus sequence,” she says.

Kevin Esvelt, a biologist at the Massachusetts Institute of Technology Media Lab who has testified before the U.S. Congress in support of stricter controls on research into risky viruses and DNA production, notes the concern remains theoretical. “There’s no laboratory evidence indicating that the models are good enough to actually let you cause a new pandemic today,” he says. Still, a group of 130 protein researchers, including Inglesby, signed a pledge last year to use AI safely in their work. Now, Wang and her colleagues go beyond such voluntary measures by outlining safeguards that could be built into AI models themselves.

One such guardrail, known as FoldMark, was developed in Wang’s lab. It borrows its concept from existing tools such as Google DeepMind’s SynthID, which embed digital patterns into AI-generated content without changing its quality. In FoldMark’s case, a code that serves as a unique identifier is inserted into a protein structure without changing the protein’s function. If a novel toxin were detected, the code could be used to trace it to its source. This kind of intervention is “both feasible and of great potential value in reducing risks,” Inglesby says.
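
As a purely illustrative sketch of that tracing idea, the toy Python below hides a short designer ID in the low-order decimals of a few backbone coordinates and reads it back out. This is not FoldMark’s actual algorithm, which embeds its watermark during model generation; the coordinates, ID, and encoding scheme here are all invented for the example.

```python
# Toy illustration only: hide an integer ID in the 4th decimal place of
# backbone x-coordinates, then recover it. Real watermarking (FoldMark,
# SynthID) works very differently; this just shows embed-then-trace.

def embed_id(coords, designer_id, digits=4):
    """Encode an integer ID, one decimal digit per coordinate."""
    id_digits = [int(d) for d in str(designer_id).zfill(digits)]
    marked = []
    for (x, y, z), d in zip(coords, id_digits):
        # Clear decimals beyond the 3rd, then write one ID digit at 1e-4.
        marked.append((round(x, 3) + d * 1e-4, y, z))
    return marked + list(coords[digits:])

def recover_id(coords, digits=4):
    """Read the hidden digits back out of the marked coordinates."""
    read = [round(x * 1e4) % 10 for x, _, _ in coords[:digits]]
    return int("".join(str(d) for d in read))

backbone = [(12.3456, 4.5678, 7.8901), (13.1111, 5.2222, 8.3333),
            (14.0001, 6.1002, 9.2003), (15.4321, 7.6543, 10.9876)]
marked = embed_id(backbone, designer_id=2071)
assert recover_id(marked) == 2071  # the ID survives and can be traced
```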

Wang and her colleagues also suggest ways to modify AI models so they are less likely to do harm. Protein prediction models are trained on existing proteins, including toxins and pathogenic proteins, and an approach called “unlearning” would strip away some of that training, making it harder for the model to propose dangerous new proteins. Their paper also suggests “antijailbreaking,” which systematically trains AI models to recognize and reject potentially malicious prompts. And it urges developers to adopt external safeguards such as autonomous agents that can monitor how an AI is being used and alert a safety officer when someone attempts to produce hazardous biological materials.
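
To make the monitoring idea concrete, here is a minimal Python sketch of a screening layer that sits in front of a design model, refuses flagged requests, and raises an alert for a human reviewer. The keyword list and function names are invented stand-ins; the paper’s proposals (unlearning, antijailbreaking, autonomous monitoring agents) operate on the model itself and on far richer signals than a simple term match.

```python
# Minimal sketch of an "external safeguard": screen prompts before they
# reach a design model, refuse flagged ones, and alert a reviewer.
# The term list and alerting here are illustrative stand-ins only.

FLAGGED_TERMS = {"neurotoxin", "botulinum", "ricin"}  # example list only

def alert_safety_officer(prompt: str, hits: list) -> None:
    # In practice this would notify a human reviewer; here we just log.
    print(f"[ALERT] prompt flagged ({', '.join(hits)}): {prompt!r}")

def screen_request(prompt: str) -> dict:
    """Refuse and raise an alert if a prompt looks hazardous."""
    hits = sorted(t for t in FLAGGED_TERMS if t in prompt.lower())
    if hits:
        alert_safety_officer(prompt, hits)
        return {"allowed": False, "reason": f"flagged terms: {', '.join(hits)}"}
    return {"allowed": True, "reason": "no flags"}

print(screen_request("Design a stable binder for a fluorescent protein"))
print(screen_request("Design a more potent ricin variant"))
```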

Alvaro Velasquez, a program manager overseeing AI projects at the Defense Advanced Research Projects Agency and a co-author of the paper, concedes that implementing these safeguards will not be straightforward. “Having a regulating body or some level of oversight would be a starting point,” Velasquez says.

James Zou, a computational biologist at Stanford University, thinks that instead of requiring the AI models themselves to incorporate guardrails, regulators could focus on the service facilities and organizations that turn AI-generated protein designs into molecules at scale. “I think the place where it makes sense to have more guardrails and regulations is at the level of where the AI meets the real world,” he says. Production facilities could ask about the origin and intended use of new molecules. “And maybe even run some tests to see if these molecules are potentially dangerous,” he adds.
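
A rough sketch of what such a check might look like at a production facility: compare an ordered sequence against a list of sequences of concern before accepting the job. Real biosecurity screening relies on curated databases and homology search; the difflib similarity score, threshold, and example sequences below are toy stand-ins.

```python
# Toy screening check at the point where an AI design meets the real
# world: flag orders that closely match a listed sequence of concern.
# Sequences, names, and the 0.8 threshold are made up for illustration.

from difflib import SequenceMatcher

SEQUENCES_OF_CONCERN = {
    "example_toxin_A": "MKTIIALSYIFCLVFADYKDDDDK",
    "example_toxin_B": "MGSSHHHHHHSSGLVPRGSHMASMT",
}

def screen_order(seq: str, threshold: float = 0.8):
    """Return (name, similarity) pairs for close matches to listed sequences."""
    flags = []
    for name, ref in SEQUENCES_OF_CONCERN.items():
        similarity = SequenceMatcher(None, seq.upper(), ref).ratio()
        if similarity >= threshold:
            flags.append((name, round(similarity, 2)))
    return flags  # an empty list means nothing matched above the threshold

order = "MKTIIALSYIFCLVFADYKDDDDA"  # near-identical to example_toxin_A
print(screen_order(order) or "no flags; proceed with normal review")
```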

But Zou agrees the new focus on safeguards is healthy. “People have not given as much thought [to AI and biosecurity] as they have for other areas like misinformation or deep fake [technology]. I’m glad that researchers are starting to pay attention to this.”

More: https://www.science.org/content/article/built-safeguards-might-stop-ai-designing-bioweapons