> The data echoes concerns raised by AI companies OpenAI and Anthropic in recent months, both of which have warned that today’s AI tools are reaching the ability to meaningfully assist bad actors attempting to create bioweapons.
>
> It has long been possible for biologists to modify viruses using laboratory technology. The new development is the ability for chatbots—like ChatGPT or Claude—to give accurate troubleshooting advice to amateur biologists trying to create a deadly bioweapon in a lab.
>
> Safety experts have long viewed the difficulty of this troubleshooting process as a significant bottleneck on the ability of terrorist groups to create a bioweapon, says Seth Donoughe, a co-author of the study. Now, he says, thanks to AI, the expertise necessary to intentionally cause a new pandemic “could become accessible to many, many more people.”
Soft-Material3294:
The latest developments in AI that *could* make all of this possible are not related at all to AI chatbots like ChatGPT or Claude, as the article implies. For example, we now have access to diffusion models and structure/sequence models for biologics design.
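Loosely, design-oriented diffusion models start from random noise and iteratively denoise toward samples that satisfy design constraints. A minimal toy sketch of that sampling loop, using a single scalar as a stand-in for a protein structure (everything here is invented for illustration; real tools operate on 3-D coordinates or sequences, not scalars):

```python
import random

def denoise_step(x, target, step_frac=0.2):
    """Toy 'denoising' update: move the sample a fraction of the
    way toward the data it is being guided to resemble."""
    return x + step_frac * (target - x)

def toy_diffusion_sample(target=1.0, steps=25, seed=0):
    """Start from pure Gaussian noise and iteratively refine it,
    analogous to how diffusion-based design models refine a random
    initial structure toward one meeting the design objective."""
    rng = random.Random(seed)
    x = rng.gauss(0.0, 1.0)  # pure noise
    for _ in range(steps):
        x = denoise_step(x, target)
    return x

print(toy_diffusion_sample())  # ends very close to the target
```

The point of the analogy: the model generates candidates by refinement from noise, rather than by answering questions in natural language the way a chatbot does.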
The latest developments in AI also “swing the other way”: it is much easier to create developable and safe vaccines now than just 2-3 years ago.
Finally, I just want to reiterate that while something is *possible*, i.e. has a >0% chance of happening, that does not mean it is *likely*, i.e. has a >50% chance.
Source: I have a PhD in Protein Design and work in this (AI and immunology) field. AMA 🙂