A salient detail: the researchers used artificial-intelligence software that was originally developed to aid the discovery of new medicines.
In the search for new medicines, pharmaceutical companies make extensive use of artificial-intelligence technologies. One of them is the American company Collaborations Pharmaceuticals, Inc., where Sean Ekins works. A while ago, the company received an unusual request from the Swiss Federal Institute for NBC Protection (NBC stands for nuclear, biological and chemical). The institute asked Ekins and colleagues to give a presentation, at a biennial meeting that traditionally discusses the dangers of biological and chemical technologies, on how artificial-intelligence technologies developed for drug discovery could be misused. The question caught Ekins and colleagues off guard, they write in the journal Nature Machine Intelligence†. “The idea (that artificial-intelligence software, devised to discover drugs, could be misused by people who want to get hold of biological or chemical weapons, ed.) had never crossed our minds. We were vaguely aware of concerns about working with pathogens or toxic substances, but that had nothing to do with us: we mainly work in a virtual setting (…) We have been working with computers and artificial intelligence for decades to promote human health – not to harm it.” A little naive perhaps, Ekins and colleagues have to admit. “Even our projects on Ebola and neurotoxins – which did prompt thoughts about possible negative implications of our machine-learning models – would not have set off alarm bells.”
Experiment
However, those alarm bells did start ringing after the question from the Swiss institute. The request even grew into a small research project in which Ekins and colleagues actually put it to the test: could their software – designed to aid the discovery of new drugs – instead be used to drive the development of new biochemical weapons? The answer is: yes. In fact, in just a few hours, the artificial-intelligence software proved able to invent thousands of molecules that could serve as chemical weapons, including some that are already familiar to us (such as the nerve agent VX), as well as some new ones that – at first glance – are even more poisonous than the chemical weapons already known to us.
MegaSyn
For the research project, Ekins and colleagues used MegaSyn. “This is our software that designs molecules and uses integrated machine-learning models to predict the activity and toxicity of the proposed molecules,” Ekins explains to Scientias.nl. Normally, the artificial-intelligence software is trained to ignore toxic molecules and to focus on bioactive molecules, i.e. molecules that have a specific biological or physiological activity or function. The ultimate goal is to track down molecules that can be used to treat diseases. “We use the software to find molecules that can be synthesized and then tested for their activity.” The approach is still in its infancy, but looks promising. “The software has already found substances several times that had previously been designed by others – in the traditional way.” This proves that the software can achieve the same – in a much shorter time – as researchers previously achieved through endless chemical puzzling in the lab. And it suggests that the software can also generate molecules that are as yet unknown to us, but very interesting from a pharmaceutical point of view.
A promising piece of software. But Ekins and colleagues had now become curious whether, with minimal adjustments, it could be used for very different purposes. They put it to the test and tweaked the software slightly. “We used the same approach to find new molecules,” the researchers write. But where the model was previously trained to ignore toxic molecules, it was now allowed to hunt for molecules that were both toxic and bioactive. To give the software an idea of what to look for, it was presented with sample molecules from a public database that would normally be used to guide the search for molecules that could be deployed against neurological diseases. But of course the researchers weren’t looking for treatments for neurological disorders, and to give the software some direction, they specifically instructed it to look for neurotoxins. “Take the nerve agent VX, one of the most poisonous chemical weapons developed during the twentieth century: a few grains the size of a grain of salt are enough to kill someone.”
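The adjustment the researchers describe essentially comes down to flipping one term in the model’s scoring objective: instead of penalizing predicted toxicity, the model rewards it. MegaSyn’s internals are not public, so the following is only a minimal illustrative sketch under that assumption; the `score` function, the candidate names and the numeric predictions are all hypothetical, meant only to show how a single sign change inverts which molecules the system ranks highest.

```python
# Hypothetical sketch: candidate molecules are ranked by a weighted
# combination of predicted bioactivity and predicted toxicity.
# Flipping the sign of the toxicity weight turns a drug-discovery
# objective (penalize toxicity) into one that rewards toxicity.

def score(bioactivity: float, toxicity: float, toxicity_weight: float) -> float:
    """Combined score: higher is 'better' under the chosen objective."""
    return bioactivity + toxicity_weight * toxicity

# Hypothetical (bioactivity, toxicity) predictions for three candidates.
candidates = {
    "mol_A": (0.9, 0.1),   # active, low toxicity
    "mol_B": (0.7, 0.8),   # active, highly toxic
    "mol_C": (0.4, 0.9),   # weakly active, highly toxic
}

def rank(toxicity_weight: float) -> list:
    """Return candidate names, best-scoring first, under the given weight."""
    return sorted(candidates,
                  key=lambda m: score(*candidates[m], toxicity_weight),
                  reverse=True)

drug_mode = rank(toxicity_weight=-1.0)    # normal use: penalize toxicity
weapon_mode = rank(toxicity_weight=+1.0)  # the experiment: reward toxicity
```

Under the drug-discovery objective the low-toxicity candidate comes out on top; with the sign flipped, the toxic-and-active candidate does. The same generator and the same predictive models serve both objectives, which is exactly the dual-use point the researchers make.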
Shocking Transformation
Then the software went to work. And with result: in less than six hours, it generated 40,000 molecules that appear suitable as biochemical weapons. That the software could generate so many molecules in such a short time was no surprise, says Ekins. “But the fact that the software found several known biochemical weapons was shocking. And then it found thousands of other, unknown molecules” – some of which are expected to be much more poisonous than the bioweapons already known to us. “We had transformed our harmless model from a useful tool for medicine into a generator of probably lethal molecules,” the researchers say.
Using their artificial-intelligence software, the researchers generated toxic molecules that could potentially be used as bioweapons. Those molecules were not actually made, only suggested by the software. The researchers did not dare to go much further. “We didn’t look at synthesizability. And we didn’t use our tools to design synthetic pathways that would bypass watchlisted or heavily regulated precursors of these molecules. Those are things we could also do with these resources; we stopped just short of that. But the main implication is that adversaries wouldn’t stop there; they would continue,” says Ekins.
It is shocking that well-intentioned software can be misused so easily. Especially when you consider that the software the researchers used already exists and that the data it relied on are publicly available. “There are hundreds of companies that use these resources to find drugs and other molecules,” says Ekins. “And if we’re not careful, they could be misused by one or more malicious people.”
Consideration
With that in mind, publishing a study showing that well-intentioned software can be almost effortlessly transformed into software that generates deadly molecules might not seem like a very good idea. “It’s a double-edged sword,” Ekins admits. On the one hand, it is important that people are aware of the danger. But on the other hand, it can also give people ideas. “We thought about it carefully before publishing and discussed it with experts,” says Ekins. In the end – with some information left out here and there (which was, however, available during the peer-review process) – the study was published. “Without being overly alarmist, this should be a wake-up call for our colleagues who use artificial intelligence to find medicines,” the researchers write. “As a research field, we need to start talking about this topic (…) As responsible scientists, we need to ensure that misuse of AI is prevented and that the tools and models we develop are used for good.”
The implications of the study are worrying. And Ekins and colleagues have clearly been cured of the idea that their software can only do good. “Excellent therapeutic applications are conceivable, but it can also be abused.”
Source material:
† “Dual use of artificial intelligence-powered drug discovery” – Nature Machine Intelligence
Interview with Sean Ekins
Image at the top of this article: MasterTux (via Pixabay)