AI can play this role in developing bio-threats


We used to fear guns, but today there are very different threats that are arguably more likely. Some people believe that the coronavirus, for example, was such a man-made threat. These kinds of biological threats are indeed becoming increasingly worrying, as is the chance that nuclear power plants, for example, will be hacked. That is modern warfare, and to make it even more modern, a study has now shown that AI can support the development of biological threats.

AI and biological threat

A study from OpenAI, maker of ChatGPT and Dall-E, shows that AI models can significantly increase access to dangerous biological threat information. The study, conducted among 100 people, examined the extent to which malicious parties can use AI to learn more about biological threats. A biological threat is, for example, a virus that spreads among people or animals, but it can also be a plant disease that ends up in our food.

Fifty of the participants were biology experts with a PhD and lab experience, while the other half were students who had taken at least one university-level biology course. Participants were also divided into groups: some had access only to the internet, while others had access to the internet as well as GPT-4. To be clear: this was not ChatGPT, but a variant of GPT-4 that is not publicly available and can therefore answer questions about bioweapons, such as how to make and distribute them, and in particular which 'ingredients' are needed.

Risk is higher for bioterrorism

OpenAI could only draw one conclusion: GPT-4 can increase the ability to find biological threat information, especially in terms of the accuracy and completeness of tasks. It writes: "It is not yet clear what level of increased information access would actually be dangerous. It is also likely that this level will change as the availability and accessibility of technology capable of translating online information into physical bioagents changes."

This study is not enough to draw a full conclusion; it is just the starting point for continued research and discussion on this topic, OpenAI states. OpenAI naturally has a dual role here: on the one hand, it must provide governments with answers to these kinds of ethical questions, but on the other hand, it also wants AI to be as accessible as possible.

More research needed

OpenAI says that information about bioterrorism is easily accessible to everyone, even without AI, because a great deal of dangerous content can already be found on the internet. OpenAI also notes that bioterrorism has historically been rare and that the level of risk can be reduced by restricting access to the physical technology that enables bioterrorism.

At the same time, there is also a glimmer of hope: AI can also help with a counterattack or a solution. After all, it can compute quickly, so if such an attack occurs, it can help work out how the attack can be stopped, contained, or prevented in the future. So it is certainly not all bad, although it is hardly reassuring that OpenAI concludes that this information is indeed becoming more accessible, even if that conclusion is wrapped in a great deal of qualifying language.

Is this a company mainly looking to clear its own name, or is there indeed too much vagueness about what bioterrorism is and when it can be considered dangerous? In that respect, we have to agree with OpenAI: much more research into this subject is indeed needed. And perhaps not by OpenAI itself, but by independent scientists.
