Artificial intelligence company and ChatGPT creator OpenAI has assembled a new team of experts to combat what it calls the 'catastrophic' risks posed by AI systems if left unchecked. OpenAI's team, called Preparedness, will evaluate current and projected future AI models for a range of risk factors.
These include personalised persuasion (tailoring a message's content to the recipient's preferences), cybersecurity, autonomous replication and adaptation (where an AI modifies itself independently), and potentially catastrophic threats such as chemical, biological, radiological and nuclear attacks.
“Led by Aleksander Madry, the Preparedness team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models we develop in the near future to those with AGI-level capabilities,” OpenAI wrote in the announcement.
“We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks,” OpenAI added.
Three main questions
OpenAI's Preparedness team will focus on three main questions to keep AI safe:
- How dangerous are frontier AI systems when misused, both now and in the future?
- How can a robust framework be built for monitoring, evaluating, predicting, and protecting against the dangerous capabilities of frontier AI systems?
- If the company’s frontier AI model weights were stolen, how might malicious actors choose to leverage them?
OpenAI has also introduced the 'AI Preparedness Challenge' as part of its efforts to broaden its research in preventing misuse. The company is providing $25,000 in API credits to the 10 best submissions that present plausible scenarios of potentially catastrophic misuse of OpenAI's technology.
The GPT-4 maker is also recruiting experts from diverse technical backgrounds to join its Preparedness team, and interested candidates can apply via the official page.
While the notion of AI triggering a nuclear conflict may seem unlikely, it is worth recalling that earlier this year a consortium of prominent AI researchers, engineers, and CEOs, including Google DeepMind CEO Demis Hassabis, issued a stark warning: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."