Four major tech companies – OpenAI, Google, Microsoft and Anthropic – have taken a big step in promoting safe and responsible development of frontier AI models by creating the Frontier Model Forum.
The main objective of this newly established industry body is to leverage the technical and operational expertise of its member companies for the betterment of the AI ecosystem.
“This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” said Anna Makanju, VP of global affairs at OpenAI.
Four key areas
According to OpenAI, the Frontier Model Forum will focus on four key areas:
- Advancing AI safety research to promote the responsible development of frontier models and minimise risks
- Identifying best practices for the responsible development and deployment of frontier models, and helping the public understand the impact of AI
- Collaborating with academics, civil society, policymakers and companies to share knowledge about trust and safety risks
- Supporting efforts to develop applications that can help address some of the world’s biggest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats
Kent Walker, president of global affairs at Google and Alphabet, commented: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. Engagement by companies, governments, and civil society will be essential to fulfil the promise of AI to benefit everyone.”
According to the Forum’s definition, frontier models are large-scale machine-learning models that surpass the capabilities of the most advanced existing models and possess the ability to perform a wide array of tasks.
Membership requirements
The Forum is inviting organisations that meet its criteria to join the effort to ensure the safe and responsible development of frontier AI models. Membership is open to organisations that:
- Develop and deploy frontier models (as defined by the Forum)
- Demonstrate a strong commitment to frontier model safety, including through technical and institutional approaches
- Are willing to contribute to advancing the Forum’s efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative
“Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity,” said Brad Smith, vice chair and president of Microsoft.