Cybersecurity threats are a constant concern, especially in an era where emerging technologies such as generative AI are evolving fast. Search giant Google, a major player in the AI field, acknowledges the importance of adapting to these nascent threats.
The Alphabet subsidiary is broadening its existing Vulnerability Rewards Program (VRP) to cover vulnerabilities related to generative AI. The expansion takes into account the distinct challenges the technology poses, such as unfair bias, model manipulation, misinterpretation of data, and other adversarial attacks.
Google’s VRP compensates external security researchers for finding and disclosing software vulnerabilities in the company’s offerings, and the initiative is now expanding to include generative AI products. Some of Google’s widely used AI products include Bard, Lens, and various AI integrations in services like Search, Gmail, Docs and others.
AI safety and security
“Today, we’re expanding our VRP to reward for attack scenarios specific to generative AI. We believe this will incentivise research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone,” Google wrote in the announcement. “We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.”
With generative AI becoming increasingly integrated into various Google tools and software, the potential risks are on the rise. Google already has an internal Trust and Safety team that is dedicated to handling these risks. However, by extending the bug bounty program to cover generative AI, Google aims to promote research in AI safety and ensure that responsible AI becomes the standard practice.
Google has also provided additional details on its criteria for rewarding bug reports in AI products, making it easier for researchers to determine what falls within the scope of the rewards and what doesn’t.
Protecting users
“As part of expanding VRP for AI, we’re taking a fresh look at how bugs should be categorised and reported. Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations),” said Google.
External security researchers are tasked with identifying vulnerabilities in return for financial rewards. In turn, this gives Google the opportunity to address these threats before bad actors can exploit them, giving users a safer product experience.
In addition to including generative AI in its VRP, Google has also introduced the Secure AI Framework to aid in the development of ethical and secure AI applications. The company also revealed its partnership with the Open Source Security Foundation to ensure the integrity of AI supply chains.
Security researchers who wish to participate in Google’s bug bounty program can report a bug or security vulnerability directly via the official website. Last year, the company paid out more than $12 million in rewards to security researchers through the program.