Google has introduced a dedicated bug bounty track for artificial intelligence that rewards researchers who discover vulnerabilities in its products related to AI, and the company is offering researchers as much as $30,000. The program expands the company’s Abuse Vulnerability Reward Program and focuses on abuse-led vulnerabilities and security flaws in Gemini, Search, AI Studio, and Workspace.
The effort, described in a security blog post by Google security engineering managers Jason Parsons and Zak Bennett, is intended to help focus further researcher attention on how AI features can be abused in actual products. Researchers have earned over $430,000 since Google started to take reports related to AI, an indication that the attack surface is expanding as more machine learning makes its way into core services.

Inside the new AI scope for Google’s bug bounties
The company is focusing on the abuse vectors and security holes that are seeing actual exploitation, including unauthorized use of products, denial of service between users, data exposure across accounts, and access control bypasses. These are the sorts of vulnerabilities that stand between quirky model behaviors and real-life damage for users or companies.
Just as important can be what is disqualified. These are off the table: jailbreaks, content-only outputs, and hallucinations — simply because it’s the sort of thing that is subjective or difficult to reproduce consistently at times. Vertex AI vulnerabilities or other Google Cloud product issues should be reported via the separate Google Cloud VRP while maintaining disclosures in line with internal ownership.
Products covered and practical examples explained
Targets range from Gemini experiences to Search generative features, developer tooling in AI Studio, and AI-driven functionality in Workspace. Such behavior is of particular interest to Google when it can be used to steer an AI feature toward executing an unexpected action, increasing access, or revealing information that a user shouldn’t have.
Examples of potential qualifying bugs would be a prompt-triggered workflow in Workspace that leaks another user’s data, a model-triggered action taken without proper consent, or an abusive capability in Search which can lead to cross-user spam or throttling. The point is to show systemic risk, not a one-off response in a single session.
Payouts and bonuses for AI vulnerability reports
Most acknowledged reports are rewarded in an amount between $500 and $20,000 for severity and impact. As an example, a “rogue action” of critical impact that makes a model execute unintended actions might draw upwards of $10,000, and access control bypass around $2,500, assuming clean, reproducible proof.
Google is also adding a novelty bonus of up to $10,000 for new attack paths or research that is truly original, which increases the top prize this year to $30,000. Disclaimer: As with the general VRP, well-written reports and impressive reproduction steps are among factors considered when evaluating a payout, as is clear analysis of any proven issue and its impact — ensuring that we can quickly and responsibly act on it.
How this fits the broader AI security landscape
And AI security bounties are coming to resemble the same: reward basic patterns of breaking (traditional flaws and abuse vectors), but leave out the shades-of-gray model stuff.

OpenAI’s program, for instance, zeroes in on security flaws with the very top prizes set at five figures — and declines to pay out for jailbreaks. Microsoft and other large vendors have launched AI-specific bounties with similar ceilings.
This approach is consistent with recommendations from NIST’s AI Risk Management Framework and threat catalogs like MITRE ATLAS that focus on specific exploitability and user impact. General bug-hunting platforms like HackerOne and Bugcrowd also claim demand for machine learning attack surfaces is on the rise as organizations roll out AI into production.
What researchers should submit for Google’s AI VRP
Strong entries detail the entire chain: configuration, prompts and system instructions, model versions, settings and instructions for setting them, steps that you did exactly, plus any adjustments of parameters.
Use test accounts to replicate the situation, avoid unnecessary data gathering, and demonstrate cross-user/cross-tenant impact when applicable — simple policy bypass text or one-off toxic output won’t meet this bar.
Strive for an impact that a product engineer can go and verify and fix. If you can show that an AI feature can be tricked to exceed its intended permissions or subvert platform defenses, then you are firmly in the payout lane.
Why this AI security bug bounty program matters
Because assistants write emails, synopsize documents, and initiate actions on someone’s behalf, small design issues can grow into significant security problems. Bounties effectively use a global community of researchers as an early-warning system, speeding up the feedback loop before abuse becomes mainstream.
In setting aside up to $30,000 for new, high-impact AI bugs, Google is saying that fixing generative features is a matter of great urgency. The message to researchers couldn’t be clearer: Bring us reproducible abuse paths with real-world impact, and the rewards will come.