Security researchers say they coerced Google’s Gemini assistant into disclosing private Google Calendar details by planting instructions inside a calendar invite, highlighting how “indirect prompt injection” can turn everyday productivity tools into data exfiltration channels.
The team at Miggo Security detailed the technique in a published report, with initial coverage by BleepingComputer. Their claim: A single unsolicited calendar invite, crafted with hidden instructions, was enough to make Gemini summarize a user’s confidential meetings and quietly ship that summary back to the attacker.

How the Calendar Attack Worked to Exfiltrate Data
According to Miggo Security, the attacker sends a calendar invite to a target and embeds a set of instructions in the event text. Those instructions tell Gemini to do three things the next time the user asks about their schedule: summarize all meetings on a specified day, create a new calendar event containing that summary, and reassure the user the time slot is free.
When the user later asks Gemini about their agenda, the assistant—designed to ingest event text to be helpful—parses the malicious instructions. The result, researchers say, is a new event that includes a summary of the target’s private meetings in its description. Because the attacker is a recipient on that new event, they can see the exposed details. To the user, Gemini reportedly answers that the period is open, masking the exfiltration.
Crucially, this does not require breaching Google’s authentication. It leverages the model’s tool-using behavior: the assistant reads calendar content, interprets embedded text as guidance, and performs actions a legitimate user could—albeit at the attacker’s behest. That makes the issue a design and hardening challenge rather than a traditional account compromise.
Why Indirect Prompt Injection Matters for Calendars
This class of attack—indirect prompt injection—targets the data and artifacts an AI agent consumes, not the user directly. Instead of persuading the person, the attacker persuades the model via embedded instructions in files, emails, web pages, or, in this case, calendar invites. When the assistant later processes that content to assist the user, it executes the attacker’s plan.
Security groups have been warning about this pattern for more than a year. The OWASP Top 10 for Large Language Model Applications lists prompt injection as a leading risk. The UK National Cyber Security Centre and partner agencies have issued guidance cautioning that browsing or tool-enabled assistants can be tricked by content that looks benign to humans but carries directives the model follows.
What makes calendars potent is their “ambient authority.” Event descriptions, locations, and guest lists feel low risk, yet agentic assistants treat them as trusted inputs. If a single external invite can trigger actions across a user’s workspace—summarizing meetings, creating events, or drafting emails—the blast radius of a small prompt grows quickly.

Implications for AI Productivity Suites and Security
As AI assistants become the front door to email, documents, and calendars, the security model shifts from user consent at the app level to intent verification at the action level. The question isn’t just “Does the app have calendar access?” but “Is this specific action consistent with the user’s intent here and now?”
For enterprises, the scenario underscores the need for guardrails wherever assistants read semi-structured content. Combining sensitive-data detectors, origin signals (external vs. internal), and explicit user confirmations for cross-entity actions can reduce risk. Even simple hygiene—like disabling automatic addition of external invites or restricting event visibility—can cut off easy injection paths.
What the Researchers Recommend to Mitigate Risks
Miggo Security urges AI vendors to attribute intent to requested actions and to scrutinize the provenance of the instructions. In practice, that means:
- Intent checks before sensitive tool use, especially when instructions originate from external content.
- High-friction prompts for actions with exfiltration potential, such as summarizing meetings or creating events that include private details.
- Content provenance and policy cues so the model treats external calendar text as untrusted data, not as operational guidance.
- Least-privilege scopes for assistants and granular audit logs so administrators can detect anomalous event creation or data movement.
What We Know and What We Do Not About This Issue
The report describes a reproducible path to leakage but does not claim a bypass of Google’s authentication or access controls. It demonstrates how model behavior, when mixed with calendar tooling, can be steered by malicious text to perform unintended actions. The publication does not include a vendor response, patch details, or real-world abuse statistics.
Still, the case fits a broader pattern security teams have observed: when generative AI reads user-owned content to be helpful, that content becomes an attack surface. Whether it’s a web page, PDF, email thread, or calendar invite, if instructions are there, a tool-using model may follow them unless it is trained and constrained not to.
The takeaway for organizations is straightforward. Treat all external inputs to AI assistants as untrusted by default, deploy policy and permission prompts around sensitive actions, and assume adversaries will use mundane collaboration artifacts to plant instructions. For vendors, the bar is to build assistants that ask “Should I?” as often as they ask “Can I?”
