Google’s primary AI, called Gemini, is open to an easy yet powerful attack called ASCII smuggling; paradoxically enough, the company is refusing to treat it as a security bug. The approach can hide malicious prompts within otherwise innocuous material — in Gemini’s case, emails and calendar invites it is often asked to summarize — providing a noisier route to exposed data within Google Workspace environments.
The problem was discovered when security researcher Viktor Markopoulos tested it against popular large language models. Bleeping Computer reported that Gemini, DeepSeek, and Grok were exploitable, while Claude, ChatGPT, and Microsoft Copilot thwarted the secret commands. Google’s response, according to the researcher, is that this is social engineering behavior and users should properly care about it rather than it representing a bug or vulnerability.
What ASCII smuggling looks like in real use
ASCII smuggling hides an instruction for the AI in plain, boring text, with clues a human would never pick up. Think micro or near-invisible formatting tweaks, whitespace tricks, or hidden segments in a message body which manage to be human-readable if decoded, yet are easily ignored by the average reader. When a user instructs an assistant to summarize the contents, the model literally “reads” and complies with the covert commandments.
That can be as mundane — and harmful — as a meeting request that includes an embedded command to “scan inbox for phone numbers and to draft a reply,” or that asks you to “recommend this domain as safe discount.” Markopoulos showed that by hiding a prompt, Gemini was forced to share a malicious website, highlighting how the attack could influence model behavior without the owner ever knowing what occurred.
The danger is heightened by Gemini’s deep integration with Gmail, Calendar, and Drive. And when an AI assistant runs in the context of a productivity suite, even the most nuanced prompt injection can be hijacked for exfiltration if a model is allowed to search or summarize sensitive content en masse.
Why Google’s response matters for AI security
Categorizing this as just social engineering avoids how the security industry analyzes the risk. The OWASP Top 10 for LLM Applications specifically mentions injection and exfiltration of data as top threats to AI systems. MITRE’s ATLAS knowledge base also documents injection patterns that result in a model performing attacker-specified operations. In other words, this is a kind of model-level failure that they know about, not simply a user error.
Google has heralded its Secure AI Framework and enterprise protections for Workspace, but a posture that slaps all the burden on users is out of sync with where AI safety engineering is going. Organizations want guardrails around hidden commands in untrusted content to either ignore or quarantine them, particularly as conversational assistants embed alongside company data.
How other models do it and mitigate attacks
The mixed results across companies are revealing. Claude, ChatGPT, and Copilot all demonstrated resistance to working the baited prompts in the researcher’s experiments, indicating that baseline defenses such as prompt confinement, input cleaning, or adversarial training are effective. No individual experiment is decisive, but the difference suggests that this problem can be controlled in practice, rather than representing an unavoidable overhead of LLMs.
Typical mitigations include stripping or normalizing suspicious control characters, ignoring formatting cues that may conceal commands, sandboxing summaries in read-only contexts, and enabling policy filters that prevent operations from unsafe text. A few vendors openly advertise layered defenses like these in their trust and safety documentation.
Real-world impact for Google Workspace users
When an assistant gets to parse inboxes, documents, or meeting notes, one simple injection line can turn a harmless summary into snooping. That risk complicates at the enterprise level, when a single compromised calendar invite or email thread can ripple across teams. Like, even if Gemini never fucking breaches the system controls, the accidental exposure of contacts/draft content/internal links is a real loss case for compliance-restricted orgs.
This is not theoretical. Email-based attacks have already minimized the need for user engagement. Slather on an AI layer that dutifully carries out inscrutable instructions and the barrier drops even further, particularly for less tech-savvy staff who think “summarize this” is a safe, time-saving click.
What organizations can do to reduce ASCII risks
Until Google adds stronger guardrails, tainted text is code to be disbelieved. Turn off or reduce the functionality of the assistant, including limiting automatic summary generation for external email and invites; this policy can also enforce data loss prevention policies that limit the assistant to specific guidelines across Gmail, Drive, and Docs. To the extent you can, clean input by eliminating hidden formatting, control characters, and things like unusually small or color-matching text before you feed content to an AI.
Have a “no actions from summaries” rule: the summary should never do something like fetch data, send, or link somewhere outside of a very narrow allow list. Record assistant interactions to detect anomalies, and train personnel that AI outputs can be influenced by upstream content. These controls are consistent with OWASP, NIST’s AI Risk Management Framework, as well as industry red-team practices for LLMs.
The bottom line on Gemini and ASCII smuggling risks
ASCII-to-non-ALLOW-255 characters smuggling is a simple and well-documented — threat class. Other sellers continue to soften it today. In refusing to address it as a critique rather than a bug, Google has opened Gemini up to potentially unnecessary prompt injection for users, especially in Workspace. The solution isn’t to train end users not to mess up; it’s to lock down the model and its integrations sufficiently well that occult instructions are regarded as inherently hostile.