Now, ChatGPT is following you onto the web — whose conveniences may come with a memory trade-off that you didn’t sign up for. Early, hands-on tests of OpenAI’s new Comet Browser found the AI assistant still hung on to details from a user’s private browsing sessions, like the name of a person’s doctor; while other researchers showcased how AI browsers can be secretly manipulated by ulterior commands. Collectively, the findings raise pressing questions about how much an AI-enhanced browser should know — and for how long.
What the tests revealed about AI browser memory risks
In tests published by The Washington Post with technical assistance from the Electronic Frontier Foundation, testers found Comet’s continued memory of search history, page context, and sensitive searches persisted long after a browsing session was over. In one instance, the assistant recalled a search for abortion care that included the name of the clinician — information the user hadn’t explicitly re-entered. That’s not a bug; it’s a feature of “agentic” browsing, in which the AI maintains context to be more helpful across tasks.
- What the tests revealed about AI browser memory risks
- Why AI browsers retain so much personal browsing data
- The hidden attack surface created by AI browsing agents
- Why health-related browsing data poses unique privacy risks
- How To Use AI Browsers Better Without Giving Up Too Much
- What developers should build next to safeguard AI browsing

The privacy implications are obvious. And since it operates at the level of your browser, an assistant like that can connect dots that regular history logs don’t, piecing together bits and pieces of your behavior into a narrative that is comparatively simple to dig up again. And even if the data housed never leaves your device, it doesn’t matter because just being able to raise it from the dead in plain language dramatically changes how you have to think about risks when researching health or legal or financial issues.
Why AI browsers retain so much personal browsing data
AI browsing systems generally combine three components: what’s on the page, what you type, and a memory store to maintain continuity. Under the hood, that often takes the form of information retrieval-augmented generation (IRAG) facilitated by caches or databases that transcend a single page view. The trade-off is less nagging repetition. The flip side is long-lived breadcrumbs that may contain names, addresses, appointment details, or case numbers — in other words, the sorts of data privacy law identifies as sensitive.
Some vendors claim these memories help to improve assistance, although policies can differ regarding where they live (local vs. cloud), how long they stick around, and whether they can be used to train models.
Defaults and visible controls matter. The absence of simple, global “forget” switches or per-task ephemerality means the thing that was private context is potentially a permanent reference file.
The hidden attack surface created by AI browsing agents
Privacy isn’t the only concern. Brave Software showed AI browsers could process dangerous instructions embedded in images — its own twist on prompt injection where what the site wants executed isn’t visible to users. Should those whispery signals be swallowed by an assistant, it might be pushed to provide a peek at prior context, leak tokens, or take action beyond the page’s intention. Security researchers have been concerned about prompt injection ever since a few large language models started reading live web content; putting commands in media ups the ante and reduces chances of detection.

Here’s where it gets interesting, because memory plus injection equals amplification. But an attacker isn’t just trying to gain a one-off reply — they can tempt the assistant into pulling up whatever it “remembers” about you and sending it off elsewhere. Standard browser protections were not created with this new layer of machine-readable instructions that exists alongside human-readable content.
Why health-related browsing data poses unique privacy risks
Consumer tech is largely not covered by medical privacy laws, and HIPAA typically does not apply to browsers or AI assistants. Regulators have already fined health and wellness apps for selling sensitive data to advertisers; the Federal Trade Commission’s case against GoodRx highlighted how rapidly “researching care” becomes a data trail. If an AI browser maintains memory of your condition, clinic, and doctor, then that profile potentially has more depth than what is possible from a classic cookie log — even with no ad network in play.
EU and UK data protection authorities have similarly emphasized data minimization as a safeguard in generative AI, consistent with the NIST AI Risk Management Framework’s instruction to restrict collection and retention.
An aide that remembers far too personal browsing details by default is difficult to reconcile with those principles.
How To Use AI Browsers Better Without Giving Up Too Much
- Switch off long-term memory where possible and opt for per-task or per-tab context. If there’s a “memory” toggle on the assistant, turn it off if you’re not already doing so.
- Use a different browser profile or dedicated “AI profile” that can’t see your main history, bookmarks, or cookies.
- Don’t type sensitive questions — health, legal, money — within AI-assisted pages. Run those searches in a plain private window, with the assistant turned off.
- Review data controls regularly. Erase your memories and stored conversations, and determine whether your data is processed locally or in the cloud.
- Use the assistant as a powerful extension. Restrict permissions, refuse cross-site access, and don’t let it off the hook if it’s given free rein to read every page.
- Look for early signs of injection. If a page makes the assistant do something weird, has unexpected behavior, or pops up on the screen, turn it off for that site.
What developers should build next to safeguard AI browsing
Safety-by-default needs to replace convenience-by-default. That means temporary memory unless a user chooses to opt in, visual cues when context is being saved, and one-click purges that really do delete stored knowledge. Security-wise, when handling untrusted pages, assistants should render in hardened sandboxes, strip or quarantine machine instructions/prompts from media files, and adopt off-the-shelf prompt-injection countermeasures from upcoming industry playbooks.
AI on the browser can be revolutionary. But even your roughest sketch of disapproval should snap into focus if a mere casual visit to a medical website turns up in an assistant’s biography of you. Until vendors harden defaults and defenses, treat AI browsing like a live microphone: speak with caution or just mute the thing altogether when the stakes are high.