FindArticles FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticlesFindArticles
Font ResizerAa
Search
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
Follow US
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
FindArticles © 2025. All Rights Reserved.
FindArticles > News > Technology

Tests Find ChatGPT Browser Leaves Sensitive Data

Gregory Zuckerman
Last updated: October 26, 2025 5:20 pm
By Gregory Zuckerman
Technology
8 Min Read
SHARE

Now, ChatGPT is following you onto the web — whose conveniences may come with a memory trade-off that you didn’t sign up for. Early, hands-on tests of OpenAI’s new Comet Browser found the AI assistant still hung on to details from a user’s private browsing sessions, like the name of a person’s doctor; while other researchers showcased how AI browsers can be secretly manipulated by ulterior commands. Collectively, the findings raise pressing questions about how much an AI-enhanced browser should know — and for how long.

What the tests revealed about AI browser memory risks

In tests published by The Washington Post with technical assistance from the Electronic Frontier Foundation, testers found Comet’s continued memory of search history, page context, and sensitive searches persisted long after a browsing session was over. In one instance, the assistant recalled a search for abortion care that included the name of the clinician — information the user hadn’t explicitly re-entered. That’s not a bug; it’s a feature of “agentic” browsing, in which the AI maintains context to be more helpful across tasks.

Table of Contents
  • What the tests revealed about AI browser memory risks
  • Why AI browsers retain so much personal browsing data
  • The hidden attack surface created by AI browsing agents
  • Why health-related browsing data poses unique privacy risks
  • How To Use AI Browsers Better Without Giving Up Too Much
  • What developers should build next to safeguard AI browsing
ChatGPT browser data exposure risk highlighted by security warning on screen

The privacy implications are obvious. And since it operates at the level of your browser, an assistant like that can connect dots that regular history logs don’t, piecing together bits and pieces of your behavior into a narrative that is comparatively simple to dig up again. And even if the data housed never leaves your device, it doesn’t matter because just being able to raise it from the dead in plain language dramatically changes how you have to think about risks when researching health or legal or financial issues.

Why AI browsers retain so much personal browsing data

AI browsing systems generally combine three components: what’s on the page, what you type, and a memory store to maintain continuity. Under the hood, that often takes the form of information retrieval-augmented generation (IRAG) facilitated by caches or databases that transcend a single page view. The trade-off is less nagging repetition. The flip side is long-lived breadcrumbs that may contain names, addresses, appointment details, or case numbers — in other words, the sorts of data privacy law identifies as sensitive.

Some vendors claim these memories help to improve assistance, although policies can differ regarding where they live (local vs. cloud), how long they stick around, and whether they can be used to train models.

Defaults and visible controls matter. The absence of simple, global “forget” switches or per-task ephemerality means the thing that was private context is potentially a permanent reference file.

The hidden attack surface created by AI browsing agents

Privacy isn’t the only concern. Brave Software showed AI browsers could process dangerous instructions embedded in images — its own twist on prompt injection where what the site wants executed isn’t visible to users. Should those whispery signals be swallowed by an assistant, it might be pushed to provide a peek at prior context, leak tokens, or take action beyond the page’s intention. Security researchers have been concerned about prompt injection ever since a few large language models started reading live web content; putting commands in media ups the ante and reduces chances of detection.

A web browser displaying the Comet AI website on the left and an AI assistant interface on the right, both within a dark-themed desktop environment.

Here’s where it gets interesting, because memory plus injection equals amplification. But an attacker isn’t just trying to gain a one-off reply — they can tempt the assistant into pulling up whatever it “remembers” about you and sending it off elsewhere. Standard browser protections were not created with this new layer of machine-readable instructions that exists alongside human-readable content.

Why health-related browsing data poses unique privacy risks

Consumer tech is largely not covered by medical privacy laws, and HIPAA typically does not apply to browsers or AI assistants. Regulators have already fined health and wellness apps for selling sensitive data to advertisers; the Federal Trade Commission’s case against GoodRx highlighted how rapidly “researching care” becomes a data trail. If an AI browser maintains memory of your condition, clinic, and doctor, then that profile potentially has more depth than what is possible from a classic cookie log — even with no ad network in play.

EU and UK data protection authorities have similarly emphasized data minimization as a safeguard in generative AI, consistent with the NIST AI Risk Management Framework’s instruction to restrict collection and retention.

An aide that remembers far too personal browsing details by default is difficult to reconcile with those principles.

How To Use AI Browsers Better Without Giving Up Too Much

  • Switch off long-term memory where possible and opt for per-task or per-tab context. If there’s a “memory” toggle on the assistant, turn it off if you’re not already doing so.
  • Use a different browser profile or dedicated “AI profile” that can’t see your main history, bookmarks, or cookies.
  • Don’t type sensitive questions — health, legal, money — within AI-assisted pages. Run those searches in a plain private window, with the assistant turned off.
  • Review data controls regularly. Erase your memories and stored conversations, and determine whether your data is processed locally or in the cloud.
  • Use the assistant as a powerful extension. Restrict permissions, refuse cross-site access, and don’t let it off the hook if it’s given free rein to read every page.
  • Look for early signs of injection. If a page makes the assistant do something weird, has unexpected behavior, or pops up on the screen, turn it off for that site.

What developers should build next to safeguard AI browsing

Safety-by-default needs to replace convenience-by-default. That means temporary memory unless a user chooses to opt in, visual cues when context is being saved, and one-click purges that really do delete stored knowledge. Security-wise, when handling untrusted pages, assistants should render in hardened sandboxes, strip or quarantine machine instructions/prompts from media files, and adopt off-the-shelf prompt-injection countermeasures from upcoming industry playbooks.

AI on the browser can be revolutionary. But even your roughest sketch of disapproval should snap into focus if a mere casual visit to a medical website turns up in an assistant’s biography of you. Until vendors harden defaults and defenses, treat AI browsing like a live microphone: speak with caution or just mute the thing altogether when the stakes are high.

Gregory Zuckerman
ByGregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
Latest News
Soundcore P20i Earbuds Drop to $19.98 in Major Sale
Automattic Undercutting WP Engine Over WordPress Trademarks
Home Depot Puts 12 Foot Skeleton On Sale For First Time
Netgear Nighthawk Wi-Fi 6 Router Drops to $159.97
Homey extending support for Pro Hubs until 2031
Pixel users report 911 calls trigger reboot loops
PC too old for Windows 11? It’ll work after a 5-minute upgrade
Rivian Pays $250 Million To Settle R1 Price Hike Lawsuit
ASUS ROG Strix Monitor Down To $179 At Amazon
Nomad Stratos Band Is Now The Top Pick For Apple Watch
Ember Mug 2 Drops to an All-Time Low Price at $89.99
Enhanced Games Tests Steroids As Demographic Fix
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.