Anthropic is finally giving its Claude assistant an upgrade that customers have long desired: persistent memory. The feature — now available to Team and Enterprise customers — permits Claude to remember projects, preferences and organizational context across chats. That brings it on par with features already provided by Google’s Gemini and OpenAI’s ChatGPT.
In addition to memory, Anthropic is launching incognito chats for everyone: messages that don’t show up in history and don’t add to memory. The pivot reflects a broader industry move toward assistants that act less like disposable chatbots and more like persistent collaborators baked into daily workflows.

What Claude’s memory accomplishes now
Claude’s memory is built to remember steady, work-focused details: client needs, style and formatting preferences, team workflows and project-specific facts. If you categorize work by project, the assistant keeps separate “self-contained” memories for each one, mitigating cross-talk between unrelated projects.
It’s restricted to Team and Enterprise plans; it’s not yet available to individual Pro and Max users. In settings, users can turn memory on, see exactly what Claude has picked up (if it’s retained anything), and edit or trim these “memory summaries” to keep things coherent. Most important of all, admins can pull the plug on memory companywide whenever they want.
Anthropic says it is rolling the feature out cautiously in light of safety concerns. Its focus on professional working context rather than personal archives is meant to reduce the risk of storing private or sensitive data. Memory can also be imported from other AI tools into Claude or exported elsewhere, preserving portability across platforms.
In reality, that means fewer recurring reminders.
A product manager can request a weekly update without re-specifying audience, tone and key metrics; a consulting team can have Claude draft client-ready slides in the firm’s house style without restating it each time.
How It Compares With ChatGPT and Gemini
OpenAI’s ChatGPT now offers an optional Memory feature that stores user preferences and facts between sessions. It’s available to all users, can be toggled off or cleared, and is complemented by Temporary Chat, which prevents history from being stored. ChatGPT also supports custom GPTs that carry their own long-term context, a common pattern for specialized tasks.
Google’s Gemini persistently stores preferences and profile data, and it can tap into linked information from Workspace accounts under admin controls, giving it a strong sense of organizational context. Higher tiers add personalization, which many teams rely on to keep output consistent across documents and chats.
Anthropic’s approach falls squarely into this competitive middle ground: memories at the granular project level, visible and editable summaries, and enterprise-wide on/off switches.
The differentiator to watch: how well Claude balances recall with accuracy, capturing stored context that actually assists rather than cementing outdated assumptions.
Privacy, control, and incognito chats
Incognito conversations are open to all and function much like a clean room: no history, no memory updates. This matches the “temporary” or “no history” modes that peers provide, and is increasingly a baseline for sensitive work.
For companies, the governance story counts as much as the feature list. Anthropic offers visibility into what’s being remembered, editing tools to correct drift and administrative kill switches. The company says it’s considering how to avoid storing sensitive information and will iterate before extending access, a strategy that aligns with business risk frameworks from bodies such as NIST and ISO.
The import and export of memory is notable. Portability mitigates lock-in and enables auditing, an increasingly necessary capability in regulated industries where teams must show regulators how model outputs were produced and what contextual data informed them.
Why persistent memory is important for AI at work
Persistent memory makes a chatbot more of a colleague. It reduces prompt overhead, ensures consistency, and speeds up requests you’ve answered before, like client status notes, brand-compliant drafts or backlog grooming. For many firms these incremental advances accumulate — one reason an oft-cited McKinsey Global Institute report estimates that generative AI could unlock trillions of dollars in value through activities including sales, software development and customer service.
The wrinkle: memory must be accurate, timely and reversible. Teams should periodically revisit stored summaries, prune out-of-date details and use incognito mode for sensitive or one-off questions. Explicit policies and admin controls help ensure convenience doesn’t come at the expense of confidentiality.
What to watch next
Anthropic will review and test before exposing memory to additional users. Look for tighter integrations with knowledge bases, more granular retention policies and more robust audit trails. As Gemini and ChatGPT further open up personalization, the battle will be over reliability: which assistant remembers the right things, forgets the wrong ones, and does both under enterprise-grade guardrails.