
Claude has memory on par with Gemini and ChatGPT

By Bill Thompson
Last updated: October 29, 2025 12:10 pm
Technology
6 Min Read

Anthropic is finally giving its Claude assistant an upgrade customers have long asked for: persistent memory. The feature, now available to the company's Team and Enterprise customers, lets Claude remember projects, preferences, and organizational context across chats, bringing it on par with capabilities already offered by Google's Gemini and OpenAI's ChatGPT.

In addition to memory, Anthropic is rolling out "incognito chats" to everyone: conversations that don't show up in history and don't add to memory. The move reflects a broader industry shift toward assistants that act less like disposable chatbots and more like persistent collaborators baked into daily workflows.

Table of Contents
  • What Claude’s memory accomplishes now
  • How It Compares With ChatGPT and Gemini
  • Privacy, control, and incognito chats
  • Why persistent memory is important for AI at work
  • What to watch next

What Claude’s memory accomplishes now

Claude’s memory is built to retain durable, work-focused details: client needs, style and formatting preferences, team workflows, and project-specific facts. If you organize work by project, the assistant keeps a separate, self-contained memory for each one, preventing cross-talk between unrelated projects.
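
The article doesn’t describe how Anthropic actually implements this scoping, but a minimal Python sketch can illustrate the idea of self-contained, per-project memory. All names here (ProjectMemory, AssistantMemory, remember, context_for) are hypothetical illustrations, not Anthropic’s API.

```python
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class ProjectMemory:
    """Hypothetical store of durable, work-focused facts for one project."""
    summaries: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.summaries.append(fact)

    def forget(self, index: int) -> None:
        # Users can edit or trim summaries to keep context coherent.
        del self.summaries[index]


class AssistantMemory:
    """Keeps each project's memory self-contained to avoid cross-talk."""

    def __init__(self) -> None:
        self._projects: dict[str, ProjectMemory] = defaultdict(ProjectMemory)
        self.enabled = True  # an admin-style switch could disable memory entirely

    def remember(self, project: str, fact: str) -> None:
        if self.enabled:
            self._projects[project].remember(fact)

    def context_for(self, project: str) -> list[str]:
        # A chat about one project only ever sees that project's summaries.
        return list(self._projects[project].summaries) if self.enabled else []


memory = AssistantMemory()
memory.remember("acme-redesign", "Client prefers concise, bulleted status updates.")
memory.remember("internal-wiki", "Docs follow the team's US-English style guide.")
print(memory.context_for("acme-redesign"))  # no bleed-over from other projects
```

The key property the sketch captures is isolation: context injected into a conversation comes only from the named project, which is what keeps unrelated work from contaminating the assistant’s answers.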

Memory is restricted to Team and Enterprise plans for now; it isn’t yet available to individual Pro and Max users. In settings, users can turn memory on, see exactly what Claude has retained, and edit or trim those “memory summaries” to keep them coherent. Most important of all, administrators can switch memory off companywide whenever they want.

Anthropic says it is still experimenting with the feature in light of safety concerns. Its focus on professional working context rather than a personal archive reduces the risk of storing private or sensitive data. Memory can also be imported from other AI tools into Claude or exported elsewhere, preserving portability across platforms.

In practice, that means fewer repeated reminders. A product manager can request a weekly update without re-specifying audience, tone, and key metrics; a consulting team can have Claude draft client-ready slides in the firm’s house style while staying within known client constraints.

How It Compares With ChatGPT and Gemini

OpenAI’s ChatGPT already offers an optional Memory feature that stores user preferences and facts between sessions. It’s available to everyone, can be toggled on or off and cleared, and sits alongside Temporary Chat, which keeps a conversation out of history. ChatGPT’s memory also works with custom GPTs that hold their own long-term context, a common pattern for specialized tasks.

Google’s Gemini persistently stores preferences and profile data, and it can tap linked information from Workspace accounts under admin controls, giving it a solid sense of organizational context. Higher tiers add personalization, which many teams rely on to keep output consistent across documents and chats.


Anthropic’s approach falls squarely into this competitive middle ground: memories at the granular project level, visible and editable summaries, and enterprise-wide on/off switches.

The differentiator to watch: how well Claude balances recall with accuracy, surfacing stored context that actually helps rather than cementing outdated assumptions.

Privacy, control, and incognito chats

Incognito conversations are open to all users and function much like a clean room: no history, no memory updates. This is in line with the “temporary” or “no history” modes that peers provide, and it is increasingly a baseline for sensitive work.

For companies, the governance story counts as much as the feature list. Anthropic offers visibility into what is being remembered, editing tools to correct drift, and administrative kill switches. The company says it is working out how to avoid storing sensitive information and will iterate before extending access, an approach that lines up with enterprise risk frameworks from bodies such as NIST and ISO.

The import and export of memory is notable. Portability mitigates lock-in and enables auditing, an increasingly necessary capability in regulated industries where teams must show regulators how models were used and what contextual data they were given.
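
Anthropic hasn’t published an export format, so the following is only a hedged sketch of why even a simple, human-readable snapshot serves both portability and auditing; the JSON layout and function names (export_memory, import_memory, memory_snapshot.json) are assumptions for illustration.

```python
import json
from datetime import datetime, timezone


def export_memory(summaries: dict[str, list[str]], path: str) -> None:
    """Write a hypothetical, human-readable snapshot of memory summaries.

    A plain JSON export like this is enough to review what an assistant
    has retained, hand it to auditors, or move it to another tool.
    """
    snapshot = {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "projects": summaries,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(snapshot, f, indent=2)


def import_memory(path: str) -> dict[str, list[str]]:
    """Load a snapshot written above, or one exported from another assistant."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)["projects"]


summaries = {"acme-redesign": ["Client prefers concise, bulleted status updates."]}
export_memory(summaries, "memory_snapshot.json")
print(import_memory("memory_snapshot.json"))
```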

Why persistent memory is important for AI at work

Persistent memory makes a chatbot feel more like a colleague. It reduces prompt overhead, ensures consistency, and turns around requests you’ve answered before, such as client status notes, brand-compliant drafts, or backlog grooming, more quickly. For many firms these incremental gains accumulate, one reason an oft-cited McKinsey Global Institute report estimates that generative AI could unlock trillions of dollars in value across activities including sales, software development, and customer service.

The wrinkle: memory has to be accurate, timely, and reversible. Teams should periodically revisit stored summaries, prune out-of-date details, and use incognito mode for sensitive or one-off questions. Explicit policies and admin controls help avoid trading confidentiality for convenience.

What to watch next

Anthropic will review and test before exposing memory to more users. Look for tighter integrations with knowledge bases, more granular retention policies, and more robust audit trails. As Gemini and ChatGPT open up personalization further, the battle will be over reliability: which assistant remembers the right things, forgets the wrong ones, and does both under enterprise-grade guardrails.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.