Anthropic is changing how it handles consumer conversations, asking people who use its Claude chatbots to decide whether their messages can be used to improve future models. The company now intends to use consumer chats and coding sessions for training unless a user explicitly opts out, and it will retain data longer for users who do not opt out.
What Anthropic changed
Until recently, Anthropic said consumer prompts and outputs were routinely deleted after a short period. The new approach flips that default for individual users: conversations may be incorporated into model training, and retained longer when users do not decline the program.
Anthropic frames the move as a user choice that will help improve content moderation, coding assistance and reasoning in future Claude releases. The company says aggregated consumer data can make safety systems more accurate and models more capable.
Who is affected
The change applies to consumer versions of Claude, including free and paid tiers and Claude Code. Business and government offerings such as Claude for Work, Claude for Education, Claude Gov and API customers remain under different data agreements and are not covered by the new consumer opt‑out policy.
The distinction mirrors industry practice where enterprise contracts often include explicit data protections separate from consumer terms.
Why Anthropic wants conversational data
Large language models require vast volumes of high‑quality, real‑world text to improve performance. User dialogues and coding interactions provide diverse examples that can sharpen reasoning, debugging and safety filters—capabilities that help companies compete with rivals such as OpenAI and Google.
Anthropic presents the change as mutual benefit: better models for users and stronger safety detection. Observers point out, however, that access to millions of consumer interactions is also strategically valuable in a fiercely competitive AI market.
Privacy concerns and regulatory scrutiny
Privacy advocates and regulators have warned the public about the complexity of AI privacy settings. The U.S. Federal Trade Commission has cautioned companies against obscuring material changes or burying consent notices in dense legal text, and officials have signaled enforcement interest when disclosures are unclear.
Design choices in Anthropic’s rollout have raised eyebrows: existing users encounter a prominent “Accept” button while the training‑permission toggle is smaller and switched on by default, prompting critics to say people may agree without realizing the implications. Reporting by outlets such as The Verge has highlighted those interface concerns.
The issue comes as other AI providers face legal pressure over retention rules. OpenAI is fighting a court order, issued in a lawsuit brought by The New York Times and other publishers, that demands indefinite retention of ChatGPT conversations; OpenAI COO Brad Lightcap has argued the order conflicts with the company’s privacy promises to users.
How users can respond
Users who prefer not to contribute their conversations to training should review Anthropic’s consent prompt and switch off the training toggle before accepting. For people handling sensitive material, organizations typically recommend enterprise plans that offer explicit data‑use contracts and zero‑retention options.
Experts urge reading privacy policies carefully, exporting or deleting sensitive content before interacting, and contacting platform support with questions. Consumer watchdogs and privacy groups can also advise on rights and recourse.
What this means for the AI ecosystem
Anthropic’s move reflects a broader industry pivot: firms are balancing the need for training data with rising consumer and regulator demands for transparency. Expect similar policy adjustments from competitors as companies seek both scale and compliance.
Regulators, courts and public scrutiny are likely to keep shaping how AI firms ask for consent and retain conversational data, and users will face ongoing choices about how much of their interactions are used to build the next generation of models.