Anthropic is also changing the way it treats consumer conversations, now asking people who use its Claude chatbots to decide whether their interactions can be used to train future models. Unless a user opts out, the company now plans to use consumer chats and coding sessions for training and to retain that data for longer.
What Anthropic changed
Until recently, Anthropic said, consumer prompts and outputs were deleted after a brief period. The new policy flips the default for individual users: unless they opt out, their conversations may be incorporated into model training and retained for longer.
Anthropic is framing the move as a choice users can make to help improve future Claude releases in areas such as content moderation, coding assistance, and reasoning. The company argues that aggregating consumer data can make safety systems more accurate and models more capable.
Who is affected
The change affects consumer versions of Claude, including free and paid tiers and Claude Code. Business and government offerings, including Claude for Work, Claude for Education, Claude Gov, and API customers, are subject to separate data agreements and are not covered by the new consumer training policy.
The split mirrors industry practice, in which enterprise agreements often feature specific data protections that are separate from consumer terms.
Why Anthropic wants conversational data
For large language models, large volumes of high‑quality, real‑world text are critical to improving performance. User dialogs and coding interactions provide countless examples that can sharpen reasoning, debugging, and safety filters, capabilities that can help the company challenge rivals like OpenAI and Google.
Anthropic frames the change as mutually beneficial: better models for users and stronger safety detection. Observers note, however, that access to millions of consumer interactions is also strategically valuable in the competitive AI market.
Privacy concerns and regulatory scrutiny
Privacy advocates and regulators have cautioned that AI privacy settings can be very complex. The U.S. Federal Trade Commission has warned companies not to “hide” material changes or “bury” consent notices in legalese, and officials have signaled that they could take action when disclosures are confusing.
Design decisions in Anthropic’s rollout have raised eyebrows: existing users see a giant “Accept” button paired with a much smaller training‑permission toggle that is switched on by default, a layout critics say could lead people to agree without realizing what it means. The interface shortfall caught the attention of outlets like The Verge, which reported on it.
The move comes as other AI providers face legal pressure over retention rules. OpenAI is fighting a court order, issued in a lawsuit brought by The New York Times and other publishers over ChatGPT, that would require the company to keep chat conversations longer, a requirement OpenAI COO Brad Lightcap said ran counter to the company’s promises of privacy.
How users can respond
Users who don’t want their conversations used for training can decline in Anthropic’s consent prompt or switch off the training setting. For individuals dealing with sensitive information, experts often recommend business plans, which include explicit data‑use agreements and zero‑retention policies.
Experts recommend reading privacy policies closely, exporting or deleting sensitive conversations and other content, and contacting platform support with questions. Consumer watchdogs and privacy groups can also provide guidance about rights and recourse.
What this means for the AI ecosystem
Anthropic’s shift is emblematic of an industrywide balancing act: companies are weighing their dependence on training data against growing consumer and regulatory demands for transparency. Expect similar policy changes from rivals as companies pursue scale and compliance.
Regulators, courts, and public scrutiny will likely keep shaping how AI companies seek consent and store conversational data, and users will face ongoing decisions about how much of their interactions they are willing to contribute to the next generation of models.