
Anthropic Asks Users to Share Chats for AI Training

By Bill Thompson
Technology | 5 Min Read
Last updated: September 10, 2025 1:09 pm

Anthropic is changing the way it treats consumer conversations, asking people who use its Claude chatbot whether their interactions can be used to train future models. Unless a user opts out, the company now plans to use consumer chats and coding sessions for training and to retain that data for longer.

What Anthropic changed

Until recently, Anthropic said, consumer prompts and outputs were deleted after a brief period. The new policy reverses the default for individual users: conversations can now be incorporated into model training and retained for longer unless users opt out.


Anthropic frames the move as a choice users can make to help future Claude releases improve in content moderation, coding assistance, and reasoning. The company argues that aggregating consumer data can make safety systems more accurate and models more capable.

Who is affected

The change affects consumer versions of Claude, including the free and paid tiers and Claude Code. Business and government offerings, including Claude for Work, Claude for Education, Claude Gov, and the API, are subject to separate data agreements and are not covered by the new consumer training policy.

The split mirrors industry practice, in which enterprise agreements often feature specific data protections that are separate from consumer terms.

Why Anthropic wants conversational data

Large language models need large volumes of high-quality, real-world text to improve. User dialogues and coding interactions offer countless examples that can hone reasoning, debugging, and safety filters, capabilities that can help Anthropic challenge rivals like OpenAI and Google.

Anthropic frames the change as mutually beneficial: better models for users and stronger safety detection. Observers note, however, that access to millions of consumer interactions is also a strategic asset in the competitive AI market.

Privacy concerns and regulatory scrutiny

Privacy advocates and regulators have cautioned that AI privacy settings can be confusing. The U.S. Federal Trade Commission has warned companies not to “hide” material changes or “bury” consent notices in legalese, and officials have signaled that they could take action when a disclosure is confusing.


Design decisions in the rollout have raised eyebrows: existing users see a large “Accept” button alongside a much smaller training-permission toggle that is switched on by default. Critics say the layout means people may agree without realizing what they are consenting to, an interface shortfall that caught the attention of outlets like The Verge.

The move comes as other AI providers face legal pressure over retention rules. In a lawsuit brought by The New York Times and other publishers over ChatGPT, OpenAI is fighting a court order that would require it to retain chat conversations for longer, a requirement that COO Brad Lightcap said runs counter to the company’s privacy promises.

How users can respond

Users who don’t want their conversations used for training can decline through Anthropic’s consent prompt by switching the training toggle off. For individuals handling sensitive information, experts typically point to business plans, which include explicit data-use agreements and zero-retention options.

Experts recommend reading privacy policies closely, exporting or deleting sensitive conversations, and contacting platform support with questions. Consumer watchdogs and privacy groups can also provide guidance on rights and recourse.

What this means for the AI ecosystem

Anthropic’s transition is emblematic of an industrywide rebalancing: companies are weighing their dependence on training data against growing consumer and regulatory demands for transparency. Expect similar policy shifts from rivals as companies pursue both scale and compliance.

Regulators, courts, and public scrutiny will likely keep shaping how AI companies seek consent and store conversational data, and users will face ongoing decisions about how much of their interactions go toward building the next generation of models.

Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.