China is looking at adding “digital well-being” nudges to AI companionship. A proposed regulation addressing anthropomorphic chatbots would have providers prompt users to take a break once a single sitting exceeds two hours. The approach stops short of a hard cap, but it reflects rising concern about long, intense human–AI encounters.
What’s in the draft rules for anthropomorphic chatbots
The proposal covers what regulators call “anthropomorphic interactive services”: systems that mimic humanlike reasoning and personality traits and hold conversations that can feel less scripted and more emotionally engaging, whether through text, audio, images, or video. In practice, that means the chatbots and voice agents that present themselves as companions, assistants, or confidants.
One particularly noteworthy clause requires a break reminder after two hours of continuous use, delivered as a pop-up or a similar prompt. The rule is framed as an active reminder rather than an automatic lockout: the onus is on platform providers to detect sustained engagement and gently encourage users to pause or log off.
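The draft doesn’t specify an implementation, but the mechanism boils down to starting a clock on the first interaction of a sitting and surfacing a one-time, non-blocking prompt once two hours elapse. Here is a minimal Python sketch of that pattern; the class, method names, and message are hypothetical, and only the threshold comes from the draft.

```python
from datetime import datetime, timedelta

BREAK_THRESHOLD = timedelta(hours=2)  # the draft's two-hour mark

class SessionNudger:
    """Tracks one user's sitting and fires a single, non-blocking break reminder."""

    def __init__(self) -> None:
        self.session_start: datetime | None = None
        self.reminder_sent = False

    def on_user_activity(self, now: datetime) -> str | None:
        # Start the clock on the first interaction of a sitting.
        if self.session_start is None:
            self.session_start = now
            self.reminder_sent = False
        # Nudge, don't lock out: surface the prompt once, keep the session alive.
        if not self.reminder_sent and now - self.session_start >= BREAK_THRESHOLD:
            self.reminder_sent = True
            return "You've been chatting for two hours. Consider taking a break."
        return None

    def on_session_end(self) -> None:
        self.session_start = None
        self.reminder_sent = False
```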
The draft also ties these systems to familiar content restrictions: promoting “core socialist values” and avoiding outputs that threaten national security, national unity, or social order. These content requirements fold into China’s broader information governance model for internet platforms and recommendation algorithms.
Special rules for minors and older adults
The draft sets out targeted protections for vulnerable groups. Emotional-companionship features aimed at minors would require express guardian consent, parental-control settings, and usage reports made available to guardians. Providers would also need rigorous age verification, a growing standard for AI tools that simulate intimacy or offer therapy-like support.
For older adults, companionship use cases are explicitly encouraged, but with safety scaffolding: platforms would have to collect an emergency contact when seniors register, reflecting concerns about isolation and crisis response. China’s demographic transformation adds urgency. The World Health Organization reports that China is one of the fastest-aging societies, with people aged 60 and over expected to make up about 28% of the population by 2040.
Safety goals and enforcement under the draft rules
The text is meant to bar chatbots from promoting, glorifying, or implying self-harm or suicide, and to curb emotional manipulation or verbal abuse that can degrade personal dignity and mental health. Those goals reflect global concerns in the wake of high-profile cases, such as a 2023 episode in Belgium in which prolonged chatbot conversations preceded a user’s death, as covered by European media and wire services.
Enforcement would rest on national oversight and the authority to suspend services for violations. The draft is open for public comment until Jan. 25, 2026, meaning details such as detection thresholds and incident-report recordkeeping requirements could change before the rules are finalized.
How it fits global trends and China’s local platform scene
China’s two-hour nudge revives its earlier “anti-addiction” playbook from gaming and the teen modes on short-video apps, which often cap usage time and surface reminders. It also echoes global trends: the UK’s Online Safety Act, the EU’s platform risk audits, and a mounting push toward age checks and parental controls on leading AI and social platforms.
Some American companies have moved in the same direction. OpenAI has implemented parental controls and pledged stronger age verification, and Character.AI has restricted open-ended chats for users under 18. The Chinese draft would set such guardrails as a baseline expectation for anthropomorphic services and applications operating in the country.
China’s ideological content rules, however, also make it a special case. Providers of Chinese systems such as Baidu’s Ernie, Alibaba’s Qwen, and iFlytek’s Spark already face political and cultural constraints that Western services rarely encounter. The draft extends those expectations to chatbots designed to come across as humanlike and emotive.
What providers need to figure out next for compliance
Shipping a “two-hour continuous use” nudge is simple in concept but raises product questions. Providers will need accurate session tracking across devices, a threshold for distinguishing passive presence from active engagement, and a UX that makes the reminder feel helpful rather than punitive. For enterprise deployments, where long sessions can be work-related, role-based controls may also be needed to avoid interfering with legitimate business use; one possible idle-threshold heuristic is sketched below.
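One plausible way to define “continuous use” (the draft leaves the term open, so this heuristic is our assumption) is to stitch timestamped interaction events into sittings and reset the clock whenever the gap between events exceeds an idle threshold. The function name and threshold values below are illustrative.

```python
from datetime import datetime, timedelta

IDLE_GAP = timedelta(minutes=10)   # silences longer than this end a sitting (assumed value)
NUDGE_AFTER = timedelta(hours=2)   # the draft's two-hour threshold

def continuous_use(events: list[datetime]) -> timedelta:
    """Length of the current sitting, built from timestamped interaction events
    (messages, taps, voice turns). Long silences count as passive presence and
    reset the session rather than extending it."""
    if not events:
        return timedelta(0)
    events = sorted(events)
    session_start = last = events[0]
    for ts in events[1:]:
        if ts - last > IDLE_GAP:   # passive presence, not active engagement
            session_start = ts     # a new sitting begins here
        last = ts
    return last - session_start

# Usage: trigger the reminder only once the stitched-together sitting crosses 2h.
# needs_nudge = continuous_use(user_events) >= NUDGE_AFTER
```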
On the privacy side, the trio of age verification, guardian dashboards, and emergency contacts will force companies to rethink how they collect and process data. Expect more on-device checks, finer-grained consent layers, and audit logs that regulators can meaningfully review when safety incidents are alleged.
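The draft doesn’t say what a regulator-reviewable audit trail should look like; one hypothetical shape is an append-only JSON Lines log with one record per safety-relevant event, consent state included. Everything below (names, fields, format) is an assumption for illustration.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class SafetyAuditRecord:
    """One safety-relevant event: a break reminder shown, guardian consent
    granted, an emergency contact collected, and so on (hypothetical schema)."""
    event_type: str                # e.g. "break_reminder" or "guardian_consent"
    user_id: str                   # pseudonymous ID rather than raw PII
    guardian_consent: bool | None  # None when not applicable (adult users)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(log_path: str, record: SafetyAuditRecord) -> None:
    # JSON Lines: one immutable record per line, simple to retain and review.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```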
Why this matters for AI companionship and safety
Anthropomorphic chatbots are shifting from novelties to utilities, from easing loneliness to helping with studying and light, therapy-adjacent conversation. China’s proposal is a signal: when AI simulates humanlike interaction, regulators will assess it by how humans are affected. The two-hour break prompt is a modest design change with outsized consequences for product roadmaps, particularly as other jurisdictions weigh similar rules.
If the draft is finalized, Chinese users can expect more consistent break reminders, clearer parental oversight, and tighter content guardrails. For developers, the takeaway is plain: emotionally engaging AI must also be policy literate, and safety by design has become nonnegotiable.