OpenAI researchers say internal chat histories detail how China-based operatives used ChatGPT to plan and polish online campaigns aimed at intimidating dissidents living in the United States and other democracies. The logs, reviewed by the company’s threat investigators, describe “cyber special operations” that blend impersonation, forged legal documents, and coordinated influence tactics across hundreds of social platforms.
What the ChatGPT Logs Reveal About the Operations
According to OpenAI’s analysis, an account based in mainland China fed ChatGPT drafts of playbooks and incident reports, asking the model to edit, sequence, and improve them for maximum effect. One plan described using a forged U.S. county court order to pressure a social network into taking down content critical of the Chinese Communist Party. Another involved impersonating U.S. immigration officials to frighten activists into silence.
The materials point to operations spanning roughly 300 platforms outside China, using thousands of accounts to push tens of thousands of posts. The goals were blunt: saturate feeds with pro-state narratives, bury dissent, and “shake the information landscape.” OpenAI says it disabled the implicated accounts and shared indicators with industry peers.
Impersonation and Psychological Pressure
Impersonation anchored many of the tactics. The researchers found five fake Bluesky profiles for California-based dissident Hui Bo and similar spoofs targeting Teacher Li, a prominent figure who drew attention for aggregating citizen reports during China’s zero-COVID era and now lives in Europe. Fabricated obituaries and AI-edited gravestone images falsely announced the death of another critic, Jie Lijian, before being blasted across social media.
Operatives also generated fabricated “evidence” to bolster takedown petitions against pro-Taiwan accounts on X, citing alleged rule violations. The campaigns reportedly had mixed reach, but they at times succeeded in chilling speech: some targets lost followers, curtailed their posting, or deleted accounts after waves of harassment.
How Generative AI Supercharges Old Playbooks
Specialists tracking covert influence say large language models lower the barrier to entry for state actors. Instead of drafting clumsy scripts, operators can ask an AI to refine messaging, mimic official tone, translate idiomatic English, and create realistic bureaucratic notices. That improves credibility at scale and reduces the telltale grammatical markers that once helped platforms catch influence operations.
Threat researchers at Microsoft’s Threat Analysis Center, Mandiant, and Graphika have documented a China-linked network dubbed Spamouflage or Dragonbridge that hops across platforms, recycles personas, and rapidly adjusts narratives when called out. Meta has reported removing thousands of accounts tied to such networks in coordinated takedowns, calling them the largest cross-platform covert influence efforts it has seen.
A Broader Pattern of Transnational Repression
U.S. officials have warned that harassment of diaspora communities is not limited to online deception. The Department of Justice has charged China-linked actors in cases involving intimidation, stalking, and schemes to coerce individuals into returning to China. The FBI urges activists and journalists to report suspicious communications, including messages that reference family members overseas, demand personal data, or masquerade as government notices.
Human rights groups, including Freedom House and Human Rights Watch, say many exiled activists self-censor after sustained online abuse, doxxing, and threats to relatives back home. The new logs underscore how AI tools can make those pressure campaigns more organized, more multilingual, and harder to detect in real time.
Platform Responses and the Persistent Policy Gap
OpenAI says it restricted the actors, improved its classifiers to spot similar misuse, and coordinated with other companies. But the activity illustrates a tough enforcement gap: a model may refuse direct guidance on illegal activity, yet still be exploited to produce polished messaging, legal-sounding letters, or supporting material that strengthens a deception. Closing that gray zone requires tighter safeguards and cross-platform coordination.
Security agencies and independent researchers recommend steps that go beyond content removal. Those include:
- Stronger identity checks for high-risk appeals to platforms.
- More transparent provenance signals for AI-generated media.
- Rapid verification channels for targeted users.
- Cross-company sharing of indicators.
- Diaspora-focused hotlines and digital safety training to blunt the impact of coordinated harassment.
The Bottom Line on AI-Aided Harassment Campaigns
The ChatGPT logs provide rare, first-person evidence of how a state-aligned operation can weaponize generative AI against critics abroad. While many posts failed to gain traction, the campaigns show measurable chilling effects and a playbook built for speed and plausible officialdom. As platforms harden defenses, success will hinge on moving faster than the operators—and protecting the people they aim to silence.