California Attorney General Rob Bonta has issued a cease-and-desist letter to xAI, demanding the company immediately stop the creation and distribution of nonconsensual sexual deepfakes and child sexual abuse material generated via its AI systems. The letter follows a state investigation into reports that xAI’s chatbot, Grok, has been used to produce intimate images without consent, including content involving minors. The attorney general’s office says it expects evidence of corrective actions within five days.
Why California’s Attorney General Stepped In on xAI’s Deepfakes
The attorney general’s office argues xAI is facilitating large-scale production of abusive imagery that is being used to harass women and girls online. While the production and distribution of such material are illegal under both state and federal law, generative models have dramatically lowered the technical barriers to creating convincing fakes at speed and scale. The state’s move signals that regulators are willing to treat permissive AI features and lax safeguards as potential unfair or unlawful business practices when they predictably enable harm.

Authorities also framed the order as a child-safety imperative. The National Center for Missing & Exploited Children has reported record volumes of CyberTipline reports in recent years—more than 36 million in the latest annual tally—underscoring the breadth of the problem and the need for proactive detection and swift removal of illegal content across platforms and tools.
Grok’s ‘Spicy’ Mode Faces Scrutiny Over Explicit Content
At the center of the controversy is Grok’s “spicy” mode, a feature marketed for generating explicit content. Critics say it invites misuse and blurs the line between adult content and abusive deepfakes. xAI recently introduced restrictions on image-editing capabilities, but California’s action suggests those changes were either too limited or too late to mitigate ongoing harm. The company has not publicly detailed how its filters, classification systems, or access controls are calibrated to prevent nonconsensual imagery.
Technical safeguards in this domain are well understood, if not universally deployed: stricter default blocks on sexual content, provenance and watermarking via open standards like C2PA, opt-in verification for adult-content tools, robust face matching to prevent image-based abuse of identifiable people, and age-estimation layers to block sexualized depictions of minors. Independent red-teaming and incident-reporting pipelines are also considered best practice by safety researchers.
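To make those safeguards concrete, the sketch below shows what a default-deny gate in front of an image-generation endpoint could look like. It is a minimal illustration, not xAI’s implementation: the classifier functions (sexual_content_score, matches_protected_face, apparent_minor_score) and the thresholds are hypothetical stand-ins for real models and tuned values.

```python
# Minimal sketch of a default-deny gate for an image-generation endpoint.
# The classifiers below are hypothetical stand-ins, not any vendor's real models.
from dataclasses import dataclass


@dataclass
class GateDecision:
    allowed: bool
    reason: str


def sexual_content_score(image_bytes: bytes) -> float:
    """Placeholder: a real system would call a trained sexual-content classifier."""
    return 0.0


def matches_protected_face(image_bytes: bytes) -> bool:
    """Placeholder: a real system would compare face embeddings against an index
    of people who have not consented to synthetic depictions."""
    return False


def apparent_minor_score(image_bytes: bytes) -> float:
    """Placeholder: a real system would run an age-estimation model."""
    return 0.0


def gate_generation(image_bytes: bytes, user_verified_adult: bool) -> GateDecision:
    # Severity-first ordering: the checks for minors and identity misuse run first
    # and cannot be overridden by account settings.
    if apparent_minor_score(image_bytes) > 0.10:  # deliberately conservative threshold
        return GateDecision(False, "possible depiction of a minor")
    if matches_protected_face(image_bytes):
        return GateDecision(False, "identity match without a consent record")
    if sexual_content_score(image_bytes) > 0.50 and not user_verified_adult:
        return GateDecision(False, "sexual content blocked by default")
    return GateDecision(True, "passed all checks")
```

The ordering is the design point: checks for minors and identity matches run first and cannot be bypassed, while sexual content stays blocked by default unless a user has completed verification.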
Global Regulatory Pressure Builds Around AI Deepfakes
California is not alone. Regulators in Japan, Canada, and the United Kingdom have opened inquiries into Grok, and authorities in Malaysia and Indonesia have temporarily blocked the platform. That patchwork response mirrors how other AI services have faced country-by-country scrutiny when local standards for harmful content differ or when enforcement expectations escalate after high-profile incidents.

U.S. lawmakers have also pressed major platforms, including X, Reddit, Snap, TikTok, Alphabet, and Meta, to explain their plans to stem sexualized deepfakes. The issue drew intense public attention after high-profile cases such as the viral spread of explicit deepfake images of Taylor Swift on social media, which showed how quickly synthetic abuse can overwhelm moderation systems and inflict real-world harm on victims.
What Compliance Could Look Like for xAI Under the Order
To satisfy the cease-and-desist, xAI would likely need to demonstrate concrete steps (a rough engineering sketch follows the lists below):
- Disabling or radically constraining explicit-generation modes
- Default-on blocking of sexual content
- Robust detection for face swaps and image-to-image manipulation
- Mandatory reporting of suspected CSAM to NCMEC
- Rapid takedown workflows with clear user recourse
Equally important is external accountability:
- Transparency reports
- Safety evaluations by independent labs
- An appeals process for victims seeking removal, along with evidence preservation for law enforcement
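As one illustration of what the reporting and takedown items could mean in practice, the sketch below models the record-keeping side of such a workflow. The Incident structure, its field names, and the escalation logic are assumptions for illustration, not a description of xAI’s systems or of NCMEC’s actual reporting interface.

```python
# Rough sketch of the record-keeping side of a takedown workflow. Field names,
# categories, and the escalation logic are illustrative assumptions.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class Incident:
    content_hash: str      # hash only; the content itself would be held in escrow
    category: str          # e.g. "nonconsensual_intimate_imagery", "suspected_csam"
    detected_at: str
    actions: list = field(default_factory=list)


def open_incident(content: bytes, category: str) -> Incident:
    return Incident(
        content_hash=hashlib.sha256(content).hexdigest(),
        category=category,
        detected_at=datetime.now(timezone.utc).isoformat(),
    )


def handle_incident(incident: Incident) -> Incident:
    # Takedown first, then escalate: suspected CSAM triggers a mandatory report,
    # while victims of other abuse get a documented removal and appeal path.
    incident.actions.append("content removed and account restricted")
    if incident.category == "suspected_csam":
        incident.actions.append("report filed with the NCMEC CyberTipline")
    else:
        incident.actions.append("victim notified of removal and appeal options")
    incident.actions.append("evidence preserved for law enforcement request")
    return incident


if __name__ == "__main__":
    record = handle_incident(open_incident(b"<blocked image bytes>", "suspected_csam"))
    print(json.dumps(asdict(record), indent=2))
```

Keeping only a hash in the routine record while escrowing the underlying content is one way to reconcile evidence preservation with data-minimization obligations.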
The Bigger Picture on AI-Driven Sexual Abuse and Safety
Nonconsensual sexual imagery remains the dominant use of deepfake technology. Multiple analyses, including by Sensity AI, have found that the overwhelming majority of deepfake videos online are sexual and nonconsensual, historically exceeding 90% of observed content. The accelerating quality of open-source models and the viral distribution dynamics of social platforms compound the risk, especially when models retain permissive settings or lack strong identity protections.
California’s order to xAI draws a bright line: companies that ship generative tools with explicit modes and insufficient guardrails will face escalating legal pressure when those tools are weaponized. For the industry, the message is equally clear—safety features cannot be optional or primarily reactive. They need to be defaults, rigorously tested before release, and continuously improved in partnership with civil society, victim support organizations, and regulators.
