FindArticles
  • News
  • Technology
  • Business
  • Entertainment
  • Science & Health
  • Knowledge Base
FindArticles © 2025. All Rights Reserved.

California AG Orders xAI to Halt Sexual Deepfakes

By Bill Thompson
Last updated: January 19, 2026 11:12 am
News
5 Min Read

California Attorney General Rob Bonta has issued a cease-and-desist order to xAI, demanding the company immediately stop the creation and distribution of nonconsensual sexual deepfakes and child sexual abuse material generated via its AI systems. The letter follows a state investigation into reports that xAI’s chatbot, Grok, has been used to produce intimate images without consent, including content involving minors. The attorney general’s office says it expects evidence of corrective actions within five days.

Why California’s Attorney General Stepped In on xAI’s Deepfakes

The attorney general’s office argues xAI is facilitating large-scale production of abusive imagery that is being used to harass women and girls online. While the production and distribution of such material are illegal under both state and federal law, generative models have dramatically lowered the technical barriers to creating convincing fakes at speed and scale. The state’s move signals that regulators are willing to treat permissive AI features and lax safeguards as potential unfair or unlawful business practices when they predictably enable harm.

Table of Contents
  • Why California’s Attorney General Stepped In on xAI’s Deepfakes
  • Grok’s ‘Spicy’ Mode Faces Scrutiny Over Explicit Content
  • Global Regulatory Pressure Builds Around AI Deepfakes
  • What Compliance Could Look Like for xAI Under the Order
  • The Bigger Picture on AI-Driven Sexual Abuse and Safety

Authorities also framed the order as a child-safety imperative. The National Center for Missing & Exploited Children has reported record volumes of CyberTipline reports in recent years—more than 36 million in the latest annual tally—underscoring the breadth of the problem and the need for proactive detection and swift removal of illegal content across platforms and tools.

Grok’s ‘Spicy’ Mode Faces Scrutiny Over Explicit Content

At the center of the controversy is Grok’s “spicy” mode, a feature marketed to generate explicit content. Critics say it invites misuse and blurs the line between adult content and abusive deepfakes. xAI recently introduced restrictions on image-editing capabilities, but California’s action suggests those changes were either too limited or too late to mitigate ongoing harm. The company has not publicly detailed how its filters, classification systems, or access controls are calibrated to prevent nonconsensual imagery.

Technical safeguards in this domain are well understood, if not universally deployed: stricter default blocks on sexual content, provenance and watermarking via open standards such as C2PA, opt-in verification for adult-content tools, face-matching to detect when a real person’s likeness is used without consent, and age-estimation layers to block depictions of minors. Independent red-teaming and incident-reporting pipelines are also considered best practice by safety researchers.
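To make the idea of a “stricter default block” concrete, the logic can be sketched as a default-deny moderation gate. Everything here is illustrative: the signal names, threshold, and decision labels are assumptions for the sketch, not a description of any real system, Grok’s included.

```python
from dataclasses import dataclass

@dataclass
class ImageSignals:
    """Hypothetical classifier outputs for a generated image."""
    sexual_content: float          # score in [0, 1]: likelihood the image is sexually explicit
    real_face_match: bool          # face-match hit against a real person's likeness
    subject_verified_adult: bool   # opt-in age/consent verification passed

def moderation_decision(sig: ImageSignals, block_threshold: float = 0.3) -> str:
    """Default-deny gate: explicit output is blocked unless every safeguard passes."""
    if sig.sexual_content >= block_threshold:
        if sig.real_face_match:
            return "block"  # likely nonconsensual depiction of a real person
        if not sig.subject_verified_adult:
            return "block"  # no verified adult consent on file
        return "allow_with_watermark"  # e.g. attach C2PA provenance metadata
    return "allow"
```

The point of the structure is that the safe outcome is the default path: an explicit image must clear every check to be released, rather than a harmful one having to trip a filter to be stopped.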

Global Regulatory Pressure Builds Around AI Deepfakes

California is not alone. Regulators in Japan, Canada, and the United Kingdom have opened inquiries into Grok, and authorities in Malaysia and Indonesia have temporarily blocked the platform. That patchwork response mirrors how other AI services have faced country-by-country scrutiny when local standards for harmful content differ or when enforcement expectations escalate after high-profile incidents.


U.S. lawmakers have also pressed major platforms, including X, Reddit, Snap, TikTok, Alphabet, and Meta, to explain their plans to stem sexualized deepfakes. The issue drew intense public attention after high-profile cases, including the viral spread of explicit deepfake images of Taylor Swift on social media, illustrating how quickly synthetic abuse can overwhelm moderation systems and inflict real-world harm on victims.

What Compliance Could Look Like for xAI Under the Order

To satisfy the cease-and-desist, xAI would likely need to demonstrate concrete steps:

  • Disabling or radically constraining explicit-generation modes
  • Default-on blocking of sexual content
  • Robust detection for face swaps and image-to-image manipulation
  • Mandatory reporting of suspected CSAM to NCMEC
  • Rapid takedown workflows with clear user recourse

Equally important is external accountability:

  • Transparency reports
  • Safety evals by independent labs
  • An appeals process for victims seeking removal
  • Evidence preservation for law enforcement

The Bigger Picture on AI-Driven Sexual Abuse and Safety

Sexual deepfakes remain the dominant use case for image-based synthetic media abuse. Multiple analyses, including by Sensity AI, have found that the overwhelming majority of deepfake videos online are sexual and nonconsensual, historically exceeding 90% of observed content. The accelerating quality of open-source models and the viral distribution dynamics of social platforms compound the risk, especially when models retain permissive settings or lack strong identity protections.

California’s order to xAI draws a bright line: companies that ship generative tools with explicit modes and insufficient guardrails will face escalating legal pressure when those tools are weaponized. For the industry, the message is equally clear—safety features cannot be optional or primarily reactive. They need to be defaults, rigorously tested before release, and continuously improved in partnership with civil society, victim support organizations, and regulators.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.