X has tightened its guardrails around Grok, blocking users from generating or editing images of real people in bikinis, underwear, or similarly revealing attire. The move follows growing political and regulatory pressure over sexualized deepfakes, including images involving minors that have spread across the platform through its AI tools.
What X Changed in Grok’s Image Generation Tools
According to the company’s safety team, Grok’s image features will now refuse prompts that attempt to depict identifiable people in revealing clothing. The restriction covers both free and paid accounts and applies to editing existing photos and creating new composites. X also said it can geoblock the capability entirely in jurisdictions where local law prohibits such content.

In a related change, access to image generation and editing via the Grok account on X is now limited to subscribers, a move likely aimed at reducing abuse by adding friction and accountability. However, limiting features to paying users does not, by itself, prevent misuse—subscription walls tend to slow, not stop, determined actors.
The policy concentrates on sexualized depictions of real people, a category that has long dominated the harms associated with deepfakes. Independent researchers at Sensity AI have repeatedly found that non-consensual sexual content accounts for the vast majority of deepfake material circulating online, a trend that has persisted as generative models have improved.
Regulators Turn Up the Heat on Sexualized Deepfakes
The policy shift lands amid intensifying scrutiny. California’s attorney general, Rob Bonta, has pressed X and xAI to curtail sexualized deepfakes and remove them swiftly, signaling the state is prepared to use enforcement tools to protect residents. In the UK, Prime Minister Keir Starmer condemned the generation of sexualized AI imagery and backed Ofcom in taking action under the Online Safety Act, which allows fines up to 10% of global revenue for serious breaches.
International pressure is building as well. Broadcasters reported that Indonesia and Malaysia have already blocked access to Grok over safety concerns. In the U.S., several senators urged Apple to remove the services from its App Store, underscoring the growing expectation that platforms put hard brakes on AI misuse.
Complicating matters, Elon Musk appeared to encourage users to probe Grok’s moderation, suggesting a confidence in the guardrails but also inviting real-world stress tests. For regulators, that posture risks signaling that safety measures are experimental rather than fully baked.

Why the Policy Matters for AI Safety and Compliance
Sexualized deepfakes are uniquely harmful because they combine reputational damage with the virality of short-form media. Victims—often women and girls—face harassment, job consequences, and mental health impacts even when the images are fakes. The Internet Watch Foundation has warned that AI is accelerating the creation and sharing of manipulated imagery involving minors, straining traditional detection and takedown workflows.
By explicitly blocking prompts that sexualize real people, X is moving closer to a risk-based approach endorsed by groups like the Partnership on AI: disable high-risk capabilities, add geographic compliance controls, and limit features most likely to be abused. The geoblocking provision is notable because it acknowledges the patchwork of national laws and the need to enforce standards at the jurisdiction level.
Legal Risk Beyond Section 230 for AI-Generated Images
There is also a legal calculus. Section 230 of the Communications Decency Act largely shields platforms from liability for user posts, but that protection is less certain when the platform’s own AI is producing the imagery. If a platform tool creates non-consensual explicit content of a real person, plaintiffs may argue the company crossed from host to publisher or even co-creator. That ambiguity makes preemptive bans on sensitive prompts a prudent hedge.
In Europe and the UK, new regimes treat recommender systems and generative tools as active participants in content dissemination, increasing exposure if safeguards are weak. U.S. lawmakers are considering targeted statutes against AI-enabled child sexual abuse material and deepfake pornography, which could further narrow liability shields.
What to Watch Next as X Enforces New Grok Policies
The big test is enforcement. Researchers will look for evidence that Grok consistently rejects face-swap attempts, blurred or partial-identity prompts, and metadata workarounds. Another litmus test is speed: rapid detection and takedown of sexualized images, especially those involving minors, is often where platforms succeed or fail.
Expect broader industry ripple effects. If X’s changes satisfy regulators, other AI image tools may adopt similar prompt-level bans and regional controls. If not, app store policy and national regulators may become the de facto moderators. Either way, the era of open-ended, unfiltered AI image generation is contracting as legal, ethical, and commercial realities converge.
