X is facing fresh scrutiny after an investigation found users could still post sexualized, AI-generated images made with Grok, even after the company announced a ban on such content. Reporters from The Guardian said they used Grok's standalone app to create short videos that digitally undressed real women down to bikinis, then uploaded the clips to X, where they were viewable within seconds and faced no immediate moderation.
Report Raises Questions About Enforcement
The findings suggest a gap between X’s stated policy and actual enforcement. Earlier this week, X’s safety team said it had prohibited AI-generated sexualized depictions of real people and implemented technical measures to stop the @Grok account on X from editing images into revealing clothing. But the restriction appears limited to the on-platform tool. Content created in Grok’s separate app can still be saved and uploaded like any other media, potentially bypassing safeguards that target only the in-app workflow.
X reiterated it has zero tolerance for child sexual exploitation, non-consensual nudity, and unwanted sexual content. The platform has been under mounting pressure after multiple governments signaled they were reviewing or moving to restrict Grok following reports that its tools could be used to create sexualized images of minors. While the latest report concerns adults, the enforcement gap heightens broader concerns about the speed and scope of moderation on X.
A Familiar Loophole in AI Image Controls
Platforms that limit specific AI tools often miss content generated off-platform, a long-standing moderation challenge. If filters only block edits performed by an official account or within a specific feature, users can simply produce content elsewhere and upload it directly. Effective policy needs to be paired with detection that scans incoming media for sexualized manipulation, deepfake characteristics, and policy-violating metadata—irrespective of how the content was created.
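To make the idea of origin-agnostic screening concrete, here is a minimal Python sketch of an upload-time check that applies the same tests to every piece of media, whether it came from the platform's own AI tool or anywhere else. Every function and field name is hypothetical, and the stubbed checks stand in for real classifiers and hash databases; this is not a description of X's actual systems.

```python
# Minimal sketch of origin-agnostic upload screening. All names are
# hypothetical placeholders, not X's (or any vendor's) actual pipeline.

from dataclasses import dataclass


@dataclass
class ScanResult:
    sexualized: bool   # verdict from a nudity / sexualization classifier
    synthetic: bool    # verdict from a deepfake / manipulation detector
    known_match: bool  # perceptual-hash match against previously actioned media


def classify_sexualized(media: bytes) -> bool:
    # Placeholder: in practice, an ML classifier trained for sexual content.
    return False


def detect_manipulation(media: bytes) -> bool:
    # Placeholder: in practice, a synthetic-media / deepfake detector.
    return False


def match_known_hashes(media: bytes) -> bool:
    # Placeholder: in practice, a perceptual-hash lookup against shared lists.
    return False


def scan_upload(media: bytes) -> ScanResult:
    """Apply every check to every upload, no matter where it was generated."""
    return ScanResult(
        sexualized=classify_sexualized(media),
        synthetic=detect_manipulation(media),
        known_match=match_known_hashes(media),
    )


def should_hold_for_review(result: ScanResult) -> bool:
    # Illustrative rule: hold anything sexualized that also looks synthetic
    # or matches known abusive imagery; real policies weigh far more signals.
    return result.sexualized and (result.synthetic or result.known_match)
```

The point of the structure is simply that the decision function never asks where the media came from, only what it contains, which is what closes the off-platform loophole.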
That’s easier said than done. Academic and industry research shows classifiers that detect nudity or synthetic alterations can be brittle, with evasion tactics and false negatives undermining performance. Sensity AI has reported for years that more than 90% of deepfakes found online are sexual in nature and overwhelmingly target women. The Internet Watch Foundation and the National Center for Missing and Exploited Children have also warned of rapid growth in abusive, AI-assisted imagery, compounding an already severe moderation problem.
Regulatory Heat Is Rising for X over Grok Content
Given X’s size and role in public discourse, the stakes are high. In the European Union, the Digital Services Act requires very large platforms to assess and mitigate systemic risks, including harms related to illegal content and manipulation. Failure to curb the spread of non-consensual sexual imagery—AI-generated or otherwise—can trigger investigations and fines. In the United Kingdom, sharing intimate deepfakes without consent has been criminalized under reforms linked to the Online Safety Act.
In the United States, a growing number of states have enacted laws against non-consensual deepfake pornography, and federal proposals have sought to create a nationwide cause of action. Regulators ranging from Australia's eSafety Commissioner to European authorities have also shown a willingness to demand rapid removal of harmful material. If X's protections are limited to its own AI endpoint and don't address uploads at scale, that could attract additional scrutiny.
What Effective Safeguards Would Look Like
Experts point to a layered approach: robust on-upload detection; provenance tools like C2PA-style content credentials; hashing of known abusive imagery; friction for suspicious accounts; and rapid, well-staffed response pathways for victims. Applying the policy across the entire media pipeline—not only to the @Grok feature—would close the most obvious loophole. Independent audits, transparency reports with methodologically sound metrics, and cooperation with specialist watchdogs can further boost trust.
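One of those layers, hashing of known abusive imagery, can be illustrated with a simple perceptual hash. The sketch below computes a difference hash (dHash) using the Pillow imaging library and flags uploads whose hash falls within a small Hamming distance of previously actioned images. Production systems rely on hardened schemes such as PDQ and shared industry hash lists, which this toy example does not implement, and the distance threshold here is illustrative only.

```python
# Toy perceptual-hash matcher: a difference hash (dHash) plus Hamming-distance
# comparison against hashes of previously actioned imagery. Illustrative only.

from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Resize to (hash_size+1) x hash_size grayscale, then compare each pixel
    to its right neighbor to build a compact bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    width = hash_size + 1
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * width + col]
            right = pixels[row * width + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known(path: str, known_hashes: set[int], max_distance: int = 6) -> bool:
    """Flag an upload if its hash is close to any previously actioned image.
    The threshold of 6 bits is an illustrative choice, not a tuned value."""
    h = dhash(path)
    return any(hamming(h, known) <= max_distance for known in known_hashes)
```

Because perceptual hashes tolerate small edits such as re-encoding or resizing, this kind of matching catches re-uploads of already-removed material even when the file bytes differ, which is why it complements rather than replaces classifier-based detection.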
There’s also an education component. Victims and bystanders need clear guidance on reporting non-consensual imagery on X, with easily discoverable tools and predictable outcomes. Initiatives such as industry-backed intimate-image hashing programs can help individuals proactively block the spread of abusive content across multiple platforms.
The Bottom Line for X and Grok on Enforcement Gaps
The investigation underscores a key reality of AI safety on social platforms: a rule is only as strong as its enforcement surface. Restricting sexualized edits inside a single account or feature is a start, but it won’t stop content generated elsewhere from flowing in. Until X applies consistent detection and moderation to all uploads—and shows measurable outcomes—reports of policy-violating AI imagery slipping through are likely to continue.