Grok, the AI assistant built into X, says it has limited image generation to paying subscribers after public outcry over sexualized and violent deepfakes.
But early checks by users and reporters indicate the wall is porous, raising a fair question: Has the core risk really been contained?
Reporters at The Verge found that while free accounts are no longer served fresh images in @grok replies, Grok's image-editing tools remain available to nonpaying users, leaving room for edits that range from innocuous to sexualized.
In practice, the product only partially restricts visual output for users without a subscription, which undermines the claim that image features are fully paywalled.
What changed, and what didn’t, in Grok’s image tools
Grok appears to have restricted where and how images are made rather than turning off the spigot.
Direct generation through public chatbot replies appears blocked for free users, but editing pipelines, in which a user uploads an existing photo to be modified, remain open. The distinction matters: Many abusive deepfakes start as edits of real people's photos, not creations from whole cloth.
xAI, the maker of Grok, has said it recognizes that deepfake-related harms also extend to minors and has promised stronger protections in future releases. The assistant itself has conceded that images of "minors in minimal clothing" had been produced, framed the issue as part of a larger deepfake crisis, and vowed to refuse such requests altogether. The continued availability of editing tools is a sign that those protections are not yet complete.
Growing pressure from regulators and governments over Grok
The Internet Watch Foundation said it found "criminal imagery," including sexualized and topless photographs of children ages 11 to 13, on a dark web forum where users asserted that Grok was used in their production. Ofcom, the U.K. regulator, has said it is seeking to speak with X and xAI over claims that Grok produced highly sexualized images of children, a priority harm under the U.K.'s online safety regime.
Political pressure is mounting. The U.K. prime minister's office described the paywall as "insulting" to survivors, arguing that it merely turns the ability to produce illegal imagery into a premium feature rather than turning it off. Authorities in France, India and Malaysia have also launched investigations into sexualized deepfakes connected to Grok, part of an expanding international response.
Ofcom has the power to impose fines of up to 10% of a company's worldwide turnover and to apply for court orders blocking access to services that breach their safety duties. While such sanctions are rare, officials have emphasized a desire for quick, demonstrable fixes, something a partial paywall alone does not ensure.
Does a paywall make people safer from AI-generated abuse?
Charging for access introduces friction, but it is not a safety mechanism. A subscription does not equate to identity verification or lawful use, and prepaid cards or shared accounts can quickly erode any deterrent. Safety researchers often warn that monetization gates tend to displace abuse rather than prevent it, and may even incentivize bad actors to streamline access to "premium" capabilities.
Stronger interventions are well known. Providers can hard-block sexual content involving minors, apply nudity filters, default to strict filter settings for image editing and add age-estimation checks to image tools. Provenance and detection signals, such as SynthID watermarking or C2PA content credentials, let platforms trace synthetic media, while content hashing and fast takedown pipelines limit the spread of harmful images.
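To make the hashing idea concrete, here is a minimal sketch of how a platform might screen uploads against a list of known harmful images using perceptual hashes. It is illustrative only: the hash list, threshold and function names are hypothetical, and this is not Grok's or X's actual pipeline.

```python
# Illustrative sketch: flagging uploads that are near-duplicates of
# previously confirmed harmful images via perceptual hashing.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of confirmed harmful images
# (in practice supplied by bodies such as the IWF or NCMEC).
KNOWN_HARMFUL_HASHES = {
    imagehash.hex_to_hash("d1d1b5a596a4d2c1"),
}

MAX_HAMMING_DISTANCE = 6  # assumed tolerance for near-duplicate matches


def is_known_harmful(image_path: str) -> bool:
    """Return True if the image closely matches a known harmful image."""
    candidate = imagehash.phash(Image.open(image_path))
    # ImageHash subtraction yields the Hamming distance between hashes.
    return any(
        candidate - known <= MAX_HAMMING_DISTANCE
        for known in KNOWN_HARMFUL_HASHES
    )


if __name__ == "__main__":
    if is_known_harmful("upload.jpg"):
        print("Blocked: matches a known harmful image.")
    else:
        print("Passed hash check; other filters would still apply.")
```

Hash matching of this kind only catches material that has already been identified, which is why it is typically layered with classifiers, provenance signals and human review rather than used alone.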
Leading AI companies already ban sexualized content involving children and use a combination of automated screening, human review and red-teaming. The standard for Grok ought to be the same: explicit policy lines, measurable guardrails and evidence that they work at scale, not just a subscriber toggle.
What will make this more than optics for Grok’s safety plan?
Markers of real progress would include a public, testable policy that explicitly prohibits sexualized edits of images of real people, independent audits of image pipelines and routine transparency reports on blocked prompts, enforcement rates and response times.
Clear reporting tools for victims, and a guaranteed rapid removal process across X, are also crucial.
For now, limiting some generation pathways while keeping editing tools freely available doesn't address the central threat. Until Grok demonstrates comprehensive guardrails and end-to-end enforcement, the paywall looks less like a fix than a band-aid, at a time when regulators, and the public they represent, want proof of safety rather than promises.