X restricted public access to Grok’s AI image generator, which creates and edits images that post automatically on the platform, as of late Monday, following mounting reports of sexualized deepfakes and apparent child abuse imagery. The change adds a paywall and a layer of identity friction that could make abusers easier to trace, even as the company comes under greater scrutiny from safety groups, lawmakers and regulators.
Why X pulled back public image generation
Free users now see a message that image creation and editing is available only to X Premium or Premium+ subscribers. The change responds to concerns that Grok’s public prompts risked generating non-consensual and illegal images, including categories of content the company has publicly said should not exist.
By walling off public image tools behind a subscription, X effectively ties their use to a billing identity. That isn’t full identity verification, but requiring payment information and a legal name creates an audit trail, which can be enough to deter some bad actors and aid enforcement when illegal content does surface.
What changes for users after X limits public image tools
Subscribers retain the same image generation and editing capabilities, now behind the paywall, and their creations can still be posted to Grok’s reply feed, where written descriptions, whether quoted or original, remain easy to browse. Non-subscribers can keep experimenting with Grok’s image features privately, inside its app or on the web, but their images won’t automatically appear in the public feed.
In practical terms, the move reduces the number of anonymous or throwaway accounts posting AI images into the public square, and it gives X more leverage to discipline accounts that break its rules, including bans and escalation to law enforcement when necessary.
Safety and legal pressure grows as X curbs public images
X leadership has said that creating illegal images with Grok will be treated the same as uploading the offending content, subject to removal and possible legal action. The platform’s safety team has also reiterated that it will work with law enforcement on criminal content.
External pressure has escalated. The Internet Watch Foundation reported finding several instances of child abuse images suspected to have been produced with Grok and said that merely restricting access isn’t good enough, arguing instead for a “safety by design” approach. In the United Kingdom, senior officials have called the proliferation of sexualized deepfakes “illegal” under duties created by the Online Safety Act. In the United States, the National Center for Missing and Exploited Children has received more than 36 million CyberTipline reports annually in recent years, highlighting the scale of the CSAM problem platforms must police.
Other studies support that fear: Sensity AI’s analysis has found that over 90% of deepfakes online are non-consensual sexual content, a figure that becomes more worrying as image tools grow capable of producing photorealistic depictions of real people.
Will paywalls curb abuse and deter deepfake creators?
Friction, whether payment, stronger telemetry or stiffer penalties, tends to cut down on casual abuse and slows how quickly abusers can spin up new accounts. Anti-abuse teams across the industry have long used paywalls, rate limits and verified tiers to mitigate spam and fraud. But none of this is a silver bullet: motivated actors can still subscribe, and determined abuse will find any gaps in filtering, prompt moderation or post-publication review.
The test for X will be whether its technical guardrails keep pace. Successful systems combine timely prompt classification, image-level safety filters and post-publication detection with fast appeals and consistent enforcement. Transparency helps, too: aggregate reporting on blocked prompts, removed images and law enforcement referrals could indicate whether the approach is working.
The bigger content integrity push across platforms
Across the industry, platforms are moving toward cryptographic provenance and labeling. The C2PA standard, which attaches signed metadata describing how an image or video was produced and by whom, is beginning to be adopted by major media and AI companies. Paired with visible “Made with AI” markers and robust reporting tools, provenance can make synthetic images easier to trace and harder for malicious actors to weaponize.
X also has regulatory obligations as a very large online platform under the EU’s Digital Services Act, which requires risk assessments of systemic harms and demonstrable mitigation measures. Restricting public AI image generation to identifiable, paying accounts is one such step, and regulators will be looking for evidence that it actually reduces the spread of illegal and non-consensual deepfakes.
The bottom line: gating public image tools behind X Premium raises the cost of abuse and likely improves traceability, but the real test will be actual enforcement outcomes.
If Grok’s filters and X’s safety operations reliably prevent bad images from circulating — without also preventing creative or constructive use of the technology — then this pivot might be taken as a model. If not, more severe technical constraints or even feature suspensions are likely to come.