
X limits Grok’s public image tool after deepfakes

By Gregory Zuckerman
Last updated: January 9, 2026 2:10 pm
Technology

As of late Monday, X has restricted public access to Grok's AI image generator, which creates and edits images that post automatically to the platform, after mounting reports of sexualized deepfakes and apparent child abuse imagery. The change adds a paywall and identity friction that could make abusers easier to trace, even as the company comes under greater scrutiny from safety groups, lawmakers and regulators.

Why X pulled back public image generation

Free users now see a message that image creation and editing are available only to subscribers on X Premium or Premium+. The change responds to concerns that Grok's public prompts risked generating non-consensual and illegal images, including material the company has publicly said should not exist.

[Image: The Grok logo against a blue-to-purple gradient background.]

By walling public image tools off behind a subscription, X effectively ties their use to a billing identity. That is not full identity verification, but requiring payment information and a legal name creates an audit trail, which can be enough to deter some bad actors and to aid enforcement when illegal content does surface.

What changes for users after X limits public image tools

Subscribers keep the same image generation and editing capabilities, now behind the paywall, and their creations can still be posted to Grok's public reply feed, where they remain broadly visible. Non-subscribers can continue experimenting with Grok's image features in its app or on the web privately, but their images will no longer flow automatically into the public feed.

In practice, the change reduces the number of anonymous or throwaway accounts posting AI images into the public square, and it gives X more leverage over accounts that break its rules, from bans to escalation to law enforcement when necessary.

Safety and legal pressure grows as X curbs public images

X leadership has said that creating illegal images with Grok will be treated the same as uploading the offending content directly, subject to removal and possible legal action. The platform's safety team has also reiterated that it will work with law enforcement on criminal content.

External pressure has escalated. The Internet Watch Foundation reported finding several instances of child abuse images suspected to have been produced with Grok and said that merely restricting access isn't good enough, arguing for a "safety by design" approach. In the United Kingdom, senior officials have denounced the proliferation of sexualized deepfakes as illegal under duties created by the Online Safety Act. In the United States, the National Center for Missing and Exploited Children has received more than 36 million CyberTipline reports annually in recent years, highlighting the scope of the CSAM problem platforms need to police.

Other research supports those fears: Sensity AI's analyses have found that over 90% of deepfakes online are non-consensual sexual content, a problem that grows more acute as image tools become capable of producing photorealistic depictions of real people.

[Image: The Grok and Microsoft Azure logos displayed side by side.]

Will paywalls curb abuse and deter deepfake creators?

Friction, whether payment, stronger telemetry or stiffer penalties, tends to cut down on casual abuse and slow the rate at which abusers can spin up new accounts. Anti-abuse teams across the industry have long used paywalls, rate limits and verified tiers to mitigate spam and fraud. But it is no silver bullet: motivated actors can still subscribe, and gaps in filtering, prompt moderation or post-publication review will still be exploited.
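To illustrate the kind of tiered friction described above, here is a minimal token-bucket rate limiter sketch. The tier names and rates are purely hypothetical, not anything X has disclosed; the point is only that gating generation behind a paid, rate-limited tier mechanically caps how fast any one account can produce images.

```python
import time

# Toy token-bucket limiter of the kind anti-abuse teams pair with account
# tiers. Assumed, illustrative policy: free accounts get zero public
# generations; a paying tier gets a small burst that refills slowly.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical per-tier limits: free users get no public generations at all.
LIMITS = {"free": TokenBucket(0, 0.0), "premium": TokenBucket(5, 0.1)}

def try_generate(tier: str) -> bool:
    """Return True if this tier may publish another AI image right now."""
    return LIMITS[tier].allow()
```

The design choice worth noting is that the bucket's refill rate, not just its size, controls sustained throughput, so even a paying abuser cannot flood the public feed faster than the refill allows.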

The proof for X will be whether its technical guardrails keep pace. Successful systems combine real-time prompt classification, image-level safety filters and post hoc detection with fast appeals and consistent enforcement. Transparency helps too: aggregate reporting on blocked prompts, removed images and law enforcement referrals would help indicate whether the approach is working.

The bigger content integrity push across platforms

Across the industry, platforms are moving toward cryptographic provenance and labeling. The C2PA standard, which attaches signed metadata showing how an image or video was produced and by whom, is beginning to be adopted by major media and AI companies. Paired with visible "Made with AI" markers and strong reporting tools, provenance can make synthetic images easier to track and harder for malicious actors to weaponize.
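The core idea behind C2PA-style provenance can be sketched in a few lines. This is a simplified toy, not the real standard: actual C2PA manifests are embedded in the asset and signed with X.509 certificates via COSE, whereas this sketch signs a JSON claim with an HMAC key. The manifest fields and key are assumptions for illustration only.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real signing certificate

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Build a provenance manifest binding the asset's hash to its origin."""
    manifest = {
        "claim": {
            "asset_sha256": hashlib.sha256(image_bytes).hexdigest(),
            "generator": generator,       # e.g. the AI tool that made it
            "actions": ["c2pa.created"],  # how the asset came to exist
        }
    }
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the hash still matches the asset."""
    payload = json.dumps(manifest["claim"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # metadata was altered after signing
    return (manifest["claim"]["asset_sha256"]
            == hashlib.sha256(image_bytes).hexdigest())

image = b"fake image bytes"
m = make_manifest(image, "AI image tool")
print(verify_manifest(image, m))         # True
print(verify_manifest(image + b"x", m))  # False: asset no longer matches
```

Verification fails either if the metadata is tampered with or if the image itself is modified after signing, which is what makes provenance useful for tracking synthetic media across platforms.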

X also carries regulatory obligations as a very large online platform under the EU's Digital Services Act, including risk assessments of systemic harms and demonstrable mitigation measures. Restricting public AI image generation to paying, identifiable accounts is one such measure, and regulators will be looking for evidence that it actually reduces illegal and non-consensual deepfakes.

The bottom line: gating public image tools behind X Premium raises the cost of abuse and likely improves traceability, but the proof will be in actual enforcement results.

If Grok’s filters and X’s safety operations reliably prevent bad images from circulating — without also preventing creative or constructive use of the technology — then this pivot might be taken as a model. If not, more severe technical constraints or even feature suspensions are likely to come.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
FindArticles © 2025. All Rights Reserved.