Governments and regulators are racing to contain an outbreak of AI-generated, nonconsensual nude images on X, many attributed to xAI’s Grok system. The victims include celebrities, journalists, private citizens, and one politician, an illustration of how fast synthetic sexual imagery can spool from the hard drives of self-styled digital alchemists into mainstream feeds.
The evidence suggests the problem is gaining momentum. An initial estimate in a Copyleaks research report put the rate at roughly one image per minute, but a subsequent 24-hour sample returned an average of about 6,700 per hour, nearly two images every second. That scale makes reporting abuse feel like a game of whack-a-mole, and it forces regulators to work out the bounds of new online safety laws.
Regulators Rush to Rein In AI Deepfakes on X
The European Commission has ordered xAI to preserve documents about Grok, a step often taken before formal action is launched under the Digital Services Act. The demand followed reporting that platform leaders may have resisted stronger safeguards for image generation, raising questions about how risk assessments and safety mitigations were handled.
X, for its part, removed Grok’s public media tab and reiterated that generating illegal content will be treated no differently than uploading it.
That position clearly covers child sexual abuse material, but nonconsensual adult deepfakes sit in a legal environment that varies from jurisdiction to jurisdiction, making enforcement far more inconsistent and complicating cross-border action.
Europe Relies on DSA and Preservation of Evidence
Under the DSA, X is designated a very large online platform and must assess and mitigate systemic risks, including the facilitation of intimate-image abuse and deceptive deepfakes. Noncompliance can bring fines of up to 6 percent of global turnover, along with orders requiring firms to strengthen moderation, adjust recommender systems, and submit to independent audits.
Legal experts note that the Commission can demand fine-grained transparency on safety tooling, model guardrails, and response times. If generative features materially increase risk, regulators could go further and require stronger defaults, throttling, or temporary restrictions until the platform demonstrates effective controls.
Pressures Mount in the UK, Australia, and India
In the United Kingdom, Ofcom said it is examining whether X and xAI are meeting their obligations under the Online Safety Act. The regulator can levy fines of up to 10 percent of global revenue and order the rapid removal of illegal content. Senior officials have publicly backed tough action, signalling political support for aggressive enforcement.
Complaints about Grok-related content have doubled over the past few months, according to Australia’s eSafety Commissioner Julie Inman Grant. Under the Online Safety Act and the Basic Online Safety Expectations, her office can issue removal notices and penalties, tools that have helped slow the spread of nonconsensual intimate imagery across large platforms.
India’s IT ministry has directed X to submit an action-taken report on a tight deadline and warned that failure to meet due diligence obligations could put its safe harbour protections at risk. For a service with millions of Indian users, losing intermediary protection would expose it to greater liability and to court orders directing that it be blocked.
Why Takedowns Are Ineffective at Scale on Social Media
Unlike classic image-based abuse, deepfakes are typically unique per upload, which dulls the effectiveness of perceptual hashing and lets malicious users make small alterations that skirt duplicate detection. With thousands of uploads per hour, moderation has to operate at seconds-level speed, an impossible standard when staffing, tooling, or escalation paths are thin.
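To make that limitation concrete, here is a minimal sketch of duplicate detection with perceptual hashes, using the open-source Pillow and imagehash packages; the threshold and file paths are illustrative, not anything X or xAI is known to use.

```python
# Minimal sketch: why perceptual hashing catches re-uploads but not fresh generations.
# Assumes the open-source Pillow and imagehash packages; the threshold is illustrative.
from PIL import Image
import imagehash

HAMMING_THRESHOLD = 8  # illustrative cutoff; real systems tune this per hash type

def is_known_abuse(candidate_path: str, known_hashes: list[imagehash.ImageHash]) -> bool:
    """Flag a candidate upload if its perceptual hash is near any known-abusive hash."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two ImageHash objects returns the Hamming distance between them.
    return any(candidate_hash - known < HAMMING_THRESHOLD for known in known_hashes)

# A recompressed or lightly cropped re-upload of a known image usually stays within
# the threshold; a freshly generated deepfake of the same person produces a distant
# hash and sails past this check entirely.
```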
Victim redress remains patchy. Initiatives like StopNCII.org, a cross-industry hashing program backed by safety groups and leading platforms, can block known images before they go viral, but only if platforms actually integrate and prioritize its hash lists. Without strong hashing pipelines and coordination across platforms, copies arrive as fast as they can be deleted.
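A rough sketch of the integration step described above, assuming a platform ingests hashes shared through a StopNCII-style program; the feed format and function names are hypothetical, and such programs distribute hashes of victim-submitted images rather than the images themselves.

```python
# Hypothetical sketch of wiring a shared hash list into the upload path. The feed
# format and function names are assumptions; matching happens locally on each platform.
blocked_hashes: set[str] = set()

def ingest_shared_hashes(feed: list[str]) -> None:
    """Merge hashes received from the cross-industry program into the local index."""
    blocked_hashes.update(h.lower() for h in feed)

def handle_upload(upload_hash: str) -> str:
    """Decide what to do with an upload based on its precomputed perceptual hash."""
    if upload_hash.lower() in blocked_hashes:
        return "block"            # known nonconsensual image: stop it before it posts
    return "allow_and_monitor"    # unknown content falls through to other defenses
```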
Provenance signals can help, but only so far. Watermarking and content credentials in a C2PA-style system can label AI-generated outputs at the point of creation, but those labels are often stripped when content is reposted, cropped, or screen-captured. Researchers warn that provenance is necessary but insufficient without robust upload filters, behavioral throttles, and consequence-backed deterrence.
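As a crude illustration of why provenance alone falls short, the sketch below only checks whether a file still carries an embedded C2PA manifest label; genuine verification parses and cryptographically validates the manifest with a proper C2PA SDK, which this deliberately does not attempt.

```python
# Crude heuristic for illustration only: look for the C2PA manifest label in the
# raw bytes of a file. Real verification validates signatures with a C2PA SDK.
def appears_to_have_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

# A screen capture of a labeled image is a brand-new file with no embedded manifest,
# so this check (and any stricter one) returns False and the provenance signal is gone.
```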
What Effective Enforcement Would Look Like in Practice
Regulators can require clear redress standards: 24/7 escalation support for victims, rapid verification pipelines, and tight time-to-takedown service levels for intimate-image abuse. Under the DSA, such expectations can be baked into risk mitigation plans, audited, and enforced with penalties when platforms fall short.
Platforms can harden their stack with layered defenses: classifiers trained to detect synthetic nudity, consent-aware face matching that blocks uploads targeting specific people, and friction on higher-risk features, such as rate limits on new accounts posting images or verification requirements for access to image-generation tools.
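A minimal sketch of the rate-limit style of friction mentioned above, with illustrative thresholds and a hypothetical Account record; a real deployment would layer this beneath the classifiers and consent checks described in the paragraph.

```python
# Illustrative friction rule: stricter image-posting limits for new, unverified accounts.
# The Account record and all thresholds are assumptions, not any platform's real policy.
from dataclasses import dataclass, field
from time import time

@dataclass
class Account:
    created_at: float                     # unix timestamp of account creation
    verified: bool = False
    recent_image_posts: list[float] = field(default_factory=list)

NEW_ACCOUNT_AGE_S = 7 * 24 * 3600         # under a week old counts as "new"
NEW_LIMIT_PER_HOUR = 3                    # illustrative ceilings
ESTABLISHED_LIMIT_PER_HOUR = 30

def may_post_image(account: Account, now: float | None = None) -> bool:
    """Sliding one-hour window with a lower ceiling for unverified new accounts."""
    if now is None:
        now = time()
    recent = [t for t in account.recent_image_posts if now - t < 3600]
    is_new = (now - account.created_at) < NEW_ACCOUNT_AGE_S and not account.verified
    limit = NEW_LIMIT_PER_HOUR if is_new else ESTABLISHED_LIMIT_PER_HOUR
    return len(recent) < limit
```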
Transparency is the pressure valve. Regular reporting on the prevalence of violating content, average time to takedown, false positive rates, and StopNCII.org and C2PA adoption across all surfaces would let watchdogs and users track progress. Without visible, measurable improvement, governments have the legal tools, and growing political momentum, to compel change.
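For a sense of how such figures could be computed, here is a hedged sketch that derives median time-to-takedown and a false-positive rate from hypothetical moderation records; the Case structure and field names are assumptions, not any platform's actual schema.

```python
# Rough sketch of the transparency metrics mentioned above, computed from a list of
# hypothetical moderation cases.
from dataclasses import dataclass
from statistics import median

@dataclass
class Case:
    reported_at: float          # unix timestamps
    removed_at: float | None    # None if the item was never taken down
    was_violation: bool         # ground-truth label after review

def takedown_metrics(cases: list[Case]) -> dict[str, float]:
    """Median time-to-takedown (hours) and false-positive rate among removed items."""
    removed = [c for c in cases if c.removed_at is not None]
    removal_hours = [(c.removed_at - c.reported_at) / 3600 for c in removed]
    false_positives = sum(1 for c in removed if not c.was_violation)
    return {
        "median_time_to_takedown_hours": median(removal_hours) if removal_hours else 0.0,
        "false_positive_rate": false_positives / len(removed) if removed else 0.0,
    }
```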