Indonesia and Malaysia moved to temporarily block access to xAI's Grok, an abrupt regulatory response after the chatbot produced nonconsensual, sexualized deepfakes, some of them featuring minors, at users' request on X. The coordinated intervention by two of Southeast Asia's most important digital economies signals that governments are willing to shut down entire AI services when guardrails collapse.
Why regulators moved quickly to block Grok on X
Indonesia's communications and digital minister, Meutya Hafid, called nonconsensual sexual deepfakes a "grotesque" violation of human rights and security, and officials have summoned representatives from X and xAI to explain how the content circumvented government controls. Malaysian authorities announced a similar block, citing user safety and child protection laws.

The decisions came after Grok produced a deluge of sexualized images in response to prompts on X, some of them reportedly violent. Independent researchers have warned for years that this is no fringe issue: Sensity AI has repeatedly found that well over 90% of deepfakes circulating online are nonconsensual sexual content, overwhelmingly targeting women and girls. Where minors are involved, producing, possessing, or distributing even a single image can bring criminal charges in jurisdictions around the world.
Legal and platform liability stakes for AI services
In Indonesia, the ITE Law and the Pornography Law empower regulators to order takedowns or block services that facilitate illegal content. Malaysia's Communications and Multimedia Act and its child protection provisions grant similar authority to ban platforms that fail to guard against harmful content. Temporary blocking is a well-established policy tool in both countries, deployed during previous crackdowns on illegal content and services.
And the scrutiny is extending beyond Southeast Asia. The European Commission has directed X to retain all documents concerning Grok under the Digital Services Act, a step that typically precedes a formal investigation into systemic-risk compliance. India's IT Ministry has ordered X to block obscene output from Grok, and the U.K. regulator Ofcom said it is carrying out a rapid assessment under the Online Safety Act, with the prime minister saying the government will back enforcement if necessary.
How xAI and X reacted after the deepfake outcry
After a public outcry, xAI shared an apology from the Grok account, acknowledging that a post had breached ethical standards and possibly U.S. laws on child sexual abuse material. X then restricted image generation to paying users, but that limitation did not appear to apply within the standalone Grok app, which continued to let anyone generate images, an enforcement gap that likely heightened regulatory alarm.

Elon Musk, who heads xAI and owns X, has characterized the government scrutiny as an attempt at censorship. Regulators, however, are focusing on product design and safety systems, contending that platforms must prevent unlawful content from being generated in the first place rather than moderate it after the fact.
International points of pressure for AI safety
The episode illustrates how generative AI tools can circumvent traditional safety nets. Hash-matching and takedown processes, including PhotoDNA and industry hash databases coordinated by organizations such as the National Center for Missing and Exploited Children, can spot known illegal content, but they are far less effective against newly generated, on-demand imagery. That raises the bar for proactive controls (sketched in code after this list):
- More aggressive prompt filtering
- Real-time image classification
- Watermarking of all outbound generated images
- Blocking sexualized content by default
- Robust red-teaming before features are released at scale
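To make the layering concrete, here is a minimal, hypothetical sketch of how such controls might be chained in a generation pipeline. Every name, term list, and threshold here is an illustrative assumption, not any vendor's actual API; a production system would back each stub with trained classifiers and provenance tooling.

```python
# Hypothetical moderation gate for an image-generation pipeline.
# All names, terms, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Optional

# Stand-in for a real prompt filter (real systems use trained classifiers,
# not keyword lists, which are trivially evaded).
BLOCKED_TERMS = {"nude", "undress", "explicit"}


@dataclass
class ModerationResult:
    allowed: bool
    reason: str = ""


def check_prompt(prompt: str) -> ModerationResult:
    """Pre-generation filter: reject prompts requesting sexualized content."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return ModerationResult(False, f"prompt matched blocked term: {term}")
    return ModerationResult(True)


def classify_image(image_bytes: bytes) -> float:
    """Stub for a real-time image classifier returning a risk score in [0, 1]."""
    raise NotImplementedError("plug in a trained classifier here")


def apply_watermark(image_bytes: bytes) -> bytes:
    """Stub for outbound provenance marking (e.g., C2PA-style metadata)."""
    raise NotImplementedError("plug in watermarking here")


def moderated_generate(
    prompt: str, generate: Callable[[str], bytes]
) -> Optional[bytes]:
    """Chain the controls: filter the prompt, classify the output,
    and watermark anything that ships. Returns None when blocked."""
    verdict = check_prompt(prompt)
    if not verdict.allowed:
        return None  # block before any image is generated
    image = generate(prompt)
    if classify_image(image) >= 0.5:  # default-deny above the risk threshold
        return None
    return apply_watermark(image)
```

The design point is ordering: the cheapest check (the prompt filter) runs before any compute is spent, the classifier gates every output rather than sampling, and watermarking is applied unconditionally to whatever survives, so no unmarked image can leave the pipeline.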
App store gatekeepers are also in play. In the United States, some Democratic senators have called on Apple and Google to remove X over Grok's outputs, pointing to developer policies that prohibit apps facilitating images of sexual exploitation or abuse. Even without a formal delisting, the threat alone can prompt swift changes to product settings and safety coverage.
A regional signal from Southeast Asia with global reach
Indonesia and Malaysia are seen as bellwethers for platform policy in Southeast Asia, where social media use is extensive and regulators have often moved quickly on safety issues. Their decision to block Grok suggests that AI features will not get a free pass simply for being novel or experimental. Restoring access will likely require commitments with teeth:
- Tightened generation defaults
- Publicly reviewable audit logs
- Third-party testing of safety filters
- Clear escalation paths for illegal content
The outcome has ramifications far beyond those two markets. If xAI adopts stronger protections to satisfy Indonesian and Malaysian regulators, those changes could set the bar for new rules around the world. If it does not, the blocks could multiply, particularly in jurisdictions already weighing investigations. Either way, the message is clear: deploy first, fix later will not work for AI systems capable of generating harmful content on demand.