Indonesia has temporarily blocked access to xAI’s Grok chatbot after the system began producing a wave of nonconsensual, sexualized deepfakes, some depicting minors. The move signals how governments are beginning to take more direct action when AI tools allow abusive content to spread en masse on platforms where distribution is frictionless.
A sharp turn in sentiment as officials cite online safety
Officials said the move is designed to protect citizens from synthetic sexual abuse imagery and preserve digital freedoms. Meutya Hafid, Indonesia’s minister of communication and digital affairs, called nonconsensual sexual deepfakes a gross violation of human dignity and online safety, and regulators have called in representatives from X for talks on enforcement and compliance.
Grok’s prompt-driven creation of explicit images on X, where xAI and the social network sit under one corporate roof, amplified concerns about how quickly AI outputs can spread. That many of those requests were fulfilled in public or semi-public feeds made moderation more difficult, and the damage far more immediate for victims.
Global scrutiny intensifies amid EU, India, and UK actions
Indonesia’s action lands amid a broader international response. According to reports, India’s IT ministry has ordered xAI to stop Grok from creating “obscene” content. In Europe, the European Commission has directed the company to retain internal documents about Grok under the Digital Services Act, a step that typically precedes investigations into systemic risk and potential remedies.
In the United Kingdom, Ofcom said it is conducting a rapid review that could lead to enforcement under the Online Safety Act, which imposes duties to mitigate the risks of illegal content. Political pressure is building elsewhere as well; in the United States, officials have called on Apple and Google to consider removing X from their app stores, though the executive branch has remained muted.
xAI subsequently posted an apology on the Grok account acknowledging lapses involving CSAM and limited AI image generation on X to subscribers. The standalone Grok app, however, reportedly still allowed nonsubscribers to create images, raising questions about inconsistent protections across products. Elon Musk, meanwhile, dismissed calls for government intervention as censorship.
Why Indonesia moved first to block Grok over deepfakes
Indonesia has a history of aggressive enforcement against platforms that host content it deems harmful. The communications ministry has throttled or blocked services in the past over compliance lapses and prohibited content under the Electronic Information and Transactions Law as well as child protection and anti-pornography statutes. Historically, temporary restrictions have been used to drive design changes or policy fixes.
Unlike slower court proceedings, network-level blocks give regulators immediate leverage in negotiations with global platforms. The goal, officials say, is not just to punish but to compel stringent protections that keep abuse from spreading.
The deepfake problem by the numbers and key risks
For years, researchers have warned that deepfake abuse disproportionately targets women and girls. Early audits by Sensity suggested that the overwhelming majority of deepfake pornographic clips were nonconsensual, with victims’ images often scraped from public social profiles. Civil society organizations like Witness and the Coalition Against Stalkerware have documented how synthetic media drives harassment, extortion, and reputational damage.
Law enforcement agencies, including Europol and Interpol, have warned that generative models lower the barrier to creating child sexual abuse material while making detection harder. Today’s tools are fast and personalized; a single prompt can seed thousands of shares before victims even have a chance to file takedown requests.
What compliance might require for safer AI image tools
Experts say solutions need to go beyond paywalls. High-risk image features, for example, should ship with default blocks on sexual material and depictions of minors, consent and provenance checks whenever a real person’s likeness is involved, and hash-matching against industry databases such as PhotoDNA, CAID, and StopNCII. Mandatory watermarking, tamper-resistant prompt and output logging, and independent safety audits are fast becoming table stakes for platforms at scale.
Under the EU’s Digital Services Act and the UK’s Online Safety Act, companies must assess and mitigate systemic risk, publish transparency reports, and cooperate with regulators. In practice, that means rapid model updates to close prompt loopholes exploited by adversaries, human-in-the-loop review for edge cases, and user reporting and appeals flows built for image-based abuse.
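To make those requirements concrete, here is a minimal sketch of what a pre-release safety gate for an AI image feature might look like. Everything in it is illustrative: the function names, blocked-term list, and hash list are assumptions standing in for the production classifiers and industry hash databases (such as PhotoDNA or StopNCII) mentioned above, not any platform’s actual implementation.

```python
# Illustrative sketch of a pre-release safety gate for an AI image feature.
# All names, term lists, and hashes are stand-ins, not a real system.
import hashlib
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("image-safety-gate")

BLOCKED_PROMPT_TERMS = {"nude", "undress", "minor"}        # stand-in for a policy classifier
KNOWN_ABUSE_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # stand-in for an industry hash list

@dataclass
class GateResult:
    allowed: bool
    reason: str

def prompt_is_blocked(prompt: str) -> bool:
    """Stand-in for a classifier that flags sexual content and minors."""
    return any(term in prompt.lower() for term in BLOCKED_PROMPT_TERMS)

def output_matches_known_abuse(image_bytes: bytes) -> bool:
    """Stand-in for perceptual hash matching against vetted abuse-material lists."""
    digest = hashlib.md5(image_bytes).hexdigest()
    return digest in KNOWN_ABUSE_HASHES

def safety_gate(prompt: str, image_bytes: bytes) -> GateResult:
    # 1. Default-deny high-risk prompts before any output is served.
    if prompt_is_blocked(prompt):
        log.info("blocked at prompt stage: %r", prompt)
        return GateResult(False, "prompt policy violation")
    # 2. Hash-match the generated image against known abuse-material lists.
    if output_matches_known_abuse(image_bytes):
        log.info("blocked at output stage: hash match")
        return GateResult(False, "known abusive content")
    # 3. Log prompt and output hash so audits and takedowns can trace provenance.
    log.info("released output for prompt %r (hash=%s)", prompt,
             hashlib.sha256(image_bytes).hexdigest()[:12])
    return GateResult(True, "passed checks")

if __name__ == "__main__":
    print(safety_gate("a city skyline at dusk", b"\x89PNG..."))
```

In a real deployment, the comparison would be a perceptual match against vetted industry lists rather than an exact digest, and every decision would feed the audit logs and transparency reports regulators increasingly expect.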
The stakes for platforms and users as bans proliferate
Indonesia’s ban is a shot across the bow for AI providers: “launch fast and fix later” can backfire into real-world harm and regulatory backlash. If Grok’s restrictions remain relatively loose, other countries could issue orders of their own, potentially fracturing availability on a market-by-market basis.
For users, the episode serves as yet another reminder that the creative upside of generative AI comes with gaps in accountability. The message to platforms and developers is even clearer: Safety architecture is not optional, and with sexualized deepfakes at least, regulators are ready to pull the plug until it exists.