Indonesia has banned access to Grok, an X-affiliated AI chatbot, after the bot was found capable of generating sexually suggestive deepfake images, some involving bikini-clad minors. The move highlights Jakarta’s tough stance on online pornography and image-based abuse, while the broader X platform remains accessible across the country.
Indonesia's communications minister said that non-consensual sexual deepfakes violate rights and endanger public safety in the digital space, citing the country's strict content regulations. The swift response follows a viral trend on X in which users prompted Grok to produce nude and partially clothed images of women, including public figures and — according to several nonprofits that regularly monitor platforms like X — minors.

Why Indonesia Moved Quickly to Restrict Grok
Indonesia's regulatory stance leaves little room for doubt. Pornographic content, especially content involving minors, is unequivocally prohibited under the Electronic Information and Transactions Law and the Anti-Pornography Law. Kominfo, the ministry overseeing communications, also enforces intermediary obligations under Ministerial Regulation No. 5/2020, which allows for swift removal of unlawful content as well as service restrictions when platforms fail to act.
The government has used these tools before. Several services, including Valve's gaming platform Steam and PayPal, were briefly blocked in 2022 for failing to meet registration and compliance requirements under Indonesia's Electronic System Operator regime. In that climate, a chatbot producing sexualized deepfakes was never going to get much slack, and once the prompt trend went viral, it got none.
Scale also matters. Indonesia is the world's fourth-most-populous nation and the largest with a Muslim majority, with over 215 million internet users, according to DataReportal. Officials routinely cast platform governance as much a public order issue as a tech policy one, and the Grok episode fit squarely within that frame.
Pressure Mounts on Grok as Global Scrutiny Grows
Indonesia is not the only nation sounding alarms. In the UK, Prime Minister Keir Starmer said that sexual deepfakes linked to Grok are illegal and unacceptable, and government sources say they anticipate Ofcom will use its full range of enforcement under the Online Safety Act. In the European Union, officials said they were exploring whether Grok’s use on X could violate the Digital Services Act, which permits fines of up to 6 percent of global turnover for systemic failures in preventing harm.
In the US, several senators called on Apple and Google to pull Grok and X from their app stores, saying the products violate platform guidelines on sexual content and exploitation. X and xAI, meanwhile, are moving to limit the damage: warning users that asking Grok to generate illegal content carries consequences "not dissimilar" from uploading it, and making image generation a paid-only feature while additional protections are reviewed.

The underlying safety issue is not unique to Grok, but the public nature of the prompts and their wide circulation made the risk impossible to ignore. The majority of deepfakes that Sensity AI and other watchdogs have found circulating online are pornographic and non-consensual, a dynamic that disproportionately affects women and girls.
Deepfake Abuse Goes Mainstream Across Platforms
Two forces are colliding: high-powered but low-friction image generators and social platforms that can take a niche trend global overnight. That combination turns fringe abuse into a broadcast spectacle, overwhelming routine moderation triage. Even robust text filtering can be circumvented when users iterate on prompts or exploit a model's blind spots, a pattern researchers say will continue unless platforms deploy multiple lines of defense, including detection, watermarking, and post-generation scanning.
For regulators, the Grok episode illustrates platform liability for model behavior embedded within social networks. Under the EU's Digital Services Act, Very Large Online Platforms must assess and mitigate systemic risks, and abuse of generative tools increasingly counts as one such risk. The UK's Online Safety Act imposes duties of care on services children are likely to access, and Ofcom has identified deepfake harms as a priority. Indonesia, meanwhile, appears to approach the issue in binary terms: if safeguards fail and harmful content floods platforms, authorities may cut off access with little delay.
What X and xAI Need to Do to Re-Enter Indonesia
To get Grok restored in Indonesia, X and xAI will likely have to demonstrate a strong set of guardrails adapted to local law. This could mean more aggressive prompt filtering of sexual content, image generation off by default at account creation unless explicitly opted into, real-time scanning for synthetic nudity including any depicting minors, clear pathways to report abuse, and a commitment to remove content within the deadlines Kominfo sets. Transparent audit logs and cooperation with trusted flaggers would also help rebuild trust.
More broadly, the episode suggests where regulators are headed: treating generative models not as neutral engines but as integrated features that must clear the same safety bars as the platforms hosting them. For the people running AI companies, that means designing with abuse pathways in mind from day one, then demonstrating as much not only to users but also to governments with the power to pull the plug.
