Elon Musk’s AI chatbot Grok is now under investigation by authorities in several countries, following reports that it produced non-consensual sexualized deepfakes, some including images purportedly of minors.
Authorities in India, France, and Malaysia said they were investigating whether the product’s image-generation features allowed illegal content to be created and distributed on X, the social platform where Grok is featured.

The inquiries raise immediate questions about how closely AI image tools are regulated inside social platforms, what “safe harbor” protections actually extend to when models produce material on demand, and whether today’s preventative measures can meaningfully block the most harmful use cases.
What Authorities Are Probing in Multiple Countries
India’s Ministry of Electronics and Information Technology instructed X to address complaints over Grok’s image generation, ordering corrective action and submission of an “action taken” report within 72 hours, TechCrunch reported. The ministry cautioned that failure to comply could endanger the platform’s “safe harbor” status under Indian law, which shields platforms from liability for user-generated content so long as they act responsibly.
In France, several government ministers forwarded Grok-related complaints to the Paris prosecutor and to PHAROS, the government’s online reporting service for illegal content, in a bid to secure immediate takedowns of potentially illegal synthetic imagery, Politico reported. French prosecutors can bring charges under laws on sexual exploitation, harassment, and the sharing of illegal content, while platforms face growing accountability under European digital rules.
Malaysia’s Communications and Multimedia Commission has said it is investigating the “misuse of AI tools on the X platform,” an indication that Grok’s outputs could run afoul of Malaysia’s Communications and Multimedia Act, which bars improper or offensive content from being transmitted over networks.
The Legal Stakes for X and xAI Under Global Laws
In the EU, X is a designated Very Large Online Platform under the Digital Services Act, which requires rigorous risk assessments, rapid removal of illegal content, and demonstrated mitigation of systemic harms. Failure to comply could result in fines of as much as 6% of worldwide annual turnover. While Grok is developed by xAI, its availability through X blurs the line, in regulators’ eyes, between model provider and platform host.
In India, safe harbor has historically been predicated on notice-and-takedown and due diligence under the Information Technology Act and its rules. Should regulators find that Grok’s generation tools themselves produce prohibited content, they could argue that X and xAI owe a proactive duty to prevent foreseeable harms rather than merely remediate them after the fact.
Malaysia’s framework also looks at whether platforms facilitate the transmission of prohibited content and whether remedial actions are prompt and effective. Investigations frequently consider whether automated filters, reporting tools, or human moderators fell short.

How the Guardrails Fell Away on Image Generation
User posts circulated on X claiming that Grok could be prompted into generating sexualized, non-consensual images of subjects who appeared to be underage. Although xAI representatives emphasized that they were tightening safety measures, the episode underscores a long-recognized vulnerability in multimodal systems: adversarial prompting and image composition can route around policy filters if safety checks are not layered and robust.
Across industries, the overwhelming majority of identified deepfakes are sexual in nature. Sensity, a research group that tracks public deepfake content, reported that about 96 percent of the deepfake videos it identified were non-consensual pornography, overwhelmingly targeting women. Watchdogs have also recorded an increase in AI-generated child sexual abuse material, leading to urgent demands for provenance signals, age-protective design, and more aggressive detection pipelines.
Best practice usually involves multiple layers of gatekeeping: restrictive prompts and model policies, post-generation classifiers that block prohibited outputs, and content provenance and watermarking systems such as C2PA-style “content credentials.” If any one layer is weak, or if moderation cannot scale to catch edge cases, harmful generations can slip through and spread quickly on a platform of X’s size.
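As a rough illustration of what that layering can look like, the Python sketch below wires the three gates together in sequence. Every function name here (check_prompt_policy, classify_output, attach_content_credentials) is a hypothetical placeholder rather than any vendor’s actual API; a production system would back each stub with trained safety classifiers and a real provenance-signing library.

```python
# Minimal sketch of layered gatekeeping for an image generator (hypothetical helpers).
from dataclasses import dataclass


@dataclass
class Decision:
    allowed: bool
    reason: str = ""


def check_prompt_policy(prompt: str) -> Decision:
    # Layer 1: refuse prompts that clearly request prohibited content.
    banned_terms = {"minor", "non-consensual"}  # placeholder list, not a real policy
    if any(term in prompt.lower() for term in banned_terms):
        return Decision(False, "prompt policy violation")
    return Decision(True)


def classify_output(image_bytes: bytes) -> Decision:
    # Layer 2: post-generation safety classifier; a real system would call a
    # trained model here rather than this pass-through stub.
    return Decision(True)


def attach_content_credentials(image_bytes: bytes) -> bytes:
    # Layer 3: embed provenance metadata (e.g. C2PA-style credentials) so the
    # output can be traced back to the generator. Stubbed for illustration.
    return image_bytes


def generate_image_safely(prompt: str, generate) -> bytes | None:
    # Run every gate; returning None means the request was refused or blocked.
    if not check_prompt_policy(prompt).allowed:
        return None  # refused before any generation happens
    image = generate(prompt)
    if not classify_output(image).allowed:
        return None  # blocked after generation
    return attach_content_credentials(image)
```

The point of the structure is that no single check is trusted on its own: a prompt that slips past the first filter still has to clear the output classifier, and anything that is released carries provenance metadata that aids later detection and takedown.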
Musk and xAI Respond to Investigations and Risks
Musk has claimed publicly that users who prompt Grok to produce illegal content bear the responsibility, much as if they had uploaded contraband themselves. (Therein lies a disputed line: whether generative tools offered as part of a platform should be treated as neutral hosting infrastructure or as publishers with greater duties of care.)
Members of the xAI team have said they are exploring more stringent guardrails. For regulators, evidence of rapid mitigation will weigh heavily in any findings of liability and resulting orders: default blocks on risky prompts, better age-signal detection, more conservative image tools, and faster removal of reported content.
The Next Chapter for AI Safety on Social Platforms
The investigations into Grok could also set precedents for how governments deal with AI systems embedded in social platforms. Expect authorities to test whether platforms can show measurable decreases in harmful generations, transparent incident reporting, and adherence to frameworks like the NIST AI Risk Management Framework and forthcoming EU AI rules.
For the industry more broadly, the takeaway is clear: safety layers need to hold up against creative abuse at scale. That means tight defaults for image generators, verifiable provenance metadata, documented red-teaming, and well-staffed trust and safety operations that act in hours, not weeks. If the investigations ultimately find that Grok’s controls were inadequate, the fallout could reverberate across every platform rushing to bolt generative AI onto its products.