
France and Malaysia Investigate Grok Over Sexualized Deepfakes

By Gregory Zuckerman
Last updated: January 4, 2026 6:08 pm
Technology
7 Min Read

French and Malaysian regulators have opened investigations into Grok, the AI chatbot from xAI integrated into X, following reports that it generated sexualized deepfakes of children. The probes compound international pressure on the platform as governments examine how generative models are being weaponized to churn out abusive content at scale.

Focus Of Investigations On Dangerous AI Image Generation

The Paris prosecutor’s office said it was investigating the spread of pornographic deepfakes on X after complaints from French digital-affairs officials brought the issue to its attention. In Malaysia, the communications regulator said it was looking into public complaints about the misuse of AI tools on X to produce indecent and harmful content, including manipulated images of women and minors.

Table of Contents
  • Focus Of Investigations On Dangerous AI Image Generation
  • Regulatory Pressure Grows Around the World
  • Deepfake Misuse Trends and the Risks for Users and Platforms
  • Platform and Developer Responses to Deepfake Abuse
  • What Comes Next for X, xAI, and Cross-Border Oversight

The moves come after India’s IT ministry ordered X to restrain Grok from publishing obscene or illicit content under Indian law, warning that failure to act could put the platform’s safe harbor protections at risk. X’s owner, Elon Musk, has publicly said that users who prompt Grok to generate illegal content will face the same consequences as those who upload it, and xAI has said it is reviewing its safeguards to prevent further misuse.

Regulatory Pressure Grows Around the World

In Europe, X is classified as a Very Large Online Platform under the Digital Services Act and must assess and mitigate systemic risks, such as deepfakes and threats to child safety, under heightened standards of care. Breaches can result in fines of up to 6% of global turnover and corrective orders. France’s inquiry signals an increasing willingness by national authorities to combine DSA oversight with criminal and consumer protection laws when AI systems enable illegal content.

Malaysia’s investigation relies on the Communications and Multimedia Act, which prohibits improper use of network facilities and obscene or offensive content. Regulators there have been cracking down harder on online harms such as gender-based abuse and manipulated images. Coordinated enforcement across jurisdictions is creating a dense legal environment for platforms that embed generative tools directly in their feeds.

Deepfake Misuse Trends and the Risks for Users and Platforms

Although deepfake technology has legitimate applications in entertainment and accessibility, the most common abuse targets women and girls. Studies by Sensity AI have found that the overwhelming majority of detected deepfakes were nonconsensual sexual content. The Internet Watch Foundation and the National Center for Missing and Exploited Children have also cautioned that AI tools lower the barriers to creating manipulated content that exploits minors, exacerbating existing online grooming and sextortion trends.

Europol and other law enforcement agencies warn that generative models can be prompted past known safety guardrails, allowing bad actors to synthesize abusive imagery without having to source original material. When that kind of tooling is grafted onto a high-velocity social network, the feedback loop accelerates: prompting, generation and distribution all occur in one place, which complicates flagging, takedown and evidence preservation.


The harms are not theoretical. Rights groups that monitor nonconsensual imagery describe reputational damage, extortion and psychological trauma for victims, even when the images are synthetic. And once content circulates, it is hard to remove: hashing, watermarking and provenance signals only work if platforms adopt them across the board and agree on shared databases.
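To illustrate what hash-based matching against a shared database involves, here is a minimal, stdlib-only sketch of a perceptual “average hash.” The 8x8 grid input, the distance threshold and the function names are illustrative assumptions; production systems use robust industry hashes (such as PhotoDNA or PDQ) and shared, vetted databases rather than this toy scheme.

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Perceptual 'average hash': one bit per pixel, set when the pixel
    is brighter than the image mean. `pixels` is an 8x8 grayscale grid
    (0-255); real pipelines first resize the full image down to this."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(pixels: list[list[int]],
                     database: set[int],
                     max_distance: int = 5) -> bool:
    """Flag an upload if its hash lies within max_distance bits of any
    known abusive image's hash in a shared cross-platform database."""
    h = average_hash(pixels)
    return any(hamming(h, known) <= max_distance for known in database)
```

Because the hash reflects coarse brightness structure, a lightly edited copy of a known image still lands within a few bits of the original entry, which is what lets shared databases catch re-uploads without storing the images themselves.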

Platform and Developer Responses to Deepfake Abuse

Experts say immediate measures should include tightening prompt classifiers, broadening blocklists for sexualized and child-related terms, and employing ensemble safety systems that review model outputs before any image is shown to users. Model-side updates will also be needed, such as adversarially trained safety filters and refusal tuning, along with platform enforcement that bans repeat offenders and preserves an audit log for law enforcement.
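The layered gating described above can be sketched as a two-stage pipeline: a prompt-level blocklist check before generation, then a majority vote over independent output classifiers before display. Everything below is hypothetical, including the blocklist pattern, the classifier scores and the function names; real deployments use trained classifiers, not keyword lists alone.

```python
import re

# Hypothetical blocklist; real systems combine many signals.
BLOCKED_TERMS = re.compile(r"\b(minor|child|schoolgirl)\b", re.IGNORECASE)

def prompt_is_blocked(prompt: str) -> bool:
    """First gate: refuse before generation if the prompt matches a blocklist."""
    return bool(BLOCKED_TERMS.search(prompt))

def ensemble_verdict(scores: list[float], threshold: float = 0.5) -> bool:
    """Second gate: majority vote over independent safety classifiers
    that score the *generated image* before it reaches the user.
    Returns True when the image should be withheld."""
    flags = sum(1 for s in scores if s >= threshold)
    return flags * 2 > len(scores)

def release_image(prompt: str, classifier_scores: list[float]) -> str:
    """Run both gates in order; only images that pass both are released."""
    if prompt_is_blocked(prompt):
        return "refused_at_prompt"
    if ensemble_verdict(classifier_scores):
        return "withheld_after_review"
    return "released"
```

The point of the ordering is that the cheap prompt check stops obvious abuse before any compute is spent on generation, while the ensemble catches outputs that a single classifier, or the prompt filter alone, would miss.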

Industry peers offer a roadmap. Most major AI image tools have added tougher default refusals on sexual content, age-estimation checks that deny prompts referencing minors, and visible content credentials that make provenance easier to trace. The Coalition for Content Provenance and Authenticity and the Partnership on AI have advocated interoperable standards, but uptake has been uneven.

For X and xAI the bar is higher because the model and the distribution channel are intertwined. That means safety reviews need to happen before something is generated, not just after it is uploaded. Third-party red-team testing, public transparency reports and cooperation with groups like NCMEC for rapid escalation can demonstrate due diligence while the investigations proceed.

What Comes Next for X, xAI, and Cross-Border Oversight

France’s investigation will probably examine both the underlying AI controls and the platform’s moderation pipeline: whether guardrails failed, how quickly content was removed and what recourse victims had. Malaysia’s regulator can impose fines or require stricter restrictions on generative features if offences are proven. India’s safe harbor warning creates a short-fuse compliance deadline that will likely force rapid product changes.

The bigger question is whether generative-image tools can be embedded in social platforms safely at scale. Regulators are signaling that “we tried” is not a defense. If safeguards cannot reliably curb unpredictable outputs from models like Grok, regulators may mandate feature suspensions or restrictive defaults until independent safety audits are in place. How X and xAI respond to those demands will determine not just the outcome of these inquiries, but the next frontier of AI regulation worldwide.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.
FindArticles
  • Contact Us
  • About Us
  • Write For Us
  • Privacy Policy
  • Terms of Service
  • Corrections Policy
  • Diversity & Inclusion Statement
  • Diversity in Our Team
  • Editorial Guidelines
  • Feedback & Editorial Contact Policy
FindArticles © 2025. All Rights Reserved.