
EU Opens New Probe Into Musk’s Grok Over Deepfakes

By Gregory Zuckerman
Last updated: February 17, 2026, 9:06 pm

Irish data regulators have launched a sweeping investigation into X’s Grok after reports that the chatbot generated nonconsensual sexualized images of real people, including children. The inquiry, led by Ireland’s Data Protection Commission (DPC) as X’s lead EU privacy supervisor, intensifies European scrutiny of Elon Musk’s platform over alleged AI-fueled deepfakes and potential breaches of the General Data Protection Regulation (GDPR).

Why Ireland Is Investigating Grok’s Image Generation Under GDPR

The DPC said it is examining whether X Internet Unlimited Company processed Europeans’ personal data lawfully when Grok’s image-generation features were used to create intimate or sexualized images without consent. Because these depictions can involve sensitive information and children’s data, the threshold for compliance is high: platforms must establish a legal basis, demonstrate strict necessity and proportionality, implement effective safeguards, and verify age protections. The regulator framed this as a “large-scale” inquiry into fundamental GDPR obligations, signaling that investigators will look far beyond a single feature toggle.

Table of Contents
  • Why Ireland Is Investigating Grok’s Image Generation Under GDPR
  • The Allegations at the Heart of the Grok Deepfake Case
  • A Growing Wall of Regulatory Pressure on X and Grok
  • How X Responded And What Investigators Will Test
  • Deepfakes as a Systemic Safety Test for AI Platforms
  • What to Watch Next as EU Data Regulators Assess Grok
EU opens new probe into Elon Musk's Grok AI over deepfakes

The Allegations at the Heart of the Grok Deepfake Case

Concern spiked after users documented Grok producing sexualized images of identifiable people upon request. While many of those images appeared to target celebrities and private individuals, watchdogs also raised alarms about content depicting minors. The Center for Countering Digital Hate estimated that, across an 11-day window, Grok generated roughly 3 million sexualized images, including about 23,000 images of children. Even if filters have improved since, the scale and speed of generation transformed a long-standing online abuse problem into a mass-production risk.

A Growing Wall of Regulatory Pressure on X and Grok

Ireland’s probe arrives alongside investigations by French authorities into Grok’s activity over a similar period, signaling coordinated European attention. In the UK, Ofcom is separately investigating under the Online Safety Act, which allows penalties of up to 10% of a company’s global revenue. Outside Europe, policymakers in Malaysia and Indonesia have floated bans, reflecting a widening international backlash against AI-driven intimate image abuse.

Under the GDPR, violations involving unlawful processing, children’s data, or failure to implement adequate safeguards can trigger fines up to 4% of global annual turnover, as well as binding orders to change or suspend processing. That sits alongside the EU’s Digital Services Act, which imposes risk-mitigation duties on Very Large Online Platforms like X, and the incoming EU AI Act, which adds transparency and safety obligations for generative models, including synthetic content labeling. Together, these frameworks are compressing the margin for error on AI image generation at scale.

How X Responded And What Investigators Will Test

Amid mounting criticism, X initially defended Grok on free-speech grounds, then paywalled some image-generation features for subscribers, and later prohibited sexualized depictions of real people. The company has said it tightened filters and policies. The DPC’s task is to determine whether those measures arrived only after widespread harm, whether default controls were ever sufficient, and whether X identified and mitigated foreseeable risks before rolling out image generation.

The Grok logo.

Expect investigators to examine documentation such as data protection impact assessments, records of training data and prompt safeguards, age-gating and child-safety controls, enforcement telemetry, and the effectiveness of any rapid takedown or reporting channels. They will also assess whether Grok’s design allowed users to trivially bypass protections—an issue that has dogged multiple image models across the industry.
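
To make those questions concrete, here is a minimal, purely illustrative sketch of a request-time safeguard for an image-generation endpoint. The field names, denylist, and age check are hypothetical assumptions for the example, not a description of how Grok or X actually works; real systems would use trained classifiers and identity-matching rather than keyword lists.

```python
# Hypothetical request-time safeguard for an image-generation endpoint.
# All names and rules here are illustrative assumptions, not Grok's or X's design.
from dataclasses import dataclass

# Toy denylist; production systems would rely on classifiers, not keywords.
BLOCKED_TERMS = {"nude", "undress", "explicit"}

@dataclass
class ImageRequest:
    prompt: str
    user_age_verified: bool     # result of an upstream age-assurance check
    depicts_real_person: bool   # e.g., flagged by a face-match or named-entity check

def allow(request: ImageRequest) -> tuple[bool, str]:
    """Return (allowed, reason) for a single generation request."""
    if not request.user_age_verified:
        return False, "age verification required"
    text = request.prompt.lower()
    if request.depicts_real_person and any(term in text for term in BLOCKED_TERMS):
        return False, "sexualized depiction of a real, identifiable person"
    return True, "ok"

if __name__ == "__main__":
    print(allow(ImageRequest("a nude photo of <celebrity>", True, True)))
    # -> (False, 'sexualized depiction of a real, identifiable person')
```

The regulatory questions map onto this shape: did an equivalent gate exist at launch, was it on by default, and could a trivially rephrased prompt walk around it.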

Deepfakes as a Systemic Safety Test for AI Platforms

Nonconsensual intimate imagery is not new, but generative AI has collapsed the time and skill needed to produce convincing fakes. European child-safety bodies and law enforcement agencies have warned of an uptick in AI-facilitated abuse, with low-friction tools enabling repeat offenders and copycats. For platforms, this is now a systemic safety test: preventive guardrails, watermarking or cryptographic provenance, stronger detection signals, and rapid response workflows are becoming regulatory expectations, not optional best practices.
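
As a rough illustration of what cryptographic provenance can mean in practice, the sketch below signs a small manifest describing a generated image so a downstream platform can verify its origin. The manifest fields and the choice of Ed25519 are assumptions made for this example, not a standard that X or Grok implements; industry efforts such as C2PA define richer formats.

```python
# Minimal provenance sketch: sign a manifest for a generated image so its origin
# can be verified later. Field names and key handling are illustrative assumptions.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def sign_manifest(image_bytes: bytes, private_key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """Build and sign a small provenance manifest for an image."""
    manifest = json.dumps({
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # binds manifest to the pixels
        "generator": "example-image-model",                  # hypothetical model label
        "synthetic": True,                                    # AI Act-style disclosure flag
    }, sort_keys=True).encode()
    return manifest, private_key.sign(manifest)

def verify_manifest(manifest: bytes, signature: bytes, public_key) -> bool:
    """Check that the manifest was signed by the holder of the private key."""
    try:
        public_key.verify(signature, manifest)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    manifest, sig = sign_manifest(b"...image bytes...", key)
    print(verify_manifest(manifest, sig, key.public_key()))  # True
```

A signed manifest does not by itself prevent abuse, but it gives detection, labeling, and takedown workflows a verifiable signal to act on.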

What to Watch Next as EU Data Regulators Assess Grok

The DPC can coordinate with other EU data protection authorities, issue binding decisions, and require corrective actions. Parallel scrutiny from French regulators—and pressure from Ofcom—raises the likelihood of synchronized remedies and benchmarks for AI image safety. If Ireland finds serious breaches, X could face orders to change Grok’s functionality in the EU, substantial fines, or both.

However this plays out, the case will help define where European lines are drawn on AI image generation involving real people. For Musk’s platform and the broader AI sector, the message is clear: speed-to-ship can no longer outrun duty-of-care. In the EU, generative creativity must be paired with verifiable consent, robust child protection, and safety-by-design—or regulators will step in.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.