Elon Musk’s artificial intelligence startup, xAI, has closed a $20 billion Series E funding round, an upsized raise that far exceeded the company’s original target. The cash infusion comes as xAI’s flagship AI system, Grok, faces mounting scrutiny over reports that its software was used to create non-consensual deepfakes, putting the tension between rapid AI scaling and safety standards in the spotlight.
xAI said the round was upsized from an initially announced $15 billion and drew institutional backers including Fidelity, Valor, StepStone, Baron, Qatar Investment Authority and MGX, as well as the venture arms of Nvidia and Cisco. The company intends to use the funds for compute infrastructure, core research and accelerated hiring.

What the new funding will fuel across xAI and Grok
xAI is training the next version of its model, Grok 5, and working on projects that tie Grok to internal efforts such as Colossus and the X platform. In company materials, xAI has also pointed to product launches, including the Grok 4 series, a voice mode and the Grok Imagine image generator, as evidence of rapid delivery.
The funding is also a wager on compute. Independent analyses from groups such as Epoch AI and SemiAnalysis estimate that state-of-the-art training runs now draw tens of thousands of GPUs, justifying multibillion-dollar spending on datacenter capacity and networking. The participation of Nvidia’s venture arm underscores how closely frontier model performance now tracks access to advanced accelerators and high-bandwidth interconnects.
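To make that scale concrete, here is a rough back-of-envelope sketch; the cluster size, per-GPU price and overhead multiplier are illustrative assumptions, not figures reported by xAI or the analysts above.

```python
# Back-of-envelope cost of a frontier training cluster.
# All inputs are illustrative assumptions, not reported figures.

num_gpus = 50_000          # "tens of thousands" of accelerators
price_per_gpu = 30_000     # assumed price per H100-class GPU, in USD
overhead_multiplier = 2.0  # assumed uplift for networking, power, facilities

hardware_cost = num_gpus * price_per_gpu
total_capex = hardware_cost * overhead_multiplier

print(f"GPU hardware alone: ${hardware_cost / 1e9:.1f}B")                # $1.5B
print(f"With networking, power, facilities: ${total_capex / 1e9:.1f}B")  # $3.0B
```

Even under these conservative assumptions, a single frontier cluster lands in the low billions of dollars before any research or hiring costs, which is why raises of this size have become table stakes.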
For investors, the thesis combines product velocity with distribution. Plugging Grok into X gives xAI a massive real-time data surface for model tuning and user growth, while enterprise offerings could expand revenue beyond consumer chat and image tools.
Deepfake investigations surrounding Grok intensify scrutiny
The raise arrives in the middle of a reputational stress test. Users recently reported that Grok could produce sexualized deepfake imagery, with some posts alleging that the imagery depicted underage subjects, prompting public uproar and renewed regulatory scrutiny. Regulators in Malaysia, India and France, among others, say they are looking into complaints about the product’s behavior on the platform.

Musk responded on X, warning that anyone using Grok to generate or upload illegal content would face the same penalties as someone uploading illegal material directly. Users report that Grok’s Media tab has improved, but examples of sexualized content reportedly remained in the section at press time, and investors continue to ask questions about guardrails and enforcement.
The episode underscores a broader governance challenge for generative systems that can produce and amplify synthetic sexual imagery at scale. Safety experts say layered mitigations, including stricter prompt handling, robust NSFW classifiers, cryptographic watermarking and hash-matching against previously known abusive material, are necessary but not sufficient. They frequently point to organizations such as NCMEC and Thorn, and to projects such as PhotoDNA and Project Arachnid, as invaluable partners for detection and reporting workflows.
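As a rough illustration of the hash-matching layer, here is a minimal sketch built on the open-source imagehash library; the hash value, threshold and file names are hypothetical, and production systems rely on proprietary, more robust hashes such as PhotoDNA rather than this simple perceptual hash.

```python
# Minimal sketch of matching an uploaded image against a list of
# known-abusive perceptual hashes. Illustrative only; real pipelines
# use proprietary hashes (e.g. PhotoDNA) and partner-supplied databases.
# Requires: pip install pillow imagehash

from PIL import Image
import imagehash

# Hypothetical database of hashes of previously flagged material,
# of the kind supplied by partners such as NCMEC or Thorn.
KNOWN_BAD_HASHES = [
    imagehash.hex_to_hash("d1c4b0f0e8c4a2a1"),
]

MAX_HAMMING_DISTANCE = 8  # assumed match threshold; tuning is deployment-specific

def matches_known_material(image_path: str) -> bool:
    """Return True if the image is perceptually close to a known-bad hash."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MAX_HAMMING_DISTANCE
               for known in KNOWN_BAD_HASHES)

if matches_known_material("upload.png"):
    print("Block upload and route to reporting workflow")
```

Hash-matching of this kind catches re-uploads and lightly edited copies of already-known material; it does nothing against newly generated imagery, which is why experts pair it with classifiers and watermark checks.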
Regulatory exposure is widening. Under Europe’s Digital Services Act, large platforms carry heightened obligations to address systemic risks, including manipulated media. India’s IT rules likewise enable swift content takedowns and compliance demands, while communications regulators in markets such as Malaysia have signaled a zero-tolerance approach to synthetic child abuse material. How xAI brings Grok within these boundaries will determine its operational latitude.
Why major backers still piled in despite safety concerns
Despite the controversy, the investor calculus appears to rest on frontier-model momentum and its earnings potential. Competition among OpenAI, Google, Anthropic and Meta has locked the sector in an arms race in which data, distribution and dense compute clusters act as force multipliers. Musk’s personal backing, a deep bench of engineers and the potential for cross-pollination with X remain strong signals to capital allocators.
What to watch next as xAI navigates probes and growth
- Clarity from authorities on the scope and resolution of the deepfake investigations, and any remediation commitments made by xAI.
- Technical updates: how well Grok 5 performs on public benchmarks, its resilience to adversarial testing, and whether xAI adopts established provenance markers, such as C2PA content credentials, so synthetic media can be detected by industry-wide efforts.
- Productization beyond chat and images — in particular, enterprise tools with strong policy control and audit capabilities.
- Transparency: frequent safety reports, better content moderation guidelines and independent audits to align scaling with changing legal and ethical expectations.