Elon Musk escalated his legal fight with OpenAI by attacking the company’s safety track record in a newly filed deposition. He asserted that his rival chatbot Grok has not been linked to self-harm, while pointing to lawsuits that claim ChatGPT contributed to severe mental health incidents. The sworn testimony, given as part of Musk’s case over OpenAI’s governance and mission, frames safety as the central fault line in a fast-commercializing AI industry.
What Musk Said Under Oath About ChatGPT and Grok Safety
In the transcript, Musk draws a stark contrast between xAI’s Grok and OpenAI’s ChatGPT, arguing that lawsuits alleging ChatGPT’s manipulative responses contributed to suicides point to unacceptable safety lapses. He portrays Grok as comparatively free of such real-world harm, using that claim to bolster his argument that commercial incentives at OpenAI have eclipsed its original safety-first mandate. The lawsuits he invoked remain unproven allegations, but they are now part of the evidentiary narrative Musk wants a jury to hear.
Musk was also pressed on a public call he signed to pause frontier AI development shortly after GPT-4’s debut. He characterized his support for the letter as a broad appeal for caution, not a strategic maneuver against a competitor. The open letter was organized by the Future of Life Institute and drew more than 1,100 signatories, including prominent researchers who warned of an “out-of-control race” to build increasingly powerful systems without sufficient safeguards.
Lawsuit Over OpenAI’s Mission Shift and Profit Structure
The lawsuit contends OpenAI strayed from its founding nonprofit mission by evolving into a complex, profit-seeking structure and forging deep commercial ties that, in Musk’s view, prioritize scale and revenue over safety. OpenAI, created in 2015 as a nonprofit, established a capped-profit arm in 2019 and later expanded a broad partnership with Microsoft. Industry reporting has pegged Microsoft’s cumulative investment at roughly $13 billion, making it both a strategic backer and a key customer.
In his deposition, Musk reiterated that the original aim was to build an open, safety-conscious counterweight to Big Tech dominance in AI. He recounted concerns about the pace and philosophy of AI development at Google, citing past conversations with Google co-founder Larry Page. Musk also acknowledged overstating his own funding of OpenAI in earlier public comments: court filings indicate a total closer to $44.8 million than the $100 million he once referenced.
Safety Claims Meet xAI’s Own Scrutiny Amid Investigations
Musk’s safety critique lands even as his own startup faces investigations. Regulators and watchdogs have flagged troubling misuse of generative tools on X, where nonconsensual explicit images—some allegedly depicting minors—were attributed to outputs from Grok or associated workflows. The California Attorney General’s office has opened an inquiry, and European authorities are reviewing potential violations as well. Several governments have responded with temporary restrictions, underscoring how generative AI can be weaponized at scale.
This context complicates Musk’s argument that xAI outperforms OpenAI on harm prevention. Experts point out that safety is not simply a matter of intent; it is an engineering and governance challenge involving dataset curation, alignment techniques, red-teaming, abuse monitoring, and rapid takedown processes. While OpenAI has publicized methods such as reinforcement learning from human feedback and iterative safety evaluations, critics argue that deployment velocity often outpaces the maturity of these controls across the industry.
The AGI Stakes and Money Trail Behind Frontier AI Models
Pressed on artificial general intelligence, Musk said the technology carries risk, a position in line with the views of many safety researchers but at odds with the aggressive market push for ever-larger models. The commercial backdrop is undeniable: foundation models now power search, office software, code assistants, and customer service tools. The financial rewards are enormous, and that pressure can undercut conservative safety gating unless companies ring-fence research priorities and empower independent oversight.
Policy momentum is also reshaping the terrain. The European Union’s AI rulemaking and U.S. agency guidance are pushing providers toward risk-classification, disclosure, and incident reporting. For companies at the frontier, that likely means more rigorous pre-deployment audits, clearer provenance controls for synthetic media, and formal mechanisms to investigate user harm claims—particularly those as grave as the allegations Musk highlighted.
Why the Deposition Matters for OpenAI’s Safety Commitments
The deposition crystallizes the themes that will dominate the trial: whether OpenAI’s evolution violated its founding commitments, and whether its safety posture is sufficient amid explosive commercialization. Musk is betting that jurors will see a pattern in which speed and partnerships diluted caution, while he positions xAI as the principled alternative. OpenAI, for its part, can be expected to argue that its capped-profit structure, disclosures, and safety layers reflect a pragmatic path to fund and govern frontier research responsibly.
Beyond the courtroom, the exchange is a reminder that AI safety claims live or die by evidence. Lawsuits, regulator inquiries, and transparent post-incident reports will matter more than rhetoric. If the industry wants trust, it will need measurable guardrails, third-party audits, and credible redress for harm. Musk’s pointed comparison between Grok and ChatGPT raises the stakes for both camps to prove, with data, not just claims, that their systems reduce risk as they scale.