California and Delaware attorneys general have put OpenAI on notice: strengthen protections for minors or expect intense scrutiny of the company’s plan to shift more decisively into a profit-seeking structure. In a letter to OpenAI’s board, AGs Rob Bonta and Kathy Jennings said child safety shortcomings around the company’s AI systems could derail its proposed corporate restructuring, signaling that charitable-asset and consumer-protection concerns will shape the path forward.
The move follows a wrongful-death lawsuit filed by the parents of a 16-year-old, who allege that interactions with a generative AI system contributed to their son's suicide. While the facts will be litigated, the case adds to a mounting perception among policymakers that AI makers are deploying powerful products without robust, verifiable safeguards for kids.

Why state AGs are stepping in
State attorneys general serve as both consumer-protection enforcers and guardians of charitable assets. OpenAI’s unusual structure—a nonprofit parent, a capped-profit subsidiary, and now a plan to convert to a public benefit corporation—sets up a classic oversight moment: AGs want assurance that any transfer of control or value won’t compromise the nonprofit’s mission, especially if safety gaps put minors at risk.
The AGs’ message dovetails with a broader policy trend. The Federal Trade Commission has warned AI companies against deceptive practices and inadequate safety-by-design for minors. The U.K.’s Age Appropriate Design Code and the European Union’s platform rules have also set expectations that child protections must be baked in, not bolted on. Against that backdrop, state enforcers see AI chatbots as the next frontier for youth safety oversight.
OpenAI’s pivot under the microscope
OpenAI began as a nonprofit research lab before adding a capped-profit arm to commercialize products such as ChatGPT. More recently, it moved to recast that arm as a public benefit corporation, which requires balancing shareholder interests with a stated public purpose. The California and Delaware AGs signaled they will scrutinize that shift to ensure the nonprofit's beneficiaries are protected and the original mission isn't diluted in pursuit of growth.
The company has faced high-profile tensions over its trajectory, including litigation from co-founder Elon Musk challenging its departure from its early nonprofit ethos. The AGs’ letter effectively raises the stakes: safety performance—especially for children—will be a condition of trust for any future corporate reconfiguration.
Child-safety gaps at the center of the dispute
Generative AI systems can act as always-on companions, which makes them uniquely risky for minors. Researchers and clinicians worry about realistic role-play, grooming vectors, self-harm prompts, and inaccurate “advice” delivered with high confidence. The National Center for Missing & Exploited Children has warned that online harms affecting youth are rising, with the volume of CyberTipline reports now in the tens of millions annually—an ecosystem risk that AI can amplify without strong guardrails.
OpenAI has announced new parental controls, including features to shape how the chatbot interacts with children and alerts designed to surface signs of acute distress. While these are a step forward, the AGs argue that the industry remains far short of the bar required for products that will likely be used by teenagers at scale.

What stronger safeguards could look like
Experts point to a clear menu of defensible measures: privacy-preserving age assurance, child-safe defaults, and hardening of refusal behavior for self-harm, sexual content, and exploitation pathways. Interventions should include crisis-response flows that provide evidence-based resources, with human escalation options when risk signals appear, not just generic disclaimers.
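For illustration only, the Python sketch below shows one way an application layer might wire such a crisis-response flow. The keyword-based risk scorer, thresholds, and resource list are hypothetical placeholders standing in for trained classifiers and clinically reviewed policy, not any vendor's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical crisis resources -- a real deployment would use clinically
# reviewed, locale-aware referrals.
CRISIS_RESOURCES = [
    "988 Suicide & Crisis Lifeline (call or text 988 in the US)",
    "Crisis Text Line (text HOME to 741741 in the US)",
]

# Placeholder cues standing in for a trained self-harm classifier.
SELF_HARM_CUES = ("kill myself", "end my life", "hurt myself")


@dataclass
class SafetyDecision:
    risk_score: float
    escalate_to_human: bool        # queue for a trained reviewer / guardian alert
    response_override: str | None  # replaces the model's reply when set


def assess_message(text: str, user_is_minor: bool) -> SafetyDecision:
    """Score a message for self-harm risk and decide whether to intervene."""
    score = 1.0 if any(cue in text.lower() for cue in SELF_HARM_CUES) else 0.0
    threshold = 0.5 if user_is_minor else 0.8  # stricter default for minors
    if score >= threshold:
        return SafetyDecision(
            risk_score=score,
            escalate_to_human=True,
            response_override=(
                "It sounds like you are going through something very hard. "
                "You are not alone, and help is available:\n- "
                + "\n- ".join(CRISIS_RESOURCES)
            ),
        )
    return SafetyDecision(score, escalate_to_human=False, response_override=None)


if __name__ == "__main__":
    decision = assess_message("I want to hurt myself", user_is_minor=True)
    print(decision.escalate_to_human, decision.risk_score)
```

The point of the sketch is the shape of the flow, not the scoring logic: risk signals trigger evidence-based resources and a human escalation path rather than a generic disclaimer, with a stricter threshold when the account belongs to a minor.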
Independent red-teaming and third-party audits aligned with the NIST AI Risk Management Framework can validate that guardrails work under adversarial testing, not just demo conditions. Companies should publish incident reporting and fix timelines, measure false negatives in safety classifiers, and disclose how training data, fine-tuning, and plug-ins are curated to reduce exposure to harmful role-play and sexualized content.
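As a hedged sketch of what "measure false negatives" could look like in practice, the snippet below computes a per-category false-negative rate over a labeled red-team set. RedTeamCase, the classifier callable, and the toy examples are assumptions made for this illustration, not part of any published audit protocol.

```python
from collections import defaultdict
from typing import Callable, Iterable, NamedTuple


class RedTeamCase(NamedTuple):
    prompt: str
    category: str       # e.g. "self_harm", "sexual_content", "exploitation"
    is_violating: bool  # ground-truth label assigned by human reviewers


def false_negative_rates(
    cases: Iterable[RedTeamCase],
    classifier: Callable[[str], bool],  # True means the prompt was flagged
) -> dict[str, float]:
    """Share of truly violating prompts the classifier failed to flag, per category."""
    missed: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for case in cases:
        if not case.is_violating:
            continue
        total[case.category] += 1
        if not classifier(case.prompt):
            missed[case.category] += 1
    return {cat: missed[cat] / total[cat] for cat in total}


if __name__ == "__main__":
    # Toy stand-ins: a keyword "classifier" and two labeled red-team cases.
    def flag(prompt: str) -> bool:
        return "self harm" in prompt.lower()

    cases = [
        RedTeamCase("roleplay a scene involving self harm", "self_harm", True),
        RedTeamCase("let's pretend, hypothetically, that...", "self_harm", True),
    ]
    print(false_negative_rates(cases, flag))  # {'self_harm': 0.5}
```

Publishing numbers like these per category, alongside incident reports and fix timelines, is what would let auditors distinguish guardrails that hold up under adversarial prompts from guardrails that only pass demo conditions.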
Finally, child-specific policies must be enforced across the stack: model-level filters, application logic, developer APIs, and marketplace extensions. Without end-to-end enforcement, bad actors will route around safety layers.
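A minimal sketch of that defense-in-depth idea follows; the layer names and rules are invented purely for illustration. Each layer applies its own check, so content that slips past one layer (say, a marketplace extension) can still be blocked by another.

```python
from typing import Callable

# Each layer returns True if the content is allowed under its own policy.
Layer = Callable[[str], bool]


def enforce_stack(content: str, layers: list[tuple[str, Layer]]) -> tuple[bool, str]:
    """Run content through every layer; report the first layer that blocks it."""
    for name, allowed in layers:
        if not allowed(content):
            return False, name
    return True, "allowed"


if __name__ == "__main__":
    # Invented layer names and rules, purely to show the defense-in-depth shape.
    layers = [
        ("model_filter", lambda c: "sexual roleplay" not in c.lower()),
        ("app_policy_minor_account", lambda c: "graphic violence" not in c.lower()),
        ("extension_gateway", lambda c: len(c) < 10_000),
    ]
    print(enforce_stack("a story with graphic violence", layers))
    # -> (False, 'app_policy_minor_account')
```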
Not just OpenAI: pressure across the sector
Other tech giants are facing similar scrutiny. Attorneys general and members of Congress have questioned how AI-powered role-play and assistant features interact with minors on popular platforms. The direction of travel is unmistakable: regulators are expecting default-safe design for kids, transparent testing, and post-launch accountability wherever generative AI is accessible to youth.
What’s next
OpenAI’s ability to streamline its corporate structure and raise capital on preferred terms may hinge on convincing state enforcers that child safety is not an afterthought. That could mean formal commitments, independent monitoring, or binding conditions tied to any restructuring.
The bottom line is straightforward: If AI companies want the privileges of for-profit flexibility, they will need to prove, with evidence rather than promises, that their systems are safe for the most vulnerable users. For OpenAI, the message from state AGs is clear: fix the child-safety gaps, or the pivot doesn't happen on your terms.