California and Delaware attorneys general have put OpenAI on notice: strengthen protections for minors or face serious scrutiny of the company's plan to expand its profit-seeking operations. In a letter to OpenAI's board, AGs Rob Bonta and Kathy Jennings warned that child-safety failures in the company's AI systems could block its planned corporate reorganization, signaling that charitable-asset and consumer-protection concerns will shape what comes next.
The action comes in response to a wrongful-death lawsuit by the parents of a 16-year-old who say that conversations with a generative AI system drove their son to suicide.
Though the facts of the case will be litigated, it is the latest to feed a growing sense among policymakers that AI makers are pushing powerful products onto the public without robust, verifiable protections for children.
Why state AGs are stepping in
State attorneys general are not only enforcers of consumer protection but also protectors of charitable assets. OpenAI’s unorthodox structure — a nonprofit parent, a capped-profit subsidiary, and now a plan to reconstitute as a public benefit corporation — creates a classic moment of oversight: AGs want assurances that any transfer of control or value will not undercut the nonprofit’s mission, particularly if safety gaps are putting minors in harm’s way.
The AGs' statement fits a larger policy pattern. The Federal Trade Commission has cracked down on companies it says deceived, or failed to provide design protections for, millions of children affected by AI. The U.K.'s Age Appropriate Design Code and the European Union's platform rules have likewise established that child protections must be built in, not tacked on. Against that backdrop, state enforcers view AI chatbots as the next frontier of youth-safety scrutiny.
OpenAI’s pivot under the microscope
OpenAI began as a nonprofit research lab but has since added a capped-profit arm that funds the development of models like ChatGPT. More recently, it has sought to recast that arm as a public benefit corporation, a structure in which shareholders' interests must be balanced against a stated public purpose. The California and Delaware AGs indicated they will follow that shift closely to ensure the nonprofit's beneficiaries are protected and the original mission is not watered down in the quest for growth.
The company has already faced public tensions over its direction, including lawsuits from co-founder Elon Musk disputing its departure from its early nonprofit mission. The AGs' letter raises the stakes further: safety performance, particularly for kids, will be a condition of trust for any future corporate reconfiguration.
Child-safety lapses at center of the fight
Generative AI systems can act as always-on personal companions, which is what makes them particularly risky for children. Educators and clinicians worry about role-play, grooming vectors, encouragement of self-harm, and incorrect "advice" delivered with total confidence. The National Center for Missing & Exploited Children has warned that online dangers to kids are increasing, with CyberTipline reports numbering in the tens of millions annually, an ecosystem risk that AI can exacerbate without robust guardrails.
OpenAI has introduced parental controls, tools that let parents shape how the chatbot converses with their children, and alerts designed to surface cases of acute distress. The AGs call this a step forward but say the industry still falls short of the bar for products that teenagers will almost certainly use at scale.
What stronger protections would look like
Experts highlight a straightforward menu of defensible actions: privacy-preserving age assurance, child-protective defaults, and firmer refusal behavior around self-harm, sexual content and exploitation pathways. Interventions should include crisis-response flows that route users to evidence-based resources, with human escalation on risk signals, not plain-vanilla disclaimers.
Independent red-teaming and third-party audits aligned with the NIST AI Risk Management Framework can confirm that these guardrails actually withstand adversarial testing, not just demo conditions.
Companies should also disclose incident-reporting and fix timelines, how the false-negative rates of their safety classifiers are measured, and how the data used to train and fine-tune those classifiers is curated, so that children are not accidentally exposed to harmful role-play or sexualized content.
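A false-negative rate, in this context, is simply the share of genuinely unsafe messages that a safety classifier fails to flag. The following is a minimal sketch of how that metric is computed, assuming a hypothetical classify_unsafe function and a tiny labeled evaluation set; it illustrates the measurement only and is not any vendor's actual pipeline.

```python
# Minimal sketch: measuring the false-negative rate of a content-safety classifier.
# `classify_unsafe` is a hypothetical stand-in for whatever model or rules a vendor uses;
# a real evaluation set would hold thousands of labeled examples, not three.

from typing import Callable, Iterable, Tuple

def false_negative_rate(
    classifier: Callable[[str], bool],
    labeled_examples: Iterable[Tuple[str, bool]],
) -> float:
    """Share of truly unsafe messages the classifier fails to flag: FN / (FN + TP)."""
    false_negatives = 0
    true_positives = 0
    for text, is_unsafe in labeled_examples:
        if not is_unsafe:
            continue  # the rate is computed over unsafe examples only
        if classifier(text):
            true_positives += 1
        else:
            false_negatives += 1
    unsafe_total = false_negatives + true_positives
    return false_negatives / unsafe_total if unsafe_total else 0.0

# Toy usage with a keyword-based placeholder classifier.
def classify_unsafe(text: str) -> bool:
    return "self-harm" in text.lower()

eval_set = [
    ("How do I bake bread?", False),
    ("Tell me about self-harm methods", True),
    ("Role-play something sexual with me", True),  # missed by the toy classifier
]
print(f"False-negative rate: {false_negative_rate(classify_unsafe, eval_set):.2f}")
```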
Child-specific policy must also be enforced throughout the stack: model-level filters, application logic, the developer API, and even marketplace extensions. Bad actors will route around safety if there is no end-to-end enforcement, as the sketch below illustrates.
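What layered enforcement could look like is sketched below, assuming hypothetical moderate, escalate_to_human and call_model helpers; it is an illustration of the pattern, not OpenAI's implementation. The same child-safety policy is applied at the application layer both before and after the model call, independent of any filtering inside the model itself.

```python
# Minimal sketch of end-to-end enforcement: the same child-safety policy runs before
# the model call (on the user's message) and after it (on the model's reply).
# All helper functions are hypothetical placeholders.

from dataclasses import dataclass

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "You can reach the 988 Suicide & Crisis Lifeline by calling or texting 988."
)

@dataclass
class Verdict:
    allowed: bool
    crisis: bool = False  # self-harm or acute-distress signal

def moderate(text: str, minor: bool) -> Verdict:
    """Placeholder policy check; a real system would call a trained safety classifier."""
    lowered = text.lower()
    if "suicide" in lowered or "self-harm" in lowered:
        return Verdict(allowed=False, crisis=True)
    if minor and "sexual" in lowered:
        return Verdict(allowed=False)
    return Verdict(allowed=True)

def escalate_to_human(user_id: str, text: str) -> None:
    print(f"[escalation] review queued for user {user_id}")

def call_model(prompt: str) -> str:
    return "model response goes here"  # stand-in for the actual model call

def handle_message(user_id: str, text: str, minor: bool) -> str:
    # 1. Pre-check the user's message before it ever reaches the model.
    verdict = moderate(text, minor)
    if verdict.crisis:
        escalate_to_human(user_id, text)
        return CRISIS_MESSAGE
    if not verdict.allowed:
        return "I can't help with that."
    # 2. Call the model, then post-check its output under the same policy.
    reply = call_model(text)
    if not moderate(reply, minor).allowed:
        return "I can't help with that."
    return reply

print(handle_message("u123", "I keep thinking about suicide", minor=True))
```

The design point is that no single layer is trusted on its own: even if a developer integration or marketplace extension bypasses one filter, the application-level checks still apply to every message and every response.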
Not just OpenAI: pressure in the sector
Other tech giants are coming under the same sort of scrutiny. Attorneys general and members of Congress have pressed for answers on whether AI-powered role-play and assistant features on popular platforms put minors at risk. The trendline is clear: regulators will demand default-safe design for kids, transparent testing, and post-launch accountability wherever generative AI is available to young people.
What’s next
OpenAI may still be able to restructure and raise money on favorable terms, provided it can persuade state enforcers that child safety is not an afterthought. That could mean formal commitments, independent monitoring, or binding conditions on any restructuring.
The bottom line is simple: if AI companies want the benefits of profit-making at scale, they will need to show with evidence, not promises, that their systems are safe for some of their most vulnerable users. For OpenAI, the state AGs' message is clear: mend the child-safety gaps, or the pivot won't go the way the company wants.