
State AGs to OpenAI: Child Harm Won’t Be Tolerated

By John Melendez
Last updated: September 5, 2025 10:57 pm

California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings have delivered a stark warning to OpenAI, telling the company that harms to children tied to ChatGPT and related products are unacceptable and will trigger action. In an open letter following a direct meeting with the company, the attorneys general cited reports of disturbing interactions involving minors and pressed OpenAI for immediate safety improvements and full transparency.

Table of Contents
  • States escalate pressure after disturbing reports
  • What the attorneys general want now
  • The stakes, by the numbers
  • OpenAI’s safety posture under the microscope
  • A rapidly tightening regulatory backdrop
  • What robust child safeguards look like
  • What happens next

States escalate pressure after disturbing reports

The letter comes on the heels of a broader coalition effort in which dozens of attorneys general contacted leading AI firms about youth safety. Bonta and Jennings referenced tragic incidents, including a reported youth suicide in California and a murder-suicide in Connecticut following prolonged exchanges with an OpenAI chatbot, arguing that existing safeguards failed at the moments they were most needed.

The officials also flagged their ongoing review of OpenAI’s proposed restructuring into a for-profit entity to ensure the nonprofit’s safety mission remains intact. The message: governance and capitalization decisions must reinforce, not dilute, the company’s duty of care to children and teens.

What the attorneys general want now

The AGs asked OpenAI to detail its current safeguards, risk assessments, and escalation protocols for suspected child endangerment. They want evidence that age verification, content moderation, and crisis-response pathways are functioning in real-world use—not just in lab testing or policy documents.

They also signaled that remedial measures should begin immediately where gaps are identified. That likely includes stronger age gates, default-on parental controls where products are marketed to families, and clear routes to human assistance when a conversation involves self-harm, sexual exploitation, grooming, or violent ideation.

The stakes, by the numbers

Public health and child-safety groups have long warned that digital environments can intensify risk. The Centers for Disease Control and Prevention has reported persistent declines in youth mental health, with suicide among the leading causes of death for adolescents. The National Center for Missing & Exploited Children’s CyberTipline receives more than 30 million reports of suspected child sexual exploitation annually, underscoring the scale of online risk and the need for robust detection and reporting.

At the same time, classroom and at-home exposure to generative AI is accelerating. Surveys by Common Sense Media and Pew Research Center indicate that teens are experimenting with chatbots for homework help, creativity, and advice—contexts where well-intended systems can inadvertently provide harmful instructions or normalize risky behavior if design and guardrails fall short.

OpenAI’s safety posture under the microscope

OpenAI says it employs layered safeguards, including content filtering, safety classifiers, red-teaming, and reinforcement learning with human feedback to curb harmful outputs. It also maintains use policies that prohibit sexual content involving minors, advice facilitating self-harm, and other high-risk categories.
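
To make "layered safeguards" concrete, here is a minimal sketch of what a single filtering layer can look like, built on OpenAI's publicly documented Moderation endpoint in the official Python SDK. The routing rules, severity tiers, and return values below are illustrative assumptions, not the company's internal logic.

    # One hypothetical moderation layer; routing decisions are invented for illustration.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def screen_message(text: str) -> str:
        """Return a coarse handling decision for one user message."""
        result = client.moderations.create(
            model="omni-moderation-latest",
            input=text,
        ).results[0]

        # Assumed policy: the highest-severity categories skip normal
        # moderation and go straight to human escalation.
        if result.categories.sexual_minors or result.categories.self_harm_intent:
            return "escalate_to_human"
        if result.flagged:
            return "block_and_offer_help_resources"
        return "allow"

A real pipeline would combine several such signals with RLHF-tuned refusals and human review rather than trusting any single classifier, which is precisely the redundancy regulators want demonstrated in production.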

But independent researchers consistently demonstrate workarounds. Academic teams have shown that prompt “jailbreaks,” obfuscation, and multi-turn strategies can elicit restricted content. The National Institute of Standards and Technology’s AI Risk Management Framework highlights these failure modes, emphasizing continuous testing, incident reporting, and post-deployment monitoring. That is precisely where state officials want verifiable progress.

A rapidly tightening regulatory backdrop

State consumer protection and privacy laws give attorneys general broad tools to address deceptive practices and unfair risks to minors. Many offices are already litigating high-profile youth-harm cases against social platforms. Federal regulators are active, too: the Federal Trade Commission enforces the Children’s Online Privacy Protection Act and has signaled that opaque age estimation and safety claims may invite enforcement.

Globally, the European Union’s AI Act sets obligations for general-purpose and high-risk AI, and the United Kingdom’s Online Safety Act imposes child-risk duties on services whose features can reach minors. Even where rules differ, a clear convergence is emerging around age-appropriate design, transparency, and safety-by-default—standards generative AI providers will be expected to meet.

What robust child safeguards look like

Experts point to several concrete measures: rigorous age assurance that minimizes data collection; default-on youth protections with clear parent controls; crisis intervention that can surface helpline resources and escalate to trained reviewers; strong classifier ensembles for grooming, sexual content, and self-harm; and rapid takedown and reporting pathways aligned with NCMEC guidance.
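
One way to picture a "classifier ensemble" is as several independent detectors feeding a single gate, where any confident high-severity signal triggers the crisis pathway. The category names, scores, and thresholds in this sketch are hypothetical.

    # Hypothetical ensemble gate; all names and thresholds are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        category: str  # e.g. "grooming", "self_harm", "sexual_content"
        score: float   # detector confidence in [0, 1]

    HIGH_SEVERITY = {"grooming", "self_harm", "sexual_minors"}

    def gate(detections: list[Detection], block_at: float = 0.5) -> str:
        """Combine independent detector outputs into a single action."""
        for d in detections:
            # A confident high-severity hit routes to the crisis pathway
            # regardless of what the other detectors say.
            if d.category in HIGH_SEVERITY and d.score >= 0.8:
                return "escalate"
        if any(d.score >= block_at for d in detections):
            return "block"
        return "allow"

    # Two detectors disagree; the high-severity signal wins.
    print(gate([Detection("sexual_content", 0.4), Detection("grooming", 0.9)]))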

Operationally, companies need live incident response playbooks, audits against frameworks like NIST’s, and third-party evaluations that publish real safety metrics—false-negative rates for risky content, time-to-mitigate, and the share of high-severity incidents escalated to humans. Without measurable outcomes, “safety” remains a promise rather than a practiced discipline.
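
Those outcome metrics are simple to compute once incidents are actually logged. A sketch over a hypothetical incident log, with invented field names and values:

    # Hypothetical incident records: (caught_by_filter, severity,
    # minutes_to_mitigate, escalated_to_human). Values are illustrative.
    from statistics import median

    incidents = [
        (True,  "high", 12,  True),
        (False, "high", 95,  True),   # a miss: harmful content slipped through
        (True,  "low",  30,  False),
        (False, "low",  240, False),
    ]

    misses = [i for i in incidents if not i[0]]
    false_negative_rate = len(misses) / len(incidents)

    median_time_to_mitigate = median(i[2] for i in incidents)

    high_sev = [i for i in incidents if i[1] == "high"]
    escalation_share = sum(1 for i in high_sev if i[3]) / len(high_sev)

    print(f"false-negative rate:     {false_negative_rate:.0%}")
    print(f"median time-to-mitigate: {median_time_to_mitigate} min")
    print(f"high-severity escalated: {escalation_share:.0%}")

Publishing numbers like these, rather than policy language alone, is what would let regulators and auditors verify that safeguards work outside lab conditions.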

What happens next

The letter signals that state enforcers are moving beyond general concerns to concrete oversight of product decisions and corporate governance. If OpenAI’s responses are deemed insufficient, multistate investigations or consent orders are realistic outcomes, including mandated safeguards, independent monitoring, and penalties under consumer protection laws.

The broader industry should treat this as a blueprint. Generative AI’s promise doesn’t absolve companies of their duty to protect minors. The standard is shifting toward verifiable safety-by-design, and the message from state attorneys general is unambiguous: harm to children is both preventable and non-negotiable.
