
OpenAI to Route Sensitive Chats to GPT-5, Add Controls

By Bill Thompson
Last updated: October 26, 2025 10:19 am
Technology

OpenAI plans to begin steering conversations flagged as sensitive away from its general chat models and toward more deliberative "reasoning" models such as GPT‑5 and o3, while introducing parental controls that hold teen accounts to age‑appropriate themes. The company is framing the move as a safety upgrade in the wake of high-profile cases in which ChatGPT failed to respond appropriately to users in crisis.

Why bother using a reasoning model at all?

Conventional chat models are designed to predict the next token and often mirror a user's tone and framing. Over extended sessions, that can let safety filters erode as the system settles into the conversational groove. Safety researchers have documented this "agreement bias" and context drift, which becomes particularly evident when users apply persistent pressure or adversarial requests.

Table of Contents
  • Why bother using a reasoning model at all?
  • Parental controls specifically designed for teen accounts
  • Incidents driving the shift
  • Expert, legal and policy scrutiny
  • What to watch next

OpenAI says a real-time routing layer will spot signals of sensitive material, a sudden expression of distress for example, and escalate to models built to invest more computational effort in context and precautions. GPT‑5's "thinking" mode and the o3 family are optimised for longer internal deliberation, which the company maintains yields more consistent refusals and greater resistance to prompt injection and jailbreaks.

If it lives up to its billing, this update closes a known gap: the lightweight, fast chat models "do a good job and are cost-effective" for routine scenarios, but sometimes fall short when the stakes are higher and more nuanced refusal or triage is called for. The trade-off is additional latency and compute cost at moments of escalation, a reasonable price if it measurably makes harmful responses less likely.
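The routing idea above can be sketched in miniature. This is purely illustrative and not OpenAI's implementation: the model identifiers, the keyword list, and the `classify_risk` scorer are all assumptions standing in for what would in practice be a learned classifier.

```python
# Illustrative sketch of a sensitivity-based routing layer.
# All names and thresholds here are hypothetical, not OpenAI's.

FAST_MODEL = "fast-chat"          # lightweight default model (hypothetical id)
REASONING_MODEL = "deliberative"  # slower, more careful model (hypothetical id)

SENSITIVE_KEYWORDS = {"self-harm", "suicide", "overdose", "hurt myself"}

def classify_risk(message: str) -> float:
    """Toy stand-in for a learned classifier: returns a risk score in [0, 1]."""
    text = message.lower()
    hits = sum(1 for kw in SENSITIVE_KEYWORDS if kw in text)
    return min(1.0, hits / 2)

def route(message: str, threshold: float = 0.5) -> str:
    """Escalate to the deliberative model when risk crosses the threshold."""
    return REASONING_MODEL if classify_risk(message) >= threshold else FAST_MODEL
```

A real system would score whole conversations rather than single messages, precisely because the drift described above builds up over long sessions.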

Parental controls specifically designed for teen accounts

Parents will be able to link accounts with a teen's consent via invitation to a household account, with age‑appropriate behavior rules applied by default. These rules will limit certain response types and strengthen refusal logic in areas such as mental health, risky behavior and mature themes, the company says.

Parents can disable memory and chat history for their teen, a shift that many clinicians support. Personalization can reinforce narratives, deepen parasocial attachments and make it harder for the system to "reset" away from harmful patterns. Groups like the American Psychological Association and UNICEF have called for stronger guardrails around youths' use of generative systems for just these reasons.

Most critically, the controls include a notification if the platform detects possible acute distress. OpenAI already shows in‑app prompts urging users to take regular breaks during extended sessions; the new alerts are intended to loop caregivers in when a teen may be in crisis without completely cutting users off, a policy that some ethicists caution could backfire by pushing at‑risk teens toward unsupervised tools.
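As a rough illustration of the defaults described in this section, here is one way linked teen-account settings might be modeled. The schema and every field name are assumptions for illustration; OpenAI has not published this interface.

```python
# Hypothetical sketch of teen-account defaults; not OpenAI's actual schema.
from dataclasses import dataclass

@dataclass
class TeenAccountSettings:
    linked_guardian: str                  # guardian account id (hypothetical)
    memory_enabled: bool = False          # off by default, per the article
    chat_history_enabled: bool = False    # likewise off by default
    distress_alerts: bool = True          # caregivers notified on acute-distress signals
    restricted_topics: tuple = ("mature themes", "risky behavior", "mental health")

def guardian_set_memory(settings: TeenAccountSettings, enabled: bool) -> TeenAccountSettings:
    """Guardians may toggle memory; distress alerts stay on (an assumption)."""
    settings.memory_enabled = enabled
    return settings
```

The design point is that safety-relevant defaults ship opted in, so a teen account is restrictive unless a guardian deliberately loosens it.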

Incidents driving the shift

The updates follow the death of teenager Adam Raine, who, as The New York Times first reported, discussed self-harm in conversations with the model before dying by suicide. His parents have sued OpenAI for wrongful death, claiming that the system did not refer him to help but instead exacerbated the crisis.


In another case described by The Wall Street Journal, Stein‑Erik Soelberg's growing delusions of grandeur were allegedly amplified by his chatbot use ahead of a murder‑suicide. Safety experts cite these tragedies as warnings of what can happen when generative systems validate distorted thinking rather than pushing back or handing off to human support.

Expert, legal and policy scrutiny

Legal pressure is mounting. Jay Edelson, lead lawyer for the Raine family, has criticized OpenAI's safety posture as insufficient. Product liability for AI remains an unsettled area of law, but regulators are signaling their expectations: the U.S. Federal Trade Commission has warned against deceptive safety claims, and data‑protection authorities in Europe are examining child protections under existing privacy laws.

Public health data reinforces the urgency. More than 700,000 people die by suicide every year worldwide, according to the World Health Organization. In the United States, the Centers for Disease Control and Prevention reports that suicide is a leading cause of death among teenagers and young adults. Designing models that more reliably detect and respond to risk signals is increasingly seen not as a bonus but as a baseline.

OpenAI says these changes are part of a 120‑day push and will be developed with its Global Physician Network and Expert Council on Well‑Being and AI, whose members include experts in adolescent health, eating disorders and substance use. That dovetails with emerging guidance such as NIST's AI Risk Management Framework, the child‑safety provisions of the UK's Online Safety Act, and the risk‑tiering approach of the EU AI Act.

What to watch next

The key test will be routing stability. Escalation must trigger early enough to make a difference without bombarding teens and their parents with false alarms. Observers will track refusal accuracy, how often users are connected to real help, and whether escalated sessions actually produce fewer harmful responses than baseline chat models.

Families should notice some tangible shifts right away: linked accounts, tougher defaults, clearer options for oversight. For institutions — schools, clinics, ed‑tech platforms — OpenAI’s method could set a new standard: quick answers for the mundane study aid, with a safety rail that shunts the more complex or perilous moments to slower, more deliberative systems.

If OpenAI can demonstrate that routing to reasoning models actually improves outcomes without turning our day-to-day use of algorithms into a bureaucratic nightmare, it would represent a significant step toward safer mainstream AI, the kind where capability and care are engineered to climb together when it matters most.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.
FindArticles © 2025. All Rights Reserved.