Over the weekend, OpenAI began testing a new safety routing system in ChatGPT and followed up on Monday by launching parental controls — moves that have sparked a range of reactions from users.
These updates come amid growing concerns about how AI handles emotionally charged conversations. Earlier versions of ChatGPT have been criticized for reinforcing users’ harmful thoughts instead of redirecting them in healthier directions. One high-profile incident — now at the center of a wrongful death lawsuit — involves a teenager who died by suicide after months of chats with ChatGPT.
To address these concerns, OpenAI introduced a dynamic routing system designed to detect emotionally sensitive messages and, when necessary, automatically shift the conversation to GPT-5 — the company’s most safety-focused model. GPT-5 was developed with a new capability called “safe completions,” allowing it to engage thoughtfully with delicate topics rather than avoiding them altogether.
This marks a shift from previous models, which were primarily trained to be quick, agreeable, and accommodating. GPT-4o, in particular, has faced criticism for being overly agreeable, sometimes reinforcing unhealthy thinking patterns. Despite the issues, it gained a devoted following due to its responsiveness and friendliness. When OpenAI made GPT-5 the default model in August, many users pushed back, demanding to continue using GPT-4o.
While many experts and users have praised the new safety measures, others have voiced frustration, saying the updates feel overly cautious and treat adult users like children. OpenAI has acknowledged the complexity of the rollout and is giving itself 120 days to fine-tune the system based on real-world feedback.
Nick Turley, OpenAI’s VP and head of the ChatGPT app, responded to the criticism, especially concerns about how GPT-4o now responds in chats where the new routing system is active.
“Routing happens on a per-message basis; switching from the default model happens on a temporary basis,” Turley explained on X. “ChatGPT will tell you which model is active when asked. This is part of a broader effort to strengthen safeguards and learn from real-world use before a wider rollout.”
The introduction of parental controls has also drawn a mixed response. Some users welcomed the option for parents to have greater oversight of how their teens interact with AI. Others worry the move sets a precedent for treating all users — even adults — as if they need to be monitored.
The new parental tools allow guardians to customize their teen’s ChatGPT experience, including setting “quiet hours,” disabling voice interactions or memory, removing access to image generation, and opting out of model training. Teen accounts will also be subject to enhanced content protections — including filters for graphic material and unhealthy beauty standards — and a system designed to flag potential signs of self-harm.
“If our systems detect potential harm, a small team of specially trained people reviews the situation,” OpenAI stated in a blog post. “If there are signs of acute distress, we will contact parents by email, text message, and push alert — unless they’ve opted out.”
OpenAI admitted that the system won’t be perfect and might sometimes flag conversations that don’t pose a real risk. However, the company believes it’s better to raise an alert and give parents the chance to step in than to ignore potential signs of distress. Plans are also in motion to develop emergency protocols for contacting law enforcement if there’s an imminent threat to a teen’s safety and parents can’t be reached.