Character.AI to End Its Chatbot Experience for Child Users

Written by: Mane Sachin

Teenagers today are struggling to find their place in an ever-evolving digital world. Constantly connected, emotionally charged, and bombarded by online stimulation, many are turning to chatbots that seem designed to never stop talking. But this new form of companionship has, in some cases, had devastating consequences.

Character.AI, a startup known for its AI-driven role-playing chatbots, has become the focus of public concern and legal action after at least two teenagers reportedly took their own lives following extended conversations with chatbots on its platform. In response, the company is making sweeping changes intended to better protect minors — even if it means taking a financial hit.

Karandeep Anand, CEO of Character.AI, announced that the company will soon restrict under-18 users from participating in open-ended conversations with AI characters.

Open-ended chats refer to free-flowing exchanges where chatbots continuously engage users with follow-up questions and emotional responses. Experts have warned that these unstructured conversations can blur boundaries, making young users feel attached to the AI. Anand said that such interactions not only pose risks to teenagers but also diverge from Character.AI’s mission.

The company is now repositioning itself from being an “AI companion” app to becoming a “role-playing and creative platform.” Instead of forming personal relationships with AI characters, younger users will be encouraged to collaborate on storytelling, create visuals, and engage in creative challenges. The emphasis will shift from endless dialogue to content creation.

Starting November 25, Character.AI will begin phasing out open-ended chat access for teenage users, first capping chatbot time at two hours per day and gradually reducing it to zero. To enforce the rule, the company will combine its own behavioral age-detection tools with third-party systems such as Persona. Where necessary, facial recognition and ID verification will be used to confirm users' ages.

This latest decision builds on previous safety measures the company introduced, including parental insight dashboards, restricted character filters, romantic chat limitations, and time management alerts. Those earlier policies significantly reduced Character.AI’s teenage user base, and Anand acknowledged that this new step would likely result in more losses.

He admitted that many younger users would likely be frustrated by the restrictions and could stop using the platform altogether. However, he expressed hope that some would adopt the new creative experiences the company has been developing over the past several months.

To support this transformation from a chat-based app to a “content-driven social ecosystem,” Character.AI has introduced several entertainment-focused features.

Among them is AvatarFX, launched in June — a video-generation tool that can turn static images into animated clips. The company also unveiled “Scenes,” where users can immerse themselves in interactive storylines with characters, and “Streams,” which enables real-time interactions between two AI characters.

In August, the company added a Community Feed, allowing users to share characters, videos, and stories they’ve created, building a social space centered on creative expression rather than conversation.

In a public message to under-18 users, Character.AI apologized for the new restrictions, acknowledging that many teenagers use the platform responsibly and creatively. The statement said that while the decision was difficult, it was necessary to ensure the safety of young people interacting with AI technology.

Anand emphasized that the platform is not being shut down for younger audiences — only the open-ended chat feature is being removed. He hopes that teens will embrace the platform’s other offerings, such as AI-driven games, short videos, and storytelling tools, which are being developed to encourage safe, creative use.

He also admitted that some teens might migrate to other chatbot providers that continue to allow open-ended conversations. Other companies have also faced similar scrutiny after reports of tragic incidents involving minors and AI interactions.

Character.AI’s move comes as regulators begin to take a tougher stance on AI chat platforms targeting minors. U.S. lawmakers recently proposed legislation that would prohibit AI companion bots from being accessible to children, and California has already enacted rules requiring companies to meet new safety standards.

To further prioritize user safety, Character.AI announced the launch of the AI Safety Lab, an independent non-profit focused on developing safer AI systems for entertainment and creative platforms. Anand said this initiative reflects the company’s belief that the industry must innovate responsibly, ensuring that AI designed for creativity and interaction always puts safety first.

