Character.AI Bans Teen Chats with Chatbots — But a Victim's Mom Says the Damage Is Already Done
The change was prompted by a high-profile lawsuit

In the fast-paced world of artificial intelligence, apps like Character.AI let users hold open-ended conversations with AI personas. Yet the company's recent move to ban chats for users under 18 has raised a tough question: Is it too little, too late?
After all, for one mother whose child was harmed by these digital interactions, the damage is, tragically, already done.
Character.AI Ends Open Chat for Teens
Seeking to improve safety for its younger users, Character.AI confirmed on 29 October that it would bar anyone under 18 from messaging the AI-powered bots on its platform. The ban takes full effect on 25 November; until then, a stopgap measure limits teenagers to two hours of messaging per day.
The firm stated that once the ban is active, young people will no longer be able to hold unrestricted chats, but will instead be able to create short videos, digital comics, and broadcasts featuring the characters.
In its official announcement, the company remarked, 'We do not take this step of removing open-ended Character chat lightly – but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology'.
Character.AI has long been at the centre of the debate over how minors should be allowed to engage with artificial intelligence, with both internet safety campaigners and politicians demanding that technology firms strengthen their parental controls.
The Cost of Delay
A Florida mother, Megan Garcia, launched a legal challenge against the firm, claiming its application was to blame for the suicide of her 14-year-old son, Sewell Setzer. She believes the new approach to teen usage on the platform is arriving 'about three years too late'.
In an interview on Thursday after Character.AI's disclosure, she stated, 'Sewell's gone. I can't get him back. It's unfair that I have to live the rest of my life without my sweet, sweet son. I think he was collateral damage'.
Suing a Chatbot for Harm
She was the first of five parents to take legal action against Character.AI over injuries their children allegedly sustained. Hers is one of two cases that blame the company for a child's suicide, and all five lawsuits allege the AI companions engaged in sexually inappropriate exchanges with the minors.
The company had defended itself against Garcia's suit by contending that constitutional free speech guarantees protect the speech of its chatbots, but a federal judge rejected that defence in a ruling earlier this year.
Megan Garcia watched her son Sewell develop a "relationship" with a chatbot. When he expressed an interest in suicide, Character AI asked if he had a plan. When he asked if he should come home, the chatbot encouraged him to. He took his life. REGULATE BIG TECH. pic.twitter.com/D42hyMj3Bg
— Senate Judiciary Democrats 🇺🇸 (@JudiciaryDems) September 17, 2025
In addition, Character.AI has repeatedly highlighted its investments in trust and safety. As detailed in a blog post on Wednesday, the company has spent the last year launching 'the first Parental Insights tool on the AI market, technical protections, filtered characters, time spent notifications, and more — all designed to let teens be creative with AI in safe ways'.
Lawmakers, Data and Age Assurance
Despite the announcement, Garcia conveyed mixed feelings, suggesting that the new policies had come at the expense of families whose children had already become users. She added, 'I don't think that they made these changes just because they're good corporate citizens'.
'If they were, they would not have released chatbots to children in the first place, when they first went live with this product.' Still, many parents and advocates for online safety feel more must be done.
Just last month, Garcia, alongside others, pressed Congress to enact stronger safeguards for AI chatbots, arguing that technology companies intentionally engineered their services to 'ensnare' young users.
The consumer advocacy group Public Citizen reiterated a similar demand on X on Wednesday, posting that 'Congress MUST ban Big Tech from making these AI bots available to kids'.
https://t.co/Y1KRqVqUDk is banning kids under 18 from using its chatbots. This belated decision comes after several lawsuits from families who allege the AI companions led their kids to die by suicide. Congress MUST ban Big Tech from making these AI bots available to kids.
— Public Citizen (@Public_Citizen) October 29, 2025
Garcia said she needs to see proof that Character.AI can effectively verify user ages. She also wants the company to be more transparent about what data it collects from the young people using its platform.
Defining the 'Adult' Experience
Character.AI's official privacy policy indicates the company may utilise user data to refine its AI models, deliver personalised ads, and acquire new users. However, a spokesperson informed NBC News that the firm does not sell the voice or text data of any of its users.
In its Wednesday announcement, the company also disclosed that it will launch an in-house age verification system to be used alongside third-party tools such as the identity verification provider Persona.
The spokesperson explained in an email that if the company has 'any doubts about whether a user is 18+ based on those tools, they'll go through full age verification via Persona if they want to use the adult experience'. They added that Persona is 'highly regarded' in the industry and is used by firms like LinkedIn, OpenAI, Block, and Etsy.
The 'David and Goliath' Fight Continues
Matt Bergman, the lawyer who founded the Social Media Victims Law Center, said he and Garcia were 'encouraged' by Character.AI's decision to bar minors from its chat feature.
Bergman, who represents multiple families alleging Character.AI caused harm to their children, credited the mothers for the change, saying, 'This never would have happened if Megan had not come forward and taken this brave step and other parents that have followed'.
'The devil is in the details, but this does appear to be a step in the right direction, and we would urge other AI companies to follow Character.AI's example, albeit they were late to the game,' Bergman said. 'But at least now they seem much more serious than they were.'
Garcia's suit, which was filed last October in the US District Court in Orlando, has reached the discovery phase. She said she anticipates 'a long road ahead' but is resolved to persist in her legal battle, hoping it will compel other AI companies to install greater safety protections for minors.
Garcia likened her fight to a Biblical struggle, saying, 'I'm just one mother in Florida who's up against tech giants. It's like a David and Goliath situation. But I'm not afraid. I think that the love I have for Sewell and me wanting to hold them accountable is what gives me a little bit of bravery in this situation'.
© Copyright IBTimes 2025. All rights reserved.