OpenAI's New Age Radar: ChatGPT Can Now Sniff Out Minors
ChatGPT will look for certain 'signals' to predict a user's age

In a bold move to make its flagship chatbot safer for teenagers, OpenAI has rolled out an age-prediction system in ChatGPT that estimates whether a user is likely under 18. Rather than relying solely on self-declared ages, the system uses behavioural signals to infer a user's age and adjust the platform's safeguards accordingly.
The change means ChatGPT can now automatically limit access to sensitive or potentially harmful content for younger users, while allowing adults to retain full functionality. OpenAI says the approach is designed to strike a balance between protecting minors and preserving a flexible, useful experience for everyone else.
How the Age Prediction System Works
Unlike simply trusting a user's stated age, OpenAI's age prediction model analyses a mix of behavioural and account-level signals to assess whether an account probably belongs to someone under 18. These signals include factors such as how long an account has existed, the times of day it's active, usage patterns over time and any age reported by the user.
By processing these patterns, ChatGPT can make an educated guess about age. When the model determines a user is likely a minor, it automatically applies stricter safety filters to reduce exposure to content deemed potentially harmful for teens.
These include graphic violence, depictions of self-harm, viral challenges that encourage risky behaviour, sexual or romantic role-play content and material promoting extreme beauty standards or unhealthy dieting.
Importantly, OpenAI emphasises that if the system isn't confident about a user's age, it prioritises safety, erring on the side of caution rather than assuming adulthood.
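The mechanics described above can be pictured as a scoring model with a cautious default. The sketch below is purely illustrative: the signal names, weights, and threshold are assumptions for demonstration, not OpenAI's actual model, which has not been published.

```python
# Hypothetical sketch of a signal-based age classifier with a
# safety-first fallback. All signals, weights, and thresholds here
# are illustrative assumptions, not OpenAI's real implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    account_age_days: int       # how long the account has existed
    late_night_ratio: float     # share of activity late at night
    stated_age: Optional[int]   # self-reported age, if provided

def likely_minor(signals: AccountSignals,
                 adult_confidence_threshold: float = 0.8) -> bool:
    """Return True if teen safeguards should be applied."""
    adult_score = 0.0
    if signals.stated_age is not None and signals.stated_age >= 18:
        adult_score += 0.5
    if signals.account_age_days > 365:
        adult_score += 0.2
    if signals.late_night_ratio < 0.3:
        adult_score += 0.2
    # Err on the side of caution: unless the score clears the
    # confidence threshold, treat the account as a teen's.
    return adult_score < adult_confidence_threshold

# Example: a new account with no stated age defaults to safeguards.
new_user = AccountSignals(account_age_days=10, late_night_ratio=0.5,
                          stated_age=None)
print(likely_minor(new_user))  # True -> stricter filters applied
```

The key design point mirrors the article: ambiguity resolves toward the teen experience, so an uncertain classification restricts content rather than assuming adulthood.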
In its announcement, OpenAI wrote: "We're rolling out age prediction on ChatGPT to help determine when an account likely belongs to someone under 18, so we can apply the right experience and safeguards for teens. Adults who are incorrectly placed in the teen experience can confirm their age in Settings > Account.…" — OpenAI (@OpenAI), January 20, 2026
Safeguards and User Control
While safety filters for adolescents are automatic, users mistakenly classified as under 18 can quickly verify their age and restore unrestricted access. This is done through Persona, a secure identity verification service that allows adults to submit a selfie for confirmation. Once verified, ChatGPT lifts additional protections and allows the user to interact without the teen-focused limits.
OpenAI also built this system with flexibility in mind. Users can check whether age-based protections are active through their account settings and initiate verification at any time.
Guided by Research and Expert Insight
OpenAI's approach isn't arbitrary. The content restrictions imposed when a user is classified as under 18 are guided by expert input and academic research on child development, including recognised differences in teens' risk perception, impulse control, peer influence and emotional regulation. Such insights inform which types of material are restricted to create a genuinely safer environment for younger users.
Moreover, parents with linked accounts can use parental controls to further customise their teen's ChatGPT experience, including setting quiet hours and managing specific features. These tools expand family oversight beyond the age-prediction system itself.
The Road Ahead for Age Prediction
OpenAI plans to refine its age prediction model over time, using rollout data to improve accuracy. The company is also deploying this system in the European Union in the coming weeks to meet regional requirements. Talks with safety experts, advocacy groups, and academic institutions continue to shape how the system evolves and how best to protect young users while respecting adult freedom and privacy.
In essence, ChatGPT's age prediction feature marks a significant shift in how AI platforms can recognise and respond to users' age-related safety needs, balancing protection with access while navigating the practical and ethical challenges of automated inference.
© Copyright IBTimes 2025. All rights reserved.