OpenAI has introduced an age prediction system in ChatGPT to provide age-appropriate experiences for users under 18. The system analyzes account signals, such as usage patterns and topics of conversation, to estimate whether a user is likely a teen. If the system predicts an account belongs to someone under 18, ChatGPT automatically activates enhanced safety settings.
The extra safety measures are designed to limit exposure to sensitive content, including graphic violence, content promoting risky viral challenges, sexual or violent roleplay, and material related to extreme beauty standards or unhealthy dieting. OpenAI emphasizes that teens can continue to use ChatGPT for learning, creativity, and general inquiries; certain topics are simply moderated more strictly.
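OpenAI has not published implementation details, but the behavior described above amounts to a predicted age band toggling a stricter content policy on an account. A minimal sketch of that gating logic might look like the following; the category labels, function names, and the boolean prediction input are all hypothetical.

```python
# Hypothetical sketch -- OpenAI has not published its implementation.
# Illustrates the described behavior: a prediction that an account
# belongs to a minor activates a stricter set of blocked categories.

from dataclasses import dataclass, field

# Restricted categories named in the announcement (illustrative labels).
TEEN_RESTRICTED = {
    "graphic_violence",
    "viral_challenge_promotion",
    "sexual_or_violent_roleplay",
    "extreme_beauty_standards",
    "unhealthy_dieting",
}

@dataclass
class SafetySettings:
    blocked_categories: set = field(default_factory=set)

def apply_age_prediction(predicted_under_18: bool) -> SafetySettings:
    """Activate enhanced safety settings when the account is
    predicted to belong to a user under 18."""
    settings = SafetySettings()
    if predicted_under_18:
        settings.blocked_categories |= TEEN_RESTRICTED
    return settings

# Example: a teen-predicted account gets the stricter policy.
print(apply_age_prediction(True).blocked_categories)
```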
Age Verification for Adult Users
Users who are 18 or older can remove the extra safety restrictions by verifying their age through Persona, a third-party verification provider. Verification may involve a live selfie, a government-issued ID, or both, depending on the country. Persona confirms that the selfie matches the ID and deletes all verification data within seven days. OpenAI only receives confirmation of age or date of birth and does not access uploaded IDs or selfies.
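The privacy-preserving part of this flow is the narrow interface between the verifier and the platform: raw documents stay with the verifier, and only the age result crosses over. The sketch below illustrates that data-minimizing handoff under the article's description; the types, field names, and handler are illustrative assumptions, not a real Persona or OpenAI API.

```python
# Hypothetical sketch of the handoff described above: the verifier
# (Persona, per the article) checks the selfie/ID and returns only an
# age confirmation; the platform never receives the documents.
# All names here are illustrative, not a real Persona API.

from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationResult:
    is_over_18: bool                    # the only fact the platform needs
    date_of_birth: date | None = None   # or a DOB, per the article

def handle_verification(result: VerificationResult, account: dict) -> None:
    """Platform-side handler: adjusts settings from the minimal result.
    Raw selfies/IDs stay with the verifier, which deletes them
    within seven days per the article."""
    if result.is_over_18:
        account["teen_restrictions"] = False
        account["age_prediction_enabled"] = False  # verified adults opt out

account = {"teen_restrictions": True, "age_prediction_enabled": True}
handle_verification(VerificationResult(is_over_18=True), account)
print(account)
```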
Once verification is complete, ChatGPT removes the teen-focused restrictions from the account; OpenAI notes that the change may take a short time to fully apply. Verifying also opts the account out of age prediction entirely.
Privacy and Data Handling
OpenAI clarifies that age prediction signals are used solely for safety purposes. The company does not store sensitive identification documents and limits data use to age verification or improving model safety. Users retain control over whether their data contributes to model training through the platform’s privacy settings.
The update comes amid heightened global scrutiny of AI platforms, following regulatory crackdowns on tools such as Elon Musk's Grok, which has faced bans and investigations in multiple countries over its role in generating non‑consensual, sexually explicit deepfake imagery. This broader regulatory environment underscores the industry's growing emphasis on AI safety, responsible content moderation, and legal compliance.