As artificial intelligence tools become more common in classrooms and daily life, technology companies are under growing pressure to address how these systems affect adolescents. OpenAI and Anthropic have both announced new safety initiatives designed to reduce risks for teenage users while preserving access to educational and creative benefits.
OpenAI Updates Its U18 Principles
OpenAI has updated its U18 Principles, a framework that governs how ChatGPT interacts with users aged 13 to 17. The framework is intended to keep conversations developmentally appropriate and to prioritize user safety. According to the company, these users receive heightened protections, including stricter filtering of content related to self-harm, sexual role play, eating disorders, and dangerous challenges.
The safeguards operate at the model level and are supported by expanded parental controls. Parents can link their accounts to a child’s profile, manage usage hours, and restrict access to sensitive topics. OpenAI says the goal is to encourage healthy digital habits while allowing teens to use AI for homework help, research, and creative projects.
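As a purely illustrative sketch, one way to picture a parental-controls layer of this kind is as a linked-account configuration that checks usage hours and restricted topics before a request reaches the model. The category names, fields, and logic below are assumptions for illustration, not OpenAI's actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, time

# Hypothetical illustration of a parental-controls layer; category names,
# fields, and logic are assumptions, not OpenAI's actual implementation.
RESTRICTED_TOPICS = {"self_harm", "sexual_roleplay", "eating_disorders", "dangerous_challenges"}

@dataclass
class TeenProfile:
    linked_parent_id: str
    # Parent-managed usage window and topic restrictions
    allowed_hours: tuple[time, time] = (time(7, 0), time(21, 0))
    blocked_topics: set[str] = field(default_factory=lambda: set(RESTRICTED_TOPICS))

def request_allowed(profile: TeenProfile, topic: str, now: datetime) -> bool:
    """Allow a request only if it falls inside the permitted window
    and does not touch a parent-restricted topic."""
    start, end = profile.allowed_hours
    within_hours = start <= now.time() <= end
    return within_hours and topic not in profile.blocked_topics

# A homework-help request during the day is allowed, while a restricted
# topic is blocked regardless of time.
profile = TeenProfile(linked_parent_id="parent-123")
print(request_allowed(profile, "homework_help", datetime(2025, 1, 10, 16, 0)))    # True
print(request_allowed(profile, "sexual_roleplay", datetime(2025, 1, 10, 16, 0)))  # False
```

The point of the sketch is simply that account-level controls can be enforced before any model-level filtering applies, which is consistent with how the company describes layered safeguards.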
The changes follow mounting scrutiny of AI systems after reports that chatbots had engaged in unsafe or emotionally manipulative conversations. Most prominently, OpenAI has faced lawsuits from families alleging that an earlier GPT-4o model encouraged harmful behavior, including suicide, because its safeguards were inadequate and it was deployed prematurely. The litigation has become a reference point in broader debates over AI accountability and youth protection.
Anthropic Maintains Strict Age Limits
Anthropic has taken a more restrictive approach. The company requires Claude users to be at least 18 years old and is strengthening enforcement of that policy. Its systems are being updated to detect underage use through conversational signals, automated classifiers, and user disclosures. Accounts suspected of belonging to minors can be reviewed or disabled.
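A minimal sketch of how such signals might be combined into a review decision is shown below. The signal names, weights, and threshold are hypothetical assumptions, not Anthropic's actual classifiers or policy values.

```python
# Illustrative sketch of combining age-related signals into a review decision.
# Signal names, weights, and the threshold are hypothetical assumptions,
# not Anthropic's actual classifiers or enforcement thresholds.

def flag_for_review(signals: dict[str, float], threshold: float = 0.7) -> bool:
    """Combine weighted underage-use signals into a single score and flag
    the account for review when the score crosses the threshold."""
    weights = {
        "self_disclosed_minor": 1.0,    # user states they are under 18
        "classifier_minor_score": 0.6,  # automated classifier probability
        "conversational_cues": 0.3,     # school-related or age-typical phrasing
    }
    score = sum(weights[name] * value for name, value in signals.items() if name in weights)
    return min(score, 1.0) >= threshold

# A direct disclosure alone is enough to trigger review; weaker cues are not.
print(flag_for_review({"self_disclosed_minor": 1.0}))  # True
print(flag_for_review({"conversational_cues": 0.8}))   # False
```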
The company is also refining how Claude responds to sensitive mental health topics. When conversations involve suicidal thoughts or self-harm, the model is designed not to act as a substitute for emotional support; instead, its responses encourage users to seek help from trusted adults or professional resources. Anthropic has said this design choice reflects a belief that AI should not replace human intervention in high-risk situations.
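In effect, the design described above routes high-risk conversations away from open-ended emotional dialogue and toward referral messaging. The sketch below is an illustrative assumption of that routing idea; the category names and referral text are not Anthropic's actual policy or wording.

```python
# Hypothetical routing rule for high-risk mental health topics; the categories
# and referral text are illustrative assumptions, not Anthropic's actual policy.

HIGH_RISK_CATEGORIES = {"suicidal_ideation", "self_harm"}

def respond(category: str, draft_reply: str) -> str:
    """Replace open-ended emotional support with a referral to human help
    when the conversation falls into a high-risk category."""
    if category in HIGH_RISK_CATEGORIES:
        return ("I'm not able to support you with this the way a person can. "
                "Please reach out to a trusted adult, or contact a crisis line "
                "or mental health professional in your area.")
    return draft_reply  # low-risk conversations proceed normally

print(respond("self_harm", "Tell me more about how you're feeling..."))
print(respond("homework_help", "Here's how to factor that polynomial..."))
```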
Industry Pressure and Mental Health Concerns
The new measures come amid increasing concern from researchers, educators, and health professionals. Studies from Stanford Medicine and Common Sense Media have found that widely used chatbots often provide inconsistent or unsafe responses to mental health prompts. Pediatric psychologists have warned that teens may form emotional attachments to AI systems, potentially treating them as substitutes for real-world support.
Regulators and advocacy groups in the United States have also called for stronger age verification and clearer accountability standards. Lawmakers have questioned whether existing safeguards are sufficient as AI tools scale rapidly among younger users.
Together, the updates from OpenAI and Anthropic signal a shift toward age-based, risk-aware AI design. While the long-term effectiveness of these measures remains to be tested, they set new expectations for how AI systems should interact with children and adolescents as adoption continues to grow.