The Cyberspace Administration of China (CAC) has proposed draft rules to regulate artificial intelligence, with a focus on protecting children and preventing chatbots from generating harmful content. The measures would require AI providers to implement personalized settings, usage time limits, and guardian consent before offering emotional companionship services to minors.
Under the proposals, AI operators must ensure human intervention in conversations involving self-harm or suicide, notifying guardians or emergency contacts as necessary. The rules also prohibit content that threatens national security, undermines national unity, or promotes gambling. The CAC emphasized that AI adoption is encouraged where it is safe and reliable, citing examples such as tools promoting local culture or companionship services for the elderly.
The announcement comes amid rapid AI adoption in China, where firms such as DeepSeek, Z.ai, and Minimax have gained millions of users. Globally, scrutiny of the mental health impact of AI chatbots has also intensified, highlighted by OpenAI CEO Sam Altman’s remarks on self-harm risks and ongoing legal challenges.
The draft rules are open for public feedback and represent Beijing’s effort to set safety and ethical standards as consumer-facing AI technology grows.