China Proposes Stricter Oversight for AI Emotional Chatbots

China’s cyber regulator released draft rules targeting AI services that simulate human personalities and interact emotionally. The proposals aim to address safety, addiction, and ethical concerns.

By Maria Konash

China’s cyber regulator issued draft rules to tighten oversight of AI services that simulate human personalities and engage users emotionally. The draft, open for public comment, covers AI products that interact via text, images, audio, video, or other means, and emphasizes ethical, safety, and psychological responsibilities for providers.

Under the proposals, service providers must warn users against excessive use, monitor emotional and addictive behaviors, and intervene when necessary. Companies are also expected to implement algorithm review, data security, and personal information protection across the product lifecycle.

The draft establishes content and conduct red lines, prohibiting AI-generated material that threatens national security, spreads false information, or promotes violence, obscenity, or other harmful content. Providers must also identify users' emotional states and levels of dependence, and take appropriate measures to mitigate risks such as addiction or extreme emotional reactions.

The measures reflect Beijing’s broader push to regulate AI as consumer-facing services expand rapidly, ensuring that technological growth aligns with public safety, ethical standards, and national security priorities.
