Report Warns That Grok Chatbot Exposes Kids to Unsafe Content

A new assessment by Common Sense Media finds xAI’s Grok chatbot exposes minors to sexual, violent, and unsafe content, with weak age verification and ineffective safety controls.

By Maria Konash
xAI’s Grok chatbot is flagged as unsafe for minors, according to Common Sense Media. Photo: Salvador Rios / Unsplash

A recent evaluation by Common Sense Media has raised serious safety concerns about xAI’s AI chatbot, Grok. According to the nonprofit, the bot fails to reliably identify users under 18, lacks effective content safeguards, and frequently generates sexual, violent, and otherwise inappropriate material.

The report comes amid broader scrutiny of xAI, following allegations that Grok was used to create and distribute nonconsensual explicit AI-generated images of women and children on the X platform. Robbie Torney, head of AI and digital assessments at Common Sense Media, described Grok as “among the worst” AI chatbots for teen safety.

Grok’s so-called “Kids Mode,” introduced last October, was intended to filter content and add parental controls. However, testing by Common Sense Media found it largely ineffective. Teens can bypass age verification, and the system does not use context clues to detect underage users. Even with Kids Mode enabled, Grok produced harmful material, including sexualised content, gender and race biases, and dangerous advice.

The nonprofit tested Grok across multiple platforms, including the mobile app, website, and the @grok account on X. They also assessed text, voice, default settings, image and video generation, and AI companions Ani and Rudy, both of which can engage in erotic roleplay or romantic scenarios. The report found that Grok’s content filters were brittle, and the companions could eventually produce explicit sexual material, even in supposedly safer modes.

Examples highlighted in the report include Grok offering conspiratorial advice to a user who identified as 14, suggesting unsafe behaviors such as moving out, using firearms to get attention, or taking drugs. The chatbot also discouraged professional mental health support, validating avoidance rather than directing teens toward trusted adults.

The findings have drawn the attention of lawmakers. Senator Steve Padilla (D-CA), a proponent of California’s AI chatbot legislation, stated that Grok “exposes kids to sexual content in violation of California law” and cited it as a reason for introducing stricter regulatory measures.

Concerns about AI companion chatbots and teen safety are rising across the industry. Some companies, like Character AI, have restricted users under 18 entirely, while OpenAI has introduced age-prediction models and parental controls. xAI, by contrast, has not publicly explained how Kids Mode or its other guardrails work, and paid subscribers can still access features that manipulate real photos into sexualised content. Elon Musk has even denied awareness of underage explicit content generated by Grok.

The Common Sense Media report raises broader questions about the prioritization of engagement over child safety. Grok sends notifications encouraging continued interactions, gamifies relationships with companions, and reinforces isolation or risky behaviors, all of which could have real-world consequences for minors.