Women and Children Targeted as Grok AI Abuse Spirals on X

Users on X have exploited the platform’s Grok AI tool to generate sexually abusive images of women and children. The incident has intensified scrutiny of AI safeguards and platform accountability.

By Maria Konash

X is facing mounting criticism after users misused its artificial intelligence tool, Grok, to digitally alter photographs of women and children into sexually explicit images. The activity escalated rapidly around New Year’s Eve, with manipulated images spreading widely across the platform without the subjects’ consent.

The misuse has drawn condemnation from cyber-safety experts and women’s rights advocates, who describe the practice as AI-enabled sexual violence rather than online trolling. Critics argue the technology enables violations of privacy, dignity, and bodily autonomy, particularly when minors are involved. Despite reports that X has restricted Grok’s media features, users continue to circulate altered images, fueling claims that enforcement measures are insufficient.

The controversy has expanded beyond the United States, with users in India reporting similar abuses and legal experts pointing to violations under existing cybercrime and obscenity laws. Analysts say the case highlights growing regulatory pressure on platforms deploying generative AI tools without robust safeguards.

The Grok episode underscores broader concerns about generative AI image systems and the challenges social media companies face in preventing abuse at scale, as governments and advocacy groups push for stronger oversight and faster takedown mechanisms.
