Senators Demand Accountability From X, Meta, and Alphabet Over Sexualized Deepfakes

U.S. senators are demanding detailed explanations from major tech platforms on how they prevent the creation, distribution, and monetization of AI-generated sexual deepfakes. The inquiry follows renewed scrutiny of generative AI tools and their safeguards.

By Maria Konash
U.S. senators press tech giants on AI deepfake protections. Photo: Dave Sherrill / Unsplash

U.S. senators are intensifying pressure on major technology companies over the spread of AI-generated sexual deepfakes, expanding scrutiny beyond X to include Meta, Alphabet, Snap, Reddit, and TikTok. In a letter sent to company leaders, lawmakers requested detailed documentation demonstrating that each platform has “robust protections and policies” to prevent the creation, distribution, and monetization of non-consensual, sexualized AI-generated imagery.

The senators also instructed the companies to preserve records related to the generation, detection, moderation, and monetization of such content. The request reflects growing concern that existing platform safeguards are failing to keep pace with increasingly accessible image and video generation tools.

The inquiry follows X’s recent update to its Grok chatbot, which now restricts image generation and editing features involving real people and limits those capabilities to paid users. The update came after reports showed Grok could easily produce sexualized images of women and minors, prompting criticism of xAI’s guardrails. Elon Musk has since said he was not aware of any underage explicit content produced by Grok, even as regulators in the U.S. and abroad opened investigations and app store operators increased scrutiny of the chatbot’s availability.

Platforms Face Growing Political and Legal Pressure

While Grok has drawn significant attention, lawmakers stressed that the issue extends across the social media ecosystem. Deepfake pornography first gained traction years ago on Reddit, before spreading widely to TikTok, YouTube, Snapchat, and Telegram through reposted or externally generated content. Meta’s Oversight Board has previously flagged cases involving explicit AI images of public figures, and the company has faced criticism for allowing ads from so-called nudify apps before later pursuing legal action.

The senators warned that policies banning non-consensual intimate imagery are proving insufficient in practice. According to the letter, users routinely find ways to bypass safeguards, or platforms fail to detect and remove prohibited content at scale. The lawmakers are seeking clarity on how platforms define deepfakes, enforce moderation standards, prevent reuploads, and block financial incentives tied to such material.

The letter outlines a broad set of demands, including explanations of how AI tools are governed internally, what technical filters are deployed, how victims are notified, and how terms of service enable account suspensions. None of the companies named had responded publicly at the time of publication.

The scrutiny comes amid heightened political sensitivity around AI infrastructure and safety. California’s attorney general recently opened an investigation into xAI’s chatbot following public backlash, while U.S. lawmakers continue to debate whether existing federal laws adequately address platform accountability. Although Congress passed the Take It Down Act earlier this year, critics argue its focus on individual users limits its effectiveness against large-scale AI systems.

Several states are now pursuing their own measures. New York Governor Kathy Hochul this week proposed legislation requiring AI-generated content labeling and restricting non-consensual deepfakes during election periods, underscoring how regulatory momentum is shifting as generative AI tools become more powerful and widespread.
