Nonprofits Call for Federal Suspension of xAI’s Grok

A coalition of nonprofits urges federal suspension of xAI’s Grok AI, highlighting nonconsensual image generation, bias, and potential national security threats.

By Maria Konash
Nonprofits urge the U.S. government to halt xAI’s Grok chatbot in federal agencies. Photo: Salvador Rios / Unsplash

A coalition of nonprofits is pressing the U.S. government to immediately halt the deployment of Grok, the AI chatbot developed by Elon Musk’s xAI, across federal agencies, including the Department of Defense. The call comes amid reports that Grok has produced unsafe and nonconsensual content, including sexualized images of real people, raising concerns about its use in high-stakes government environments.

The coalition, which includes Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America, cited repeated system-level failures in the chatbot's behavior. An open letter shared with TechCrunch emphasizes that federal guidance requires discontinuing AI systems that present severe and foreseeable risks that cannot be adequately mitigated.

Safety and Security Concerns

xAI has sold Grok to federal agencies through agreements with the General Services Administration and secured a $200 million Department of Defense contract alongside Google, OpenAI, and Anthropic. Despite these deals, reports indicate Grok has repeatedly generated unsafe outputs, including biased content, sexualized imagery, misinformation, and offensive posts.

Experts warn that deploying closed-source AI models in sensitive federal networks, such as the Pentagon, presents significant risks. Without access to model weights or source code, auditors cannot fully inspect Grok’s decision-making processes, raising both national security and operational safety concerns.

The coalition also highlighted risks beyond the DoD. Agencies handling housing, labor, and justice data could see disproportionate harm from biased AI outputs. Recent risk assessments from Common Sense Media rank Grok among the least safe AI tools for children and teenagers, a finding the coalition says reflects broader safety concerns for adult users as well.

Calls for Oversight and Investigation

The letter urges the Office of Management and Budget to formally investigate Grok’s safety failures, assess whether it meets federal AI risk standards, and clarify compliance with requirements for neutrality and truthfulness. The coalition recommends suspending federal deployment until these evaluations are completed.

Advocates stress that repeated incidents, including misinformation campaigns, nonconsensual deepfakes, and biased outputs, demonstrate that Grok is incompatible with federal standards for responsible AI use. They argue that immediate action is necessary to protect sensitive government data and the public from potential harm.
