Federal vs. State Power: The Fight Over Who Regulates AI in the U.S.

For the first time, Washington is close to deciding how artificial intelligence should be regulated — but the fiercest battle isn’t over safety standards. It’s over whether states should retain the authority to pass their own AI laws.

By Maria Konash
Federal–state power struggle defines the next phase of U.S. AI regulation. Photo: Cristina Glebova / Unsplash


In the absence of a federal standard focused on consumer safety, states have advanced dozens of bills aimed at mitigating AI risks. California’s SB-53 and Texas’s Responsible AI Governance Act are among the highest-profile efforts to curb harmful or deceptive uses of AI.

Major tech companies — and many AI startups — argue these state laws create an unworkable patchwork that threatens innovation. “It’s going to slow us in the race against China,” said Josh Vlasto, co-founder of the pro-AI PAC Leading the Future.

Push for Federal Preemption

Tech giants and several White House appointees are now advocating for a single national standard, or for no regulation at all, and new efforts are emerging to stop states from regulating AI independently.

House lawmakers are reportedly exploring ways to use the National Defense Authorization Act (NDAA) to block state AI laws altogether. Meanwhile, a leaked draft of a White House executive order pushes in the same direction: it proposes an “AI Litigation Task Force” to challenge state laws in court and would empower federal agencies to override them.

A sweeping ban on state AI regulation is unpopular in Congress. Lawmakers across the aisle argue that without a federal standard in place, blocking states would leave consumers exposed to harms while giving tech companies free rein.

To address this, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a federal package spanning fraud prevention, healthcare, transparency, child safety, and catastrophic-risk mitigation. But such a megabill is expected to take months — if not years — to pass.

Industry Influence and a Uniform Framework

The leaked White House EO would give David Sacks, Trump’s AI and Crypto Czar, co-lead authority over crafting a national legal framework. Sacks, a venture capitalist and longtime advocate for blocking state AI regulation, favors minimal federal oversight and industry self-regulation to “maximize growth.”

Several pro-AI super PACs have emerged to support that agenda. Leading the Future, funded by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has raised more than $100 million. This week it launched a $10 million campaign urging Congress to pass a federal law that overrides state AI measures.

“When you’re trying to drive innovation in the tech sector, you can’t have all these laws popping up from people who don’t necessarily have the technical expertise,” Vlasto said.

States Move Faster — and Often First

As of November 2025, 38 states have enacted over 100 AI-related laws, primarily targeting deepfakes, disclosure standards, and government use of AI. A recent study found that 69 percent of those laws impose no obligations on AI developers, underscoring how limited and uneven the patchwork remains.

Congress, by contrast, has been slow. Hundreds of AI bills have been proposed; almost none have passed. Of the 67 bills Rep. Lieu has introduced through the House Science Committee since 2015, only one became law.

More than 200 lawmakers signed an open letter opposing AI preemption in the NDAA, and nearly 40 state attorneys general sent a similar warning, arguing that states must serve as “laboratories of democracy” for emerging technology.

Experts like Bruce Schneier and Nathan E. Sanders argue the patchwork concern is exaggerated. AI firms already comply with stricter rules in the EU, and most industries operate under differing state regulations without crisis. The true motive, they say, is avoiding accountability.

What a Federal AI Standard Might Look Like

Lieu’s forthcoming 200-plus-page bill includes measures on fraud penalties, deepfake protections, whistleblower safeguards, academic compute access, and testing requirements for large AI models.

Unlike the Senate’s more aggressive Hawley–Blumenthal proposal, Lieu’s bill would not require government-run AI model evaluations before deployment. Instead, AI labs would test and publish results themselves, as most already do voluntarily.

Lieu acknowledges this approach is less strict — but more realistic.

“My goal is to get something into law this term,” he said. “I’m not writing a bill that I’d have if I were king. I’m writing a bill that could pass a Republican-controlled House, Senate, and White House.”