The Trump administration is backing away from an aggressive plan to undermine state artificial intelligence laws, pausing an executive order that would have directed federal legal action against states with AI statutes. The pause, first reported by Reuters, represents a significant change after the White House turned a single national standard into a rallying cry and floated tying compliance to federal funding streams.
Just days ago, officials were considering an AI Litigation Task Force and warning that states with their own AI rules could see funds from the federal government’s broadband programs dry up. That approach followed a failed attempt to impose a 10-year moratorium on state AI laws, a provision the Senate stripped out in a lopsided 99-1 vote. The pause suggests that the politics and law of preemption are proving more complicated than expected.

What changed in Washington to slow the federal AI push
Inside the administration, a blunt push to apply preemptive pressure faced resistance from numerous quarters. States’ rights advocates cautioned that running roughshod over the party’s traditional doctrine of federalism would invite charges of hypocrisy. Industry voices were divided: some big platforms backed a single rulebook, while enterprise users and safety advocates said that state rules are filling a void left by Congress.
The White House also had an optics problem. Targeting state AI laws, even as some administration allies publicly attacked companies like Anthropic for supporting California’s SB 53, risked resembling a political squabble rather than a policy framework. And the agencies that would have had to execute the order could anticipate bruising litigation with uncertain outcomes.
States set the pace as governors and AGs drive AI rules
States are not waiting on Washington. Colorado passed a first-of-its-kind AI law that mandates risk assessments for “high-risk” systems and disclosures around automated decisions, with significant provisions set to take effect in 2026. New York City’s Local Law 144 already requires bias audits for automated employment decision tools, and Illinois’s Biometric Information Privacy Act keeps sending shock waves as companies in sectors from hospitality to technology settle multimillion-dollar suits.
California has been moving a package of AI and algorithmic accountability bills alongside SB 53, including provisions that mirror the National Institute of Standards and Technology’s AI Risk Management Framework. Although the details vary, these efforts share common themes:
- Documenting model risks
- Assessing impact when used for sensitive purposes
- Providing mechanisms for redress when automated systems go awry
And the broader trend line is unmistakable. The Stanford AI Index shows state legislative activity in the United States continuing to climb, while Congress has yet to pass a comprehensive AI law. Into that vacuum step governors and attorneys general, who are increasingly the nation’s de facto AI regulators.
The legal math behind a retreat on state AI preemption
Preempting state law usually requires a clear federal statute or a direct conflict with federal regulation, and there is no overarching AI law on the books to do that work. Wielding the Dormant Commerce Clause against state AI rules in litigation would be an uphill battle, since courts generally allow states to police harms within their borders so long as they stop short of blatant protectionism.

There would also have been perils in tying compliance to broadband funding. The Supreme Court’s anti-coercion doctrine from NFIB v. Sebelius constrains what the federal government can demand of states in exchange for existing funding and requires that states have a real choice about whether to accept those funds. Threats to pull dollars from the NTIA’s BEAD program would likely have drawn immediate challenges from both blue and red states.
Put simply, the administration’s most powerful argument — uniformity for interstate markets — clashes with constitutional guardrails and the lack of a federal baseline. Halting the order prevents any precedent from being set in a case that the government could lose.
What it means for AI companies facing state compliance
Compliance officers can’t count on the patchwork going away; the pragmatic play is to build to the most restrictive common denominator:
- Risk classification
- Documented testing for safety and bias
- High-stakes use-case impact assessments
- User notice and meaningful recourse when automated decisions affect rights or livelihoods
Anchoring to the NIST AI Risk Management Framework gives companies a defensible, regulator-friendly baseline that maps onto requirements across states.
And no federal action will let companies off the hook if they deploy models in hiring, credit, health care, education, or large-scale infrastructure: expect audits, vendor scrutiny, and public reporting norms to ratchet up no matter what.
The bottom line: states will keep steering AI governance
By hitting the brakes on a more sweeping preemption campaign, the administration implicitly acknowledges political headwinds and legal vulnerability. Unless Congress enacts a clear national law, states will continue to steer AI governance — and companies will have to meet them where they are.