An effort to slip a nationwide ban on state-level AI regulation into this year’s defense bill has been rebuffed again, a temporary defeat for advocates of broad federal preemption who could not muster enough buy-in across the aisle. House Majority Leader Steve Scalise is telling colleagues that Republicans will look for another vehicle to carry the idea, which President Trump has supported but which Scalise said did not belong in the defense package.
It is the second significant failure this year of a bid to block states from writing their own AI rules, after an earlier tax-and-spending package floated a 10-year moratorium on state AI laws and then dropped it in the face of bipartisan resistance. Industry groups say they need a single national standard; lawmakers in both parties counter that, without a comprehensive federal law, preempting the states would open an ever-widening regulatory void.

Defense Bill Rebuff Dashes Hopes of a Preemption Push
It is not uncommon for major tech policy riders to be tacked onto must-pass defense legislation. But this time, the plan to block state regulation of AI met immediate resistance. GOP leaders told The Hill that they recognized the political optics and procedural problems of attaching a broad AI preemption measure to the Pentagon’s policy bill, and indicated that they would instead try to move it through separate legislation.
The White House has been pursuing parallel tracks. A leaked draft executive order indicated that the administration was prepared to push back on state AI regulations with whatever executive authority it has, though that effort has since been paused. Legal experts said that fully preempting state law generally requires an act of Congress; any executive action would likely be narrower, along the lines of federal procurement conditions or agency guidance.
Why Tech Wants Federal Preemption on AI Rules
Big software and cloud companies, along with trade groups such as the U.S. Chamber of Commerce and BSA The Software Alliance, caution that a patchwork of state AI requirements will raise compliance costs and discourage deployment. The argument parallels the state privacy patchwork, where industry groups track more than a dozen comprehensive state privacy laws that most large companies now manage through elaborate, overlapping compliance programs.
Industry argues that divergent state standards around “high-risk” AI, disclosure, auditability and liability are particularly difficult to harmonize. Inconsistent thresholds for model transparency and impact assessments can fracture product roadmaps and complicate incident response, companies say. In CISO surveys by enterprise consultancies, regulatory fragmentation consistently ranks among the top non-technical AI risks, alongside model security and data provenance.
States Are Proceeding Anyway with New AI Rules
Statehouses are not waiting. According to the National Conference of State Legislatures, an overwhelming majority of states have introduced bills addressing artificial intelligence since last year, and some have passed targeted laws. Colorado enacted an Artificial Intelligence Act that creates obligations for developers and deployers of “high-risk” systems, including impact assessments and post-deployment monitoring. Tennessee’s ELVIS Act extends protections for a person’s voice and likeness to cover AI cloning. New York City’s Local Law 144 mandates bias audits for automated hiring tools, setting a de facto floor that many national employers will need to meet.

Many of these laws focus on practical guardrails: safety, transparency, consumer protection and discrimination. Advocates say that in the absence of a comprehensive federal law, states serve as a crucial check on high-impact uses such as employment, housing, health care and elections. State attorneys general have also signaled that they are prepared to apply their existing consumer protection and unfair-practices laws against deceptive AI claims.
What’s Next in Washington on AI Preemption
Expect a revived push for a federal AI bill built around preemption. The outlines are familiar: risk-based obligations, mandatory disclosures for high-stakes systems, incident reporting for model failures and safe harbors for developers that comply with widely adopted standards such as the NIST AI Risk Management Framework. The sticking points are how broadly federal law should preempt state rules and whether to carve out a role for state regulators in enforcement.
Regulators already have tools. The Federal Trade Commission has said that false promises about AI and discriminatory outcomes can violate existing law. The National Institute of Standards and Technology’s framework is emerging as a de facto governance blueprint across sectors, and the National Telecommunications and Information Administration is also developing AI accountability guidance. But these are only partial solutions; without congressional action, the patchwork of state and federal policies will remain.
What It Means for Developers and Businesses
In the meantime, compliance leaders can expect continued state-level variation. Concrete steps to consider include mapping AI use cases to jurisdictional requirements, operationalizing NIST risk controls, commissioning independent bias and security testing as needed, and building incident response playbooks that account for state notice obligations; a minimal sketch of the first step follows below. Vendor oversight is key: dozens of state laws impose obligations on both developers and deployers, and liability can turn on contractual representations about training data, safety testing, or model monitoring.
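To make that mapping step concrete, here is a minimal sketch, in Python, of how a compliance team might encode which obligations attach to a given AI use case. The jurisdictions, triggers and control names (for example, impact_assessment and independent_bias_audit) are illustrative assumptions loosely inspired by the laws described above, not a restatement of any statute.

```python
"""Minimal sketch: mapping AI use cases to assumed state-level obligations.

Every jurisdiction, trigger and control below is an illustrative placeholder,
not a statement of what any law actually requires.
"""

from dataclasses import dataclass


@dataclass
class Rule:
    jurisdiction: str          # e.g., "CO" or "NYC"
    trigger: str               # use-case attribute that activates the rule
    controls: list[str]        # compliance controls assumed to be required


@dataclass
class UseCase:
    name: str
    category: str              # e.g., "hiring" or "voice_assistant"
    jurisdictions: list[str]   # where the system is deployed


# Hypothetical rule catalog; a real program would maintain this with counsel.
RULES = [
    Rule("CO", "high_risk", ["impact_assessment", "post_deployment_monitoring"]),
    Rule("NYC", "hiring", ["independent_bias_audit", "candidate_notice"]),
    Rule("TN", "voice_likeness", ["consent_for_voice_cloning"]),
]

# Hypothetical mapping from use-case categories to the triggers they implicate.
CATEGORY_TRIGGERS = {
    "hiring": {"hiring", "high_risk"},
    "lending": {"high_risk"},
    "voice_assistant": {"voice_likeness"},
}


def required_controls(use_case: UseCase) -> dict[str, list[str]]:
    """Return the controls assumed to apply, keyed by jurisdiction."""
    triggers = CATEGORY_TRIGGERS.get(use_case.category, set())
    applicable: dict[str, list[str]] = {}
    for rule in RULES:
        if rule.jurisdiction in use_case.jurisdictions and rule.trigger in triggers:
            applicable.setdefault(rule.jurisdiction, []).extend(rule.controls)
    return applicable


if __name__ == "__main__":
    screener = UseCase("resume screener", "hiring", ["CO", "NYC"])
    print(required_controls(screener))
    # {'CO': ['impact_assessment', 'post_deployment_monitoring'],
    #  'NYC': ['independent_bias_audit', 'candidate_notice']}
```

The point of tracking obligations as data rather than institutional memory is that the rule catalog can be versioned and updated with counsel as state laws change, and the same lookup can feed vendor questionnaires and incident response playbooks.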
The battle over preemption is far from finished. But this week’s rebuff indicates that sweeping defense-bill shortcuts are not the way it will be resolved. Until Congress settles on a workable national framework, state experimentation and state scrutiny will continue to define the AI landscape.
