The White House has backed off a draft executive order on artificial intelligence that would have blocked states from writing their own AI laws while federal leaders set policy first in the public interest. The shift, first reported by Reuters after the administration dismissed early reports as speculation, shelves an effort to centralize AI oversight in Washington at the very moment some states are experimenting with their own approaches to regulating the technology.
What the Shelved Order Sought to Accomplish
The order would have been part of the administration’s effort to set guidelines for the use of AI, which is widely seen as a potential economic and national security boon for the United States, even as its risks, some experts say, remain less well understood. The plan was also meant to pressure China by presenting a united front among Western democracies on limiting the technology’s downsides.

A comparable effort to preempt state action was soundly defeated in the Senate earlier this year by a vote of 99–1, an indication that Congress has little appetite for stripping states of authority over AI.
Meanwhile, the tech industry has pushed for a single national framework. Companies like OpenAI and Google support federal preemption, Reuters reported, arguing that a single rulebook would minimize compliance headaches and make it easier to innovate at scale. Business groups have echoed the message in congressional testimony, contending that a hodgepodge of rules drives up costs and legal risk for developers and enterprise users.
Patchwork Reality Returns as States Forge Ahead for Now
With preemption on ice, the United States remains in a patchwork era. States and cities have moved beyond general principles to sector-specific, risk-based requirements.
- Colorado passed a first-of-its-kind law in 2024 regulating “high-risk” AI systems, obligating developers and deployers to govern and document risks, provide notices, and guard against algorithmic discrimination.
- New York City mandates bias audits and notices for automated employment decision tools, reshaping how employers recruit and screen candidates.
- Illinois regulates AI use in video interviews and the handling of biometric data under existing laws.
California keeps shaping the national conversation through its privacy law. The California Privacy Rights Act allows the state’s privacy regulator to write rules on automated decision-making, a process that has drawn strong engagement from civil society and enterprise stakeholders. Although final rules on that front have not yet landed, many California-based companies are already mapping model risk, disclosure obligations, and opt-out routes to stay ahead of potential mandates.
For companies operating in multiple jurisdictions, this translates into parallel compliance tracks: model evaluations for one set of obligations, bias testing and public notices for another, and different recordkeeping formats to satisfy audit or disclosure requirements. Lawyers are also starting to couple AI governance playbooks with data protection programs to avoid duplicative processes and surface conflicts earlier.
Why an Executive Order Alone Was a Stretch
Executive orders carry real force within the federal government, but they do not, on their own, override state laws. Strong preemption typically comes from Congress, either written directly into statute or through a comprehensive federal program that leaves no room for conflicting state rules. An administrative effort to condition entire categories of federal money on states declining to enforce their own AI laws would be litigated immediately.

Courts have been skeptical when administrations threaten to withhold money to micromanage state policy. Previous battles over grant conditions tied to immigration and pandemic-era mandates offer a cautionary roadmap: Judges examine whether the conditions are clear, tied to the program’s purpose, and authorized by statute. Regulating AI, from civil rights and consumer protection to safety, touches core state police powers, raising 10th Amendment and “major questions” problems if agencies stretch their authority without clear direction from Congress.
Global and Federal Context for AI Regulation Trends
International momentum is not slowing. The E.U.’s AI Act, which was finalized in 2024, sets a risk-tiered approach with penalties for noncompliance and detailed requirements for high-risk systems. Already, multinationals are beginning to materially align documentation, incident reporting, and supplier management practices with that playbook, and many will impose similar controls on U.S. operations independent of the federal preemption debate.
At home, federal agencies are still working out how best to use their funding and purchasing power to influence behavior. NIST’s AI Risk Management Framework is increasingly the de facto reference for internal controls, model lifecycle governance, and assurance. Federal contractors are paying close attention: Procurement clauses could quickly normalize expectations for testing, disclosure, and incident response across the vendor ecosystem even without new legislation.
What to Watch Next as States and Agencies Move Forward
The administration could still try a narrower path toward uniformity, through agency rulemaking, procurement standards, or targeted grant conditions, rather than an explicit preemption edict. States, meanwhile, will continue to legislate, focusing on algorithmic bias, consumer transparency, and election-related deepfakes ahead of major voting cycles.
For developers and enterprises, the surest bet in the near term is to operationalize baseline controls that travel well:
- Strong model documentation
- Testing for bias and safety tuned to your use cases
- Clear user notices
- Opt-out or appeal routes for consequential decisions
- Supplier diligence that extends to foundation model providers
If Washington resurrects preemption, those investments won’t have been wasted; if it doesn’t, they’ll be vital to coping with the growing map of state rules.
