Washington’s sprint to regulate artificial intelligence has opened a rift in the nation’s policymaking, fueling an increasingly intense turf war between the White House and Capitol Hill as each struggles to assert its authority while wrestling with how to quickly rein in a technology that is outpacing them both. The central battle is no longer simply over what to regulate, but over who sets the rules, and whether Congress will sweep the field of state laws before it writes a comprehensive one of its own.
Preemption takes center stage in federal AI oversight
House negotiators are considering whether to tuck restrictions on state AI laws into the annual defense authorization bill, a popular vehicle for policy riders. A draft White House executive order circulating in Washington would go further: it sketches a roadmap for an AI Litigation Task Force, presses agencies such as the Federal Communications Commission and the Federal Trade Commission to move toward national standards, and challenges state actions deemed overly burdensome.

The draft order would place investor and policy advisor David Sacks front and center in building that unified framework, an unconventional step that could sideline the White House Office of Science and Technology Policy, which normally leads such efforts. The possibility alarms many Hill veterans, who argue that broad preemption without federal baselines would leave consumers vulnerable while giving industry a free pass.
The opposition is not just theoretical: more than 200 lawmakers have publicly opposed broad preemption in recent months, and dozens of state attorneys general from both parties have urged Congress to preserve their authority. Their case is straightforward: until a federal standard is in place, states are the only nimble backstop against AI-driven harms.
States outpace Congress with a patchwork of AI rules
Statehouses have charged ahead. Most measures have focused on labeling deepfakes in elections, along with related transparency and disclosure requirements and restrictions on government use of automated systems. A recent analysis found that about 69 percent of those state laws place no burden on AI developers themselves, underscoring how piecemeal and incremental most of this activity has been.
The approaches vary. California has proposed a safety bill that would apply to large foundation models and critical risks. Texas enacted a Responsible AI Governance Act focused on prohibited abuses. New York’s RAISE framework, a rough template for safety practices at large testing labs, would require those labs to maintain and attest to safety plans. These are not the same approaches, but together they form a multiplying lattice of policy across the country.
Congress, by contrast, is still cobbling together the framework of a national system. Representative Ted Lieu and the House’s bipartisan AI Task Force have been assembling a package that would cover fraud, health care safeguards, transparency, child safety and catastrophic risk. Lieu, a top sponsor of tech legislation, has managed to get only a small fraction of his prior bills passed into law, a sign of how slowly complicated tech bills tend to move.
Industry money and messaging reshape the AI debate
The tech industry’s leading companies and fast-growing artificial intelligence start-ups say a House bill that has gained more than 80 co-sponsors would entrench a “patchwork” approach to AI legislation and chill innovation. Political committees backing the industry have stepped up spending to drive home that message: a new coalition, Leading the Future, backed by major investors and AI executives, has announced more than $100 million raised and rolled out a $10 million campaign urging Congress to preempt state laws.

Allied groups contend that existing laws on fraud, discrimination and product liability already cover most AI harms, and say the United States cannot afford a go-slow approach while rivals pour money into development. Critics, such as the cybersecurity expert Bruce Schneier and the data scientist Nathan E. Sanders, counter that the “patchwork” complaint, though common, is surmountable: companies already adjust to strict European regulations and to state-by-state differences in privacy, automotive and environmental rules. The true rift, they say, is about accountability.
What a federal AI baseline might look like today
The House’s emerging bill is expected to enshrine stiffer penalties for AI-enabled fraud and deepfakes, stronger protections for whistleblowers, greater access to computing power for academic research, and mandatory safety testing of advanced models before release. Crucially, the early drafts stop short of requiring government-run predeployment screenings, a contrast with plans from Sens. Josh Hawley and Richard Blumenthal that envisioned federal testing of high-risk systems.
How preemption is crafted will matter almost as much as what gets regulated. A “floor” model would establish national minimum standards while allowing states to go further; a “ceiling” would roll back tougher state rules. Agencies already have scaffolding to build on, from the NIST AI Risk Management Framework to the FTC’s truth-in-advertising and unfair-practices authority. But without clear statutory direction, agency guidance goes only so far, and the courts will work out the rest.
Why the stakes are high for AI regulation now
AI is crossing into everyday life at warp speed: cloned voices power imposter scams, image generators churn out satire and voter suppression alike, and algorithmic tools guide hiring, housing and health decisions. The FTC reports that consumer fraud losses recently topped $10 billion as AI turbocharges scams in both volume and sophistication. States see early guardrails as a practical way to limit harm now rather than to address it after the fact.
Internationally, the EU’s new rules are solidifying as implementation proceeds, raising the compliance floor for global players. U.S. companies will have to meet those requirements overseas regardless of domestic preemption debates, a salient fact that undercuts the argument that a patchwork of rules at home would be unmanageable.
What to watch next in the federal–state AI fight
Watch for preemption riders in must-pass legislation, for whether the White House moves forward with an executive order strategy, and for how courts respond to any federal challenges to state AI laws. Watch, too, the scope of the House AI package: a consensus “floor” could take the hot air out of the federal–state clash, while an aggressive “ceiling” would make years of litigation all but inevitable. The race to regulate AI may be decided less by what the rules say than by who gets to set them.
