Federal State Standoff Grows Over AI Regulation

By Gregory Zuckerman
Technology
8 Min Read
Last updated: November 28, 2025, 4:09 pm

Washington’s sprint to regulate artificial intelligence has opened a rift in the nation’s policymaking, fueling an increasingly intense turf war between the White House and Capitol Hill as each branch scrambles to assert its authority over how quickly and how tightly to rein in the technology. The central battle is no longer simply over what to regulate, but over who sets the rules, and whether Congress will sweep the field before it writes a comprehensive law.

Preemption takes center stage in federal AI oversight

House negotiators are weighing whether to tuck restrictions on state AI laws into the annual defense authorization bill, a popular vehicle for policy riders. A draft White House executive order circulating in Washington would go further, sketching a roadmap for an AI Litigation Task Force, pressing agencies such as the Federal Communications Commission and the Federal Trade Commission to move toward national standards, and challenging state actions seen as overly burdensome.

[Image: Map of the United States showing the status of AI legislation in each state as of 2023 (no legislation, introduced, or enacted bills/adopted resolutions).]

The EO draft would place investor and policy advisor David Sacks front and center in constructing such a unified model, an unconventional step that could sideline the White House Office of Science and Technology Policy, which would ordinarily lead that kind of effort. The possibility alarms many Hill veterans, who argue that broad preemption without federal baselines would leave consumers vulnerable while giving industry a free pass.

The opposition is not just theoretical: more than 200 lawmakers have publicly opposed broad preemption in recent months, and dozens of state attorneys general from both parties have urged Congress to preserve their authority. Their case is straightforward: until a federal standard is in place, states are the only nimble backstop against AI-driven harms.

States outpace Congress with a patchwork of AI rules

Statehouses have charged ahead. Most measures have focused on labeling election deepfakes, along with related transparency and disclosure requirements and limits on government use of automated systems. A recent analysis found that about 69 percent of those state laws place no burden on AI developers themselves, underscoring how piecemeal and incremental most of this activity has been.

Examples vary. California proposed a safety bill aimed at large foundation models and critical risks. Texas enacted a Responsible AI Governance Act focused on abusive uses. The RAISE framework, a template advanced by New York policymakers for overseeing large AI labs, would require those labs to maintain and attest to safety plans. The approaches differ, but together they form a fast-multiplying lattice of policy across the country.

Congress, in contrast, is still cobbling together the framework of a national system. Representative Ted Lieu and the House’s bipartisan AI Task Force have been assembling a package that would cover fraud, health care safeguards, transparency, child safety and catastrophic risk. Lieu, who is also a top sponsor of tech legislation, has seen only a small fraction of his prior bills become law, a sign of how slowly complicated tech bills tend to move.

Industry money and messaging reshape the AI debate

The tech industry’s leading companies and fast-growing artificial intelligence start-ups say a House bill that has gained more than 80 co-sponsors represents a “patchwork” approach to AI legislation and would chill innovation. Political committees that back artificial intelligence have stepped up spending to drive home that message: a new coalition, Leading the Future, backed by major investors and AI executives, has announced more than $100 million raised and is rolling out a $10 million campaign urging Congress to preempt state laws.

[Image: “Federal Ban on State AI Regulation?” Pros: national consistency, encouraging innovation, global competitiveness, efficient use of resources. Cons: state innovation and responsiveness, risk of underregulation, diverse needs requiring local solutions, constitutional and legal challenges.]

Allied groups contend that existing laws on fraud, discrimination and product liability would cover most AI harms, and say the United States cannot afford a go-slow approach while rivals pour money into development. Critics such as the cybersecurity expert Bruce Schneier and the data scientist Nathan E. Sanders counter that the “patchwork” complaint, though common, is likely surmountable: companies already adjust to strict European regulations and to state-by-state variation in privacy, automotive and environmental rules. The real rift, they say, is over accountability.

What a federal AI baseline might look like today

The House’s emerging bill is expected to enshrine stiffer penalties for AI-enabled fraud and deepfakes, stronger protections for whistleblowers, greater access to compute power for academic research, and requirements around the testing and release of advanced models. Crucially, early drafts stop short of requiring government-run predeployment screening, a contrast with plans from Sens. Josh Hawley and Richard Blumenthal that envisioned federal testing of high-risk systems.

How preemption is crafted will matter almost as much as what gets regulated. A “floor” model would establish national minimum standards while allowing states to go further; a “ceiling” would roll back tougher state rules. Agencies already have scaffolding to build on, from the NIST AI Risk Management Framework to the FTC’s truth-in-advertising and unfair-practices authority. Without clear statutory direction, though, agency guidance goes only so far, and the courts will work out the rest.

Why the stakes are high for AI regulation now

AI is crossing into the everyday at warp speed: cloned voices power imposter scams, image generators churn out fakes used for satire and voter suppression alike, and algorithmic tools guide hiring, housing and health decisions. The FTC reports that consumer fraud losses recently topped $10 billion, as AI turbocharges scams in both volume and sophistication. States see early guardrails as a practical way to limit harm now rather than waiting to address it after the fact.

Internationally, the EU’s new rules are solidifying as implementation proceeds, raising the compliance floor for global players. U.S. companies will have to meet those requirements overseas regardless of how the domestic preemption debate plays out, a fact that undercuts the argument that a patchwork of rules at home would be unmanageable.

What to watch next in the federal–state AI fight

Watch for preemption riders in must-pass legislation, for whether the White House moves ahead with an executive order strategy, and for how courts respond to any federal challenges to state AI laws. Also worth watching is the scope of the House AI package: a consensus “floor” could take the heat out of the federal–state clash, while an aggressive “ceiling” would make years of litigation all but inevitable. In the race to regulate AI, whoever sets the terms will shape the outcome.

By Gregory Zuckerman
Gregory Zuckerman is a veteran investigative journalist and financial writer with decades of experience covering global markets, investment strategies, and the business personalities shaping them. His writing blends deep reporting with narrative storytelling to uncover the hidden forces behind financial trends and innovations. Over the years, Gregory’s work has earned industry recognition for bringing clarity to complex financial topics, and he continues to focus on long-form journalism that explores hedge funds, private equity, and high-stakes investing.