Meta is stepping up its political playbook with a new super PAC aimed at influencing how states regulate artificial intelligence. The group, called the American Technology Excellence Project, plans to spend tens of millions of dollars backing candidates who are friendly to AI development and opposing proposals that could hinder the company’s bets on generative models and other AI-enabled products, according to reporting by Axios.
Inside Meta’s New Political Vehicle for State AI Policy
The super PAC is structured as an independent-expenditure operation, allowing it to raise unlimited funds but barring it from coordinating directly with campaigns. Veteran Republican strategist Brian Baker is expected to run the initiative jointly with the Democratic firm Hilltop Public Solutions, an early sign of a bipartisan posture geared toward influencing critical state-level races in the upcoming midterm cycle.
Meta’s official messaging pillars, according to company representatives who spoke with Axios, are:
- Boosting U.S. tech leadership
- Supporting ongoing AI development
- Giving parents more control over how kids use apps and AI tools
That last pillar is no accident: Youth safety has rapidly become the most potent political argument for AI guardrails, and Meta wants to be seen as supporting “parent-first” narratives even as it pushes back against wide-ranging restrictions.
The company has also spun up a California-focused political committee to support tech-friendly candidates, highlighting how statehouses — not Washington — have become a hotbed for AI policy.
Why States Became the AI Policy Battleground
After Congress failed to reach agreement on sweeping tech rules, state lawmakers raced ahead. The National Conference of State Legislatures has tallied more than a thousand AI-related bills filed across all 50 states, covering everything from transparency requirements and deepfake election rules to sweeping accountability frameworks for “high-risk AI.”
California lawmakers have advanced several measures, including Senate Bill 243, which would regulate AI companion chatbots and require stronger protections for minors and other vulnerable users, and Senate Bill 53, which would impose new disclosure and security obligations on large AI developers. Colorado has enacted what appears to be the first comprehensive consumer AI law, imposing duties on both developers and deployers to mitigate the risk of algorithmic harm. Other states are weighing age-verification requirements, watermarking mandates for synthetic media, and liability rules for automated decision-making.
For tech giants, the obstacle is not a single tough law but dozens of mismatched ones. Definitions of “high-risk AI,” audit thresholds, and disclosure formats already differ drastically between proposals. Compliance leaders also caution that a patchwork can lead to redundant testing, spotty documentation, and slower release cycles, costs they say fall hardest on smaller AI startups.
Patchwork vs. Preemption in U.S. AI Regulation
Silicon Valley’s counteroffensive is taking shape around a simple message: one national framework is better than 50 different rulebooks. Industry groups such as TechNet, the Computer & Communications Industry Association, and NetChoice have called for federal preemption, with some citing NIST’s AI Risk Management Framework as a foundation for a uniform, risk-based approach.
Lawmakers have already toyed with broad-based preemption. A recent bill in Congress would have prevented states from regulating AI for 10 years; it failed, but the intent was evident. The contrast with the European Union’s AI Act — centralized, horizontal and enforceable across member states — has become fodder for companies that caution the United States risks slipping behind on AI if rules splinter market by market.
Why Child Safety Is the Political Centerpiece
Meta’s focus on “putting parents back in charge” reflects both real policy demand and reputational triage. The company has been sued by several state attorneys general who allege its products harm young users. Independent reporting and whistleblower complaints have also flagged dangers in AI-based tools, including chatbots engaging in predatory or illegal interactions with minors. Meta has said it is investing in safety measures and research, but critics argue voluntary commitments are not enough.
State-level AI bills designed to protect youth, such as rules for AI companions, content filters, and age verification, are catching on because they pair bipartisan concern with concrete, testable demands. Expect Meta’s super PAC to back candidates who center parental consent, safety audits, and targeted enforcement while opposing broader measures that could constrain general-purpose AI research or hold model providers disproportionately liable.
A Spending Arms Race in State-Level AI Politics
Meta is stepping into a crowded ring. A Silicon Valley super PAC backed by Andreessen Horowitz and OpenAI’s Greg Brockman has pledged to spend roughly $100 million fighting tight curbs on AI. Trade groups and advocacy organizations, from the U.S. Chamber of Commerce to civil society coalitions like EPIC and the Algorithmic Justice League, are mobilizing with research, model legislation, and media campaigns.
The money is aimed at a narrow swath of targets: swing districts and the key committee chairs who set the agenda on consumer protection, privacy, and technology. In practical terms, a handful of states (California, New York, Colorado, Texas, and Washington) can expect targeted ad buys and grassroots campaigns designed to remake the legislative map before the next round of AI bills.
What to Watch Next in the State AI Policy Fight
- Track where the super PAC spends early and big, and whether it engages on ballot measures as well as candidate races.
- See how the organization describes AI safety — does it include third-party audits and incident reporting, or is it primarily about promoting voluntary standards and parental controls?
- Watch whether other AI leaders publicly rally behind Meta’s approach, or instead fund their own independent (and rival) state strategies.
The stakes are clear-cut: If the industry prevails, state regulation will tilt toward narrow, risk-specific guardrails and greater self-regulation. If lawmakers stand their ground, the United States may well end up with a checkerboard of AI obligations that shapes where and how the next generation of artificial intelligence is developed and deployed.