AI heavyweights and Silicon Valley financiers are pouring millions into a coordinated effort to sink New York assembly member Alex Bores’ bid for Congress, turning a Manhattan-based race into a high-stakes proxy fight over how the United States should regulate artificial intelligence.
The blitz centers on ads tying Bores to his past work at Palantir, the data analytics company whose software has supported U.S. Immigration and Customs Enforcement operations. But the spending is about the future, not old résumés: Bores has emerged as one of the most active state lawmakers pushing for AI transparency and accountability, and industry-aligned groups want to stop him before he gets to Washington.
Who Is Targeting Bores, and Why
The super PAC Leading the Future is financing the attacks. Public filings and group statements show the PAC has attracted marquee backers including Palantir co-founder Joe Lonsdale, OpenAI president Greg Brockman, venture firm Andreessen Horowitz, and AI search startup Perplexity. The group has amassed roughly $125 million, with at least $10 million earmarked to oppose Bores in New York’s 12th District.
Leading the Future promotes pro-innovation candidates and opposes what it views as restrictive state-level AI rules, arguing regulation should be set nationally. Bores, a former tech operator who worked at Palantir and several startups, is precisely the kind of lawmaker the PAC considers influential and, in its view, a threat to the industry's momentum should his approach spread to Congress.
The ad campaign spotlights Palantir’s role in immigration enforcement to frame Bores as complicit. Political consultants note that negative frames tied to immigration and big data often test well in urban districts. But the subtext is regulatory: the industry is signaling that state-driven rulemaking will draw swift, expensive pushback.
The Policy Fight Over State and Federal AI Rules
Bores authored the RAISE Act, a state law that requires large AI developers — defined by a revenue threshold above $500 million — to publish safety plans, follow them, and report catastrophic-risk incidents. The measure stops short of preapproval or licensing; instead it compels disclosures that policymakers and watchdogs can scrutinize.
Beyond RAISE, Bores has advanced bills to reveal what goes into AI training datasets and to embed standardized metadata in synthetic media so provenance can be traced. His campaign also released a national blueprint spanning dozens of proposals, from model evaluations and compute reporting to election integrity safeguards.
Industry groups counter that a patchwork of state laws will confuse innovators, raise compliance costs, and create de facto barriers to entry. A recent executive action from the White House urged federal agencies to challenge state rules considered overly burdensome, underscoring the momentum behind preemption. Even so, states from California to Colorado have pursued their own frameworks as Congress wrestles with scope and enforcement.
Silicon Valley Money And Its Role In The Midterms
Leading the Future is not alone. Meta has seeded two super PACs — American Technology Excellence Project and Mobilizing Economic Transformation Across California — with a combined $65 million to back tech-friendly candidates in state races, according to committee filings. Separate disclosures show AI companies, trade groups, and top executives gave at least $83 million to federal campaigns and committees in the last cycle tracked by watchdogs.
For perspective, Bores has pointed out that a typical New York Assembly race might raise around $100,000 in total. Independent expenditures of this magnitude can dwarf what candidates themselves collect, reshape media narratives, and dominate voter attention through repetition and precision targeting. OpenSecrets has documented similar surges in outside spending across competitive primaries, with tech now behaving like health care and finance in its political footprint.
Not All AI Money Is Aligned On Regulation And Safety
Bores is not without industry support. Public First Action, a PAC backed by Anthropic affiliates, has invested roughly $450,000 to boost his campaign, emphasizing transparency, safety, and public oversight rather than a regulatory vacuum. That split reflects a deeper divide inside tech, pitting executives and investors who want to minimize state intervention against rank-and-file engineers and researchers increasingly vocal about safety, accountability, and social impact.
Organizing by tech workers — from open letters on model risks to advocacy for content provenance — has grown more visible, suggesting that boardroom politics and lab-floor ethics will continue to collide in elections where AI is front and center.
What This Race Will Reveal About AI Policy And Power
New York’s 12th District is fast becoming a referendum on whether voters trust a technologist-legislator promising guardrails or industry-backed groups warning that state-led rules threaten progress. Expect ad saturation, message testing on jobs, costs, education, and safety, and a broader debate over who sets the rules for systems that already shape search, media, and public services.
However the votes break, one outcome is already clear: AI policy is no longer an abstract Washington exercise. It is a campaign issue with serious money behind it — and candidates who challenge the industry’s preferred path should count on meeting a war chest at the door.