
New York Passes RAISE Act on AI Safety Into Law

By Bill Thompson
Last updated: December 20, 2025 7:01 pm

New York has taken the lead in regulating artificial intelligence: Governor Kathy Hochul has signed the RAISE Act, making the state only the second in the country to institute comprehensive AI safety regulations. The law addresses transparency, incident reporting, and independent oversight, a package intended to manage risk from cutting-edge models while sustaining innovation in a state that anchors finance, media, and health care technology.

The measure comes after heavy negotiation and lobbying by major tech companies. Lawmakers pushed the bill forward, the administration sought to narrow its scope, and the final deal keeps the law in effect while leaving room for technical adjustments through future legislation and rulemaking.

Table of Contents
  • The RAISE Act’s Requirements for AI Developers and Labs
  • How New York’s Plan Fits Into the National AI Landscape
  • Industry Backing and Pushback on New York’s AI Law
  • Enforcement and Compliance Outlook for the RAISE Act
  • Legal and Political Stakes for State and Federal AI Rules
  • What Comes Next for DFS Guidance and Developer Obligations

The RAISE Act’s Requirements for AI Developers and Labs

The law requires large AI developers — those creating or distributing powerful general-purpose or high-risk systems — to publish safety plans and testing methods that show how they assess model behavior. It also requires AI safety incidents to be reported to the state within 72 hours, a window comparable to existing cybersecurity incident-response rules.
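To make the deadline concrete, here is a minimal Python sketch, not drawn from the statute itself, that computes the latest filing time for a hypothetical incident under a 72-hour window; the helper name and timestamps are illustrative only.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper, not statutory text: given when an AI safety incident is
# detected, compute the latest time a report could be filed under a 72-hour
# reporting window like the one the RAISE Act describes.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(detected_at: datetime) -> datetime:
    """Return the last moment a report can be filed for this incident."""
    return detected_at + REPORTING_WINDOW

detected = datetime(2025, 12, 20, 19, 0, tzinfo=timezone.utc)
print("Report due by:", reporting_deadline(detected).isoformat())
```

In practice the hard part is not the arithmetic but detection and escalation: a team has to notice and classify the incident quickly enough to leave time for a report.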

New York will set up a specialized office within the Department of Financial Services (DFS) to monitor AI development and enforce the law. DFS’s oversight style, well known from its regulatory work on digital assets, suggests that AI compliance will not be a paper tiger: the agency is expected to require repeatable processes, defensible documentation, and proof that companies can pinpoint, triage, and remediate harms.

Penalties underscore the point. Companies that fail to file the required reports or misrepresent their safeguards can be fined as much as $1 million for a first offense and up to $3 million for each additional violation, stakes real enough to deter firms from ignoring governance and red-teaming obligations.
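As a rough illustration of the stakes, the sketch below tallies maximum exposure under those caps; it is back-of-the-envelope arithmetic under the assumption that penalties simply accumulate per violation, not legal guidance.

```python
# Back-of-the-envelope arithmetic only: an upper bound on fines under the
# reported caps of $1 million for a first offense and $3 million for each
# additional violation. The statute's actual penalty calculation may differ.
FIRST_OFFENSE_CAP = 1_000_000
ADDITIONAL_OFFENSE_CAP = 3_000_000

def max_penalty_exposure(violations: int) -> int:
    """Maximum combined fine for a given number of violations."""
    if violations <= 0:
        return 0
    return FIRST_OFFENSE_CAP + ADDITIONAL_OFFENSE_CAP * (violations - 1)

print(max_penalty_exposure(3))  # 1,000,000 + 3,000,000 + 3,000,000 = 7,000,000
```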

How New York’s Plan Fits Into the National AI Landscape

The statute closely follows the transparency-first policy adopted by California recently, creating a developing coastal consensus on minimum AI safety standards. Both focus on disclosure, incident reporting, and an accountable point of contact in government. Although the United States still does not have a comprehensive federal AI law, these state developments follow international patterns: The European Union’s AI Act creates duties for high-risk systems, while the OECD AI Principles and NIST AI Risk Management Framework underscore governance practices and documentation as well as testing and performance assessments.

For developers, alignment between New York and California is no small thing: less divergence lowers the risk of a regulatory patchwork and makes it simpler to put common controls in place, from pre-deployment assessments to ongoing monitoring for model misuse and cascading failures.

Industry Backing and Pushback on New York’s AI Law

Among the major AI labs, the response has been cautious. Both OpenAI and Anthropic have backed the thrust of New York’s transparency regime while urging Congress to enact federal standards. Policy leaders at Anthropic have cast state action as a bridge to federal rules, which many enterprise adopters are eager to see in order to gain regulatory clarity across multiple jurisdictions.

Resistance remains. Political groups backed by prominent investors and AI executives have taken aim at state-by-state rulemaking, lobbying bill sponsors and arguing that overly broad compliance rules would protect incumbents while stifling newer competitors. The bill survived that lobbying push, a strong indication that the politics of AI safety are moving toward concrete guardrails rather than voluntary pledges.


Enforcement and Compliance Outlook for the RAISE Act

DFS is expected to translate the statute into the operational requirements customary in regulated finance and cybersecurity environments (one such control is sketched after the list):

  • Clear accountability
  • Documented risk assessments
  • Regular adversarial testing
  • Change management for model updates
  • An incident taxonomy and workflows able to meet the 72-hour reporting deadline
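As a minimal sketch of what one of these controls might look like in code, the hypothetical record below gates a model update on a named owner, a documented risk assessment, and logged red-team findings; none of the field names come from DFS guidance.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical change-management record for a model update, tying together the
# controls above: clear accountability, a documented risk assessment, and
# adversarial (red-team) testing. Field names are illustrative, not regulatory text.
@dataclass
class ModelUpdateRecord:
    model_name: str
    version: str
    accountable_owner: str        # clear accountability
    risk_assessment_doc: str      # ID or link for the documented risk assessment
    red_team_findings: List[str] = field(default_factory=list)  # adversarial testing results
    approved_at: Optional[datetime] = None

    def ready_to_deploy(self) -> bool:
        """Release gate: block deployment until each control is documented."""
        return (
            bool(self.risk_assessment_doc)
            and bool(self.red_team_findings)
            and self.approved_at is not None
        )

record = ModelUpdateRecord("frontier-model", "2.1", "safety-lead@example.com", "RA-118")
print(record.ready_to_deploy())  # False until red-team findings and approval are logged
```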

Those already mapping their practices to the NIST AI Risk Management Framework will have an early advantage, but they should extend that guidance to cover model misuse and prompt-injection vectors, as well as data provenance and safety evaluation of emergent behaviors.
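Below is a minimal sketch of that kind of mapping, assuming a team tracks its own control names against the framework’s core functions (Govern, Map, Measure, Manage); the control names are invented for the example and are not an official crosswalk.

```python
# Illustrative coverage check against the NIST AI Risk Management Framework's
# core functions. The control names are assumptions made for this example.
RMF_FUNCTIONS = {"Govern", "Map", "Measure", "Manage"}

controls_to_rmf = {
    "board_level_ai_policy": "Govern",
    "data_provenance_tracking": "Map",
    "prompt_injection_red_teaming": "Measure",
    "emergent_behavior_evaluations": "Measure",
    "misuse_monitoring_and_response": "Manage",
}

uncovered = RMF_FUNCTIONS - set(controls_to_rmf.values())
print("RMF functions without a mapped control:", uncovered or "none")
```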

For hyperscalers, the fines are containable, but they are material enough to push mid-size providers to put governance in place. Startups will feel the effects immediately only if their systems meet the “large developer” threshold. Regardless, plenty more will adopt the same controls to land enterprise customers, many of whom are demanding attestations that map to NIST, ISO/IEC 42001, and internal AI policies.

Legal and Political Stakes for State and Federal AI Rules

The White House has directed federal agencies to push back on state AI regulations, opening a preemption fight that may end in court. Expect disputes over the Commerce Clause and federal primacy, countered by the states-as-laboratories ethos that has shaped tech and consumer protection policy for decades. The New York Times reported ongoing lobbying over the bill, and any legal showdown is likely to involve national trade groups and civil society organizations.

What Comes Next for DFS Guidance and Developer Obligations

DFS will issue guidance, seek feedback from labs, businesses, and researchers, and clarify what counts as a reportable AI safety incident. Lawmakers have signaled appetite for modest adjustments, but the bones of the law’s transparency-and-oversight architecture are in place.

Companies with a footprint in New York and those selling into the state should take the following steps (a minimal readiness sketch follows the list):

  • Put in place an AI safety committee
  • Map systems against the law’s reach
  • Develop 24/7 incident-reporting pipelines
  • Align testing with existing public frameworks from NIST and the OECD
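
Here is one minimal readiness sketch, assuming a team simply tracks those steps as a checklist; in practice each item would be backed by evidence such as a committee charter, a system inventory, and on-call paging tests.

```python
# Hypothetical readiness checklist mirroring the steps above; the booleans and
# step names are assumptions made for illustration.
readiness = {
    "ai_safety_committee_established": True,
    "systems_mapped_against_the_law": False,
    "incident_reporting_pipeline_24_7": True,
    "testing_aligned_to_nist_and_oecd": False,
}

outstanding = [step for step, done in readiness.items() if not done]
print("Outstanding steps:", outstanding)
```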

Bottom line: New York’s RAISE Act makes AI safety a board-level issue.

Now, with two of the country’s most important tech economies converging on similar rules and federal action still uncertain, the smart money is on building durable, auditable AI governance infrastructure that can travel across jurisdictions.

Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.