
New York Signs the RAISE Act Into Law, Giving AI Developers Until 2027 to Comply

New York has officially enacted the Responsible AI Safety and Education Act, placing the state alongside California as a leading force in frontier AI regulation — and putting significant compliance pressure on the technology industry well ahead of the law’s January 1, 2027 effective date.

Governor Kathy Hochul finalized the RAISE Act on March 27, 2026, signing a chapter amendment that represents the law’s definitive form after months of negotiation between the governor’s office and state legislators. The amendment brings the final version of the RAISE Act closer to the California Transparency in Frontier Artificial Intelligence Act (TFAIA), aligning several key definitions, while other provisions mark a clear departure from TFAIA, including a requirement to disclose incidents within 72 hours, compared to California’s 15-day timeline.

The law’s journey to the governor’s desk was neither swift nor simple. Governor Hochul originally signed the RAISE Act on December 19, 2025, following the state legislature’s passage of the bill in June 2025. The governor negotiated with lawmakers to secure a chapter amendment, which was introduced on January 6, 2026, passed the second chamber of the legislature on March 11, 2026, and was finally signed into law on March 27, 2026.

What the RAISE Act Actually Covers

The law does not cast a wide net over the entire AI industry. It is aimed squarely at the companies developing the most powerful and resource-intensive AI systems. The RAISE Act applies to developers with $500 million or more in annual revenue who develop or operate frontier models — AI systems trained using more than 10²⁶ floating-point operations with compute costs exceeding $100 million — in New York.

This definition effectively covers the major AI companies: OpenAI, Anthropic, Google, Meta, and similar firms developing cutting-edge AI systems. Accredited colleges and universities engaged in academic research are exempt from coverage.

Importantly, models derived from frontier systems are also captured under the law’s scope. The law also includes models produced through “knowledge distillation” — using a larger model or its output to train a smaller model with similar capabilities. This provision closes a potential loophole for firms that might otherwise sidestep regulation by releasing smaller derivative models.

Four Core Obligations for Covered Developers

Covered companies must comply with four core mandatory safety and transparency requirements. AI developers must report critical safety incidents to the state within 72 hours of determining that an incident occurred — a reporting timeline significantly shorter than California’s 15-day window and one of the most contentious points during negotiations.


Second, beyond rapid incident disclosure, large developers must implement and publish a written safety and security protocol before deploying any frontier model. This document must identify and mitigate risks of “critical harm” — defined as causing more than 100 deaths or $1 billion in damage — include cybersecurity controls and a detailed testing regimen, and designate a senior officer responsible for compliance.

Third, covered large developers must conduct an annual independent third-party audit of compliance with the law’s safety and security requirements, with a redacted report made public and unredacted materials retained for government review. This audit requirement goes further than California’s framework and signals a more hands-on accountability model.

Finally, the law creates an institutional structure to oversee compliance: a new AI oversight office within the New York Department of Financial Services. That office will register covered developers, assess fees to fund oversight, and issue implementing regulations.

How New York Compares to California

At the heart of both laws is an identical set of transparency requirements for frontier AI development. Like California’s TFAIA, the RAISE Act requires companies to publish their approach to safety testing, risk mitigation, incident response, and cybersecurity controls. Companies can choose their methods and standards but must then adhere to whatever commitments they have made.

The convergence between the two states has drawn notable reactions from within the AI industry itself. OpenAI and Anthropic expressed support for the RAISE Act, with both indicating that having similar legislation in two large state economies is good for the policy landscape overall.

Still, New York’s framework is stricter on several fronts: its 100-plus-death threshold for critical harm, 72-hour incident reporting, and requirement that safety protocols be documented “in detail” all go beyond California’s rules, though California’s 50-plus-death threshold for catastrophic risk is lower and includes evading human control as a harm mechanism.

Federal Tension and the Patchwork Risk

The RAISE Act did not pass into law in a vacuum. It was signed soon after President Trump issued an executive order authorizing federal lawsuits against states whose AI laws are viewed as hindering innovation, and some Congressional Republicans are pushing proposals that would limit or preempt state-level AI regulation.

That federal pressure has not deterred New York’s approach, and legal observers broadly expect the executive order to face significant challenges in the courts. In the meantime, multistate businesses should not count on federal preemption arriving before the January 2027 compliance deadline.

With California and New York aligning, the next question is whether other states will join them — and whether the federal government might adopt a similar standard itself. States including Michigan and Utah have already introduced transparency-focused AI bills with overlapping provisions.

What Businesses Need to Do Before 2027

Even companies that do not directly develop frontier models have a stake in understanding the RAISE Act’s reach. The law’s vendor and supply chain implications extend into procurement practices, contracting, and AI governance policies across sectors — including New York City’s large financial services, healthcare, and media industries.

Businesses outside that category should still stay on top of the new law and use 2026 to prepare for the January 1, 2027 effective date. Steps include asking AI vendors whether their models fall within frontier definitions and how they manage safety risks, considering whether AI safety disclosures or incident-notification provisions belong in procurement agreements, and maintaining clear internal AI governance policies so businesses are not caught off-guard by downstream regulatory obligations.

For covered developers, the compliance infrastructure required — 72-hour incident response pipelines, published safety protocols, annual third-party audits, and registration with the new DFS office — represents a significant operational undertaking. With the effective date roughly nine months away, the window for building that infrastructure is narrowing fast.

New York’s passage of the RAISE Act marks a concrete shift in how the nation’s most economically powerful states are approaching AI accountability. As the federal regulatory picture remains unsettled, Albany and Sacramento are shaping what responsible AI development looks like at the operational level — and the companies operating in both markets are now navigating twin state mandates with the clock running.

Reporting and analysis from the NY Weekly editorial desk.